Q_Id (int64, 337 to 49.3M) | CreationDate (stringlengths 23 to 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (stringlengths 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (stringlengths 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (stringlengths 15 to 29k) | Title (stringlengths 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
36,695,687 | 2016-04-18T13:40:00.000 | -1 | 0 | 1 | 0 | python,python-3.x | 36,695,749 | 3 | false | 0 | 1 | It just means that you import all(methods, variables,...) in a way so you don't need to prefix them when using them. | 1 | 1 | 0 | I thought it meant everything in the module. But in tkinter I would have to specifically import things like messagebox, colorchooser and filedialog despite having a "from tkinter import *" command. So exactly what does "import *" mean? | What exactly does " from module import * " mean? | -0.066568 | 0 | 0 | 11,100 |
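To make the answer above concrete, here is a small illustrative sketch. It uses the math module instead of tkinter, and the point behind the asker's confusion is noted in the comments: a star import binds a module's top-level names, but it does not import submodules of a package, which is why tkinter.messagebox still needs its own import.

```python
# Without a star import, names must be prefixed with the module name.
import math
print(math.sqrt(16.0))

# With a star import, the module's public names are bound directly in this namespace.
from math import *
print(sqrt(16.0))

# Note: a star import does NOT pull in submodules of a package, so something like
# tkinter.messagebox still needs its own explicit import:
# from tkinter import messagebox
```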
36,698,532 | 2016-04-18T15:38:00.000 | 1 | 0 | 1 | 0 | python,algorithm | 36,698,730 | 4 | false | 0 | 0 | I had an assignment similar to this at one point. Try using an A* variant. Construct a graph of possible 'neighbors' for a given word and search outward using A* with the distance heuristic being the number of letter needed to change in the current word to reach the target. It should be clear as to why this is a good heuristic-it's always going to underestimate accurately. You could think of a neighbor as a word that can be reached from the current word only using one operation. It should be clear that this algorithm will correctly solve your problem optimally with slight modification. | 1 | 4 | 0 | I'd like to compute the edits required to transform one string, A, into another string B using only inserts and deletions, with the minimum number of operations required.
So something like "kitten" -> "sitting" would yield a list of operations something like ("delete at 0", "insert 's' at 0", "delete at 4", "insert 'i' at 3", "insert 'g' at 6")
Is there an algorithm to do this, note that I don't want the edit distance, I want the actual edits. | Algorithm to compute edit set for transforming one string into another? | 0.049958 | 0 | 0 | 437 |
36,699,408 | 2016-04-18T16:24:00.000 | 1 | 0 | 1 | 0 | python-3.x | 36,699,576 | 3 | false | 0 | 0 | A simple way to achieve this is to have a state file that you write to after each step. When you start up the script, you can read from the file and then proceed to the desired state. | 1 | 0 | 0 | Actually I am doing/want to do in the single python script:
1. Using Python I am writing some data to the Disk
2. Doing Reboot
3. Verifying the data
4. Other steps
Here, after step 2, I want the same script to resume at step 3 after the reboot.
I'm fine with having to run the script manually again, but when I run it, it should go directly to step 3. | Is there any way to run the python script continuously on reboot | 0.066568 | 0 | 0 | 300
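A minimal sketch of the state-file approach from the answer above. The state file path and the commented-out reboot call are assumptions to adjust for the real machine.

```python
import os

STATE_FILE = "/var/tmp/myscript.state"  # hypothetical location

def read_step():
    # default to step 1 on the very first run
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return int(f.read().strip())
    return 1

def write_step(step):
    with open(STATE_FILE, "w") as f:
        f.write(str(step))

step = read_step()
if step == 1:
    # step 1: write the data to disk ...
    write_step(3)           # remember where to resume after the reboot
    # os.system("reboot")   # uncomment on the real machine
elif step == 3:
    # step 3: verify the data, then carry on with the remaining steps ...
    write_step(4)
```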
36,701,007 | 2016-04-18T17:52:00.000 | 0 | 0 | 0 | 0 | python,rpc,ethernet,lan | 36,701,148 | 2 | false | 0 | 0 | If the setup is providing a LAN interface and exposes a server, then it is possible to read the value using socket programming. If not, then a web service can be run on the setup to read the value and serve it on the network | 1 | 0 | 0 | I have software that runs on my PC. There is a test setup elsewhere in the network which has temperature and pressure measuring cards. That setup also has a Windows OS.
Both my PC and the setup are connected to the LAN. Now I want to write a temperature measuring test on my PC using my software (written in python) which would access the remote setup.
How can I achieve this? Previously I was running my software on the setup itself, using the windll utility to initialize the cards, and now I want to separate it out. | Use hardware remotely | 0 | 0 | 0 | 49
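If the setup does expose a small TCP service, as the answer suggests, reading a value over the LAN could look roughly like the sketch below. The host, port and the "GET TEMP" request are entirely hypothetical; the real service on the test setup defines what to send and how to parse the reply.

```python
import socket

def read_temperature(host="192.168.1.50", port=5000):
    # hypothetical request/response protocol spoken by the remote test setup
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(b"GET TEMP\n")
        reply = conn.recv(64)
    return float(reply.decode().strip())

# print(read_temperature())
```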
36,701,182 | 2016-04-18T18:02:00.000 | 0 | 1 | 0 | 0 | python,logging,pytest,xdist,pytest-xdist | 36,989,662 | 2 | false | 0 | 0 | pytest-sugar does it for example
at the sprint in june we hope to enhance the api further | 1 | 0 | 0 | I'm using pytest with the xdist plugin to run a large suite of tests. These tests can take a few hours to run, so I'd like to see certain information while they are running. The items I'd like to see are the errors when tests fail, how many tests are still left, and more. To do this, I'd like to have a setup where detailed errors go to one file while basic info like how many tests are left will go to another file. Is there a pytest plugin that would allow this or a way to hook up the internal pytest logger to do this?
Thanks for your time. | Can pytest xdist tests log to the same configuration? | 0 | 0 | 0 | 567 |
36,704,224 | 2016-04-18T21:02:00.000 | 0 | 0 | 1 | 0 | python,floating-point,formatting | 36,704,352 | 1 | true | 0 | 0 | Unfortunately no, the width and precision of the format specifier only affect presentation of the mantissa. You will need to post-process the string if you want to affect the exponent. | 1 | 0 | 0 | Are there any means by which one could tell Python 2.x/3.x to always use 3 digits for the exponent when printing a float (==IEEE754 double precision) in scientific format using the "E" format specifier (or another one)? | formatted printing of the exponent of a double in Python | 1.2 | 0 | 0 | 268 |
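A small sketch of the post-processing the answer describes: format with the normal "E" specifier, then rebuild the exponent with a fixed width of three digits. The helper name and the choice of six mantissa digits are just for illustration.

```python
def fmt_e3(x):
    """Format x in scientific notation with a 3-digit exponent."""
    s = "{:.6E}".format(x)          # e.g. '1.234560E-05'
    mantissa, exponent = s.split("E")
    sign, digits = exponent[0], int(exponent[1:])
    return "{}E{}{:03d}".format(mantissa, sign, digits)

print(fmt_e3(1.2345e-5))   # 1.234560E-005
print(fmt_e3(-6.02e23))    # -6.020000E+023
```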
36,707,367 | 2016-04-19T02:15:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,session,flask | 36,829,120 | 2 | false | 1 | 0 | Yeah its kind of possible run a loop to see until session['key']==None and if the condition becomes true call the function. I hope this helps!!! | 1 | 5 | 0 | In my Flask application, I am saving files that correspond to a user, and want to delete these files when the user's "session" expires. Is it possible to detect the session expiration and immediately call a function? | Call a function when Flask session expires | 0 | 0 | 0 | 3,947 |
36,708,543 | 2016-04-19T04:27:00.000 | 2 | 0 | 1 | 0 | python,video,ffmpeg | 36,708,804 | 1 | false | 0 | 0 | There will be some loss of quality, that's unavoidable, but you can try forcing a high bitrate and then stepping it down while quality remains acceptable. Start with something like
ffmpeg -i infile -vcodec mpeg1video -b:v 8192k -acodec libmp3lame -intra outfile.mpg
and work from there | 1 | 1 | 0 | I am currently making a program on python (3) using the pygame module. One of the things I need it to do is to play a video which is currently in AVI format.
From what I managed to understand from the pygame.movie documentation I have to use FFMPEG and not another program to convert the video to an MPEG (I tried it with NCH Prism and the result was quite memorable).
I managed to convert my file to an MPG using the sample command found in the pygame.movie documentation (ffmpeg -i <infile> -vcodec mpeg1video -acodec libmp3lame -intra <outfile.mpg>) but the video quality dropped considerably. I tried looking at different sites but they never actually had a working example...
Is there a way to keep the current video quality? I don't really care about the file size...
Thanks in advance! | FFMPEG video conversion for pygame | 0.379949 | 0 | 0 | 555 |
36,713,581 | 2016-04-19T09:01:00.000 | 1 | 0 | 1 | 1 | python,module,pycharm,pydrive | 54,055,599 | 1 | true | 0 | 0 | After noticing that the module is already installed, both by pip and by the project interpreter, and nothing worked, this what did the trick (finaly!):
make sure the module is indeed installed:
sudo pip{2\3} install --upgrade httplib2
locate the module on your computer:
find / | grep httplib2
you will need to reach the place in which pip is installing the module, the path would probably look like this:
/usr/local/lib/python2.7/dist-packages
get into the path specified there, search for the module and copy all the relevant files and folders into your local pycharm project environment. this will be a directory with a path like this:
/home/your_user/.virtualenvs/project_name/lib/python2.7
This is it. Note, however, that you may need to do this multiple times, since each module may have dependencies...
good luck! | 1 | 1 | 0 | I try to import the PyDrive module in my PyCharm project : from pydrive.auth import GoogleAuth.
I tried different things :
Installing it directly from the project interpreter
Download it with a pip command and import it with the path for the poject interpreter
The same thing in Linux
Nothing works. Each time PyCharm recognizes the module and even suggests the auto-completion, but when I run the project it keeps saying ImportError: No module named pydrive.auth
Any suggestion ?
EDIT : When I put directly the pydrive folder in my repository, and this time : ImportError: No module named httplib2 from the first import of PyDrive.
My path is correct and httplib2 is again in my PyCharm project | PyCharm recognize a module but do not import it | 1.2 | 0 | 1 | 413 |
36,717,654 | 2016-04-19T11:56:00.000 | 0 | 0 | 0 | 0 | python,aws-lambda,aws-api-gateway | 36,757,815 | 2 | false | 1 | 0 | You could return it base64-encoded... | 2 | 0 | 0 | I have a lambdas function that resizes and image, stores it back into S3. However I want to pass this image to my API to be returned to the client.
Is there a way to return a png image to the API gateway, and if so how can this be done? | Passing an image from Lambda to API Gateway | 0 | 0 | 1 | 835 |
36,717,654 | 2016-04-19T11:56:00.000 | 0 | 0 | 0 | 0 | python,aws-lambda,aws-api-gateway | 36,727,013 | 2 | false | 1 | 0 | API Gateway does not currently support passing through binary data either as part of a request nor as part of a response. This feature request is on our backlog and is prioritized fairly high. | 2 | 0 | 0 | I have a lambdas function that resizes and image, stores it back into S3. However I want to pass this image to my API to be returned to the client.
Is there a way to return a png image to the API gateway, and if so how can this be done? | Passing an image from Lambda to API Gateway | 0 | 0 | 1 | 835 |
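A rough sketch of the base64 idea from the first answer, assuming the Lambda function wrote the resized image to /tmp earlier in the invocation. The handler name, the file path and the response shape are illustrative, and the client (or a mapping template) has to decode the string again.

```python
import base64

def lambda_handler(event, context):
    # read the resized image produced earlier in the function (path is illustrative)
    with open("/tmp/resized.png", "rb") as f:
        image_bytes = f.read()
    # API Gateway could not pass raw binary through at the time, so return the
    # payload as a base64-encoded string instead
    return {"image_base64": base64.b64encode(image_bytes).decode("ascii")}
```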
36,720,700 | 2016-04-19T14:02:00.000 | 0 | 0 | 1 | 0 | python,textblob | 71,493,064 | 2 | false | 0 | 0 | you can try change your file name TextBlob.py | 1 | 4 | 0 | Using windows 10
I've install textblob using "py -m pip install textblob".
I can import textblob, or from textblob import blob,word
But I can't do: from textblob import Textblob.
The error i get is:
Traceback (most recent call last):
File "", line 1, in
from textblob import Textblob
ImportError: cannot import name 'Textblob'
Thanks. | Python: Error importing textblob lib | 0 | 0 | 0 | 503 |
36,720,976 | 2016-04-19T14:14:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,python-wheel | 59,698,559 | 3 | false | 0 | 0 | Why would you be doing that? The __pycache__ directory will be generated anyway when a project is run for the first time on the target machine.
It's simply an optimised bytecode representation of Python.
But anyway, you can write an script that unpacks the .whl file and does the modifications and then repacks the wheel. | 1 | 13 | 0 | What's the correct way to exclude files from a python wheel distribution package?
Editing the MANIFEST.in does not have any effect and I can't find information about this detail. | How to exclude *.pyc and __pycache__ from python wheels? | 0.066568 | 0 | 0 | 4,087 |
36,721,348 | 2016-04-19T14:27:00.000 | 4 | 0 | 0 | 1 | bash,python-2.7,machine-learning,neural-network,deep-learning | 36,728,171 | 2 | false | 0 | 0 | Follow the below instructions and see if it works:
Open a terminal
cd to caffe root directory
Make sure the file caffe exists by listing them using ls ./build/tools
If the file is not present, type make. Running step 3 will list the file now.
Type ./build/tools/caffe, No such file error shouldn't get triggered this time. | 2 | 2 | 1 | I have a question regarding the command for running the training in Linux. I am using GoogleNet model in caffe framework for binary classification of my images. I used the following command to train my dataset
./build/tools/caffe train --solver=models/MyModelGoogLenet/quick_solver.prototxt
But I received this error
bash: ./build/tools/caffe: No such file or directory
How can I resolve this error? Any suggestions would be of great help. | ./build/tools/caffe: No such file or directory | 0.379949 | 0 | 0 | 7,193 |
36,721,348 | 2016-04-19T14:27:00.000 | 2 | 0 | 0 | 1 | bash,python-2.7,machine-learning,neural-network,deep-learning | 36,724,914 | 2 | true | 0 | 0 | You should specify absolute paths to all your files and commands, to be on the safer side. If /home/user/build/tools/caffe train still doesn't work, check if you have a build directory in your caffe root. If not, then use /home/user/tools/caffe train instead. | 2 | 2 | 1 | I have a question regarding the command for running the training in Linux. I am using GoogleNet model in caffe framework for binary classification of my images. I used the following command to train my dataset
./build/tools/caffe train --solver=models/MyModelGoogLenet/quick_solver.prototxt
But I received this error
bash: ./build/tools/caffe: No such file or directory
How can I resolve this error? Any suggestions would be of great help. | ./build/tools/caffe: No such file or directory | 1.2 | 0 | 0 | 7,193 |
36,722,859 | 2016-04-19T15:27:00.000 | 0 | 0 | 0 | 0 | python,django,pycharm | 36,729,774 | 2 | false | 1 | 0 | I've just tried it on 2016.1.2 and the auto-complete works for me for statements which handle models. I have not changed my code editing settings on PyCharm for several versions now.
Baffling. Have you perhaps tried a restart of PyCharm? | 2 | 11 | 0 | The 2016.1.2 version of PyCharm doesn't seem to autocomplete queries on Django models anymore. For example on Foo.objects.filter(some-field-lookup) the filter method doesn't get autocompleted (or any other method) and also the field-lookup parameters don't get autcompleted, which both worked in PyCharm version 5.
Is anybody else having this issue? Is this expected behavior? Is there some setting which needs to be turned on?
Restarting or invalidating the cache and restarting didn't have any effect on this | PyCharm doesn't autocomplete Django model queries anymore in 2016.1.2 | 0 | 0 | 0 | 3,482 |
36,722,859 | 2016-04-19T15:27:00.000 | 26 | 0 | 0 | 0 | python,django,pycharm | 42,135,532 | 2 | false | 1 | 0 | For me, the problem turned about to be that PyCharm wasn't aware that the site was using Django, since I didn't use PyCharm's creation tool to start the Django project. (I assume most people don't after the first few projects they try, which is why the autocompletion seems to work and then break)
Go under Settings/Languages & Frameworks/Django, and make sure that Django Support is turned on, and that the settings.py and manage.py files are correctly specified. This fixed the problem for me. | 2 | 11 | 0 | The 2016.1.2 version of PyCharm doesn't seem to autocomplete queries on Django models anymore. For example on Foo.objects.filter(some-field-lookup) the filter method doesn't get autocompleted (or any other method) and also the field-lookup parameters don't get autcompleted, which both worked in PyCharm version 5.
Is anybody else having this issue? Is this expected behavior? Is there some setting which needs to be turned on?
Restarting or invalidating the cache and restarting didn't have any effect on this | PyCharm doesn't autocomplete Django model queries anymore in 2016.1.2 | 1 | 0 | 0 | 3,482 |
36,722,975 | 2016-04-19T15:31:00.000 | 3 | 1 | 0 | 0 | python,g++,theano | 40,705,647 | 6 | false | 0 | 1 | This is the error that I experienced in my mac running jupyter notebook with a python 3.5 kernal hope this helps someone, i am sure rggir is well sorted at this stage :)
Error
Using Theano backend.
WARNING (theano.configdefaults): g++ not detected ! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string.
Cause
update of XCode (g++ compiler) without accepting terms and conditions, this was pointed out above thanks Emiel
Resolution:
type g++ --version in the mac terminal
"Agreeing to the Xcode/iOS license requires admin privileges, please re-run as root via sudo." is output as an error
launch Xcode and accept terms and conditions
return g++ --version in the terminal
Something similar to the following will be returned to show that Xcode has been fully installed and g++ is now available to keras
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 8.0.0 (clang-800.0.42.1)
Target: x86_64-apple-darwin15.6.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
Restart your machine… I am sure there are some more complicated steps that someone smarter than me can add here to make this faster
Run the model.fit function of the keras application which should run faster now … win! | 3 | 18 | 0 | I installed theano but when I try to use it I got this error:
WARNING (theano.configdefaults): g++ not detected! Theano will be unable to execute
optimized C-implementations (for both CPU and GPU) and will default to Python
implementations. Performance will be severely degraded.
I installed g++, and put the correct path in the environment variables, so it is like theano does not detect it.
Does anyone know how to solve the problem or which may be the cause? | theano g++ not detected | 0.099668 | 0 | 0 | 31,119 |
36,722,975 | 2016-04-19T15:31:00.000 | 7 | 1 | 0 | 0 | python,g++,theano | 39,568,992 | 6 | false | 0 | 1 | I had this occur on OS X after I updated XCode (through the App Store). Everything worked before the update, but after the update I had to start XCode and accept the license agreement. Then everything worked again. | 3 | 18 | 0 | I installed theano but when I try to use it I got this error:
WARNING (theano.configdefaults): g++ not detected! Theano will be unable to execute
optimized C-implementations (for both CPU and GPU) and will default to Python
implementations. Performance will be severely degraded.
I installed g++, and put the correct path in the environment variables, so it is like theano does not detect it.
Does anyone know how to solve the problem or which may be the cause? | theano g++ not detected | 1 | 0 | 0 | 31,119 |
36,722,975 | 2016-04-19T15:31:00.000 | 6 | 1 | 0 | 0 | python,g++,theano | 37,846,308 | 6 | false | 0 | 1 | On Windows, you need to install mingw to support g++. Usually, it is advisable to use Anaconda distribution to install Python. Theano works with Python3.4 or older versions. You can use conda install command to install mingw. | 3 | 18 | 0 | I installed theano but when I try to use it I got this error:
WARNING (theano.configdefaults): g++ not detected! Theano will be unable to execute
optimized C-implementations (for both CPU and GPU) and will default to Python
implementations. Performance will be severely degraded.
I installed g++, and put the correct path in the environment variables, so it is like theano does not detect it.
Does anyone know how to solve the problem or which may be the cause? | theano g++ not detected | 1 | 0 | 0 | 31,119 |
36,725,361 | 2016-04-19T17:25:00.000 | 0 | 0 | 0 | 0 | python-2.7,matplotlib,plot | 44,015,767 | 2 | false | 0 | 0 | Something that has worked for me in a similar problem (time varying heat-maps) was to run a batch job of producing several thousands such plots over night, saving each as a separate image. At 10s a figure, you can produce 3600 in 10h. You can then simply scan through the images which could provide you with the insight you're looking for. | 1 | 0 | 1 | I have a large (10-100GB) data file of 16-bit integer data, which represents a time series from a data acquisition device. I would like to write a piece of python code that scans through it, plotting a moving window of a few seconds of this data. Ideally, I would like this to be as continuous as possible.
The data is sampled at 4MHz, so to plot a few seconds of data involves plotting ~10 million data points on a graph. Unfortunately I cannot really downsample since the features I want to see are sparse in the file.
matplotlib is not really designed to do this. It is technically possible, and I have a semi-working matplotlib solution which allows me to plot any particular time window, but it's far too slow and cumbersome to do a continuous scan of incrementally changing data - redrawing the figure takes several seconds, which is far too long.
Can anyone suggest a python package or approach to doing this? | scanning plot through a large data file using python | 0 | 0 | 0 | 373
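A sketch of the overnight batch-rendering idea from the answer: memory-map the 16-bit file so it never has to fit in RAM, then save one PNG per window position. The file name, window length and step size are assumptions.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")            # headless backend for unattended batch rendering
import matplotlib.pyplot as plt

data = np.memmap("capture.bin", dtype=np.int16, mode="r")  # hypothetical file
fs = 4000000                     # 4 MHz sample rate
window = 2 * fs                  # plot two seconds per frame
step = fs                        # slide one second between frames

for k, start in enumerate(range(0, len(data) - window, step)):
    chunk = data[start:start + window]
    plt.figure(figsize=(12, 3))
    plt.plot(chunk)
    plt.title("samples %d to %d" % (start, start + window))
    plt.savefig("frame_%05d.png" % k, dpi=80)
    plt.close()
```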
36,726,799 | 2016-04-19T18:38:00.000 | 1 | 0 | 0 | 1 | python,macos,docker,docker-machine | 36,726,873 | 2 | false | 0 | 0 | There are scenarios when I update myprogram.py and need to kill the
command, transfer the updated myprogram.py file to the container, and
execute python myprogram.py again. I imagine this to be a common
scenario.
Not really. The common scenario is either:
Kill existing container
Build new image via your Dockerfile
Boot container from new image
Or:
Start container with a volume mount pointing at your source
Restart the container when you update your code
Either one works. The second is useful for development, since it has a slightly quicker turnaround. | 1 | 1 | 0 | I have a docker container that is running a command. In the Dockerfile the last line is CMD ["python", "myprogram.py"] . This runs a flask server.
There are scenarios when I update myprogram.py and need to kill the command, transfer the updated myprogram.py file to the container, and execute python myprogram.py again. I imagine this to be a common scenario.
However, I haven't found a way to do this. Since this is the only command in the Dockerfile, I can't seem to kill it. From the container's terminal, when I run ps -aux I can see that python myprogram.py is assigned a PID of 1. But when I try to kill it with kill -9 1 it doesn't seem to work.
Is there a workaround to accomplish this? My goal is to be able to change myprogram.py on my host machine, transfer the updated myprogram.py into the container, and execute python myprogram.py again. | Is there a way to stop a command in a docker container | 0.099668 | 0 | 0 | 1,751 |
36,731,567 | 2016-04-20T00:07:00.000 | 1 | 0 | 0 | 0 | python,flask,ibm-cloud | 36,745,441 | 1 | false | 1 | 0 | All Bluemix traffic goes through the IBM WebSphere® DataPower® SOA Appliances, which provide reverse proxy, SSL termination, and load balancing functions. For security reasons DataPower closes inactive connections after 2 minutes.
This is not configurable (as it affects all Bluemix users), so the only solution for your scenario is to change your program to make sure the connection is not idle for more than 2 minutes. | 1 | 0 | 0 | I have an API written with python flask running on Bluemix. Whenever I send it a request and the API takes more than 120 seconds to respond it times out. It does not return anything and it returns the following error: 500 Error: Failed to establish a backside connection.
I need it to be able to process longer requests as well. Is there any way to extend the timeout value or is there a workaround for this issue? | Bluemix Flask API Call Timeout | 0.197375 | 0 | 0 | 330 |
36,738,514 | 2016-04-20T08:50:00.000 | 0 | 0 | 0 | 0 | algorithm,python-2.7,machine-learning,scikit-learn,missing-data | 36,814,562 | 1 | false | 0 | 0 | AFAIK scikit-learn doesn't have ML algorithms that can work with missing values without preprocessing them first. R does though. | 1 | 0 | 1 | I would like to know if there are any implementations of machine learning algorithms in python which can work even if there are missing values in the dataset. Please note that I don't want algorithms imputing the missing values first.(I could have done that using the Imputer package ). I would like to know about the implementation of algorithms which work even if there are missing values present in the dataset without imputation. | Implementations of algorithms without imputation of missing values | 0 | 0 | 0 | 76 |
36,746,071 | 2016-04-20T13:54:00.000 | 0 | 0 | 0 | 0 | python,node.js,nlp,chatbot | 54,928,184 | 3 | false | 0 | 0 | Two things to think about are: How are you planning on handling the generation side of things? Entity extraction and classification are going to be useful for the Natural language understanding (NLU) side of things, but generation can be tricky in itself.
Another thing to think about is that the training and development of the pipeline of these models is often a separate problem form the deployment. The fact that you want to use node suggests that you already know about deploying software, I think. But remember that deploying large machine learning models in a pipeline can be complicated, and I suspect that these API's may offer neatly packaged pipelines for you. | 1 | 6 | 1 | I'm building a chatbot and I'm new to NLP.
(api.ai & AlchemyAPI are too expensive for my use case. And wit.ai seems to be buggy and constantly changing at the moment.)
For the NLP experts, how easily can I replicate their services locally?
My vision so far (with node, but open to Python):
entity extraction via StanfordNER
intent via NodeNatural's LogisticRegressionClassifier
training UI with text and validate/invalidate buttons (any prebuilt tools for this?)
Are entities and intents all I'll need for a chatbot? How good will NodeNatural/StanfordNER be compared to NLP-as-a-service? What headaches am I not seeing? | Building your own NLP API | 0 | 0 | 0 | 2,286 |
36,746,446 | 2016-04-20T14:09:00.000 | 0 | 0 | 0 | 0 | python,unit-testing,numpy | 36,749,915 | 1 | false | 0 | 0 | These functions are implemented in numpy/testing/utils.py. Studying that code may be your best option.
I see that assert_raises passes the task on to nose.tools.assert_raises(*args,**kwargs). So it depends on what that does. And if I recall use of this in other modules correctly, you are usually more interested in the error message raised by the Error, as opposed to displaying your own. Remember, unittests are more for your own diagnostic purposes, not as a final user-friendly tool.
assert_equal is a complex function that tests various kinds of objects, and builds the error message accordingly. It may include information about the objects.
Choices in this part of the code were determined largely by what has been useful to the developers. They are written primarily to test the numpy code itself. So being systematic is not a priority. | 1 | 0 | 1 | Contrary to np.testing.assert_equal(), np.testing.assert_raises() does not accept an err_msg parameter. Is there a clean way to display an error message when this assert fails?
More generally, why do some assert_* methods accept this parameter, while some others don't? | Use error message with numpy.testing.assert_raises() | 0 | 0 | 0 | 173 |
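Since assert_raises takes no err_msg argument, one workaround is to wrap it and re-raise with your own context, as in this sketch; the reshape call is only a convenient way to trigger a ValueError.

```python
import numpy as np
from numpy.testing import assert_raises

try:
    # reshaping 4 elements into a 3x3 array raises ValueError, so this assertion passes
    assert_raises(ValueError, np.zeros((2, 2)).reshape, (3, 3))
except AssertionError:
    # attach the custom message that assert_raises itself cannot take
    raise AssertionError("expected reshape to an incompatible shape to fail")
```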
36,748,120 | 2016-04-20T15:13:00.000 | 3 | 0 | 0 | 0 | python,python-3.x,pandas,multiprocessing,interprocess | 36,754,207 | 1 | true | 0 | 0 | To me, the most important thing you mentioned is this:
It is VERY CRITICAL that the consumer catches every single DataFrame the producer produces.
So, let's suppose you used a variable to store the DataFrame. The producer would set it to the produced value, and the consumer would just read it. That would work very fine, I guess.
But what would happen if somehow the consumer got blocked by more than one producing cycle? Then some old value would be overwritten before reading. And that's why I think a (thread-safe) queue is the way to go almost "by definition".
Besides, beware of premature optimization. If it works for your case, excellent. If some day, for some other case, performance comes to be a problem, only then you should spend the extra work, IMO. | 1 | 4 | 1 | This question is related to Python Multiprocessing. I am asking for a suitable interprocess communication data-structure for my specific scenario:
My scenario
I have one producer and one consumer.
The producer produces a single fairly small panda Dataframe every 10-ish secs, then the producer puts it on a python.multiprocess.queue.
The consumer is a GUI polling that python.multiprocess.queue every 100ms. It is VERY CRITICAL that the consumer catches every single DataFrame the producer produces.
My thinking
python.multiprocess.queue is serving the purpose (I think), and amazingly simple to use! (praise the green slithereen lord!). But I am clearly not utilizing queue's full potential with only one producer one consumer and a max of one item on the queue. That leads me to believe that there is simpler thing than queue. I tried to search for it, I got overwhelmed by options listed in: python 3.5 documentation: 18. Interprocess Communication and Networking. I am also suspecting there may be a way not involving interprocess communication data-structure at all for my need.
Please Note
Performance is not very important
I will stick with multiprocessing for now, instead of multithreading.
My Question
Should I be content with queue? or is there a more recommended way? I am not a professional programmer, so I insist on doing things the tried and tested way.
I also welcome any suggestions of alternative ways of approaching my problem.
Thanks | 1 producer, 1 consumer, only 1 piece of data to communicate, is queue an overkill? | 1.2 | 0 | 0 | 300 |
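A stripped-down sketch of the queue setup discussed above, with a None sentinel to end the run. In the real GUI the consumer would call get_nowait() from its 100 ms timer instead of blocking, and the toy DataFrame stands in for the real one.

```python
import multiprocessing as mp
import time
import pandas as pd

def producer(q):
    for i in range(3):
        df = pd.DataFrame({"value": [i, i + 1]})   # stand-in for the real frame
        q.put(df)                                  # nothing is dropped: put() blocks if the queue is full
        time.sleep(1)
    q.put(None)                                    # sentinel: tell the consumer to stop

def consumer(q):
    while True:
        df = q.get()        # a GUI would poll with q.get_nowait() in its timer instead
        if df is None:
            break
        print(df)

if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=producer, args=(q,))
    p.start()
    consumer(q)
    p.join()
```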
36,749,105 | 2016-04-20T15:52:00.000 | 0 | 0 | 0 | 0 | python-2.7,pandas,datareader,google-finance,pandas-datareader | 36,783,492 | 1 | false | 1 | 0 | That URL is a 404 - pandas isn't at fault, maybe just check the URL? Perhaps they're on different exchanges with different google finance support. | 1 | 1 | 1 | I found that some of the stock exchanges is not supported for datareader. Example, Singapore. Any workaround?
query = web.DataReader(("SGX:BLA"), 'google', start, now) returns this error:
IOError: after 3 tries, Google did not return a 200 for url 'http://www.google.com/finance/historical?q=SGX%3ABLA&startdate=Jan+01%2C+2015&enddate=Apr+20%2C+2016&output=csv
It works for IDX indonesia
query = web.DataReader(("IDX:CASS"), 'google', start, now) | Pandas: datareader unable to get historical stock data | 0 | 0 | 1 | 850 |
36,757,158 | 2016-04-20T23:47:00.000 | 0 | 0 | 0 | 0 | python,scikit-learn,logistic-regression,bernoulli-probability | 36,758,704 | 1 | false | 0 | 0 | If they are categorical - you should provide binarized version of it. I don't know how that code in R works, but you should binarize your categorical feature always. Because you have to emphasize that each value of your feature is not related to other one, i.e. for feature "blood_type" with possible values 1,2,3,4 your classifier must learn that 2 is not related to 3, and 4 is not related to 1 in any sense. These is achieved by binarization.
If you have too many features after binarization - you can reduce dimensionality of binarized dataset by FeatureHasher or more sophisticated methods like PCA. | 1 | 0 | 1 | So, I know that in R you can provide data for a logistic regression in this form:
model <- glm( cbind(count_1, count_0) ~ [features] ..., family = 'binomial' )
Is there a way to do something like cbind(count_1, count_0) with sklearn.linear_model.LogisticRegression? Or do I actually have to provide all those duplicate rows? (My features are categorical, so there would be a lot of redundancy.) | Can you use counts in sklearn logistic regression input? | 0 | 0 | 0 | 308 |
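scikit-learn has no direct equivalent of R's cbind(count_1, count_0) interface. One sketch of the closest thing is to expand each feature pattern into a y=1 row and a y=0 row and pass the counts as sample_weight, which avoids materialising all the duplicate rows; the toy numbers below are made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# toy aggregated data: one row per feature pattern with success/failure counts
X_agg = np.array([[0, 1], [1, 0], [1, 1]])
count_1 = np.array([30, 5, 12])
count_0 = np.array([10, 25, 8])

# expand each pattern into one y=1 row and one y=0 row, weighted by its count
X = np.vstack([X_agg, X_agg])
y = np.concatenate([np.ones(len(X_agg)), np.zeros(len(X_agg))])
w = np.concatenate([count_1, count_0])

clf = LogisticRegression()
clf.fit(X, y, sample_weight=w)   # weights stand in for the duplicated rows
print(clf.coef_)
```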
36,757,558 | 2016-04-21T00:29:00.000 | 0 | 0 | 0 | 0 | multithreading,user-interface,parallel-processing,wxpython | 36,760,661 | 1 | false | 0 | 1 | Yes and no.
As far as I know, it is not possible to run 2 GUIs just separated by threads.
We had a quite similar problem. And as we are using Windows mainly, we needed to work around the Python GIL, to enable the GUI to stop the subprocess which executes the tests.
In order to do so, we moved to multiprocessing library and we are using a Processpool which executes the tests (we are running multiple items at the same time - you can use just a process for your purpose). In a new process you will also be able to execute a second GUI, which is completely independent (We are doing this as well). When using another process, it gets a PID, which can be used with os.kill. The main GUI will stay stable throughout this.
To communicate with the new process we use Pyro4, but you could also use an approach with Queues to get the communication done. You cannot directly access data from another process on Windows the way you can with the threading approach.
Hope this helps.
Regards,
Michael | 1 | 0 | 0 | I just started a new job. The existing code is a wxPython GUI that is partially implemented. The gist is that there are 2 independent GUIs: one is a "status monitor with test abort button" and the other is a test executive, running tests that might take a few minutes.
At any time, the status monitor GUI abort button should be able to be pressed to stop the long running test script process running in the executive GUI.
The long script process is started in a new thread and is working from the test executive GUI, but no matter what I do, the status monitor GUI is frozen.
I've done similar things in PySide and it didn't seem to be as much of a bear as with wxPython.
I don't think this is a thread problem; it seems like it's a wxPython problem.
Is it possible to have 2 independent wxpython GUIs started from the same script? I seem to have read somewhere that im required to have only one main loop.
Any hints/examples of 2 parallel independent GUIs would be great! | 2 "parallel" wxpython GUIs possible? | 0 | 0 | 0 | 144 |
36,759,037 | 2016-04-21T03:24:00.000 | 1 | 0 | 0 | 0 | python,tensorflow | 36,870,610 | 1 | true | 0 | 0 | The model_with_buckets() function in seq2seq.py returns 2 tensors: the output and the losses. The outputs variable contains the raw output of the decoder that you're looking for (that would normally be fed to the softmax). | 1 | 0 | 1 | I was using Tensorflow sequence to sequence example code. for some reason, I don't want to add softmax to output. instead, I want to get the raw output of decoder without softmax. I was wondering if anyone know how to do it based on sequence to sequence example code? Or I need to create it from scratch or modify the the seq2seq.py (under the /tensorflow/tensorflow/python/ops/seq2seq.py)?
Thank you | tensorflow sequence to sequence without softmax | 1.2 | 0 | 0 | 389 |
36,759,068 | 2016-04-21T03:27:00.000 | 0 | 1 | 1 | 0 | python,mfc,rendering,maya,monitor | 36,766,982 | 3 | false | 0 | 0 | Use some renderfarm program as Deadline or something else. | 1 | 2 | 0 | I wrote some codes in Maya using Maya Python to render over 2,000 pictures. Since there are a lot of work for Maya to finish, during the long process of rendering, Maya may get crashed. So I have to make a module to monitor Maya. If Maya get stuck, the module has to keep Maya going and modify the mistakes. I want to know what tools can I use to achieve this function. What kind of language should I use to code this module? | How to monitor a maya program running in my computer? | 0 | 0 | 0 | 330 |
36,766,082 | 2016-04-21T09:57:00.000 | 3 | 0 | 1 | 0 | ipython,jupyter-notebook | 36,766,083 | 1 | false | 0 | 0 | Use unconfined=True to disable max-width confinement of the image:
from IPython.core.display import Image, display
display(Image('https://i.ytimg.com/vi/j22DmsZEv30/maxresdefault.jpg', width=1900, unconfined=True)) | 1 | 3 | 0 | How can I display a large image with scrollbars inside an IPython notebook output cell? The example below scales down the image to fit into the cell, and width does not have any effect.
from IPython.core.display import Image, display
display(Image('https://i.ytimg.com/vi/j22DmsZEv30/maxresdefault.jpg', width=1900)) | How to display large image from IPython notebook with scrolling? | 0.53705 | 0 | 0 | 3,089 |
36,766,231 | 2016-04-21T10:03:00.000 | 1 | 0 | 0 | 1 | python,c | 36,773,253 | 1 | false | 0 | 0 | It's impossible without modifying either kernel or ps itself. Simple process can't hide itself. But you can change process name by changing argv[0] and mimicry to another common process, like httpd, sshd, etc - that's what a lot of malware does. | 1 | 0 | 0 | I want to create a process that should not be listed in ps -ef command, while the process is running. I need this for testing an Intrusion detection system (IDS) application in Linux. | How to create a process that should not be listed in ps -ef command in linux | 0.197375 | 0 | 0 | 51 |
36,773,380 | 2016-04-21T14:57:00.000 | 0 | 0 | 0 | 0 | python,django,amazon-ec2,celery,slack-api | 36,780,059 | 1 | true | 0 | 0 | Assumptions:
You're using the outgoing webhooks from slack, not the Real Time Messaging API
You're not trying to do some kind of multiple question-answer response where state between each question & answer needs to be maintained.
Skip all the Django stuff and just use AWS Lambda to respond to user requests. That only works for fairly simple "MyBot: do_something_for_me" style things but it's working pretty well for us. Lot easier to manage as well since there's no ec2, no rds, easy deployment, etc. Just make sure you set a reasonable time limit for each Lambda request. From my experience 3 seconds is generally enough time unless you've got a bit of a larger script.
If you really really really have to maintain all this state then you might be better off just writing some kind of quick thing in Flask rather than going through all the setup of django. You'll then have to deal with all the deployment, autoscaling, backup rigmarole that you would for any web service but if you need it, well then ya need it =) | 1 | 1 | 0 | I have a bot written in Python running on Amazon EC2 with Django as a framework. The bot's end goal is to sustain conversations with multiple users on the same Slack team at once. As I understand it, Amazon will handle load-bearing between Slack teams, but I'm trying to figure out how to manage the load within a single Slack.
Right now, my bot sits in a busy loop waiting for a single user to respond. I've been doing some research on this - is Celery the right tool for the job? Should I split each conversation into a separate thread/task, or maybe have a dispatcher handle new messages? Is there a way for Slack to send an interrupt, or am I stuck with while loops?
Thanks for any help/guidance! I'm pretty new to this.
Edit: I managed to solve this problem by implementing a list of "Conversation" objects pertaining to each user. These objects save the state of each conversation, so that the bot can pick up where it left off when the user messages again. | Serving multiple users at once with a Python Slack bot | 1.2 | 0 | 1 | 982 |
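The per-user state idea mentioned in the edit can be sketched roughly as below. The Conversation class and the reply format are purely illustrative; the point is that each incoming Slack message is dispatched to stored per-user state instead of a busy loop waiting on one user.

```python
class Conversation(object):
    """Keeps per-user state so the bot can resume where it left off."""
    def __init__(self, user_id):
        self.user_id = user_id
        self.step = 0

    def handle(self, text):
        self.step += 1
        return "step %d: you said %r" % (self.step, text)

conversations = {}

def on_message(user_id, text):
    # look up (or create) the state for this user and let it produce the reply
    conv = conversations.setdefault(user_id, Conversation(user_id))
    return conv.handle(text)

print(on_message("U123", "hello"))
print(on_message("U123", "again"))
```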
36,774,887 | 2016-04-21T15:59:00.000 | 0 | 0 | 0 | 0 | python,django | 36,775,635 | 2 | false | 1 | 0 | The RelatedManager is a Manager and not a QuerySet, but it implements the database-abstraction API and because of that it has all the QuerySet methods such as get(), exclude(), filter() and all().
The difference in calling all() in a RelatedManager is that it actually performs a query in the database.
The all() method returns a QuerySet. | 1 | 0 | 0 | In django ManyToManyField(), when you refer to it, it is going to return a RelatedManager.
If you want to get the actual objects, you have to call all(), however I don't see any documents describing this behaviour, is RelatedManager a kind of QuerySet? Otherwise, why there can be an all() method?
And after calling all(), is it going to return a QuerySet? | What is the all() function in RelatedManager? | 0 | 0 | 0 | 371 |
36,779,522 | 2016-04-21T20:12:00.000 | 2 | 0 | 0 | 0 | python,r,csv | 36,780,531 | 3 | false | 0 | 0 | use sed '2636759d' file.csv > fixedfile.csv
As a test for a 40,001 line 1.3G csv, removing line 40,000 this way takes 0m35.710s. The guts of the python solution from @en_Knight (just stripping the line and writing to a temp file) is ~ 2 seconds faster for this same file.
edit OK sed (or some implementations) may not work (based on feedback from questioner)
You could, in plain bash, to remove row n from a file of N rows, file.csv, you can do head -[n-1] file.csv > file_fixed.csv and tail -[N-n] file.csv >> file_fixed.csv (in both of these the expression in brackets is replaced by a plain number).
To do this, though you need to know N. The python solution is better... | 1 | 6 | 1 | I have a ~220 million row, 7 column csv file. I need to remove row 2636759.
This file is 7.7GB, more than will fit in memory. I'm most familiar with R, but could also do this in python or bash.
I can't read or write this file in one operation. What is the best way to build this file incrementally on disk, instead of trying to do this all in memory?
I've tried to find this on SO but have only been able to find how to do this with files that are small enough to read/write in memory, or with rows that are at the beginning of the file. | remove known exact row in huge csv | 0.132549 | 0 | 0 | 1,468 |
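For reference, a streaming version of the Python approach referenced in the answer (read line by line, skip one 1-based row number, write everything else) looks roughly like this and never holds more than one line in memory:

```python
def drop_row(src, dst, row_to_skip):
    """Stream src to dst, skipping a single 1-based line number."""
    with open(src) as fin, open(dst, "w") as fout:
        for i, line in enumerate(fin, start=1):
            if i != row_to_skip:
                fout.write(line)

# drop_row("file.csv", "fixedfile.csv", 2636759)
```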
36,781,105 | 2016-04-21T21:50:00.000 | 1 | 0 | 0 | 0 | python,importerror | 63,840,105 | 6 | false | 0 | 0 | It may happen sometimes, usually in the Linux environment. You have both 2.x and 3.x versions of python installed.
So, in that case, if you are using the command python "file.py"
then by default python 2.x will run the file.
So, use the command python3 "file.py"
I was facing this issue. Maybe it can resolve someone's issue. | 2 | 37 | 0 | I am trying to import urllib.request for python 2.7.10 on PyCharm 4.5.4 on Window 10 but getting the error "ImportError: No module named request". | Import urllib.request, ImportError: No module named request | 0.033321 | 0 | 1 | 101,433 |
36,781,105 | 2016-04-21T21:50:00.000 | 12 | 0 | 0 | 0 | python,importerror | 55,922,381 | 6 | false | 0 | 0 | You'll get this error if you try running a python 3 file with python 2. | 2 | 37 | 0 | I am trying to import urllib.request for python 2.7.10 on PyCharm 4.5.4 on Window 10 but getting the error "ImportError: No module named request". | Import urllib.request, ImportError: No module named request | 1 | 0 | 1 | 101,433 |
36,783,438 | 2016-04-22T01:53:00.000 | 0 | 0 | 1 | 0 | python,visual-studio,intellisense | 36,783,730 | 1 | false | 0 | 0 | Click the warning message, that should fix it | 1 | 0 | 0 | The Refresh DB button in Python Environments is not enabled, I can't click on it, what can be the problem ? | Python VS2013 's Intellisense is not working, "Refresh DB" button is not enabled | 0 | 0 | 0 | 69 |
36,787,345 | 2016-04-22T07:14:00.000 | 0 | 0 | 1 | 0 | python,string,python-2.7,user-input,nameerror | 48,070,422 | 3 | false | 0 | 0 | In Python 2, raw_input() returns a string, and input() tries to run the input as a Python expression.
Since getting a string was almost always what you wanted, Python 3 does that with input(). | 2 | 4 | 0 | When I enter
username = str(input("Username:"))
password = str(input("Password:"))
and after running the program, I fill in the input with my name and hit enter
username = sarang
I get the error
NameError: name 'sarang' is not defined
I have tried
username = '"{}"'.format(input("Username:"))
and
password = '"{}"'.format(input("Password:"))
but I end up getting the same error.
How do I convert the input into a string and fix the error? | How to convert user input to string in python 2.7 | 0 | 0 | 0 | 37,603 |
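A minimal Python 2.7 sketch of the fix described in the answers: raw_input() already returns the typed text as a str, so no conversion or extra quoting is needed.

```python
# Python 2.7
username = raw_input("Username:")
password = raw_input("Password:")
print(type(username))       # <type 'str'>
print("Hello, " + username)
```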
36,787,345 | 2016-04-22T07:14:00.000 | 0 | 0 | 1 | 0 | python,string,python-2.7,user-input,nameerror | 44,138,391 | 3 | false | 0 | 0 | You have to explicitly use raw_input(prompt), instead of just using input(prompt).
As you are currently using Python 2.x which supports this type of formatting for receiving an input from user. | 2 | 4 | 0 | When I enter
username = str(input("Username:"))
password = str(input("Password:"))
and after running the program, I fill in the input with my name and hit enter
username = sarang
I get the error
NameError: name 'sarang' is not defined
I have tried
username = '"{}"'.format(input("Username:"))
and
password = '"{}"'.format(input("Password:"))
but I end up getting the same error.
How do I convert the input into a string and fix the error? | How to convert user input to string in python 2.7 | 0 | 0 | 0 | 37,603 |
36,790,997 | 2016-04-22T10:14:00.000 | 0 | 0 | 0 | 0 | python,django | 36,795,738 | 1 | true | 1 | 0 | You can override the /admin/base_site.html template in your project by including a template with the same relative path on your projects /template dir. | 1 | 0 | 0 | Is it possible to add a custom button in admin panel? To be more specific: I have a django app with some custom admin views, but without models. Now I can reach this app typing url. Maybe there is more appropriate way?
The difficulty is that I don't want to interact with the project, just with the reusable app. | Custom button in django admin panel | 1.2 | 0 | 0 | 496
36,791,091 | 2016-04-22T10:18:00.000 | 1 | 0 | 0 | 0 | python,django,django-models,singleton | 36,791,481 | 5 | false | 1 | 0 | Use of Django caching will be best here. You will need to use a 3rd party caching server e.g. Redis. There is Memcached too, but as you said your data is 20MB so you will need Redis as Memcached only allows 1MB at max per key.
Also using cache is very easy, you just need to sudo apt-get install redis, add CACHES setting in Django settings and you will be good to go.
Redis (or Memcached) are in-memory cache servers and hold all the cached data in memory, so getting it from Redis will be as fast as it can be. | 1 | 2 | 0 | Situation
When the Django website starts up, it need to load some data from a table in the database for computation. The data is read-only and large (e.g. 20MB).
The computation will be invoked every time a certain page is open. A module will use the data for computation. Therefore, I don't want the module to SELECT and load data every time the page is open.
Question
I guess singleton may be one of the solutions. How to implement the singleton in Django? Or is there any better solutions? | Implementation of read-only singleton in Python Django | 0.039979 | 0 | 0 | 433 |
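A rough sketch of the caching approach from the first answer, assuming the django-redis backend. The cache key, the LOCATION string and load_reference_table_from_db() are placeholders for the project's real settings and query.

```python
# settings.py (assumed django-redis backend; adjust LOCATION to your Redis server)
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
    }
}

# views.py or a service module
from django.core.cache import cache

def get_reference_table():
    data = cache.get("reference_table")
    if data is None:
        data = load_reference_table_from_db()             # hypothetical expensive SELECT
        cache.set("reference_table", data, timeout=None)  # keep until flushed or restarted
    return data
```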
36,795,203 | 2016-04-22T13:28:00.000 | 7 | 0 | 1 | 0 | python,python-2.7,canopy | 36,795,402 | 2 | true | 0 | 0 | You can't manually nuke an object from your memory in Python!
The Python Garbage Collector (GC) will automatically free up memory of objects that have no existing references any more (implementation details differ per interpreter). It's periodically checking for abandoned objects in background without your interaction.
So to get an object recycled, you have to eliminate all references to it by assigning a different value (e.g. None) to all variables that pointed to the object. You can also delete a variable name using the del statement, but as you already noticed, this only deletes the name with the reference, but not the object and its data itself. Only the GC can do that. | 1 | 7 | 0 | I am using Python (Canopy) extensively for Earth science application. Because my application is memory consuming, I am trying find way to erase variable that I don't need any more in my programs, I tried to use del command to erase the variable memory, but I found that space used by Canopy is still the same. Any ideas about how to erase variable completely from the memory.
thanks | Deleting variable does not erase its memory from RAM memory | 1.2 | 0 | 0 | 5,118 |
36,802,811 | 2016-04-22T20:29:00.000 | 0 | 0 | 1 | 0 | python,pygame,cx-freeze | 38,202,398 | 4 | false | 0 | 1 | I had that problem, the build was created and worked ok, but an ValueError: FCI error 1 take place in the msi creation. In my case, it was due to data files containing a point in your name (example AB_12345.1.fasta). I replaced the point by the underscore symbol (example AB_12345_1.fasta) and everything worked properly. I hope it helps you on something. | 3 | 3 | 0 | I want to make a msi for my PyGame game with cx_Freeze :
(I already created an execute file)
So, i used python setup.py bdist_msi, but I got this message :
File "C:\Python34\lib\msilib\__init__.py", line 213, in commit
FCICreate(filename, self.files)
ValueError: FCI error 1
An idea ? | Building msi with cx_Freeze : ValueError: FCI error 1 | 0 | 0 | 0 | 1,671 |
36,802,811 | 2016-04-22T20:29:00.000 | -1 | 0 | 1 | 0 | python,pygame,cx-freeze | 47,951,324 | 4 | false | 0 | 1 | I had the same problem. I solved my problem by deleting the build directory.
Then run python setup.py bdist_msi.
That works for me.
I wish it will help you. | 3 | 3 | 0 | I want to make a msi for my PyGame game with cx_Freeze :
(I already created an execute file)
So, i used python setup.py bdist_msi, but I got this message :
File "C:\Python34\lib\msilib\__init__.py", line 213, in commit
FCICreate(filename, self.files)
ValueError: FCI error 1
An idea ? | Building msi with cx_Freeze : ValueError: FCI error 1 | -0.049958 | 0 | 0 | 1,671 |
36,802,811 | 2016-04-22T20:29:00.000 | 0 | 0 | 1 | 0 | python,pygame,cx-freeze | 49,082,133 | 4 | false | 0 | 1 | I had the same problem. I used non-ascii characters at the path contains. I solved that by changing the path contains into ascii characters. | 3 | 3 | 0 | I want to make a msi for my PyGame game with cx_Freeze :
(I already created an execute file)
So, i used python setup.py bdist_msi, but I got this message :
File "C:\Python34\lib\msilib\__init__.py", line 213, in commit
FCICreate(filename, self.files)
ValueError: FCI error 1
An idea ? | Building msi with cx_Freeze : ValueError: FCI error 1 | 0 | 0 | 0 | 1,671 |
36,803,416 | 2016-04-22T21:14:00.000 | 2 | 0 | 0 | 1 | linux,python-2.7,debian | 37,038,569 | 1 | true | 0 | 0 | One more way to execute your script at booting time by adding below line to root's crontab
@reboot /usr/bin/python /root/simple.py
simple.py -- script need to be executed. | 1 | 1 | 0 | I'm writing a python script on Linux (debian) to install a few things, reboot, and then do more things.
I'm not sure if it's possible, but would I be able to say,
run all of my installers, reboot and resume my script from where it left off?
I would like for the user to not have to do anything (even log onto the machine, if at all possible)
Oh! Also, is there a way to keep(or Store) variables without storing them in plaintext?
Thanks! | How to reboot a host machine via python script, and on boot resume the script? | 1.2 | 0 | 0 | 415 |
36,805,233 | 2016-04-23T00:32:00.000 | 1 | 1 | 0 | 0 | java,python,performance,optimization | 36,806,181 | 3 | false | 1 | 0 | The crucial question is this one: "Java's static typing including seems to make it less prone to errors on a larger scale". The crucial word here is "seems." Sure, Java will help you catch this one particular type of error. But how important is that, and what do you have to pay for it? The overhead imposed by Java's type system means that you have to write more lines of code, which means reduced productivity. I've used both and I have no doubt that I'm more productive in Python. I have found that type-related bugs in Python are generally easy to find and fix. Keep in mind that in a professional environment you're not going to ship code without testing it pretty carefully. The bottom line for a programming environment is productivity - usable functionality per unit of effort, not the number of bugs you found and fixed during development.
My advice: if you have a working project written in Python, don't rewrite it unless you're certain there's a benefit. | 3 | 1 | 0 | First of all, I love Python, and I currently use it for most stuff. However, as a PhD student, I mostly implement prototypes for testing and evaluating ideas. This also includes that I'm usually the only one coding, and that -- while I certainly try to write half-way efficient code -- performance is not a primary issue. And for quick prototyping, Python is for me just neat.
Now I consider to go with some of my stuff more "serious", i.e., to bring it into a productive environment, make it better maintainable, and maybe more efficient. So I wonder if it's worthy to rewrite my code to, say, Java (with which I'm also reasonably familiar). I know that Python is not slow, but things like Java's static typing including seems to make it less prone to errors on a larger scale, particularly when different people work on the same project. | Rewrite Python project to Java - worth it? | 0.066568 | 0 | 0 | 648 |
36,805,233 | 2016-04-23T00:32:00.000 | 0 | 1 | 0 | 0 | java,python,performance,optimization | 36,805,273 | 3 | false | 1 | 0 | Java is inherently object oriented. Alternatively python is procedural.
As far as the ability of the language to handle large projects you can make do with either.
As far as producing more usable products I would recommend java script as opposed to java because of its viability in the browser. By embedding your js in a publicly hosted website you allow people with no coding knowledge to run your project seamlessly in the browser.
Further more all the GUI design features of HTML are available at your disposal.
That said any language has it's ups and downs and anything I've said here is simply my perception. | 3 | 1 | 0 | First of all, I love Python, and I currently use it for most stuff. However, as a PhD student, I mostly implement prototypes for testing and evaluating ideas. This also includes that I'm usually the only one coding, and that -- while I certainly try to write half-way efficient code -- performance is not a primary issue. And for quick prototyping, Python is for me just neat.
Now I consider to go with some of my stuff more "serious", i.e., to bring it into a productive environment, make it better maintainable, and maybe more efficient. So I wonder if it's worthy to rewrite my code to, say, Java (with which I'm also reasonably familiar). I know that Python is not slow, but things like Java's static typing including seems to make it less prone to errors on a larger scale, particularly when different people work on the same project. | Rewrite Python project to Java - worth it? | 0 | 0 | 0 | 648 |
36,805,233 | 2016-04-23T00:32:00.000 | 2 | 1 | 0 | 0 | java,python,performance,optimization | 36,805,510 | 3 | true | 1 | 0 | It's only worth it if it solves a real problem, note, that problem could be
I want to learn something better
I need it to go faster to reduce power requirements in my colo.
I need to hire more people and the talent pool for [insert language here]
is too small.
Insert innumerable real problems here.
Python and Java are both suitable for production. Write it in whatever makes it easiest to solve the problems you and/or your team are facing, and if you want to preempt some problems, make sure you've done your homework. Plenty of projects have died because they chose C/C++ believing performance was going to be a major factor, without thinking about the extra effort involved in using these languages well.
You mentioned maintainability. You're likely to require more code to rewrite it in Java and there's a direct correlation between Bugs and LOC. It's up for debate which one is easier to maintain. I'm sure both camps believe theirs is.
Of the two which one do you enjoy coding with the most? | 3 | 1 | 0 | First of all, I love Python, and I currently use it for most stuff. However, as a PhD student, I mostly implement prototypes for testing and evaluating ideas. This also includes that I'm usually the only one coding, and that -- while I certainly try to write half-way efficient code -- performance is not a primary issue. And for quick prototyping, Python is for me just neat.
Now I consider to go with some of my stuff more "serious", i.e., to bring it into a productive environment, make it better maintainable, and maybe more efficient. So I wonder if it's worthy to rewrite my code to, say, Java (with which I'm also reasonably familiar). I know that Python is not slow, but things like Java's static typing including seems to make it less prone to errors on a larger scale, particularly when different people work on the same project. | Rewrite Python project to Java - worth it? | 1.2 | 0 | 0 | 648 |
36,809,967 | 2016-04-23T11:04:00.000 | 9 | 0 | 1 | 0 | python | 68,356,473 | 6 | false | 0 | 0 | All the answers here are outdated since Python.org doesn't host installers for older versions of Python anymore, only source code.
And building Python on Windows is not really a walk in the park...
My solution: Pyenv.
It's cross-platform (Linux, macOS, Windows, where it's called pyenv-win), and with it you can automatically install versions from a very large list of Python releases.
Not every Python version is available, but the list is already very big.
Installation of pyenv is quite easy if you use Chocolatey.
Then:
pyenv install --list : all the versions that you can install.
and then:
pyenv install 3.9.0 for example. | 2 | 20 | 0 | How can I install python 3.4 (Windows version) when a newer version (3.5.1) is now available. My app specifically is looking for 3.4. I can't seem to find a download for any of the older versions. | How to install an older version of python | 1 | 0 | 0 | 87,824 |
36,809,967 | 2016-04-23T11:04:00.000 | 0 | 0 | 1 | 0 | python | 68,018,601 | 6 | false | 0 | 0 | I had to download it from an unofficial website as the official sites offer only the source and the exe for Windows.( was trying to download 3.5.x) | 2 | 20 | 0 | How can I install python 3.4 (Windows version) when a newer version (3.5.1) is now available. My app specifically is looking for 3.4. I can't seem to find a download for any of the older versions. | How to install an older version of python | 0 | 0 | 0 | 87,824 |
36,813,366 | 2016-04-23T16:26:00.000 | -1 | 0 | 0 | 0 | python,pycharm | 44,912,869 | 1 | false | 1 | 0 | (On a mac)
To get my editor to stop auto wrapping long lines of code I did this.
PyCharm --> Preferences --> Editor --> Code Style --> Default Options --> Right margin (columns): 999
The editor wouldn't let me set a value larger than 999.
It's not perfect but it reduced the annoyance factor quite a bit for me.
Hope it helps. | 1 | 1 | 0 | Can someone tell me if there's some way to disable the autoformatting when copy/pasting?
Every time I paste a line that's longer than the PEP-8 max line length, PyCharm automatically inserts line wraps. That's really annoying.
I'm using the professional Version.
Many thanks
rene | PyCharm 2016 and lline waprs on copy/pasting | -0.197375 | 0 | 0 | 236 |
36,816,168 | 2016-04-23T20:42:00.000 | 1 | 0 | 1 | 0 | python | 36,816,271 | 3 | false | 0 | 0 | Python's lists can be treated as arrays, but they are not arrays. They are the closest thing to Javascript arrays, though.
Numpy has high performance arrays for numerical computations, but all elements must be of the same type. | 2 | 2 | 0 | I've used JavaScript before where we use arrays. I'm following a textbook that teaches Python. I'm now learning about lists. They sound exactly like arrays. Is list simply the Python word for array, or does Python have arrays and lists which are different things? | In Python, is a list synonymous with an array? | 0.066568 | 0 | 0 | 483 |
36,816,168 | 2016-04-23T20:42:00.000 | 3 | 0 | 1 | 0 | python | 36,816,187 | 3 | false | 0 | 0 | Yes, lists are the data structure in Python.
Sidenote: Numpy (a python library) has arrays
But, as pointed out in the comments, Python does in fact have an array module, whose elements must all be of the same data type | 2 | 2 | 0 | I've used JavaScript before where we use arrays. I'm following a textbook that teaches Python. I'm now learning about lists. They sound exactly like arrays. Is list simply the Python word for array, or does Python have arrays and lists which are different things? | In Python, is a list synonymous with an array? | 0.197375 | 0 | 0 | 483
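A small sketch contrasting the three containers the answers above mention: a plain list, the standard-library array module, and a NumPy array. The values are arbitrary illustrations.
import array
import numpy as np

nums = [1, 2.5, "three"]              # list: heterogeneous, resizable, the usual workhorse
ints = array.array("i", [1, 2, 3])    # array module: every element shares one type code
vec = np.array([1, 2, 3])             # NumPy array: homogeneous, fast vectorised math

print(nums)
print(ints.tolist())
print((vec * 2).tolist())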
36,817,217 | 2016-04-23T22:32:00.000 | 3 | 0 | 0 | 0 | python,machine-learning,svm,feature-extraction | 36,822,368 | 1 | true | 0 | 0 | Yes, it will affect the performance of the SVM. It seems your test vectors are just scaled versions of your training vectors. The SVM has no way of knowing that the scaling is irrelevant in your case (unless you present it with a lot of differently scaled training vectors)
A common practice for feature vectors where the scaling is irrelevant is to scale all the test and train vectors to a common length. | 1 | 0 | 1 | Imagine I have the following feature vectors:
Training vectors:
Class 1:
[ 3, 5, 4, 2, 0, 3, 2],
[ 33, 50, 44, 22, 0, 33, 20]
Class 2:
[ 1, 2, 3, 1, 0, 0, 4],
[ 11, 22, 33, 11, 0, 0, 44]
Testing vectors:
Class 1:
[ 330, 550, 440, 220, 0, 330, 200]
Class 2:
[ 110, 220, 333, 111, 0, 0, 444]
I am using SVM, which learns from the training vectors and then classifies the test samples.
As you can see the feature vectors have very different dimensions: the training set features are very low value numbers and the test set vectors are very high value numbers.
My question is whether it is confusing for SVM to learn from such feature vectors?
Of course when I do vector scaling the difference is still there:
for example after applying standardScaler() on the feature vectors for Class 1:
Training:
[ 0.19 1.53 0.86 -0.48 -1.82 0.19 -0.48]
[ 20.39 31.85 27.80 12.99 -1.82 20.39 11.64]
Test:
[ 220.45 368.63 294.54 146.35 -1.82 220.45 132.88]
Basically, this is a real world problem, and I am asking this since I have developed a way to pre-scale those feature vectors for my particular case.
So after I would use my pre-scaling method, the feature vectors for Class 1 would become:
Training:
[ 3. 5. 4. 2. 0. 3. 2.]
[ 2.75 4.16666667 3.66666667 1.83333333 0. 2.75
1.66666667]
Test:
[ 2.84482759 4.74137931 3.79310345 1.89655172 0. 2.84482759
1.72413793]
which makes them very similar in nature.
This looks even better when standardScaler() is applied onto the pre-scaled vectors:
Training:
[ 0.6 1. 0.8 0.4 0. 0.6 0.4]
[ 0.55 0.83333333 0.73333333 0.36666667 0. 0.55
0.33333333]
Test:
[ 0.56896552 0.94827586 0.75862069 0.37931034 0. 0.56896552
0.34482759]
The ultimate question is whether my pre-scaling method is going to help the SVM in any way? This is more of a theoretical question, any insight into this is appreciated. | How does Support Vector Machine deal with confusing feature vectors? | 1.2 | 0 | 0 | 62 |
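A minimal sketch of the "scale every vector to a common length" suggestion from the accepted answer above, using the toy vectors from the question. The unit-norm helper and the linear kernel are illustrative choices, not the only way to do it.
import numpy as np
from sklearn.svm import SVC

# training and test vectors copied from the question (classes 1 and 2)
X_train = np.array([[3, 5, 4, 2, 0, 3, 2],
                    [33, 50, 44, 22, 0, 33, 20],
                    [1, 2, 3, 1, 0, 0, 4],
                    [11, 22, 33, 11, 0, 0, 44]], dtype=float)
y_train = [1, 1, 2, 2]
X_test = np.array([[330, 550, 440, 220, 0, 330, 200],
                   [110, 220, 333, 111, 0, 0, 444]], dtype=float)

def unit_norm(X):
    # divide each vector by its Euclidean length so the overall scale disappears
    return X / np.linalg.norm(X, axis=1, keepdims=True)

clf = SVC(kernel="linear")
clf.fit(unit_norm(X_train), y_train)
print(clf.predict(unit_norm(X_test)))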
36,818,862 | 2016-04-24T03:09:00.000 | 1 | 0 | 0 | 0 | python,code-generation,jinja2,template-engine,software-design | 36,819,049 | 1 | false | 1 | 0 | It doesn't matter which technique you use, you'll face three potential problems:
Using "the same (orchestration-driver) data" across all N targets.
There will be a preferred way for each target to represent that data.
You can choose a lowest common denominator (e.g., text or XML) at the price of making the target engines clumsier to write
Finding an equivalent effect in each of the N targets. Imagine you need "eval" (I hope not) in each target; even if they appear to have similar implementations, some detail will be wrong and you'll have to work around that
The performance of one or more of the targets is poor.
If you code your own implementation, you can more easily overcome 2) and 3).
If you generate code, you have more flexibility to change how a particular target runs. If you use simple text-based "templates" to generate target language code, you won't be able to generate very efficient code; you can't optimize what you generate. If you use a more sophisticated code generator, you might be able to generate/optimize the result.
Its hard to tell how much trouble you are going to have, partly because you haven't told us what this engine will do or what the target langauges are. It will also be hard to tell even with that data; until you have a running system you can't be sure there isn't a rude surprise.
People use sophisticated code generation techniques when they are facing the unknown because that maximizes flexibility and therefore makes it easier to overcome complications.
People use simpler code generation when they don't have the energy to learn how to use a sophisticated generator. If they are lucky, no problems arise and they win. If this experiment isn't a lot of work, then you should try it and hope for the best. | 1 | 1 | 0 | I'm designing an orchestration engine which can automate tasks within multiple environments: JavaScript web UIs, Python webservers, and c runtimes. One possible approach to is to write the orchestration core in each language. That seems brittle as each new engine feature will need to be added to each supported language (and bugs will have to be resolved multiple times, all while dealing with different idioms in each language). Another approach would be to write the core once in the lowest common denominator language (possibly c) and then wrap it in the other languages. But, I think deployment of the compiled libraries to browsers would be a nightmare if not impossible. So, another option I'm considering is templates and code generation. The engine could then be written once (probably in Python), and the workflows compiled to each target using jinja templates.
Does this last approach sound feasible? If I go that route, what pitfalls should I be aware of? Should I suck it up and write the engine three times? | Code generation for multiple platforms | 0.197375 | 0 | 0 | 46 |
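A toy sketch of the template-based route discussed above: one step description rendered into two target languages with Jinja2. The step dict and both templates are invented purely for illustration.
from jinja2 import Template

step = {"name": "greet", "message": "hello"}   # hypothetical orchestration step

py_tmpl = Template('def {{ name }}():\n    print("{{ message }}")\n')
js_tmpl = Template('function {{ name }}() {\n  console.log("{{ message }}");\n}\n')

print(py_tmpl.render(**step))   # the step emitted as Python
print(js_tmpl.render(**step))   # the same step emitted as JavaScript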
36,819,540 | 2016-04-24T05:15:00.000 | 0 | 0 | 0 | 0 | python,logging,file-writing,bigdata | 36,819,582 | 2 | false | 0 | 0 | It is always better to use a built-in facility unless you are facing issues with the built-in functionality.
So, use the built-in logging function. It is proven, tested and very flexible - something you cannot achieve with open() -> f.write() -> close(). | 2 | 3 | 0 | Which is more efficient? Is there a downside to using open() -> write() -> close() compared to using logger.info()?
PS. We are accumulating query logs for a university, so there's a perchance that it becomes big data soon (considering that the min-max cap of query logs per day is 3GB-9GB and it will run 24/7 constantly for a lifetime). It would be appreciated if you could explain and differentiate in great detail the efficiency in time and being error prone aspects. | Python logging vs. write to file | 0 | 1 | 0 | 1,554 |
36,819,540 | 2016-04-24T05:15:00.000 | 4 | 0 | 0 | 0 | python,logging,file-writing,bigdata | 36,819,569 | 2 | true | 0 | 0 | Use the method that more closely describes what you're trying to do. Are you making log entries? Use logger.*. If (and only if!) that becomes a performance issue, then change it. Until then it's an optimization that you don't know if you'll ever need.
Pros for logging:
It's semantic. When you see logging.info(...), you know you're writing a log message.
It's idiomatic. This is how you write Python logs.
It's efficient. Maybe not extremely efficient, but it's so thoroughly used that it has lots of nice optimizations (like not running string interpolation on log messages that won't be emitted because of loglevels, etc.).
Cons for logging:
It's not as much fun as inventing your own solution (which will invariably turn into an unfeatureful, poorly tested, less efficient version of logging).
Until you know that it's not efficient enough, I highly recommend you use it. Again, you can always replace it later if data proves that it's not sufficient. | 2 | 3 | 0 | Which is more efficient? Is there a downside to using open() -> write() -> close() compared to using logger.info()?
PS. We are accumulating query logs for a university, so there's a perchance that it becomes big data soon (considering that the min-max cap of query logs per day is 3GB-9GB and it will run 24/7 constantly for a lifetime). It would be appreciated if you could explain and differentiate in great detail the efficiency in time and being error prone aspects. | Python logging vs. write to file | 1.2 | 1 | 0 | 1,554 |
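A minimal sketch of the logging setup recommended in the answer above; the logger name and file name are placeholders. Note the lazy %-style interpolation: the message is only formatted if the record is actually emitted. For the multi-gigabyte daily volume mentioned in the question, a logging.handlers.RotatingFileHandler or TimedRotatingFileHandler would usually replace the plain FileHandler.
import logging

logger = logging.getLogger("querylog")          # placeholder logger name
logger.setLevel(logging.INFO)

handler = logging.FileHandler("queries.log")    # placeholder file name
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

query = "SELECT * FROM courses"
logger.info("executed query: %s", query)        # interpolation deferred until the record is emitted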
36,820,151 | 2016-04-24T06:48:00.000 | 1 | 0 | 1 | 0 | python,multithreading,concurrency,parallel-processing | 36,820,412 | 3 | false | 0 | 0 | In CPython, the threads are real OS threads, and are scheduled to run concurrently by the operating system. However, as you noted the GIL means that only one thread will be executing instructions at a time. | 2 | 3 | 0 | I was wondering if python threads run concurrently or in parallel?
For example, if I have two tasks and run them inside two threads will they be running simultaneously or will they be scheduled to run concurrently?
I'm aware of GIL and that the threads are using just one CPU core. | How does a python thread work? | 0.066568 | 0 | 0 | 1,338 |
36,820,151 | 2016-04-24T06:48:00.000 | 1 | 0 | 1 | 0 | python,multithreading,concurrency,parallel-processing | 36,820,546 | 3 | false | 0 | 0 | Let me explain what all that means. Threads run inside the same virtual machine, and hence run on the same physical machine. Processes can run on the same physical machine or in another physical machine. If you architect your application around threads, you’ve done nothing to access multiple machines. So, you can scale to as many cores are on the single machine (which will be quite a few over time), but to really reach web scales, you’ll need to solve the multiple machine problem anyway. | 2 | 3 | 0 | I was wondering if python threads run concurrently or in parallel?
For example, if I have two tasks and run them inside two threads will they be running simultaneously or will they be scheduled to run concurrently?
I'm aware of GIL and that the threads are using just one CPU core. | How does a python thread work? | 0.066568 | 0 | 0 | 1,338 |
36,820,171 | 2016-04-24T06:51:00.000 | 4 | 0 | 0 | 0 | python,django,amazon-web-services,amazon-elastic-beanstalk,amazon-rds | 36,820,728 | 1 | true | 1 | 0 | The easiest way to accomplish this is to SSH to one of your EC2 instances that has access to the RDS DB, and then connect to the DB from there. Make sure that your Python scripts can read your app configuration to access the configured DB, or add arguments for the DB hostname. To drop and recreate your DB, you just need to add the necessary arguments to connect to the DB. For example:
$ createdb -h <RDS endpoint> -U <user> -W ebdb
You can also create a RDS snapshot when the DB is empty, and use the RDS instance actions Restore to Point in Time or Migrate Latest Snapshot. | 1 | 1 | 0 | My database on Amazon currently has only a little data in it (I am making a web app but it is still in development) and I am looking to delete it, make changes to the schema, and put it back up again. The past few times I have done this, I have completely recreated my elasticbeanstalk app, but there seems like there is a better way. On my local machine, I will take the following steps:
"dropdb databasename" and then "createdb databasename"
python manage.py makemigrations
python manage.py migrate
Is there something like this that I can do on amazon to delete my database and put it back online again without deleting the entire application? When I tried just deleting the RDS instance a while ago and making a new one, I was having problems with elasticbeanstalk. | How to drop table and recreate in amazon RDS with Elasticbeanstalk? | 1.2 | 1 | 0 | 5,860 |
36,821,140 | 2016-04-24T09:02:00.000 | 0 | 0 | 0 | 0 | python-3.x,oauth-2.0,google-contacts-api,django-allauth | 36,996,677 | 1 | true | 1 | 0 | See, when you are logging into the website, you are probably using cookies. So basically you might be using the same session and actually the api is not called.
When you log in in incognito mode or in a different browser, that cookie cannot be used, so this time the API is called. For this reason, the token gets changed.
For example, if after a few users have signed up with Google you change the scope of the app, what happens is: if the user has enabled cookies and the cookie has not expired, when he visits your site it simply logs him in. It does not ask for the permissions that you recently added to the scope. But when he logs out and logs in again, then it asks for the additional permission and the token also gets changed.
What you should do is go through the code of django-allauth and work out how they are using the token. You must also know that to get a refresh token, you must have offline access enabled in your configuration.
I am using django-allauth for social signup and recently i added contacts to it's scope. Things are working fine. It now asks for permission to manage contacts and I am able to get contact details of users through the API.
But once i make a request to get contacts of a user(I am not saving any refresh token or accss token at that time), after an hour when i make the request again with same token, It shows this error "Invalid token: Stateless token expired".
However I can still login into the website and the token does not change. However when I logout and login again the token changes and i can again get the contacts using that token for one hour.
What's the issue? What am I missing? | Google Oauth2 Contacts API returns Invalid token: Stateless token expired after an hour | 1.2 | 0 | 0 | 252 |
36,824,269 | 2016-04-24T14:17:00.000 | 1 | 0 | 1 | 1 | python,python-3.x,cmd,python-import | 36,824,295 | 1 | false | 0 | 0 | Save the program with a .py extension. For example: hello.py
Then run it with python <script_name>.py. For example: python hello.py | 1 | 0 | 0 | I am trying to run a python program
import random
random.random()
Written in notepad in two different lines,I want to run it in cmd.how to do it? | Running python from notepad in cmd | 0.197375 | 0 | 0 | 45 |
36,826,570 | 2016-04-24T17:30:00.000 | -2 | 0 | 0 | 0 | python,python-3.x,turtle-graphics | 36,827,757 | 3 | false | 0 | 1 | Try exitonclick() or done() at the end of the file to close the window . | 2 | 16 | 0 | I'm working on a simple program in Python 3.5 that contains turtle graphics
and I have a problem: after the turtle work is finished the user has to close the window manually.
Is there any way to program the window to close after the turtle work is done? | How to close the Python turtle window after it does its code? | -0.132549 | 0 | 0 | 42,886 |
36,826,570 | 2016-04-24T17:30:00.000 | 18 | 0 | 0 | 0 | python,python-3.x,turtle-graphics | 42,283,059 | 3 | true | 0 | 1 | turtle.bye(), aka turtle.Screen().bye(), closes a turtle graphics window.
Usually, a lack of turtle.mainloop(), or one of its variants, will cause the window to close because the program will exit, closing everything. turtle.mainloop() should be the last statement executed in a turtle graphics program unless the script is run from within Python IDLE -n which disables turtle.mainloop() and variants.
turtle.Screen().mainloop() and turtle.done() are variants of turtle.mainloop().
turtle.exitonclick() aka turtle.Screen().exitonclick() binds the screen click event to do a turtle.bye() and then invokes turtle.mainloop() | 2 | 16 | 0 | I'm working on a simple program in Python 3.5 that contains turtle graphics
and I have a problem: after the turtle work is finished the user has to close the window manually.
Is there any way to program the window to close after the turtle work is done? | How to close the Python turtle window after it does its code? | 1.2 | 0 | 0 | 42,886 |
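A minimal sketch combining the calls described in the answers above: draw something, then use ontimer to schedule turtle.bye() so the window closes on its own; the square and the 2-second delay are arbitrary.
import turtle

t = turtle.Turtle()
for _ in range(4):                # draw a simple square
    t.forward(100)
    t.left(90)

screen = turtle.Screen()
screen.ontimer(turtle.bye, 2000)  # close the window 2000 ms after the drawing is done
turtle.mainloop()                 # keep the window responsive until bye() fires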
36,826,909 | 2016-04-24T18:02:00.000 | 0 | 0 | 1 | 0 | python,django | 36,826,937 | 2 | false | 1 | 0 | You'll define another model that has foreign key to the main model. | 1 | 1 | 0 | I'm looking for a field that I can use to be defined in my model which is essentially a list because it'll be used to store multiple string values. Obviously CharField cannot be used. | A list model field for Models.Model | 0 | 0 | 0 | 46 |
36,827,155 | 2016-04-24T18:24:00.000 | 0 | 0 | 0 | 0 | python,arrays,list,numpy,slice | 36,827,834 | 6 | false | 0 | 0 | A simple list comprehension can do the job:
[ L[i] for i in range(len(L)) if i%3 != 2 ]
For chunks of size n
[ L[i] for i in range(len(L)) if i%(n+1) != n ] | 1 | 3 | 1 | Edited for the confusion in the problem, thanks for the answers!
My original problem was that I have a list [1,2,3,4,5,6,7,8], and I want to select every chunk of size x with a gap of one. So if I want to select every other chunk of size 2, the outcome would be [1,2,4,5,7,8]. A chunk size of three would give me [1,2,3,5,6,7].
I've searched a lot on slicing and I couldn't find a way to select chunks instead of elements. Making multiple slice operations and then joining and sorting seems a little too expensive. The input can be either a Python list or a NumPy ndarray. Thanks in advance. | Python/Numpy fast way to selecting every nth chunk in list | 0 | 0 | 0 | 1,225
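A complete run of the comprehension from the answer above, plus an equivalent NumPy boolean-mask version; both reproduce the [1, 2, 4, 5, 7, 8] result from the question.
import numpy as np

L = [1, 2, 3, 4, 5, 6, 7, 8]
n = 2                                    # keep chunks of size 2, skip 1 element between them

kept = [L[i] for i in range(len(L)) if i % (n + 1) != n]
print(kept)                              # [1, 2, 4, 5, 7, 8]

a = np.array(L)
mask = np.arange(a.size) % (n + 1) != n  # the same rule expressed as a boolean mask
print(a[mask])                           # [1 2 4 5 7 8]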
36,828,383 | 2016-04-24T20:11:00.000 | 0 | 0 | 0 | 0 | python,debugging,pyqt4 | 36,916,323 | 2 | true | 0 | 1 | I found the solution: I wrapped that function in a thread, and now it's working properly without freezing or lag. | 1 | 0 | 0 | I have designed a GUI with PyQt4. It has two buttons. One is a start button that runs a start.py file, and the other button executes a stop.py file that stops the start.py pid.
These start.py and stop.py files are in a remote location. I connect there with SSH and paramiko.
When I click the start button, the GUI freezes and never responds. I can only get out of the situation by closing the program. I know the problem: there is a while loop in start.py and it never ends.
When I click the start button, it waits for the while loop.
I want to run start.py and I don't want to wait for the loop. It must run in the background or similar.
What can I do? I tried to trigger it with another .py file using the subprocess method, but no success; still the same problem. | Qt freezing cause of the loop? | 1.2 | 0 | 0 | 236
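A bare-bones sketch of the "run it in a thread" fix from the answer above: the blocking remote call is moved off the GUI thread so the click handler returns immediately. remote_start() is a stand-in for the questioner's paramiko code, and in a real PyQt application a QThread is often preferred over threading.Thread.
import threading
import time

def remote_start():
    # stand-in for the paramiko call that runs start.py and blocks on its loop
    time.sleep(10)

def on_start_clicked():
    worker = threading.Thread(target=remote_start)
    worker.daemon = True      # don't keep the app alive just because of this thread
    worker.start()            # returns immediately, so the GUI never freezes

on_start_clicked()
print("the GUI thread is free while the remote script keeps running")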
36,831,877 | 2016-04-25T04:08:00.000 | 0 | 0 | 1 | 0 | python | 36,831,904 | 5 | false | 0 | 0 | Instead of just double-clicking the file, run it from the command line. A terminal window that your program automatically created will also be automatically closed when the program ends, but if you open a terminal yourself and run the program from the command line, it won't touch the open terminal and you can read the error. | 1 | 3 | 0 | So I know I can make Python executable using pyinstaller.
However, every time it raises an error, it instantly ends the program, so I can't see what the error is.
I know I probably can use time.sleep(30000) to stop it.
But if the code raises an error before it reaches time.sleep(30000), it will just shut down.
To sum up, how do I keep it from shutting down, so I can see where the mistake is? | How to make Python executable pause when it raises an error? | 0 | 0 | 0 | 5,599
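One common pattern for this, sketched here as an addition to the answer above rather than taken from it: wrap the program in try/except, print the traceback, and wait for a key press so the console stays open. main() is a placeholder for the real program.
import traceback

def main():
    # placeholder for the real program logic
    raise ValueError("something went wrong")

if __name__ == "__main__":
    try:
        main()
    except Exception:
        traceback.print_exc()             # show the error instead of vanishing
    input("Press Enter to exit...")       # keep the console window open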
36,834,234 | 2016-04-25T07:20:00.000 | 25 | 0 | 1 | 0 | python,multithreading,gevent,eventlet,greenlets | 36,857,272 | 1 | true | 0 | 0 | You definitely don't want greenlet for this purpose, because it's a low level library on top of which you can create light thread libraries (like Eventlet and Gevent).
Eventlet, Gevent and more similar libraries provide excellent toolset for IO-bound tasks (waiting for read/write on file, network).
Likely, most of your GUI code will wait for other threads (at this point green/light/OS thread is irrelevant) to finish, which is a perfect target for above mentioned libraries.
All green thread libraries are mostly the same. Try all and decide which one suits your project best.
But also it's possible that you'll need to extract some things into a separate OS thread due to requirements of OS level GUI layer.
Considering that and better implementation of thread lock in Python3 you may want to just stick with native threading module if your application doesn't need hundreds or more threads. | 1 | 26 | 0 | I'm trying to create a GUI framework that will have an event-loop. some threads to handle the UI and some for event handling. I've searched a little bit and found these three libraries and I'm wondering which one is better to use? what are the pros and cons?
I could use one of these three libraries or even create something myself using Python threads or the concurrent library.
I would appreciate sharing any kind of experience, benchmark and comparison. | Eventlet vs Greenlet vs gevent? | 1.2 | 0 | 0 | 24,553 |
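A minimal gevent sketch of the IO-bound pattern the answer above describes: several green threads waiting on (simulated) IO run cooperatively inside one OS thread; the sleep stands in for a network or file wait.
import gevent

def fetch(name):
    gevent.sleep(1)                  # cooperative wait, e.g. a network call
    print(name, "done")

jobs = [gevent.spawn(fetch, "task-%d" % i) for i in range(3)]
gevent.joinall(jobs)                 # all three finish after about 1 second, not 3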
36,835,341 | 2016-04-25T08:23:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,powershell,pip | 64,056,151 | 15 | false | 0 | 0 | In case the first solution didn't work, give attention to your path, there is a slight chance instead of writing :
C:\Users\\AppData\Local\Programs\Python\Python38-32
you wrote
C:/Users/a610580/AppData/Local/Programs/Python/Python38-32
If you couldn't execute pip install selenium in Visual Studio, run it in cmd first, then close and reopen Visual Studio. | 8 | 36 | 0 | I tried to install PySide but I got error from the power shell as follows:
pip : The term 'pip' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the
name, or if a path was included, verify that the path is correct and
try again. At line:1 char:1
+ pip install -U PySide
+ ~~~
+ CategoryInfo : ObjectNotFound: (pip:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException | 'pip' is not recognized | 0 | 0 | 0 | 208,029 |
36,835,341 | 2016-04-25T08:23:00.000 | 3 | 0 | 1 | 0 | python,python-2.7,powershell,pip | 63,097,732 | 15 | false | 0 | 0 | I discovered that using powershell PIP wasn't recognized, but in the CMD it is.
Make sure you're actually using the cmd and not powershell to run it. | 8 | 36 | 0 | I tried to install PySide but I got error from the power shell as follows:
pip : The term 'pip' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the
name, or if a path was included, verify that the path is correct and
try again. At line:1 char:1
+ pip install -U PySide
+ ~~~
+ CategoryInfo : ObjectNotFound: (pip:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException | 'pip' is not recognized | 0.039979 | 0 | 0 | 208,029 |
36,835,341 | 2016-04-25T08:23:00.000 | 2 | 0 | 1 | 0 | python,python-2.7,powershell,pip | 65,664,454 | 15 | false | 0 | 0 | This work for me,
Download pip by modifying the Python Installation.
Step 1 - Open Apps & Features
Step 2 - Find Python and click on it
Step 3 - Press Modify
Step 4 - Select pip
Step 5 - Select Add Python to environment variables and install everything
This will install pip and add both, Python and pip to your environment variables. | 8 | 36 | 0 | I tried to install PySide but I got error from the power shell as follows:
pip : The term 'pip' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the
name, or if a path was included, verify that the path is correct and
try again. At line:1 char:1
+ pip install -U PySide
+ ~~~
+ CategoryInfo : ObjectNotFound: (pip:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException | 'pip' is not recognized | 0.02666 | 0 | 0 | 208,029 |
36,835,341 | 2016-04-25T08:23:00.000 | 4 | 0 | 1 | 0 | python,python-2.7,powershell,pip | 63,079,986 | 15 | false | 0 | 0 | Just reinstall python and click add to PATH in the installer! | 8 | 36 | 0 | I tried to install PySide but I got error from the power shell as follows:
pip : The term 'pip' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the
name, or if a path was included, verify that the path is correct and
try again. At line:1 char:1
+ pip install -U PySide
+ ~~~
+ CategoryInfo : ObjectNotFound: (pip:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException | 'pip' is not recognized | 0.053283 | 0 | 0 | 208,029 |
36,835,341 | 2016-04-25T08:23:00.000 | 7 | 0 | 1 | 0 | python,python-2.7,powershell,pip | 62,570,763 | 15 | false | 0 | 0 | Basically, you need to add the path of your pip installation to your PATH system variable.
First Option
Download pip by modifying the Python installation.
Step 1 - Open Apps & Features
Step 2 - Find Python and click on it
Step 3 - Press Modify
Step 4 - Select pip
Step 5 - Select Add Python to environment variables and install everything
This will install pip and add both Python and pip to your environment variables.
Second Option
By default, pip is installed in C:\Python34\Scripts\pip
To add the path of your pip installation to your PATH variable, follow these steps.
Step 1 - Search for environment variables and open Edit the system environment variables
Step 2 - Open Environment Variables...
Step 3 - Find your PATH variable and select Edit
Step 4 - Paste the location to your pip installation (By default, it's C:\Python34\Scripts\pip) | 8 | 36 | 0 | I tried to install PySide but I got error from the power shell as follows:
pip : The term 'pip' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the
name, or if a path was included, verify that the path is correct and
try again. At line:1 char:1
+ pip install -U PySide
+ ~~~
+ CategoryInfo : ObjectNotFound: (pip:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException | 'pip' is not recognized | 1 | 0 | 0 | 208,029 |
36,835,341 | 2016-04-25T08:23:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,powershell,pip | 69,472,319 | 15 | false | 0 | 0 | Delete/uninstall the old version of Python, then go to the path of the new version - C:\Users\Administrator\AppData\Local\Programs\Python\Python310 (basically, when you install Python this is the default path)
Copy the above path.
Go to This PC > Advanced system settings, go to Environment Variables, click on Path, click New, then paste the above path.
C:\Users\Administrator\AppData\Local\Programs\Python\Python310\Scripts
add this path too and then ok. | 8 | 36 | 0 | I tried to install PySide but I got error from the power shell as follows:
pip : The term 'pip' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the
name, or if a path was included, verify that the path is correct and
try again. At line:1 char:1
+ pip install -U PySide
+ ~~~
+ CategoryInfo : ObjectNotFound: (pip:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException | 'pip' is not recognized | 0.013333 | 0 | 0 | 208,029 |
36,835,341 | 2016-04-25T08:23:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,powershell,pip | 71,831,362 | 15 | false | 0 | 0 | if you have already set the path correctly and still getting the same error then just follow this:
open your vs code and create a new project and copy all the code from this link -https://bootstrap.pypa.io/get-pip.py
Name this project as get-pip.py and run the code in vs code.
pip will get downloaded to the location where you have set the path for saved projects in your VS Code.
Copy get-pip.py into your main python folder.
You are good to go. | 8 | 36 | 0 | I tried to install PySide but I got error from the power shell as follows:
pip : The term 'pip' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the
name, or if a path was included, verify that the path is correct and
try again. At line:1 char:1
+ pip install -U PySide
+ ~~~
+ CategoryInfo : ObjectNotFound: (pip:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException | 'pip' is not recognized | 0 | 0 | 0 | 208,029 |
36,835,341 | 2016-04-25T08:23:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,powershell,pip | 59,978,789 | 15 | false | 0 | 0 | Simply go to the Python root folder, e.g. \python38-32, and then go into the \Scripts subfolder, which contains pip.exe. You can install flawlessly from there. However, to avoid having to navigate through those folders every time, you should reset the PATH variable as the answer above mentioned.
pip : The term 'pip' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the
name, or if a path was included, verify that the path is correct and
try again. At line:1 char:1
+ pip install -U PySide
+ ~~~
+ CategoryInfo : ObjectNotFound: (pip:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException | 'pip' is not recognized | 0 | 0 | 0 | 208,029 |
36,836,101 | 2016-04-25T09:04:00.000 | 0 | 0 | 0 | 0 | python-2.7,web-applications,pyramid | 36,873,321 | 2 | false | 1 | 0 | If you're using jinja, try this:
<div class="html-content">{{scraped_html|safe}}</div> | 1 | 0 | 0 | Please, I am working on a site where I would scrape another website's HTML table source code and append it to my template before rendering my page.
I have written the script, which stores the HTML code in a variable, but I don't know how to append it.
Kindly suggest. | How to modify a pyramid template on the fly before rendering | 0 | 0 | 0 | 45 |
36,839,650 | 2016-04-25T11:46:00.000 | 2 | 0 | 1 | 1 | python,linux,jupyter | 36,839,885 | 3 | true | 0 | 0 | You can use the jupyter console -i command to run an interactive jupyter session in your terminal. From there you can run import <my_script.py>. Do note that this is not the intended use case of either jupyter or the notebook environment. You should run scripts using your normal python interpreter instead. | 2 | 1 | 0 | I want to execute one python script in Jupyter, but I don't want to use the web browser (IPython Interactive terminal), I want to run a single command in the Linux terminal to load & run the python script, so that I can get the output from Jupyter.
I tried to run jupyter notebook %run <my_script.py>, but it seems jupyter can't recognize %run variable.
Is it possible to do that? | How to run python script on Jupyter in the terminal? | 1.2 | 0 | 0 | 11,122 |
36,839,650 | 2016-04-25T11:46:00.000 | 0 | 0 | 1 | 1 | python,linux,jupyter | 41,051,815 | 3 | false | 0 | 0 | You can run this command to run an interactive jupyter session in your terminal.
jupyter notebook | 2 | 1 | 0 | I want to execute one python script in Jupyter, but I don't want to use the web browser (IPython Interactive terminal), I want to run a single command in the Linux terminal to load & run the python script, so that I can get the output from Jupyter.
I tried to run jupyter notebook %run <my_script.py>, but it seems jupyter can't recognize %run variable.
Is it possible to do that? | How to run python script on Jupyter in the terminal? | 0 | 0 | 0 | 11,122 |
36,839,755 | 2016-04-25T11:51:00.000 | 1 | 0 | 1 | 0 | python,module | 36,839,938 | 2 | false | 0 | 0 | Static method is something used by OOP, functions are used in structural programming. Since OOP is modern - you know, what I'm trying to say. All in all - that is an entry in a dict structure...
To me - group by logic, not the purpose. For instance some projects have subdirectories for models, views, and controllers. If you try to change user functionality - you are supposed to look around, since functionality is in different locations. I know that html templates are located separately to make webmaster's work easier. | 2 | 1 | 0 | I have a project, simulating numerics for my diploma thesis in physics, I think one should call this a rather small project... there might be maybe a hundred functions in the end (each doing some rather small logical block).
Somehow, I'm reluctant to put this all into a package, it feels "too big" a tool (and also it feels nicer to have just some text files importing each other than a more complex folder structure)
Is there a reason not to use classes with @staticmethods instead of grouping your functions into packages?
In short: I want to have more than one level of nested functions (not just file -> function), because in this case I have to search for the correct names and type them each time. It should be like: type np. -> get a list in the editor -> choose linalg. -> get a list -> choose the function. Is this possible without building a package, as indicated above?
I hope this question is not just a matter of opinion but sufficiently "answerable". Sorry if not. | Python, advice on grouping functions | 0.099668 | 0 | 0 | 1,286 |
36,839,755 | 2016-04-25T11:51:00.000 | 0 | 0 | 1 | 0 | python,module | 36,840,400 | 2 | true | 0 | 0 | You could group your functions into modules without creating a class to hold a bunch of static methods. You would only want to go that route if you felt there were instances of the class you were creating and there was a need to act outside the instance level (maybe on a list of instances, or setting some global variable that is used in the creation of the instances). Based on your description it sounds more like you should head towards organizing the functions into modules (possibly by responsibility or by the data acted on in the overall program). | 2 | 1 | 0 | I have a project, simulating numerics for my diploma thesis in physics, I think one should call this a rather small project... there might be maybe a hundred functions in the end (each doing some rather small logical block).
Somehow, I'm reluctant to put this all into a package, it feels "too big" a tool (and also it feels nicer to have just some text files importing each other than a more complex folder structure)
Is there a reason not to use classes with @staticmethods instead of grouping your functions into packages?
In short: I want to have more than one level of nested functions (not just file -> function), because in this case I have to search for the correct names and type them each time. It should be like: type np. -> get a list in the editor -> choose linalg. -> get a list -> choose the function. Is this possible without building a package, as indicated above?
I hope this question is not just a matter of opinion but sufficiently "answerable". Sorry if not. | Python, advice on grouping functions | 1.2 | 0 | 0 | 1,286 |
36,841,121 | 2016-04-25T12:50:00.000 | 2 | 0 | 0 | 1 | python,mongodb,tornado-motor,motordriver | 36,842,258 | 2 | true | 0 | 0 | No there isn't. Motor is a MongoDB driver, it does basic operations but doesn't provide many conveniences. An Object Document Mapper (ODM) library like MongoTor, built on Motor, provides higher-level features like schema validation.
I don't vouch for MongoTor. Proceed with caution. Consider whether you really need an ODM: mongodb's raw data format is close enough to Python types that most applications don't need a layer between their code and the driver. | 1 | 2 | 0 | There is a way to define MongoDB collection schema using mongoose in NodeJS. Mongoose verifies the schema at the time of running the queries.
I have been unable to find a similar thing for Motor in Python/Tornado. Is there a way to achieve a similar effect in Motor, or is there a package which can do that for me? | Is there a way to define a MongoDB schema using Motor? | 1.2 | 1 | 0 | 1,449 |
36,841,334 | 2016-04-25T12:59:00.000 | 1 | 0 | 0 | 1 | python,ios,lldb | 36,847,673 | 1 | true | 0 | 0 | There isn't such a thing built into lldb, but presumably you could set a timer in Python and have it kill the debug session if that's appropriate.
Note, when you restart the device, the connection from lldb to the remote debug server should close, and lldb should detect that it closed and quit the process. It won't exit when that happens by default, but presumably whatever you have waiting on debugger events can detect the debuggee's exit and exit or whatever you need it to do.
Note, if lldb is waiting on input from debugserver (if the program is running) then it should notice this automatically, since the select call will return with EOF. But if the process is stopped when you close the connection, lldb probably won't notice that till it goes to read something.
In the latter case, you should be able to have lldb react to the stop that indicates the "needle" is found, and kill the debug session by hand. | 1 | 0 | 0 | I have a Program written in python for automated testing on mobile devices (iOS & Android). The proper workflow of this program is as follows (for smoke tests):
Deploy executable to USB-connected device (.ipa or .app) using ios-deploy
Start Application (debugging process) --> writes to stdout.
Write output into Pipe --> this way it is possible to read the output of the debugging process parallel to it.
If the searched needle is detected in the output, the device is restarted (this is quite a dirty workaround, I am going to insert a force-stop method or something similar)
My Problem is: When the needle is detected in the output of the debug process, the lldb session is interrupted, but not exited. To exit the lldb session, I have to reconnect the device or quit terminal and open it again.
Is there a possibility to append something like a "time-to-live-flag" to the lldb call to determine how long the lldb session should run until it exits auomatically? Another way I can imagine how to exit the lldb session is to join the session again after the device is restarted and then exit it, but it seems that lldb is just a subprocess of ios-deploy. Therefore I have not found any possibility to get access to the lldb process. | Quit LLDB session after a defined amount of time | 1.2 | 0 | 0 | 168 |
36,847,022 | 2016-04-25T17:17:00.000 | 5 | 0 | 0 | 0 | python,numpy-random | 44,995,504 | 7 | false | 0 | 0 | What is normally called a random number sequence in reality is a "pseudo-random" number sequence because the values are computed using a deterministic algorithm and probability plays no real role.
The "seed" is a starting point for the sequence and the guarantee is that if you start from the same seed you will get the same sequence of numbers. This is very useful for example for debugging (when you are looking for an error in a program you need to be able to reproduce the problem and study it, a non-deterministic program would be much harder to debug because every run would be different). | 2 | 18 | 1 | I have noticed that you can put various numbers inside of numpy.random.seed(), for example numpy.random.seed(1), numpy.random.seed(101). What do the different numbers mean? How do you choose the numbers? | What numbers that I can put in numpy.random.seed()? | 0.141893 | 0 | 0 | 20,730 |
36,847,022 | 2016-04-25T17:17:00.000 | 0 | 0 | 0 | 0 | python,numpy-random | 60,006,676 | 7 | false | 0 | 0 | One very specific answer: np.random.seed can take values from 0 and 2**32 - 1, which interestingly differs from random.seed which can take any hashable object. | 2 | 18 | 1 | I have noticed that you can put various numbers inside of numpy.random.seed(), for example numpy.random.seed(1), numpy.random.seed(101). What do the different numbers mean? How do you choose the numbers? | What numbers that I can put in numpy.random.seed()? | 0 | 0 | 0 | 20,730 |
36,847,349 | 2016-04-25T17:35:00.000 | 0 | 1 | 1 | 0 | python,unit-testing,pycharm,nose,python-unittest | 59,145,768 | 2 | false | 0 | 0 | One thing I found is that in my particular case the Test class was a subclass of unittest.TestCase defined in a local module. There is a known bug in PyCharm that has been around for years where it sometimes does not fully see a local module that is in your virtualenv, for example marking the imports as unknown.
When I did that workaround for the other bug the problem went away. So it seems pycharm machinery did not recognize my Test class as a unittest.TestCase due to the other issue. | 1 | 4 | 0 | I have a python package where all my unittest test classes are stored in modules in a subpackage mypkg.tests. In the tests/__init__.py file I have a function called suite. I normally run these tests by calling python setup.py test which has test_suite='satpy.tests.suite'. Is it possible to run this test suite from pycharm?
The reason I have the suite function is that it only contains tests that are ready to be run from my continuous integration, but other failing tests exist in the directory (from older versions of the package). I could also see this being useful for selecting quick unittests versus long running tests. I've tried running as a script, function as nosetest or unittest configurations. I've tried adding if __name__ == "__main__": and other types of command line running methods with no success.
Is there a way to run only some tests from a pycharm run configuration? | PyCharm run select unittests | 0 | 0 | 0 | 1,707 |
36,850,685 | 2016-04-25T20:41:00.000 | 0 | 0 | 1 | 0 | python,windows,python-sphinx | 36,852,158 | 1 | false | 0 | 0 | I reinstalled anaconda and it seemed to solve the problem..I'm not sure if it is something to do with the upgraded version. | 1 | 1 | 0 | I'm having a strange problem with sphinx-quickstart. Everything was working fine, but I decided to upgrade all my packages since I wasn't able to get iPython3 notebooks to render properly (with syntax highlighting and nbsphinx). I did this using the pip install sphinx --upgrade command.
Since then, my sphinx has been broken. I can run sphinx-quickstart and sphinx-build from the command window, but it hangs with no output. Strangely, if I navigate to the folder and execute sphinx-quickstart directly, it opens fine with correct output.
In addition if I type out the whole path "C:\Users\XX\Anaconda3\Scripts\sphinx-quickstart" it runs fine.
I'm not sure what kind of problem would cause this behavior... | Running sphinx-quickstart from command line hangs on windows | 0 | 0 | 0 | 442 |
36,855,905 | 2016-04-26T05:24:00.000 | 5 | 0 | 1 | 0 | python-3.x,spyder | 36,875,757 | 2 | true | 0 | 0 | The IPython console within Spyder allows you to use pip. So, in the example, you could do:
[1] !pip install numdifftools | 1 | 3 | 0 | I have installed numdifftools and it works in Python shell. But in Spyder, I get this error which don't know how to solve!
ImportError: No module named 'numdifftools' | Spyder doesn't recognise my library, ImportError: No module named 'numdifftools' | 1.2 | 0 | 0 | 28,541 |
36,856,532 | 2016-04-26T06:09:00.000 | 1 | 0 | 0 | 0 | python,opencv | 36,857,440 | 3 | true | 0 | 0 | Regarding your question about the haar cascades. You can use them to classify the images the way you want:
Train two haar cascades, one for cars and one for bikes. Both cascades will return a value of how certain they are, that the image contains the object they were trained for. If both are uncertain, the image probably contains nothing. Otherwise you take the class with the higher certainty for the content of the image. | 1 | 4 | 1 | I am new to OpenCV.
I am using OpenCV 3.1 and python 2.7.
I have 5 images of bikes and 5 images of cars.
I want to find out given any image is it a car or a bike .
On the internet I found out that we can train using Haar cascades,
but most of the examples contain only one trained class. That means the user trains only on car images, and
with a query image they try to find whether it is a car or not,
but I want to check whether it is a car, a bike, or nothing.
I want to match images based on the shape of the objects.
Another option I was thinking of is to take the query image, compare it with the stored images, and give the result depending on similarity. But I know this would take longer, which would not be good.
Are there any better options? There is also template matching, but I don't know which would be the better option for this kind of solution since I don't have much knowledge about OpenCV. | OpenCV to recognize image using python | 1.2 | 0 | 0 | 1,017
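A rough sketch of the two-cascade idea from the answer above. The files cars.xml and bikes.xml are assumed to be cascades the questioner has trained, query.jpg is a placeholder image, and comparing raw detection counts is only a crude stand-in for a proper confidence score.
import cv2

car_cascade = cv2.CascadeClassifier("cars.xml")    # assumed user-trained cascade
bike_cascade = cv2.CascadeClassifier("bikes.xml")  # assumed user-trained cascade

img = cv2.imread("query.jpg")                      # placeholder query image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

cars = car_cascade.detectMultiScale(gray, 1.1, 5)
bikes = bike_cascade.detectMultiScale(gray, 1.1, 5)

if len(cars) == 0 and len(bikes) == 0:
    print("neither a car nor a bike detected")
elif len(cars) >= len(bikes):
    print("car")
else:
    print("bike")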
36,859,840 | 2016-04-26T08:49:00.000 | 0 | 0 | 0 | 0 | python,numpy,fft | 36,976,133 | 1 | true | 0 | 0 | Yes. Apply fftfreq to each spatial vector (x and y) separately. Then create a meshgrid from those frequency vectors.
Note that you need to use fftshift if you want the typical representation (zero frequencies in center of spatial spectrum) to both the output and your new spatial frequencies (before using meshgrid). | 1 | 0 | 1 | I know that for fft.fft, I can use fft.fftfreq. But there seems to be no such thing as fft.fftfreq2. Can I somehow use fft.fftfreq to calculate the frequencies in 2 dimensions, possibly with meshgrid? Or is there some other way? | How to extract the frequencies associated with fft2 values in numpy? | 1.2 | 0 | 0 | 343 |
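A short sketch of the per-axis fftfreq approach described above; the grid size and the spacings dx, dy are arbitrary placeholders.
import numpy as np

ny, nx = 64, 128
dx, dy = 0.1, 0.2                         # sample spacing along x and y

kx = np.fft.fftfreq(nx, d=dx)             # frequencies for axis 1 (x)
ky = np.fft.fftfreq(ny, d=dy)             # frequencies for axis 0 (y)
KX, KY = np.meshgrid(kx, ky)              # 2-D grids matching the fft2 output shape

spectrum = np.fft.fft2(np.random.rand(ny, nx))
print(spectrum.shape, KX.shape, KY.shape)  # all (64, 128)

# for the usual centred view, shift the spectrum and the frequency grids together
spectrum_c = np.fft.fftshift(spectrum)
KX_c, KY_c = np.fft.fftshift(KX), np.fft.fftshift(KY)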
36,861,358 | 2016-04-26T09:51:00.000 | 0 | 1 | 0 | 0 | python,node.js,django,nginx | 36,862,095 | 1 | false | 0 | 0 | You really should move away from using the IP as the restriction. Not only can the IP be changed allowing for an intermediary to replay the OTP.
A combination of the visiting IP along with additional unique vectors would serve as a better method of identifying the visitor and associating the OTP with their access.
Because of this the throttling you wish to implement would be better served at the code or application level vs. your web server. You should also be doing that anyways in order to better protect the OTP and the best practices associated with them; expiring, only using them once etc. etc. | 1 | 2 | 0 | Scenario :
I have an OTP generation API. As of now , if I do POST with contact number in body, it will be generating OTP code irrespective of how many times, it gets invoked by same ip. There is no security at code level and nginx level.
Suggestions are accepted whether blocking IP should be done at code level or Nginx. I want to restrict access to api 5 times in a day from same IP . | Securing OTP API at code level or Nginx level? | 0 | 0 | 0 | 219 |
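A bare-bones sketch of the application-level throttling suggested above: an in-memory counter per (IP, contact) pair capped at 5 requests per day. All names are illustrative, and in production this state would normally live in a shared store such as Redis or the Django cache rather than a process-local dict.
import time

WINDOW = 24 * 3600            # one day, in seconds
LIMIT = 5
_requests = {}                # (ip, contact) -> list of recent request timestamps

def allow_otp(ip, contact):
    key = (ip, contact)
    now = time.time()
    recent = [t for t in _requests.get(key, []) if now - t < WINDOW]
    if len(recent) >= LIMIT:
        return False          # over the daily limit for this key
    recent.append(now)
    _requests[key] = recent
    return True

print(all(allow_otp("1.2.3.4", "5550001") for _ in range(5)))   # True: first five allowed
print(allow_otp("1.2.3.4", "5550001"))                          # False: sixth is blocked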
36,864,116 | 2016-04-26T11:49:00.000 | 0 | 0 | 0 | 0 | python,scikit-learn | 36,864,312 | 1 | false | 0 | 0 | It means those words are "strongly associated" with one of the responses, in your case probably illegal(1). Depending on your classifier, the exact technical definition of strongly associated will vary. It could be the joint probability of the word and response, P(X='theft', Y='illegal'), or it could be the conditional probabilityP(X='theft' | Y='illegal').
Intuitively, whenever these terms appear in a document, the probability of that document belonging to the illegal category is increased. | 1 | 1 | 1 | I am just curious on the interpretation of sklearn's feature_importances_ attribute. I know that the features with highests coefficients are the features that would highly predict the outcome. My question is - Are these the features strongly predictive to return a 1 (or yes) or not necessarily? (Supervised Learning - Binary response - yes(1) or no(0)).
For example, after building the predictive model, I found out that these words are the top features - insider-trading, theft, embezzlement, investment. The response is 'illegal'(1) or 'legal'(0).
Does it mean that when a certain text has those words, there's a huge chance it's illegal or not necessarily? And, it just simply means that the value of these words would lead to a strong prediction (either illegal or legal). Appreciate any answer to such. | sklearn's feature importances_ | 0 | 0 | 0 | 213 |
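A small sketch of how the top words are typically read off a fitted model; the tiny corpus and labels are invented, the classifier choice is arbitrary, and (as the answer says) a high importance only tells you the word is predictive, not which class it pushes towards. On older scikit-learn versions the vectorizer method is get_feature_names() instead of get_feature_names_out().
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier

docs = ["insider trading and theft", "embezzlement of investment funds",
        "quarterly investment report", "routine audit report"]
labels = [1, 1, 0, 0]                     # 1 = illegal, 0 = legal

vec = CountVectorizer()
X = vec.fit_transform(docs)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

# pair each word with its importance and list the strongest ones first
ranked = sorted(zip(clf.feature_importances_, vec.get_feature_names_out()), reverse=True)
print(ranked[:5])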
36,864,537 | 2016-04-26T12:08:00.000 | 0 | 0 | 0 | 0 | python,macos,python-3.x,tkinter | 36,864,625 | 2 | false | 0 | 1 | By changing the file extension to .pyw (Python Windowed), any terminal/shell/cmd would be hidden by default (I even don't know about the preferences window).
Hope this helps! | 1 | 3 | 0 | I developed a GUI app in Python 3. I want to be able to launch it by clicking on the script itself or in a custom-made launching script. I have changed permissions and it's now opening with "Python Launcher" on double-click. But everytime I run my app, I get a lot of windows:
My Python application (that's the only one I need to be on screen)
The Python launcher preferences (I could live with this one, but
would prefer it to be hidden)
A Terminal window with the default shell (don't need this one)
Another Terminal window showing the output of the app (like print, errors... I don't want this window)
What is the easiest way to get only the GUI on screen, without any Terminal windows? | How to run a Python 3 tkinter app without opening any Terminal windows in Mac OS X? | 0 | 0 | 0 | 3,492 |
36,864,863 | 2016-04-26T12:23:00.000 | 1 | 0 | 0 | 0 | python,pandas | 36,866,111 | 2 | false | 0 | 0 | In short. No.
You see, dtypes is not a pandas controlled entity. Dtypes is typically a numpy thing.
Dtypes are not controllable in any way, they are automagically asserted by numpy and can only change when you change the data inside the dataframe or numpy array.
That being said, the typical reason for ending up with a float instead of an int as a dtype is because of the introduction of NaN values into the series or numpy array. This is a pandas gotcha some say. I personally would argue it is due to the (too) close coupling between pandas and numpy.
In general, dtypes should never be trusted for anything, they are incredibly unreliable. I think everyone working with numpy/pandas would live a better life if they were never exposed to dtypes at all.
If you really really hate floats, the only other option for you as far as I know is to use string representations, which of course causes even more problems in most cases. | 1 | 1 | 1 | I have a dataframe df which is sparse and for memory efficiency I wish to convert it using to_sparse()
However it seems that the new representation ends up with the dtype=float64, even when my df is dtype=int8.
Is there a way specify the data type/ prevent auto conversion to dtype=float64 when using to_sparse() ? | Defining dtype of df.to_sparse() result | 0.099668 | 0 | 0 | 141 |
36,867,924 | 2016-04-26T14:29:00.000 | 1 | 0 | 0 | 0 | python,arrays,scikit-learn | 36,892,239 | 1 | true | 0 | 0 | Make the array into a column: use x[:, np.newaxis] instead of x | 1 | 0 | 1 | I have around 7k samples and 11 features which I concentrated into one. This concentrated value I call ResVal and is a weighted sum of previous features. Then I gathered these ResVals into 1D array.
Now I want to cluster this results with AgglomerativeClustering but console complains about 1D array.
How can I fix it and get cluster results by line number? | How to use agglometative clustering with 1 dimensional array valueset? | 1.2 | 0 | 0 | 28 |
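A short example of the reshape fix from the answer above, applied to a few made-up ResVal numbers.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

res_vals = np.array([0.10, 0.12, 0.95, 1.02, 0.11, 0.98])  # made-up 1-D ResVal scores
X = res_vals[:, np.newaxis]                                 # shape (6, 1): one feature per sample

labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
for i, lab in enumerate(labels):
    print("sample", i, "-> cluster", lab)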