Q_Id (int64, 337 to 49.3M) | CreationDate (string, len 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, len 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, len 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, len 15 to 29k) | Title (string, len 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
38,991,799 | 2016-08-17T08:36:00.000 | 0 | 0 | 0 | 0 | python,apache-spark,scikit-learn,pyspark | 38,994,681 | 1 | false | 0 | 0 | The fact that you are using Spark shouldn't stop you from using external Python libraries.
You can import the sklearn library in your Spark Python code and use the sklearn logistic regression model with the saved .pkl file. | 1 | 0 | 1 | I have trained a logistic regression model in sklearn and saved the model to .pkl files. Is there a method of using this pkl file from within spark? | Scikit-learn and pyspark integration | 0 | 0 | 0 | 374
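A minimal sketch of that idea, assuming sklearn is installed on the executors and the .pkl path is hypothetical: load the pickled model on the driver, broadcast it, and score an RDD of feature rows with it.

```python
import pickle

from pyspark import SparkContext

sc = SparkContext(appName="sklearn-on-spark")

# Load the saved sklearn logistic regression model on the driver...
with open("/path/to/model.pkl", "rb") as f:
    model = pickle.load(f)

# ...and broadcast it so every executor gets a read-only copy.
bc_model = sc.broadcast(model)

# A toy RDD of feature rows standing in for the real data.
rows = sc.parallelize([[0.1, 2.3], [1.5, 0.2]])

def score(partition):
    # Each partition scores its rows with the broadcast model.
    batch = list(partition)
    return bc_model.value.predict(batch) if batch else []

print(rows.mapPartitions(score).collect())
```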
38,994,725 | 2016-08-17T10:58:00.000 | 1 | 0 | 1 | 1 | python,ssis | 38,997,349 | 1 | false | 0 | 0 | This is actually now solved - or rather, it was never actually broken; I was writing to a parent package variable (i.e. by creating the variable in the child package, configuring the task, setting delay validation to true and then deleting the variable) - it appears that when I do this, it takes SSIS a long time to write to it! If I use a child package variable, it completes straight away, but it takes 1-2 minutes for the parent package variable to be written to.
At least it's completing. | 1 | 0 | 0 | I am using SSIS's Execute Process Task to execute a compiled python script.
The script executes as expected and completes as expected with either success or failure.
However, when I configure a variable to catch Standard Error or Standard Output, the application hangs. The command prompt flashes up and down indicating that the execution has completed but then the SSIS task itself never completes.
To reiterate, when I don't configure the variable, there is no issue and the task finishes as expected. I have also debugged the execution of the script independently and I can verify that:
Status code is 0 when success.
Standard error contains text.
Any ideas what is causing the task to hang? | SSIS Execute Process Task Hanging when Standard Error Variable Provided | 0.197375 | 0 | 0 | 589 |
39,001,104 | 2016-08-17T15:48:00.000 | 3 | 0 | 0 | 0 | python,machine-learning,deep-learning,theano,keras | 39,093,328 | 1 | false | 0 | 0 | I'm dealing with something similar right now. I want to make my epochs shorter so I can record more information about the loss or adjust my learning rate more often.
Without diving into the code, I think the fact that .fit_generator works with the randomly augmented/shuffled data produced by the keras builtin ImageDataGenerator supports your suspicion that it doesn't reset the generator per epoch. So I believe you should be fine, as long as the model is exposed to your whole training set it shouldn't matter if some of it is trained in a separate epoch.
If you're still worried you could try writing a generator that randomly samples your training set. | 1 | 4 | 1 | I'm using Keras with Theano to train a basic logistic regression model.
Say I've got a training set of 1 million entries, it's too large for my system to use the standard model.fit() without blowing away memory.
I decide to use a python generator function and fit my model using model.fit_generator().
My generator function returns batch sized chunks of the 1M training examples (they come from a DB table, so I only pull enough records at a time to satisfy each batch request, keeping memory usage in check).
It's an endlessly looping generator: once it reaches the end of the 1 million, it wraps around and continues over the set.
There is a mandatory argument in fit_generator() to specify samples_per_epoch. The documentation indicates
samples_per_epoch: integer, number of samples to process before going to the next epoch.
I'm assuming the fit_generator() doesn't reset the generator each time an epoch runs, hence the need for a infinitely running generator.
I typically set the samples_per_epoch to be the size of the training set the generator is looping over.
However, if samples_per_epoch is smaller than the size of the training set the generator is working from and nb_epoch > 1:
Will you get odd/adverse/unexpected training results, as it seems the epochs will have differing sets of training examples to fit to?
If so, do you 'fast-forward' your generator somehow? | In Keras, If samples_per_epoch is less than the 'end' of the generator when it (loops back on itself) will this negatively affect result? | 0.53705 | 0 | 0 | 1,961
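For illustration, a sketch of the endless-generator pattern discussed above, using the Keras 1.x argument names from the question; fetch_batch is a hypothetical stand-in for the database query.

```python
import numpy as np

def fetch_batch(offset, batch_size):
    # Hypothetical stand-in for pulling `batch_size` records from the DB
    # starting at `offset`; here it just fabricates random data.
    X = np.random.rand(batch_size, 20)
    y = np.random.randint(0, 2, size=batch_size)
    return X, y

def db_batch_generator(batch_size=32, total_rows=1000000):
    """Yield (X, y) batches forever, wrapping around at the end of the set."""
    offset = 0
    while True:  # fit_generator never lets the generator run out
        yield fetch_batch(offset, batch_size)
        offset = (offset + batch_size) % total_rows

# Keras 1.x style call matching the question's vocabulary:
# model.fit_generator(db_batch_generator(),
#                     samples_per_epoch=1000000, nb_epoch=10)
```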
39,003,909 | 2016-08-17T18:30:00.000 | 3 | 0 | 1 | 0 | python,tensorflow,gpu | 39,004,702 | 2 | true | 0 | 0 | Do something like this before running your main script
export TF_MIN_GPU_MULTIPROCESSOR_COUNT=4
Note though that the default is set for a reason -- if you enable a slower GPU by changing that variable, your program may run slower than it would without any GPU available, because TensorFlow will try to run everything on that GPU | 2 | 0 | 0 | I get a message that says my GPU Device is ignored because its multiprocessor count is lower than the minimum set. However, it gives me the environment variable TF_MIN_GPU_MULTIPROCESSOR_COUNT but it doesn't seem to exist because I keep getting command not found. When I look at the environment variables using set or printenv and grep for the variable name, it doesn't exist. Does anyone know where I can find it or how I can change its set value? | Can't find TF_MIN_GPU_MULTIPROCESSOR_COUNT | 1.2 | 0 | 0 | 2,221
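If exporting it in the shell is awkward, the same variable can be set from Python itself; a small sketch, assuming it is put in the process environment before TensorFlow initializes its GPU devices:

```python
import os

# Must be set before the tensorflow import so TensorFlow sees it
# when it enumerates GPU devices.
os.environ["TF_MIN_GPU_MULTIPROCESSOR_COUNT"] = "4"

import tensorflow as tf  # GPUs with >= 4 multiprocessors are now accepted
```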
39,003,909 | 2016-08-17T18:30:00.000 | 0 | 0 | 1 | 0 | python,tensorflow,gpu | 56,837,569 | 2 | false | 0 | 0 | In Windows, create a new environment variable with this name and assign its value.
You can do that by right-clicking This PC in File Explorer, selecting Properties at the bottom, then selecting Advanced system settings on the left. That will get you to the System Properties dialog. You can also type "environment variables" in Cortana Search.
From there you click the Environment Variables button. Once in the Environment Variables dialog, select New to create the variable and assign the value, then back out. You may have to restart your IDE or open a new DOS window for that environment variable to be visible. | 2 | 0 | 0 | I get a message that says my GPU Device is ignored because its multiprocessor count is lower than the minimum set. However, it gives me the environment variable TF_MIN_GPU_MULTIPROCESSOR_COUNT but it doesn't seem to exist because I keep getting command not found. When I look at the environment variables using set or printenv and grep for the variable name, it doesn't exist. Does anyone know where I can find it or how I can change its set value? | Can't find TF_MIN_GPU_MULTIPROCESSOR_COUNT | 0 | 0 | 0 | 2,221
39,004,849 | 2016-08-17T19:31:00.000 | 1 | 0 | 1 | 1 | python,anaconda | 39,654,422 | 1 | true | 0 | 0 | Type the following commands in the terminal:
source activate python2
spyder
Spyder will be launched with the python2 environment. With this method you do not use the Anaconda Navigator, but at least you can use Spyder with your python2 environment. | 1 | 0 | 0 | Under OS X (10.11.6) I installed the current Python 3.5 version of Anaconda. Anaconda Navigator then works just fine to launch spyder, jupyter, or qtconsole with python 3.5.2 running.
At the command line I also created a python 2.7 environment (conda create --name python2 python=2.7 anaconda). But now when I open Anaconda Navigator, go to Environments in the left pane, and select my python2 environment, still if I go back to Home and launch spyder, jupyter, qtconsole, the python version shown is still 3.5.2.
I tried closing Anaconda Navigator, executing "source activate python2" at the command line, and reopening Anaconda Navigator, and again selecting python2 from Environments there. But still spyder, jupyter, qtconsole open with python 3.5.2.
How do I launch with python 2.7? | How open python2.7 in spyder, jupyter, qtconsole. from Anaconda Navigator installed with python3? (OS X) | 1.2 | 0 | 0 | 761 |
39,005,380 | 2016-08-17T20:06:00.000 | 4 | 0 | 1 | 1 | python,docker,virtualenv | 39,005,477 | 1 | true | 0 | 0 | Just like everything else in a Docker Container, your libraries are inside the container. Unless you mount a host volume, or a volume from another container of course. On the plus side, though, they're copy-on-write, so if you're not making changes to the libraries in your container (why would you do that anyway?) then you can have 100 running containers from the same image and they don't require any extra disk space.
Some people advocate for using a virtualenv within the container - there are pros and cons to the approach, and I don't think there's a one-size-fits-all answer, though I would lean toward not having a virtualenv. | 1 | 2 | 0 | In an environment where Docker Containers are used for each application, where are Python's shared libraries stored? Are they stored separately within each Docker Container, or shared by the host O/S?
Additionally I'm wondering if it would be best practice to use a virtual environment regardless? | When using Docker Containers, where are shared Python libraries stored? | 1.2 | 0 | 0 | 684 |
39,007,823 | 2016-08-17T23:37:00.000 | 1 | 0 | 1 | 0 | java,python,type-conversion,reserved-words,jpype | 39,027,662 | 2 | false | 1 | 0 | Figured out that jpype appends an "_" at the end for those methods/fields in its source code. So you can access it by Jpype.JClass("Foo").pass_
Wish it's documented somewhere | 2 | 1 | 0 | Any idea how this can be done? ie, if we have a variable defined in java as below
public Class Foo {
String pass = "foo";
}
how can I access this via jpype since pass is a reserved keyword? I tried
getattr(Jpype.JClass(Foo)(), "pass") but it fails to find the attribute named pass | jpype accessing java method/variable whose name is reserved name in python | 0.099668 | 0 | 0 | 358
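A short sketch of that underscore rule; the class path and the Foo class are the hypothetical ones from the question.

```python
import jpype

# Start a JVM whose class path contains the compiled Foo class.
jpype.startJVM(jpype.getDefaultJVMPath(), "-Djava.class.path=/path/to/classes")

Foo = jpype.JClass("Foo")
foo = Foo()

# The Java field `pass` collides with a Python keyword, so JPype
# exposes it with a trailing underscore:
print(foo.pass_)

jpype.shutdownJVM()
```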
39,007,823 | 2016-08-17T23:37:00.000 | 0 | 0 | 1 | 0 | java,python,type-conversion,reserved-words,jpype | 39,007,952 | 2 | false | 1 | 0 | Unfortunately, fields or methods conflicting with a Python keyword can't be accessed | 2 | 1 | 0 | Any idea how this can be done? i.e., if we have a variable defined in Java as below
public Class Foo {
String pass = "foo";
}
how can I access this via jpype since pass is a reserved keyword? I tried
getattr(Jpype.JClass(Foo)(), "pass") but it fails to find the attribute named pass | jpype accessing java method/variable whose name is reserved name in python | 0 | 0 | 0 | 358
39,008,391 | 2016-08-18T00:56:00.000 | 0 | 0 | 0 | 0 | python,pandas,dask | 69,125,753 | 3 | false | 0 | 0 | MRocklin's answer is correct and this answer gives more details on when it's appropriate to convert from a Dask DataFrame to and Pandas DataFrame (and how to predict when it'll cause problems).
Each partition in a Dask DataFrame is a Pandas DataFrame. Running df.compute() will coalesce all the underlying partitions in the Dask DataFrame into a single Pandas DataFrame. That'll cause problems if the size of the Pandas DataFrame is bigger than the RAM on your machine.
If df has 30 GB of data and your computer has 16 GB of RAM, then df.compute() will blow up with a memory error. If df only has 1 GB of data, then you'll be fine.
You can run df.memory_usage(deep=True).sum() to compute the amount of memory that your DataFrame is using. This'll let you know if your DataFrame is sufficiently small to be coalesced into a single Pandas DataFrame.
Repartioning changes the number of underlying partitions in a Dask DataFrame. df.repartition(1).partitions[0] is conceptually similar to df.compute().
Converting to a Pandas DataFrame is especially feasible after performing a big filtering operation. If you filter a 100 billion row dataset down to 10 thousand rows, then you can probably just switch to the Pandas API. | 1 | 47 | 1 | How can I transform my resulting dask.DataFrame into pandas.DataFrame (let's say I am done with heavy lifting, and just want to apply sklearn to my aggregate result)? | How to transform Dask.DataFrame to pd.DataFrame? | 0 | 0 | 0 | 32,382
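A compact sketch of those checks (the CSV path and the score column are hypothetical):

```python
import dask.dataframe as dd

ddf = dd.read_csv("/path/to/data-*.csv")

# How much memory would the full frame need? The result is itself lazy,
# so it needs its own .compute().
print(ddf.memory_usage(deep=True).sum().compute())

# Do the heavy lifting in dask, e.g. a big filter...
small = ddf[ddf["score"] > 0]

# ...then coalesce all partitions into one in-memory pandas DataFrame.
pdf = small.compute()
print(type(pdf))  # <class 'pandas.core.frame.DataFrame'>
```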
39,008,462 | 2016-08-18T01:08:00.000 | 0 | 0 | 1 | 0 | python,powershell,window-server,nano-server | 39,020,077 | 1 | false | 0 | 0 | Nano Server is designed to be administered remotely, the local 'console' only allows you to set firewall rules and network config.
You'll need to do everything via a remote session with Nano Server. If that's not suitable for you the only option is to move to the full Server 2016 OS instead as this has the standard local console with GUI. | 1 | 0 | 0 | I have created NS image with 'Development' switch using Windows 2016 Technical Preview 5. I am deploying the NS image onto a physical machine.I want to run Python interactive shell on local Powershell but it appears that there is no local PS console on NanoServer. | How do I launch Powershell locally on NanoServer? | 0 | 0 | 0 | 120 |
39,010,119 | 2016-08-18T04:39:00.000 | 0 | 0 | 0 | 0 | python-3.x | 39,010,183 | 2 | false | 0 | 0 | I faced a similar problem (Python 3.4 32-bit, on Windows 7 64-bit). After installing cx_Freeze, three files appeared in c:\Python34\Scripts:
cxfreeze
cxfreeze-postinstall
cxfreeze-quickstart
These files have no file extensions, but appear to be Python scripts. When you run python.exe cxfreeze-postinstall from the command prompt, two batch files are being created in the Python scripts directory:
cxfreeze.bat
cxfreeze-quickstart.bat
From that moment on, you should be able to run cx_freeze.
cx_freeze was installed using the provided win32 installer (cx_Freeze-4.3.3.win32-py3.4.exe). Installing it using pip gave exactly the same result. | 2 | 1 | 0 | Ok, I am using python 3.4.3 and I think I downloaded the right file but when I go to python shell, it says No module named 'cx_Freeze'
I know there are plenty of questions like this but none of them helped. There was one I found using my exact same problem and version but even that did not work. I do not know what to do. I have put the file in the same place, I think anyways, as python is and I tried putting it on my desktop but still does not work. Any ideas? | cx_Freeze not found error-python | 0 | 0 | 0 | 807 |
39,010,119 | 2016-08-18T04:39:00.000 | 0 | 0 | 0 | 0 | python-3.x | 39,030,109 | 2 | false | 0 | 0 | Ok, I figured it out. This is for all the future people who have the same problem as I did. First, download pip. Then open a python shell and import pip. This is to make sure the download of pip was successful. Then go to the cx_Freeze website; for python 3.4.3, it will be the last one I think. It will say the version of cx_Freeze and then the version of python, which is 3.4.3 for me. Download that, then go to a python shell and import cx_Freeze. It should work. Remember that you have to capitalize the "F" and have the code be exactly like this "cx_Freeze" but without the quotes. That is how I solved this problem with this exact python version. | 2 | 1 | 0 | Ok, I am using python 3.4.3 and I think I downloaded the right file but when I go to python shell, it says No module named 'cx_Freeze'
I know there are plenty of questions like this but none of them helped. There was one I found using my exact same problem and version but even that did not work. I do not know what to do. I have put the file in the same place, I think anyways, as python is and I tried putting it on my desktop but still does not work. Any ideas? | cx_Freeze not found error-python | 0 | 0 | 0 | 807 |
39,012,046 | 2016-08-18T07:15:00.000 | 1 | 0 | 1 | 1 | python,aptana | 42,376,440 | 1 | false | 0 | 0 | a very belated response but it sounds like your issue is that you have the 'Show Console When Standard Out Changes' option selected.
Hope that helps, or that you found the solution on your own. Cheers! | 1 | 0 | 0 | If I run two different Python scripts simultaneously, I see a console window which shows output from each of the scripts alternately, switching back and forth every second or so. If I open a second console window before running the second script, the same thing happens - both console windows switch between the 2 scripts.
How can I get each script to output to its own console window? | Running multiple Python scripts in Aptana 3 | 0.197375 | 0 | 0 | 81 |
39,012,383 | 2016-08-18T07:34:00.000 | 0 | 0 | 1 | 1 | python,pdcurses,unicurses | 50,429,379 | 1 | true | 0 | 0 | This is impossible, because it was a build for Python 3.4! | 1 | 1 | 0 | I want to use UniCurses on Windows. For this, I downloaded various ZIP-archives. I downloaded pdc34dll.zip, pdc34dlls.zip, pdc34dllu.zip, pdc34dllw.zip and pdcurses34.zip. The last was just the source.
I tried to place the files within the pdc34dll-folder, extracted from pdc34dll.zip, to the main directory of the Python 3.5.2 installation folder, to the directory where Unicurses is installed (C:\programming\python\352.lib.site-packages\unicurses) and in the System32-directory (C:\windows\system32).
But I still get the message that pdcurses.dll cannot be found.
What am I doing wrong, and what should I do to solve this problem properly?
Thanks for the help. | Where to place PDCurses for use with UniCurses | 1.2 | 0 | 0 | 283 |
39,014,670 | 2016-08-18T09:33:00.000 | 0 | 1 | 0 | 1 | python,linux,robotframework | 39,104,313 | 2 | false | 0 | 0 | I installed zlib-devel and python-devel with the help of yum, recompiled Python, and finally the test installation completed. Thank you for your answer. | 2 | 0 | 0 | (centos6.6) Before updating to Python 2.7.3 it was Python 2.6.6. When running pybot --version, errors came out as follows.
I want to install the test environment of python 2.7.3 and robot framework 2.7.6 and paramiko-1.7.4 and pycrypto-2.6
[root@localhost robotframework-2.7.6]# pybot --version
Traceback (most recent call last):
File "/usr/bin/pybot", line 4, in <module>
from robot import run_cli
File "/usr/lib/python2.7/site-packages/robot/__init__.py", line 22, in <module>
from robot.rebot import rebot, rebot_cli
File "/usr/lib/python2.7/site-packages/robot/rebot.py", line 268, in <module>
from robot.conf import RebotSettings
File "/usr/lib/python2.7/site-packages/robot/conf/__init__.py", line 17, in <module>
from .settings import RobotSettings, RebotSettings
File "/usr/lib/python2.7/site-packages/robot/conf/settings.py", line 17, in <module>
from robot import utils
File "/usr/lib/python2.7/site-packages/robot/utils/__init__.py", line 23, in <module>
from .compress import compress_text
File "/usr/lib/python2.7/site-packages/robot/utils/compress.py", line 25, in <module>
import zlib
ImportError: No module named zlib | (centos6.6) before updating python2.7.3, it is python 2.6.6. When running pybot --version, errors came out | 0 | 0 | 0 | 127
39,014,670 | 2016-08-18T09:33:00.000 | 0 | 1 | 0 | 1 | python,linux,robotframework | 39,037,542 | 2 | false | 0 | 0 | Reasons could be any of the following:
Either the python files (at least one) have lost the formatting. Python is prone to formatting errors
At least one installation (python, Robo) doesn't have administrative privileges.
Environment variables (PATH, CLASSPATH, PYTHONPATH) are not set correctly.
What does python --version print? If this throws errors, the installation has issues. | 2 | 0 | 0 | (centos6.6) Before updating to Python 2.7.3 it was Python 2.6.6. When running pybot --version, errors came out as follows.
I want to install the test environment of python 2.7.3 and robot framework 2.7.6 and paramiko-1.7.4 and pycrypto-2.6
[root@localhost robotframework-2.7.6]# pybot --version
Traceback (most recent call last):
File "/usr/bin/pybot", line 4, in <module>
from robot import run_cli
File "/usr/lib/python2.7/site-packages/robot/__init__.py", line 22, in <module>
from robot.rebot import rebot, rebot_cli
File "/usr/lib/python2.7/site-packages/robot/rebot.py", line 268, in <module>
from robot.conf import RebotSettings
File "/usr/lib/python2.7/site-packages/robot/conf/__init__.py", line 17, in <module>
from .settings import RobotSettings, RebotSettings
File "/usr/lib/python2.7/site-packages/robot/conf/settings.py", line 17, in <module>
from robot import utils
File "/usr/lib/python2.7/site-packages/robot/utils/__init__.py", line 23, in <module>
from .compress import compress_text
File "/usr/lib/python2.7/site-packages/robot/utils/compress.py", line 25, in <module>
import zlib
ImportError: No module named zlib | (centos6.6) before updating python2.7.3, it is python 2.6.6. When running pybot --version, errors came out | 0 | 0 | 0 | 127
39,015,119 | 2016-08-18T09:53:00.000 | 1 | 0 | 0 | 0 | python,timestamp,bit-shift | 39,016,857 | 1 | false | 0 | 0 | I'm not sure of the result, but I took their explanation literally:
I shifted the 32-bit timestamp left by 16 positions,
then shifted it 16 places back to the right and did a bitwise OR with the 16-bit timestamp | 1 | 1 | 1 | I own a Garmin watch; to report statistics they have an SDK.
In this SDK they have a timestamp in two formats:
one is a true 32-bit timestamp,
the other is the lower 16-bit part, which must be combined with the first.
I don't know how to code this in Python. Can somebody help me?
Here is their explanation and the formula:
timestamp_16 is a 16 bit version of the timestamp field (which is 32 bit) that represents the lower 16 bits of the timestamp.
This field is meant to be used in combination with an earlier timestamp field that is used as a reference for the upper 16 bits.
The proper way to deal with this field is summarized as follows:
mesgTimestamp += ( timestamp_16 - ( mesgTimestamp & 0xFFFF ) ) & 0xFFFF;
My problem is not obtaining the two timestamps but combining the two in Python.
thanks | python mixing garmin timestamps | 0.197375 | 0 | 0 | 200 |
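The quoted SDK formula translates to Python directly; a small self-contained example:

```python
def merge_timestamp(mesg_timestamp, timestamp_16):
    """Combine the last full 32-bit timestamp with a later 16-bit one.

    timestamp_16 carries only the low 16 bits; the high 16 bits are
    recovered from the reference timestamp, per the SDK formula above.
    """
    mesg_timestamp += (timestamp_16 - (mesg_timestamp & 0xFFFF)) & 0xFFFF
    return mesg_timestamp

# Reference timestamp 0x0001F000, later 16-bit timestamp 0xF123:
print(hex(merge_timestamp(0x0001F000, 0xF123)))  # 0x1f123
```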
39,015,410 | 2016-08-18T10:07:00.000 | 0 | 0 | 1 | 1 | python,windows,pip,pyinstaller | 44,056,779 | 1 | false | 0 | 0 | I had a similar problem with both 32 and 64-bit versions of Python installed. I found if I ran the pip install in the command prompt from the location of pip.exe it worked fine. In my case, the file path was the following:
C:\Program Files\Python\3.5\Scripts | 1 | 1 | 0 | I have made a simple python script and built a 64-bit Windows executable from it via pyinstaller. However, most computers at my office run 32-bit Windows operating systems, thus my program does not work. From what I have read, it is possible to make an executable for 32-bit systems as long as I use the 32-bit version of python. So I went ahead and installed the 32-bit version of python 3.5, but I can't find the way to link pip to the 32-bit version of python so I can install all the necessary modules. Every time I call pip it displays all the modules that are installed on the 64-bit version, even though by default I am running the 32-bit version python. | Windows Python 64 & 32 bit versions and pip | 0 | 0 | 0 | 6,739 |
39,017,678 | 2016-08-18T11:59:00.000 | 0 | 0 | 0 | 1 | python,django,asynchronous,rabbitmq,celery | 39,065,804 | 2 | false | 1 | 0 | I've used the following set up on my application:
Task is initiated from Django - information is extracted from the model instance and passed to the task as a dictionary. NB - this will be more future proof as Celery 4 will default to JSON encoding
Remote server runs task and creates a dictionary of results
Remote server then calls an update task that is only listened for by a worker on the Django server.
The Django worker reads the results dictionary and updates the model.
The Django worker listens to a separate queue, though this isn't strictly necessary. A results backend isn't used - the data needed is just passed to the task | 1 | 5 | 0 | I have a Django project where I am using Celery with RabbitMQ to perform a set of async tasks. The setup I have planned goes like this.
Django app running on one server.
Celery workers and rabbitmq running from another server.
My initial issue is: how do I access Django models from the Celery tasks sitting on another server?
And assuming I am not able to access the Django models, is there a way, once a task gets completed, to send a callback to the Django application passing values, so that I can update Django's database based on the values passed? | Django and celery on different servers and celery being able to send a callback to django once a task gets completed | 0 | 0 | 0 | 1,298
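A hedged sketch of that callback flow; the task names, queue name, broker URL, and the Result model are all hypothetical.

```python
from celery import Celery

app = Celery("tasks", broker="amqp://rabbit-host//")

@app.task
def heavy_task(payload):
    # Runs on the remote worker; `payload` is a plain dict extracted
    # from the model instance on the Django side.
    result = {"pk": payload["pk"], "value": 42}  # stand-in computation
    # Send the callback to a queue that only a worker running next to
    # Django consumes:
    update_db.apply_async(args=[result], queue="django_callbacks")

@app.task
def update_db(result):
    # Runs on the Django box, where the ORM is importable.
    from myapp.models import Result as ResultModel  # hypothetical model
    ResultModel.objects.filter(pk=result["pk"]).update(value=result["value"])

# From Django view/management code:
# heavy_task.delay({"pk": 1, "field": "value"})
```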
39,017,998 | 2016-08-18T12:15:00.000 | 2 | 0 | 0 | 0 | python,neural-network,deep-learning,caffe,conv-neural-network | 39,018,076 | 1 | true | 0 | 0 | For deploy you only need to discard the loss layer, in your case the "EuclideanLoss" layer. The output of your net is the "bottom" you fed the loss layer.
For "SoftmaxWithLoss" layer (and "SigmoidCrossEntropy") you need to replace the loss layer, since the loss layer includes an extra layer inside it (for computational reasons). | 1 | 2 | 1 | I have trained a regression network with caffe. I use "EuclideanLoss" layer in both the train and test phase. I have plotted these and the results look promising.
Now I want to deploy the model and use it. I know that if SoftmaxLoss is used, the final layer must be Softmax in the deploy file. What should this be in the case of Euclidean loss? | Deploy caffe regression model | 1.2 | 0 | 0 | 594 |
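As a hedged illustration of using the deploy net from Python: the file names below are hypothetical, "data" is assumed to be the input blob, and the net's output is the blob that used to feed the discarded "EuclideanLoss" layer.

```python
import numpy as np
import caffe

# deploy.prototxt = the train prototxt with the data and EuclideanLoss
# layers removed, plus an input definition.
net = caffe.Net("deploy.prototxt", "trained.caffemodel", caffe.TEST)

# Fill the (assumed) input blob and run a forward pass.
net.blobs["data"].data[...] = np.random.rand(*net.blobs["data"].data.shape)
out = net.forward()
print(out)  # {output_blob_name: predicted regression values}
```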
39,018,026 | 2016-08-18T12:17:00.000 | 0 | 0 | 0 | 0 | javascript,jquery,python,flask,autocomplete | 54,605,309 | 2 | false | 1 | 0 | If you managed to implement the search box itself in your flask app (it's being rendered and everything) but there are no drop-down suggestions you should be able to find out the exact error message in the developer tools of your browser.
One of the reasons could be that the URL of your web app is not included in your API key to accept requests from. | 1 | 2 | 0 | I would like to add the Google places autocomplete library to an input field but am not able to in my flask app (it doesn't give the dropdown suggestions), although it works fine in a standalone HTML file. | How to add Google places autocomplete to flask? | 0 | 0 | 0 | 983 |
39,020,353 | 2016-08-18T14:06:00.000 | 0 | 0 | 1 | 0 | python,pyinstaller | 39,025,222 | 1 | false | 0 | 0 | I believe all you'd need to do is set the path for your python environment to point to the python3 install location. You'd then just use the pip3 install pyinstaller command and it should run. You can use the command pyinstaller --version to confirm. | 1 | 1 | 0 | I am on a windows machine. I wrote my application in python 3. I have Pyinstaller installed for both python 2 and 3. How do I call python 3 pyinstaller? | How to run Python 3 Pyinstaller when I have Pyinstaller installed for both python 2 and 3? | 0 | 0 | 0 | 560 |
39,020,591 | 2016-08-18T14:18:00.000 | 1 | 0 | 0 | 1 | python,luigi | 39,028,831 | 2 | false | 0 | 0 | In general, you would not need to pass the parameters for Task A to Task B, but Task B would then need to generate the values of those parameters for Task A. If Task B can not generate those parameters, you would have to setup Task B to take those parameters in from the command line, and then pass them through to the Task A constructor in the requires method. | 1 | 0 | 0 | So I have two tasks (let's say TaskA and TaskB). I want both tasks to run hourly, but TaskB requires TaskA. TaskB does not have any parameters, but TaskA has two parameters for the day and the hour. If I run TaskB on the command line, would I need to pass it arguments? | How do Luigi parameters work? | 0.099668 | 0 | 0 | 425 |
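For concreteness, a sketch of the second option described in that answer: TaskB takes no parameters and generates TaskA's day and hour itself inside requires() (the output paths are hypothetical).

```python
import datetime
import luigi

class TaskA(luigi.Task):
    day = luigi.Parameter()
    hour = luigi.Parameter()

    def output(self):
        return luigi.LocalTarget("a-%s-%s.txt" % (self.day, self.hour))

    def run(self):
        with self.output().open("w") as f:
            f.write("done")

class TaskB(luigi.Task):
    # No parameters: running TaskB from the command line needs no
    # arguments, because requires() computes TaskA's parameters itself.
    def requires(self):
        now = datetime.datetime.now()
        return TaskA(day=now.strftime("%Y-%m-%d"), hour=now.strftime("%H"))

    def output(self):
        return luigi.LocalTarget("b.txt")

    def run(self):
        with self.output().open("w") as f:
            f.write("done")
```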
39,021,144 | 2016-08-18T14:41:00.000 | 0 | 0 | 0 | 0 | python,xml,text,wxwidgets | 39,021,238 | 1 | false | 0 | 1 | That's a nice idea, but wx' text boxes are either entirely read only, or editable.
I think the way to go for it is to query your position inside the text box for each cursor movement, and toggle the text box state readonly/editable according to your current position, current selection, etc.
It looks like a tough task, though... :-) | 1 | 1 | 0 | I am trying to determine if something is possible. I haven't written any code for this specifically yet.
Using wxPython I would like to set up a text box (possibly a staticText) with primarily un-editable text. However I need certain parts, individual words, to be editable similar to PDF document with added text boxes.
The ultimate goal is to visually display an XML file and allow a user to directly edit only element text and nothing else in-situ. I have a couple of other ways of doing this but they are very much sub-optimal.
Thanks for any input/direction/help. | Trying to determine if a specific behavior is allowed in wxPython text | 0 | 0 | 0 | 12 |
39,021,814 | 2016-08-18T15:14:00.000 | 1 | 1 | 0 | 0 | python,api,telegram | 39,022,320 | 1 | true | 0 | 0 | No, not always.
The server however usually packs multiple messages into containers.
I would advise that you decode all the data returned from the server.
You then have a full view /log of all that is being returned, then you can decide on what needs to be replied to. | 1 | 1 | 0 | I'd like to understand the sequence of events when sending a method to the telegram server.
For example, if I send the get_future_salts method I am expecting from the server a response of type FutureSalts, but what I receive is a type of MessageContainer (which I'm having trouble parsing, but that is a separate issue).
If I ignore the MessageContainer object and simply request the next response from the server I receive the expected FutureSalts object.
Will there always be a MessageContainer object returned for each method called? If so, do I need to parse and process these MessageContainer objects? | Client/Server interactions using the Telegram.org API | 1.2 | 0 | 0 | 87 |
39,022,185 | 2016-08-18T15:32:00.000 | 0 | 1 | 0 | 0 | python,testing,code-coverage,nose | 39,025,314 | 1 | true | 0 | 0 | Unfortunately I do not know of a way in nosetests to perform this action. I actually ended up uninstalling nosetests and using just coverage.py because it seems like nosetests and coverage don't play nicely together. I know for a fact you can specify down to individual test methods what you want to run. I'm not sure if that's exactly what you are looking for but I beat my head against a brick wall for days trying to get nosetests to cooperate with no luck. Maybe it would save some effort to switch and run coverage.py directly instead? | 1 | 1 | 0 | Is there a way to have nosetests restrict the coverage information to only the tests that were run?
I know about the cover-package flag, but it would be nicer if you didn't have to specify the package.
This would be especially useful when running a a single unit test class that lives in a file with multiple unit test classes. | Restrict nosetests coverage to only the tests that were run | 1.2 | 0 | 0 | 155 |
39,022,296 | 2016-08-18T15:37:00.000 | 0 | 0 | 0 | 1 | python-2.7,driver,wmi | 39,023,289 | 1 | false | 0 | 0 | This is going to sound different but I know the powershell command will get you the driver version.
strCommand = r"powershell.exe ""Get-WmiObject Win32_PnPSignedDriver | select devicename, driverversion | ConvertTo-CSV"""
Then you can parse each line in your output. Each line is CSV delimited, so you have the Driver Name and the Driver Version. I wrote a quick demo, but since I am still a bit new here my code did not look right. But that is my suggestion. | 1 | 0 | 0 | I am trying to call a python [module] method to find the version of a newly installed driver on a Windows computer. I tried WMI_SystemDriver but it does not provide the version, only other fields not needed by me at this time. Is there a way to see something like:
Question also posted on a Google group - not answered
version x.y.z.t
Thank you | How can I find an installed drivers version in Python under Windows? | 0 | 0 | 0 | 1,524 |
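In that spirit, a rough version of the demo the answer alludes to: run the quoted PowerShell command with subprocess and parse its CSV output (the field handling is a best-effort sketch).

```python
import csv
import subprocess

ps = ("Get-WmiObject Win32_PnPSignedDriver "
      "| select devicename, driverversion | ConvertTo-CSV")
output = subprocess.check_output(["powershell.exe", ps]).decode("utf-8", "replace")

# Drop the '#TYPE ...' preamble, then skip the header row and read the CSV.
lines = [l for l in output.splitlines() if l and not l.startswith("#")]
for row in csv.reader(lines[1:]):
    if len(row) == 2:
        name, version = row
        print(name, "->", version)
```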
39,022,629 | 2016-08-18T15:53:00.000 | 0 | 1 | 1 | 0 | python,pycharm,pythonpath | 39,022,921 | 2 | false | 0 | 0 | Not sure how much effort you want to put into this temporary python path thing but you could always use a python virtual environment for running scripts or whatever you need. | 1 | 8 | 0 | I'm thinking of something like
python3 my_script.py --pythonpath /path/to/some/necessary/modules
Is there something like this? I know (I think) that Pycharm temporarily modifies PYTHONPATH when you use it to execute scripts; how does Pycharm do it?
Reasons I want to do this (you don't really need to read the following)
The reason I want to do this is that I have some code that usually needs to run on my own machine (which is fine because I use Pycharm to run it) but sometimes needs to run on a remote server (on the commandline), and it doesn't work because the remote server doesn't have the PYTHONPATHs that Pycharm automatically temporarily adds. I don't want to export PYTHONPATH=[...] because it's a big hassle to change it often (and suppose it really does need to change often). | Provide temporary PYTHONPATH on the commandline? | 0 | 0 | 0 | 3,666 |
39,023,221 | 2016-08-18T16:26:00.000 | 0 | 0 | 0 | 0 | python,layout,pyqt4,spacing | 39,023,363 | 2 | false | 0 | 1 | Admission: I don't use PyQT, but I use Qt in C++. I believe it would work the same.
The trick here is that, as you run QHBoxLayout::addWidget(), you assign a stretch factor (greater than zero) to the widgets you want to stretch. You assign a zero stretch factor (the default) to the ones you want to stay small.
Have you run QWidget::setMaximumWidth() on the smallish widgets? That would also be useful, perhaps. | 1 | 0 | 0 | Is there a way to set a fixed width of a QHBoxLayout?
For example, when I have two small widgets in it that take up little space and I don't want the two to split over the entire screen width when I full screen the app. The widgets already have their widths set to their minimumSizeHint() widths. | PyQt4 GUI Box Layout Fixed Width | 0 | 0 | 0 | 903 |
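A minimal PyQt4 sketch of those two suggestions - zero-stretch widgets with capped widths, plus a trailing stretch that soaks up the leftover space when the window is full-screened:

```python
from PyQt4 import QtGui

app = QtGui.QApplication([])
window = QtGui.QWidget()

layout = QtGui.QHBoxLayout(window)
for text in ("Small 1", "Small 2"):
    button = QtGui.QPushButton(text)
    button.setMaximumWidth(100)   # keep each widget small
    layout.addWidget(button)      # default stretch factor is 0
layout.addStretch(1)              # this absorbs the extra width instead

window.showMaximized()
app.exec_()
```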
39,023,270 | 2016-08-18T16:30:00.000 | 0 | 0 | 1 | 0 | python,scikit-learn,ipython | 39,031,052 | 1 | false | 0 | 0 | If the code is in a file called file.py, you should just be able to do import file (if you're not in the right folder, just run cd folder in IPython first.) | 1 | 0 | 0 | I have downloaded a package (scikit-learn) from GitHub and put the source code in a repository folder (Windows 7 64-bit).
After modifying the source code, how can I load the package into the IPython notebook for testing?
Should I copy-paste the modified code into the site-packages folder?
(What about the currently installed original scikit-learn package?)
Can I add the modified folder to the Python path?
How do I manage versioning when loading the package in Python, since both have the same name?
(i.e. the original package vs. the package I modified)
Sorry, these look like beginner questions, but I could not find anything on how to start. | How to load a code source modified package in Python? | 0 | 0 | 0 | 85
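One low-tech option for the path question, sketched below: put the modified checkout first on sys.path before importing (the path is hypothetical, and a package with compiled extensions, like scikit-learn, must be built in place first).

```python
import sys

# The modified checkout shadows the site-packages copy because it
# comes first on the search path.
sys.path.insert(0, r"C:\repos\scikit-learn")

import sklearn
print(sklearn.__file__)  # confirms which copy was actually imported
```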
39,027,681 | 2016-08-18T21:18:00.000 | 3 | 0 | 0 | 0 | python,sql-server,pypyodbc | 39,027,828 | 1 | true | 0 | 0 | It sounds like it's not pointing to the correct database. Have you made sure the connection information changes to point to the correct DB? So the server name is correct, the login credentials are good, etc.? | 1 | 0 | 0 | I am facing a strange problem right now. I am using pypyodbc to insert data into a test database hosted by AWS. This database that I created was by hand and did not imitate all relations and whatnot between tables. All I did was create a table with the same columns and the same datatypes as the original (let's call it master) database. When I run my code and insert the data it works in the test environment. Then I change it over to the master database and the code runs all the way through but no data is actually inputted. Is there any chance that there are security protocols in place which prevent me from inputting data in through the Python script rather than through a normal SQL query? Is there something I am missing? | Same code inserts data into one database but not into another | 1.2 | 1 | 0 | 54 |
39,029,810 | 2016-08-19T01:30:00.000 | 0 | 0 | 1 | 0 | python,pip,python-3.5 | 41,795,892 | 3 | false | 0 | 0 | You can either go directly to the dictionary of where Pip is installed, like Scripts on Windows. From that hold down the left shift button and right-click on it. Then click on "open command prompt here", something like that (depends on the language). Now you should be able to use every pip commands without an error. | 2 | 0 | 0 | I have updated my path for python 3.5.2 to its installation folder and installed pip manually through the get-pip.py file.
PIP is saying "Requirement already up-to-date: pip in c:\users\MyName\appdata\local\programs\python\python35-32\lib\site-packages"
When typing pip into CMD, it is saying that it is not recognized. Any alternatives? | PIP doesn't seem to be installing correctly | 0 | 0 | 0 | 129 |
39,029,810 | 2016-08-19T01:30:00.000 | 1 | 0 | 1 | 0 | python,pip,python-3.5 | 39,029,852 | 3 | true | 0 | 0 | You need adding pip.exe directory (C:\Pythonxxx\Scripts) to PATH Environment Variable in Windows. | 2 | 0 | 0 | I have updated my path for python 3.5.2 to its installation folder and installed pip manually through the get-pip.py file.
PIP is saying "Requirement already up-to-date: pip in c:\users\MyName\appdata\local\programs\python\python35-32\lib\site-packages"
When typing pip into CMD, it is saying that it is not recognized. Any alternatives? | PIP doesn't seem to be installing correctly | 1.2 | 0 | 0 | 129 |
39,031,796 | 2016-08-19T05:39:00.000 | -1 | 0 | 1 | 0 | python-3.x,spyder | 57,203,241 | 5 | false | 0 | 0 | Control Enter is a quick way of executing a line or block of code in both R Studio & Python.
In Spyder, make sure the line or block is highlighted before you hit 'ctrl-enter' | 5 | 34 | 0 | I am very new to Python and I am used to R studio so I choose Spyder. On the Spyder layout I saw a button 'run current line (ctrl +f10)'. But it doesn't work by pressing the button or c+10. Am I missing something? I can only select the script and 'ctrl+enter ' to run current line which is not convenient at all. I am using ubuntu with Anaconda distribution. | How to run current line in Spyder 3.5( ctrl +f10 not working) | -0.039979 | 0 | 0 | 61,819 |
39,031,796 | 2016-08-19T05:39:00.000 | 1 | 0 | 1 | 0 | python-3.x,spyder | 53,450,347 | 5 | false | 0 | 0 | Some keyboards have a different layout than others in terms of what the keys are supposed to do. For me running happens if done via Fn + F9. | 5 | 34 | 0 | I am very new to Python and I am used to R studio so I choose Spyder. On the Spyder layout I saw a button 'run current line (ctrl +f10)'. But it doesn't work by pressing the button or c+10. Am I missing something? I can only select the script and 'ctrl+enter ' to run current line which is not convenient at all. I am using ubuntu with Anaconda distribution. | How to run current line in Spyder 3.5( ctrl +f10 not working) | 0.039979 | 0 | 0 | 61,819 |
39,031,796 | 2016-08-19T05:39:00.000 | 6 | 0 | 1 | 0 | python-3.x,spyder | 55,779,875 | 5 | false | 0 | 0 | F9 is the key that does the job for you.
To replicate the RStudio style, go to Preferences in Tools menu and go to Keyboard Shortcuts.
Since Ctrl + Enter is assigned to another function, change that first.
Then assign the F9 key value to Ctrl + Enter. Now Spyder is the same as RStudio. At least in a way. | 5 | 34 | 0 | I am very new to Python and I am used to R studio so I choose Spyder. On the Spyder layout I saw a button 'run current line (ctrl +f10)'. But it doesn't work by pressing the button or c+10. Am I missing something? I can only select the script and 'ctrl+enter ' to run current line which is not convenient at all. I am using ubuntu with Anaconda distribution.
39,031,796 | 2016-08-19T05:39:00.000 | 21 | 0 | 1 | 0 | python-3.x,spyder | 50,222,294 | 5 | false | 0 | 0 | Coming from R studio I imagine you were hoping to have a command that runs the next command, rather than just that one row (which can break a command into several parts and cause errors).
The exact equivalent doesn't exist yet but if you get accustomed to adding #%% before and after chunks ("cells") you want to run together then you can use the following commands to run the whole chunk.
Run cell: Ctrl + Return
Run cell and advance : Shift+Return | 5 | 34 | 0 | I am very new to Python and I am used to R studio so I choose Spyder. On the Spyder layout I saw a button 'run current line (ctrl +f10)'. But it doesn't work by pressing the button or c+10. Am I missing something? I can only select the script and 'ctrl+enter ' to run current line which is not convenient at all. I am using ubuntu with Anaconda distribution. | How to run current line in Spyder 3.5( ctrl +f10 not working) | 1 | 0 | 0 | 61,819 |
39,031,796 | 2016-08-19T05:39:00.000 | 61 | 0 | 1 | 0 | python-3.x,spyder | 39,037,466 | 5 | true | 0 | 0 | The key to run the current line by itself is F9. The shortcut ctrl+F10 is used if you are in debugging mode.
You can see a list of shortcuts by selecting Preferences in the Tool menu, and then clicking on Keyboard shortcuts. | 5 | 34 | 0 | I am very new to Python and I am used to R studio so I choose Spyder. On the Spyder layout I saw a button 'run current line (ctrl +f10)'. But it doesn't work by pressing the button or c+10. Am I missing something? I can only select the script and 'ctrl+enter ' to run current line which is not convenient at all. I am using ubuntu with Anaconda distribution. | How to run current line in Spyder 3.5( ctrl +f10 not working) | 1.2 | 0 | 0 | 61,819 |
39,035,360 | 2016-08-19T09:16:00.000 | 1 | 0 | 1 | 1 | python,caffe,pycaffe | 42,746,031 | 1 | false | 0 | 0 | I've got the same error while building the matcaffe interface with python 3.5, so I downgraded Anaconda and Python to version 2.7 and it succeeded. | 1 | 1 | 0 | I am trying to compile pycaffe in Windows 7 using Anaconda 3 and Visual studio 2013. I have set the anaconda path and lib path correctly. When I try to build I am getting the following error:
"Error 1 error LNK1104: cannot open file 'python27.lib' D:\caffe-master\windows\caffe\LINK caffe"
I am using Python 3.6 but not sure why the build is looking for the 2.7 lib. How do I make the build pick the correct python lib?
Thanks | pycaffe windows - cannot open python27.lib | 0.197375 | 0 | 0 | 1,017 |
39,045,825 | 2016-08-19T18:42:00.000 | 0 | 1 | 0 | 0 | python-2.7,hadoop,hdfs,tweepy,centos7 | 39,046,078 | 1 | true | 0 | 0 | It looks like you're using Anaconda's Python to run your script, but you installed tweepy into CentOS's system installation of Python using pip. Either use conda to install tweepy, or use Anaconda's pip executable to install tweepy onto your Hadoop cluster. | 1 | 0 | 1 | I have a Hadoop Cluster running on Centos 7. I am running a program (sitting on HDFS) to extract tweets and I need to import tweepy for that. I did pip install tweepy as root on all the nodes of the cluster but i still get an import error when I run the program.
Error says: ImportError: No module named tweepy
I am sure Tweepy is installed because, pip freeze | grep "tweepy" returns tweepy==3.5.0.
I created another file x.py with just one line import tweepy in the /tmp folder and that runs without an error. Error occurs only on HDFS.
Also, my default python is Python 2.7.12 which I installed using Anaconda. Can someone help me with this issue? The same code is running without any such errors on another cluster running on Centos 6.6. Is it an OS issue? Or do I have to look into the Cluster? | Tweepy import Error on HDFS running on Centos 7 | 1.2 | 0 | 0 | 192 |
39,048,714 | 2016-08-19T22:54:00.000 | 1 | 0 | 1 | 0 | vim,ipython | 39,092,066 | 1 | true | 0 | 0 | So Thomas K. got the general problem: vim was exiting with an exit code of 1. The cause of that was a python flake8 checker that didn't like some formatting as I saved. | 1 | 1 | 0 | When I try to edit a multi-line command in IPython, it opens vim just fine.
I edit my code, but when I write and quit, I get the message
Editing... WARNING: Could not open editor.
And the edited code does not appear in IPython.
Any idea what this could be? I do have several plugins installed, and can list them if someone thinks a plugin might be the problem. | I get a "could not open editor" warning in IPython even though editor did open | 1.2 | 0 | 0 | 194 |
39,048,934 | 2016-08-19T23:26:00.000 | 0 | 0 | 1 | 0 | python,pycharm | 39,049,021 | 2 | false | 0 | 0 | I did this:
Create folder wherever you want your file to be
Open this folder in pycharm as your project (if you have a project open, close it and open this folder as your new project)
Open Project tab on the left
Right click your folder -> New -> Python File -> Create hello.py with print 'Hello World' as contents
From the top menus, go to Run-> from the drop-down click Run
Click hello so it knows to run that.
If you don't want to deal with the Run stuff you can always invoke your script from the command line with python hello.py.
Hello World! should appear in your console! | 1 | 0 | 0 | I have a seemingly simple task to do. Run this ( print('Hello World') ) line of code in PyCharm. No, I'm serious. I won't bother complaining about what I've tried or how much I've coded in the past (spoiler alert, a lot) because I just want to get it running. The online tutorial wants me to do some kind of project structure thing before I can even start, which didn't even work to begin with. I really just want to be able to run code from a file, so if anyone who's figured it out can tell me in a
"
Do this
then this
finally this
"
type format that would be wonderful, because I can't even run a single line from a file. | Basic PyCharm questions | 0 | 0 | 0 | 342
39,048,934 | 2016-08-19T23:26:00.000 | 1 | 0 | 1 | 0 | python,pycharm | 39,049,040 | 2 | false | 0 | 0 | Well, it's really simple; if you have used any code editor in the past you will probably understand what I'm doing here:
File >> New >> Python File >> "name of the file" >> Create.
To run the code simply click the play icon which is run 'project'.
Then the output should appear in a console. | 1 | 0 | 0 | I have a seemingly simple task to do. Run this ( print('Hello World') ) line of code in PyCharm. No, I'm serious. I won't bother complaining about what I've tried or how much I've coded in the past (spoiler alert, a lot) because I just want to get it running. The online tutorial wants me to do some kind of project structure thing before I can even start, which didn't even work to begin with. I really just want to be able to run code from a file, so if anyone who's figured it out can tell me in a
"
Do this
then this
finally this
"
type format that would be wonderful, because I can't even run a single line from a file. | Basic PyCharm questions | 0.099668 | 0 | 0 | 342
39,055,728 | 2016-08-20T15:22:00.000 | 1 | 0 | 1 | 0 | python,django,virtualenv | 39,055,884 | 4 | false | 1 | 0 | In the simplest terms, a virtual environment provides you with a development environment independent of the host operating system. You can install and use the necessary software in the /bin folder of the virtualenv, instead of using the software installed on the host machine.
Python development generally depends on various libraries and dependencies. For example, if you install the latest version of Django using sudo pip install django, that specific version will be available system-wide. Now, if you need to use another version of Django for a project, you can simply create a virtualenv, install that version of Django in it, and use it, without touching the Django version installed in the OS.
Yes, it is strongly recommended to set up a separate virtualenv for each project. Once you are used to it, it will seem fairly trivial and highly useful for development, removing a lot of future headaches. | 4 | 4 | 0 | I am very new to developing a web application with Django, and I came across setting up and using a virtual environment for Python.
So I ended up with some basic questions.
What does this virtual environment exactly mean?
Does it have any importance in the development of a web application using Django and Python modules?
Do I have to worry about setting up a virtual environment each time
in the development process? | importance of virtual environment setup for django with python | 0.049958 | 0 | 0 | 6,137
39,055,728 | 2016-08-20T15:22:00.000 | 0 | 0 | 1 | 0 | python,django,virtualenv | 66,072,526 | 4 | false | 1 | 0 | It allows you to switch between different dependencies and versions of Python and other systems like PIP and Django.
It is similar to using Docker where you can pick and choose each version. It is definitely recommended. If you are starting fresh and using the latest versions, you do not NEED to use it; however, it is good practice to just install virtualenv and start using it before you install Django. | 4 | 4 | 0 | I am very new to developing a web application with Django, and I came across setting up and using a virtual environment for Python.
So I ended up with some basic questions.
What does this virtual environment exactly mean?
Does it have any importance in the development of a web application using Django and Python modules?
Do I have to worry about setting up a virtual environment each time
in the development process? | importance of virtual environment setup for django with python | 0 | 0 | 0 | 6,137
39,055,728 | 2016-08-20T15:22:00.000 | 17 | 0 | 1 | 0 | python,django,virtualenv | 39,055,882 | 4 | true | 1 | 0 | A virtual environment is a way for you to have multiple versions of python on your machine without them clashing with each other. Each version can be considered a development environment, and you can have different versions of python libraries and modules all isolated from one another.
Yes, it's very important. For example, without a virtualenv, if you're working on an open source project that uses django 1.5 but locally on your machine you installed django 1.9 for other personal projects, it's almost impossible for you to contribute, because you'll get a lot of errors due to the difference in django versions. If you decide to downgrade to django 1.5, then you can't work on your personal projects anymore because they depend on django 1.9.
A virtualenv handles all this for you by enabling you to create separate virtual (development) environments that aren't tied to each other and can be activated and deactivated easily when you're done. You can also have different versions of python.
You're not forced to, but you should; it's as easy as:
virtualenv newenv
cd newenv
source bin/activate # The current shell uses the virtual environment
Moreover, it's very important for testing. Let's say you want to port a django web app from 1.5 to 1.9: you can easily do that by creating different virtualenvs and installing different versions of django. It's impossible to do this without uninstalling one version (unless you want to mess with sys.path, which isn't a good idea). | 4 | 4 | 0 | I am very new to developing a web application with Django, and I came across setting up and using a virtual environment for Python.
So I ended up with some basic questions.
What does this virtual environment exactly mean?
Does it have any importance in the development of a web application using Django and Python modules?
Do I have to worry about setting up a virtual environment each time
in the development process? | importance of virtual environment setup for django with python | 1.2 | 0 | 0 | 6,137
39,055,728 | 2016-08-20T15:22:00.000 | 2 | 0 | 1 | 0 | python,django,virtualenv | 39,055,867 | 4 | false | 1 | 0 | While I can't directly describe the experience with Django and virtual environments, I suspect it's pretty similar to how I have been using Flask and virtualenv.
A virtual environment does exactly what it says: an environment is set up for you to develop your app (including your web app) that does not impact the libraries you run on your machine. It creates a blank slate, so to speak, with just the core Python modules. You can use pip to install new modules and freeze them into a requirements.txt file so that any users (including yourself) can see which external libraries are needed.
It has a lot of importance because of the ability to track external libraries. For instance, I program between two machines and I have a virtual environment set up on either machine. The requirements.txt file allows me to install only the libraries I need with the exact versions of those libraries. This guarantees that when I am ready to deploy on a production machine, I know which libraries I need. This prevents any modules that I have installed outside of a virtual environment from impacting the program that I run within a virtual environment.
Yes and no. I think it is good practice to use a virtual environment for the above reasons, and it keeps your projects clean. Not to mention, it is not difficult to set up a virtual environment and maintain it. If you're just running a small script to check on an algorithm or approach, you may not need a virtual environment. But I would still recommend doing so to keep your runtime environments clean and well managed. | 4 | 4 | 0 | I am very new to developing a web application with Django, and I came across setting up and using a virtual environment for Python.
So I ended up with some basic questions.
What does this virtual environment exactly mean?
Does it have any importance in the development of a web application using Django and Python modules?
Do I have to worry about setting up a virtual environment each time
in the development process? | importance of virtual environment setup for django with python | 0.099668 | 0 | 0 | 6,137
39,056,356 | 2016-08-20T16:27:00.000 | 11 | 0 | 1 | 0 | python,python-3.x,python-venv | 39,056,653 | 2 | true | 0 | 0 | venv has the activate script which you can modify to add your environment variables.
I would add the variables at the bottom, making a nice comment block to clearly separate the core functionality and my custom variables. | 1 | 13 | 0 | I'm using venv (used pyvenv to create the environment) and would like to set up environment variables here, but postactivate looks like a virtualenv thing. Can this be done with venv? | How can I use a postactivate script using Python 3 venv? | 1.2 | 0 | 0 | 2,737 |
39,057,569 | 2016-08-20T18:43:00.000 | 2 | 0 | 0 | 0 | python,amazon-web-services,encryption,aws-lambda,boto3 | 39,058,104 | 2 | false | 0 | 0 | If you turn on the debug logging you should see exactly how the data is transmitted. Or try netstat or Wireshark to see if it makes a connection to port 443 rather than 80.
From my experience with boto3 and S3 (not Lambda) it uses HTTPS, which I would consider somewhat secure. I hope the certificates are verified... | 1 | 1 | 0 | I am using the Python AWS API. I would like to invoke a lambda function from client code, but I have not been able to find documentation on whether the payload sent during invocation is encrypted.
Can someone watching the network potentially snoop on the AWS invocation payload? Or is the payload transmitted over a secure channel? | Does Boto3 encrypt the payload during transmission when invoking a lambda function? | 0 | 0 | 1 | 943 |
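To check it yourself, a sketch along the lines of the first answer: turn on botocore's debug logging (which shows the endpoint used) and invoke a hypothetical function.

```python
import json
import logging

import boto3

# Log the wire-level details, including the https:// endpoint used.
boto3.set_stream_logger("botocore", logging.DEBUG)

client = boto3.client("lambda", region_name="us-east-1")
response = client.invoke(
    FunctionName="my-function",            # hypothetical function name
    Payload=json.dumps({"key": "value"}),
)
print(response["Payload"].read())
```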
39,060,596 | 2016-08-21T03:26:00.000 | 0 | 1 | 1 | 0 | python,performance,import,module | 39,060,635 | 1 | false | 0 | 0 | When a python program runs, the main script is always passed through the interpreter. When a module is imported, however, python checks its cache (a subdirectory named __pycache__) where it stores modules that have previously been compiled to bytecode. If the date of the cached copy matches the date of the source code, it uses the cached version. That probably accounts for what you are seeing. | 1 | 0 | 0 | I have a script 'xyz.py' that I'm importing as a module for another script (Main.py). Everything in xyz.py is inside of a class that I call in Main.py. Both xyz.py and Main.py share the same import statements: "xml.etree.ElementTree"; "Tkinter"; "cv2"; "tkFileDialog"; "tkfd"; "from PIL import Image"; "ImageTk"; "os"
I noticed that when I run in Main.py the class having all the methods and statements of xyz.py, they run faster as a module than as the main script.
Is there a general fact behind this observation that I could use to speed up other stuff? Thank you.
PS: I didn't provide the code because it sums up to >400 lines, and I don't know exactly what I'm supposed to be looking at, so I'm not able to take a small and relevant sample. | why does a script run faster as a module? | 0 | 0 | 0 | 50 |
39,062,263 | 2016-08-21T08:21:00.000 | 0 | 0 | 1 | 0 | python,ios,xcode | 47,449,527 | 4 | false | 1 | 0 | I had the exact same thing happen to me, except on High Sierra. I had deleted the old version folders of Python in /System/Library/Frameworks/Python.framework/Versions/, which was a mistake, seeing that these are the Apple-installed Python files. After trying to launch Xcode, Xcode could no longer access the Python files it needed. Unfortunately I had deleted them and emptied the trash, so the only way I could restore those files was by reinstalling High Sierra.
So if you run into this plugin error and you've messed with Python files, you need to recover those files either by taking them back out of the trash or by reinstalling your operating system (reinstalling doesn't erase the data on your computer, but it will add missing files, such as the Python ones I deleted).
Hope that helps someone in a similar situation. | 1 | 3 | 0 | when launching Xcode beta 8 on a macOS Sierra beta I'm getting this error:
Loading a plug-in failed.
The plug-in or one of its prerequisite plug-ins may be missing or damaged and may need to be reinstalled.
After searching, it seems that the issue is related to Python and the new security measures that Apple introduced after XcodeGhost.
I couldn't find a solution; can anybody help?
EDIT
By looking at the Xcode logs, I noticed that it has NOTHING (apparently) to do with Python.
I see a whole bunch of
Requested but did not find extension point with identifier Xcode.*
errors
I have to say that I also have Xcode 7 installed on my machine. | Xcode 8: Loading a plug-in failed | 0 | 0 | 0 | 2,895 |
39,062,605 | 2016-08-21T09:08:00.000 | 0 | 0 | 0 | 0 | django,amazon-s3,python-django-storages | 39,062,626 | 1 | false | 1 | 0 | The URL is relative to the Amazon storage address you provide in your settings, so you only need to move the images to a new bucket and update your settings. | 1 | 0 | 0 | I have a Django application where I use django-storages and amazon s3 to store images.
I need to move those images to a different account: different user, different bucket.
I wanted to know how I can migrate those pictures.
My main concern is the links in my database to all those images: how do I update them? | changing s3 storages with django-storages | 0 | 1 | 0 | 82
39,064,796 | 2016-08-21T13:36:00.000 | 0 | 0 | 1 | 1 | python,python-2.7,terminal | 39,064,972 | 2 | false | 0 | 0 | You would need to have Python support built into the software.
Also, I believe this is a task for GCSE Computing this year, as I was privileged enough to choose which test we are doing, and there was a question about serial numbers. | 1 | 0 | 0 | I'm writing code to read serial input. Once the serial input has been read, I have to add a time stamp below it and then the output from a certain software. To get the output from the software, I want Python to write a certain command to the terminal, and then read the output that comes on the terminal. Could you suggest how I go about doing the last step, namely writing to the terminal and then reading the output? I'm a beginner in Python, so please excuse me if this sounds trivial. | Giving input to terminal in python | 0 | 0 | 0 | 338
39,069,794 | 2016-08-21T23:43:00.000 | 0 | 0 | 1 | 0 | python,xlwings,comtypes | 39,083,271 | 1 | false | 0 | 1 | maybe this would work better:
click on the "WinPython Command Prompt" icon of your WinPython distribution
in the DOS window that opens, type:
pip install xlwings | 1 | 1 | 0 | Python and xlwings in same folder. comtypes folder in xlwings folder
can't find module named 'comtypes'
The xlwings documentation says to install with pip. This puts xlwings in the C:\Python27 folder. WinPython ends up in the Downloads/WinPython-64bit-3.4.4-3Qtr5/ folder (1.37GB, btw); I moved the xlwings folder to the WinPython install folder.
This is way too difficult. Is there a straightforward way to set all this up so I can run a python script and get import xlwings as xw to work? | win 10 WinPython Script import xlwings get errors | 0 | 0 | 0 | 256 |
39,074,638 | 2016-08-22T08:28:00.000 | 1 | 0 | 0 | 1 | python,ibm-cloud,openwhisk | 39,074,811 | 1 | false | 0 | 0 | This is not currently possible.
OpenWhisk can only create Actions from Docker images stored in the external Docker Hub registry. | 1 | 1 | 0 | I pushed my Docker image to my Bluemix registry; I ran the container on Bluemix just fine; I have also set up a skeleton OpenWhisk rule which triggers a sample Python action but wish to trigger the image in my Bluemix registry as the action.
But, as far as I can see from the OpenWhisk documents, it is only possible to trigger Docker actions hosted on Docker Hub (per the wsk sdk install docker skeleton).
Can OpenWhisk trigger Docker actions in my Bluemix registry? | Can OpenWhisk trigger Docker actions in my Bluemix registry? | 0.197375 | 0 | 0 | 128 |
39,080,416 | 2016-08-22T13:07:00.000 | 1 | 0 | 1 | 0 | python,syntax,ternary-operator | 39,080,644 | 5 | false | 0 | 0 | I think that first it will check <condition>; if it's True, then it evaluates X, and as long as X is truthy the whole expression is X and Y is never evaluated.
But if <condition> fails, it skips X entirely; the and part yields a falsy value, so the or kicks in and Y is evaluated and returned. | 2 | 6 | 0 | In Java or C we have <condition> ? X : Y, which translates into Python as X if <condition> else Y.
But there's also this little trick: <condition> and X or Y.
While I understand that it's equivalent to the aforementioned ternary operators, I find it difficult to grasp how and and or operators are able to produce correct result. What's the logic behind this? | X and Y or Z - ternary operator | 0.039979 | 0 | 0 | 3,890 |
39,080,416 | 2016-08-22T13:07:00.000 | 0 | 0 | 1 | 0 | python,syntax,ternary-operator | 39,080,730 | 5 | false | 0 | 0 | This makes use of the fact that precedence of and is higher than or.
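A quick interactive demonstration, including the classic pitfall that makes this trick weaker than a real ternary (the values are just illustrative):
>>> True and "X" or "Y"
'X'
>>> False and "X" or "Y"
'Y'
>>> True and "" or "Y"   # pitfall: X itself is falsy, so Y wins even though the condition held
'Y'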
So <condition> and X or Y is basically (<condition> and X) or Y. If <condition> and X evaluates to something truthy, there is no need to evaluate further, as truthy-value or Y short-circuits to that value. If <condition> and X evaluates to something falsy, then Y is returned, since falsy-value or Y is basically Y. | 2 | 6 | 0 | In Java or C we have <condition> ? X : Y, which translates into Python as X if <condition> else Y.
But there's also this little trick: <condition> and X or Y.
While I understand that it's equivalent to the aforementioned ternary operators, I find it difficult to grasp how and and or operators are able to produce correct result. What's the logic behind this? | X and Y or Z - ternary operator | 0 | 0 | 0 | 3,890 |
39,085,646 | 2016-08-22T17:43:00.000 | -1 | 1 | 0 | 0 | python,graph,igraph | 39,735,109 | 2 | false | 0 | 0 | So I figured it out. What you need to do is find a community structure, either pre-defined or using one of the methods provided for community detection, such as infomap or label_propagation. This gives you a vertex clustering, which you can apply to another graph and from that use .q to find the modularity. | 1 | 0 | 0 | I have two related graphs created in iGraph, A and G. I find community structure in G using either the infomap or label_propagation methods (because they are the two that allow for weighted, directional links). From this, I can see the modularity of this community for the G graph. However, I need to see what modularity this will provide for the A graph. How can I do this? | Find overlapping modularity in two graphs - iGraph in Python | -0.099668 | 0 | 0 | 366
39,086,368 | 2016-08-22T18:29:00.000 | 5 | 0 | 1 | 1 | python | 39,086,415 | 2 | true | 0 | 0 | You can't read more bytes than is in the file. "End of file" literally means exactly that. | 1 | 0 | 0 | I'm trying to read beyond the EOF in Python, but so far I'm failing (also tried to work with seek to position and read fixed size).
I've found a workaround which only works on Linux (and is quite slow, too) by working with debugfs and subprocess, but this is too slow and does not work on Windows.
My Question: is it possible to read a file beyond EOF in python (which works on all platforms)? | python: read beyond end of file | 1.2 | 0 | 0 | 1,809 |
39,086,388 | 2016-08-22T18:30:00.000 | 2 | 1 | 0 | 0 | python,amazon-web-services,boto3,boto | 39,086,478 | 2 | false | 1 | 0 | Boto is a Python wrapper for AWS APIs. If you want to interact with AWS using its published APIs, you need the boto/boto3 library installed. Boto will not be supported for long, so if you are starting to use Boto, use Boto3, which is much simpler than Boto.
Boto3 supports (almost) all AWS services. | 2 | 1 | 0 | Maybe this is a silly question. I just set up a free Amazon Linux instance according to the tutorial; what I want to do is simply run Python scripts.
Then I googled AWS and Python, and Amazon mentioned Boto.
I don't know why I would use Boto, because if I type python, it is already installed.
What I want to do is run a script during the daytime.
Is there a need for me to read about Boto, or can I just run xx.py on AWS?
Any help is appreciated. | Running Python scripts on Amazon Web Services? Do I need to use Boto? | 0.197375 | 0 | 1 | 561 |
39,086,388 | 2016-08-22T18:30:00.000 | 3 | 1 | 0 | 0 | python,amazon-web-services,boto3,boto | 39,086,443 | 2 | true | 1 | 0 | Boto is a Python interface to Amazon services (like copying to S3, etc.).
You don't need it just to run regular Python, as you would on any Linux instance with Python installed; you only need it to access AWS services from your EC2 instance. | 2 | 1 | 0 | Maybe this is a silly question. I just set up a free Amazon Linux instance according to the tutorial; what I want to do is simply run Python scripts.
Then I googled AWS and Python, and Amazon mentioned Boto.
I don't know why I would use Boto, because if I type python, it is already installed.
What I want to do is run a script during the daytime.
Is there a need for me to read about Boto, or can I just run xx.py on AWS?
Any help is appreciated. | Running Python scripts on Amazon Web Services? Do I need to use Boto? | 1.2 | 0 | 1 | 561 |
39,086,420 | 2016-08-22T18:32:00.000 | 1 | 0 | 0 | 1 | python,python-requests | 39,086,692 | 1 | true | 0 | 0 | requests is an HTTP request library, while Spark's wordcount example provides a raw socket server, so no, requests is not the right package to communicate with your Spark app. | 1 | 0 | 0 | I have an application (a Spark-based service) which, when it starts, works like the following.
At localhost:9000
if I do nc -lk localhost 9000
and then start entering the text, it takes the text entered in the terminal as an input and does a simple wordcount computation on it.
How do I use the requests library to programmatically send the text, instead of manually writing it in the terminal?
Not sure if my question makes sense. | Using requests package to make request | 1.2 | 0 | 1 | 32
39,086,434 | 2016-08-22T18:33:00.000 | 1 | 1 | 0 | 0 | python,django,email,testing | 39,087,027 | 1 | false | 1 | 0 | You don't need to define the EMAIL_BACKEND setting (it has a default), but you do need to define a settings module. You can set DJANGO_SETTINGS_MODULE in your shell environment, or set os.environ['DJANGO_SETTINGS_MODULE'] to point to your settings module.
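For example, a minimal sketch from a plain python shell (the settings path 'myproject.settings' is a placeholder for your own):
import os
import django
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
django.setup()
from django.test.utils import setup_test_environment
setup_test_environment()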
Note that calling python manage.py shell will set up the Django environment for you, which includes setting DJANGO_SETTINGS_MODULE and calling django.setup(). You still need to call setup_test_environment() to manually run tests in your python shell. | 1 | 0 | 0 | I want to test a view in my Django application. So I open the python shell by typing python and then I type from django.test.utils import setup_test_environment. It seems to work fine. Then I type setup_test_environment() and it says
django.core.exceptions.ImproperlyConfigured: Requested setting
EMAIL_BACKEND, but settings are not configured. You must either define
the environment variable DJANGO_SETTINGS_MODULE or call
settings.configure() before accessing settings.
I don't need to send mails in my test, so why does Django want me to configure an email back-end?
Are we forced to configure an email back-end for any test, even if it doesn't need it? | django testing without email backend | 0.197375 | 0 | 0 | 969
39,087,037 | 2016-08-22T19:09:00.000 | 0 | 1 | 1 | 1 | eclipse,python-3.x,pydev | 39,088,428 | 1 | false | 0 | 0 | I figured out that I was opening Eclipse in the wrong workspace. When I found the correct workspace for that project (by looking for the .metadata file on my C drive) everything was all set (and I didn't have to import the project at all).
I was going to delete the question, but figured instead I'd answer in case this helps someone else. | 1 | 0 | 0 | I need to install an existing pydev project into Eclipse on a new machine. (Actually it is the same machine, but re-imaged.) The new machine has Eclipse Neon. I was using an older version previously.
My data has all been copied over. I have the folder where the project lived on my old machine, which includes the .project and .pydevproject files. I used the Import wizard to import it, but I don't see my run configurations, pythonpath, etc.
Where might those be stored on my old machine, and can I recover them easily without setting them up again by hand? | Import Pydev Project into Eclipse on a new machine | 0 | 0 | 0 | 35 |
39,090,071 | 2016-08-22T23:10:00.000 | 2 | 0 | 1 | 0 | python,python-3.x,pip | 39,090,123 | 1 | true | 0 | 0 | Try pip.main(['install', '--upgrade', package]) instead. pip.main just takes arguments exactly like the command line version. | 1 | 2 | 0 | I saw an older question, suggesting to use pip.main(package), however this does not upgrade a package. I could not find anything. Thanks in advance. | Upgrade packages with pip from inside code | 1.2 | 0 | 0 | 257 |
39,090,768 | 2016-08-23T00:53:00.000 | 0 | 0 | 0 | 0 | python,selenium,testing | 39,091,712 | 2 | false | 1 | 0 | As suggested by saurabh, use
self.wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, OR.Sub_categories)))
(EC and By here are the usual selenium.webdriver.support.expected_conditions and selenium.webdriver.common.by.By imports.) Otherwise, put a sleep and see, although that is not advisable; maybe the xpath you have changes at the time of the page load. | 1 | 1 | 0 | The homepage for the web application I'm testing has a loading screen when you first load it, then a username/password box appears. It is a dynamically generated UI element and the cursor defaults to being inside the username field.
I looked around and someone suggested using action chains. When I use action chains, I can immediately input text into the username and password fields and then press enter and the next page loads fine. Unfortunately, action chains are not a viable long-term answer for me due to my particular setup.
When I use the webdriver's find_element_by_id I am able to locate the element, but I am not able to send_keys to it because it is somehow not visible. I receive
selenium.common.exceptions.ElementNotVisibleException: Message: element not visible.
I'm also not able to click the field or otherwise interact with it without getting this error.
I have also tried identifying and interacting with the elements via other means, such as "xpaths" and css, to no avail. They are always not visible.
Strangely, it works with dynamic page titles. When the page first loads it is Loading... and when finished it is Login. The driver will return the current title when driver.title is called.
Does anyone have a suggestion? | Webpage contained within one dynamic page, unable to use driver interactions with Selenium (Python) | 0 | 0 | 1 | 69 |
39,096,384 | 2016-08-23T08:43:00.000 | 1 | 0 | 1 | 1 | python,environment-variables | 39,096,747 | 1 | false | 0 | 0 | While using bash, add this to
~/.bashrc:
export PYTHONPATH="${PYTHONPATH}:/Home/dev/path"
Make sure the directory you point to has an __init__.py file at the topmost level of your directory structure. | 1 | 0 | 0 | I want to know if it is possible to add the path to a directory to the environment variables permanently using Python. I have seen other questions that relate to mine, but the answers there only add the path temporarily; I want to know if there's a way to add it permanently | How do I add the path to a directory to the environment variables using python | 0.197375 | 0 | 0 | 53
39,110,800 | 2016-08-23T21:16:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,tkinter | 39,112,160 | 2 | false | 0 | 1 | Tkinter provides no option for this. Alt-tab is intercepted before tkinter ever sees it. If you want to do this, you'll have to find some platform-specific hooks. | 1 | 0 | 0 | How can I disable the Alt+Tab combination, especially Tab, in my Tkinter app?
I disabled Alt and F4 with - return "break" - but I can't disable the Tab key with it. | Disable Alt+Tab Combination on Tkinter App | 0 | 0 | 0 | 1,869
39,110,980 | 2016-08-23T21:29:00.000 | 0 | 0 | 0 | 1 | python,linux,jenkins,pip | 39,174,711 | 2 | false | 0 | 0 | Not a specific plug-in like you might want, but as was said, you can create a virtual environment in one of a few ways to get the functionality you're after.
Docker can handle this: you can create a small script to build a Docker image that has access to pip, and there are Jenkins plug-ins for Docker. | 1 | 0 | 0 | If pip is not installed on the jenkins linux-box,
is there any jenkins-plugin that lets me run pip, without installing it at the os-level? | Run pip through jenkins-plugin? | 0 | 0 | 0 | 639 |
39,111,598 | 2016-08-23T22:22:00.000 | 2 | 0 | 0 | 0 | python,database-design,amazon-dynamodb | 39,114,304 | 1 | true | 1 | 0 | You can use RabbitMQ to schedule jobs asynchronously. This would be faster than multiple DB queries. Basically, this tool allows you to create a job queue (containing UserID, StoreID & Timestamp) from which workers can pull jobs (at midnight if you want) and create your reports (or whatever your heart desires).
This also allows you to scale your system horizontally across nodes. Your workers can be different machines executing these tasks. You will also be safe if your DB crashes (though you may still have to design redundancy for a machine running RabbitMQ service).
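A minimal sketch of enqueueing one of those actions with the pika client (the queue name and payload fields are just illustrative):
import json
import pika  # pip install pika

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = conn.channel()
channel.queue_declare(queue='daily_actions', durable=True)  # survives broker restarts
channel.basic_publish(exchange='', routing_key='daily_actions',
                      body=json.dumps({'UserID': 1, 'StoreID': 7,
                                       'ActionID': 3, 'Timestamp': '2016-08-23T22:00:00Z'}))
conn.close()
A worker process then consumes from 'daily_actions' at midnight and builds the reports.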
DB should be used for persistent storage and not as a queue for processing. | 1 | 2 | 0 | I need to store some daily information in DynamoDB. Basically, I need to store user actions: UserID, StoreID, ActionID and Timestamp.
Each night I would like to process the information generated that day, do some aggregations and some reports, and then I can safely delete those records.
How should I model this? I mean the hash key and the sort key... I need to have the full timestamp of each action for the reports but in order to query DynamoDB I guess it is easier to also save the date only.
I have some PKs as UserID and StoreID but anyhow I need to process all data each night, not the data related to one user or one store...
Thanks!
Patricio | How to build model in DynamoDB if each night I need to process the daily records and then delete them? | 1.2 | 1 | 0 | 29 |
39,111,886 | 2016-08-23T22:50:00.000 | 0 | 0 | 1 | 0 | python,anaconda | 67,132,209 | 3 | false | 0 | 0 | I recommend installing the newest Anaconda version and using virtual environments. That way, you can set up a Python 3.4 environment. | 2 | 4 | 0 | How can I download Anaconda with previous Python versions like Python 3.4 64-bit.
The reason is Bloomberg API is only available up to 3.4 and 3.5 is not out yet. | Python 3.4 64-bit download | 0 | 0 | 0 | 14,421 |
39,111,886 | 2016-08-23T22:50:00.000 | 0 | 0 | 1 | 0 | python,anaconda | 44,196,539 | 3 | false | 0 | 0 | If you are in an environment with Python version 3.4.2, this command will update Python to 3.4.3, which is the latest version in the 3.4 branch
$ conda update python
This command will upgrade Python to another branch such as 3.5 by installing that version of Python:
$ conda install python=3.5
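And if you specifically need a separate Python 3.4 environment rather than changing the root install (the env name py34 is just a placeholder):
$ conda create -n py34 python=3.4 anaconda
$ source activate py34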
Hope that helps :) | 2 | 4 | 0 | How can I download Anaconda with previous Python versions like Python 3.4 64-bit.
The reason is Bloomberg API is only available up to 3.4 and 3.5 is not out yet. | Python 3.4 64-bit download | 0 | 0 | 0 | 14,421 |
39,113,533 | 2016-08-24T02:35:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,character-encoding | 39,114,483 | 2 | false | 0 | 0 | I just found that you need to do 2 things to achieve this:
Change Windows' display language to Chinese.
Use encoding UTF-16 in the writing process. | 1 | 1 | 0 | I have been trying several times to export Chinese from list variables to csv or txt file and found problems with that.
Specifically, I have already set the encoding as utf-8 or utf-16 when reading the data and writing them to the file. However, I noticed that I cannot do that when my Windows 7's base language is English, even when I change the language setting to Chinese. When I run the Python programs under Windows 7 with Chinese as the base language, I can successfully export and show Chinese perfectly.
I am wondering why that happens, and is there any solution helping me show Chinese characters in the exported file when running the Python programs under English-based Windows? | Python 3.5: Exporting Chinese Characters | 0 | 0 | 0 | 1,051
39,115,001 | 2016-08-24T05:22:00.000 | 1 | 0 | 1 | 0 | python | 39,115,859 | 2 | false | 0 | 0 | One of the simplest ways is just copying the site-packages directory from the original WinPython to the new one (it is assumed that the versions of the two WinPythons are the same, say Python 3.5).
If you think the above way is silly, then you can use pip instead.
Extract the installed packages from original winpython
(pip used below should belong to the original winpython)
pip freeze --all > python_packages.txt
Install the extracted package list with pip.
(pip used below should belong to the new winpython)
pip install -r python_packages.txt | 1 | 0 | 0 | I am using winpython. Now for simple distribution, I want to use winpython zero.
Is it possible to install the package from winpython folder to winpython zero folder? | How to install module from another python installation | 0.099668 | 0 | 0 | 142 |
39,115,603 | 2016-08-24T06:10:00.000 | 2 | 0 | 1 | 0 | python,debugging,pycharm | 39,115,848 | 3 | true | 0 | 0 | If you debug by pressing SHIFT-F9, it debugs the last file you debugged, which might be some file you debugged yesterday...
To debug a new file, press ALT-SHIFT-F9.
You can see these two different debugging options from the Run menu. There is Debug <last file> and there is Debug... | 2 | 0 | 0 | I am doing simple python coding in pycharm but the problem is whenever I debug it starts debugging some other file in my project and not the one I am working with.
I did go to run-->edit configuration and check if my file was set for debugging and it was but still it debugs another file when I start debugging.
any help will be appreciated | pycharm debugger not working properly | 1.2 | 0 | 0 | 5,098 |
39,115,603 | 2016-08-24T06:10:00.000 | 0 | 0 | 1 | 0 | python,debugging,pycharm | 67,144,827 | 3 | false | 0 | 0 | Saving Python files with the same name as a Python module causes debugging issues.
Rename the files to something else and it will work. | 2 | 0 | 0 | I am doing simple python coding in pycharm but the problem is whenever I debug it starts debugging some other file in my project and not the one I am working with.
I did go to run-->edit configuration and check if my file was set for debugging and it was but still it debugs another file when I start debugging.
any help will be appreciated | pycharm debugger not working properly | 0 | 0 | 0 | 5,098 |
39,116,877 | 2016-08-24T07:22:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,opencv,image-processing,histogram | 39,119,151 | 1 | false | 0 | 0 | You can remove the white color, rebin the histogram and then compare:
Compute a histogram with 256 bins.
Remove the white bin (or make it zero).
Regroup the bins to have 64 bins by adding the values of 4 consecutive bins.
Perform the compareHist().
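A minimal single-channel sketch of those four steps (assumes grayscale uint8 images and OpenCV 3 constant names; on OpenCV 2.4 use cv2.cv.CV_COMP_CORREL instead, and extend per channel for color):
import cv2
import numpy as np

def compare_without_white(img1, img2):
    h1 = cv2.calcHist([img1], [0], None, [256], [0, 256]).ravel()
    h2 = cv2.calcHist([img2], [0], None, [256], [0, 256]).ravel()
    h1[255] = 0.0  # zero out the white bin
    h2[255] = 0.0
    # regroup 4 consecutive bins into one, giving 64 bins
    h1 = h1.reshape(64, 4).sum(axis=1).astype(np.float32)
    h2 = h2.reshape(64, 4).sum(axis=1).astype(np.float32)
    cv2.normalize(h1, h1)
    cv2.normalize(h2, h2)
    return cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)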
This would work for any "predominant color". To generalize, you can do the following:
Compare full histograms. If they are different, then finish.
If they are similar, look for the predominant color (with a 256-bin histogram), and perform the procedure described above to remove the predominant color from the comparison. | 1 | 0 | 0 | my program's purpose is to take 2 images and decide how similar they are.
I'm not talking here about identical images, but about similarity. For example, if I take 2 screenshots of 2 different pages of the same website, their theme colors would probably be very similar, and therefore I want the program to declare that they are similar.
My problem starts when both images have a white background that pretty much takes over the histogram calculation (over 30% of the image is white and the rest is distributed).
In that case, cv2.compareHist (using the correlation method, which works for the other cases) gives very bad results, that is, the grade is very high even though the images look very different.
I have thought about taking the white (255) off the histogram before comparing, but that requires me to calculate the histogram with 256 bins, which is not good when I want to check similarity (I thought that using 32 or 64 bins would be best).
Unfortunately I can't add the images I'm working with due to legal reasons.
If anyone can help with an idea, or code that solves it, I would be very grateful.
Thank you very much. | python2.7 histogram comparison - white background anomaly | 0 | 0 | 0 | 161
39,121,432 | 2016-08-24T10:58:00.000 | 6 | 0 | 1 | 0 | python,pycharm | 39,121,575 | 2 | false | 0 | 0 | There is a redundant parentheses entry under Inspections (PyCharm 2016.1.4). Look closer.
If you still can't find it, there is a search bar in the top left corner of the settings menu. Search for redun and the redundant parentheses inspection should come up. | 1 | 4 | 0 | PyCharm decides that certain parentheses in my Python code are 'redundant'. I want to keep them anyway. So PyCharm started annoying me with green lines under them. I don't want to give in to PyCharm's quirks.
I was able to ignore other warnings in the following way:
File > Settings > Editor > Inspections > uncheck all warnings that you don't like..
Sadly, the 'redundant parentheses' warning does not appear in that list.
How do I ignore this warning? | How do I ignore the "redundant parentheses" feature in PyCharm? | 1 | 0 | 0 | 9,591 |
39,123,699 | 2016-08-24T12:43:00.000 | 0 | 0 | 0 | 1 | python,django,virtualenv | 39,124,070 | 2 | false | 1 | 0 | Thanks to @Oliver's and @Daniel's comments, which led me to the answer to why it did not work.
I started the virtual environment on my Debian with python 3. virtualenv made the virtual environment but it was specifically for Debian.
When I used it on my Mac, it could not run the python executable in the virtual environment (since that binary is only compatible with Debian), so it used my Mac's system Python, which is Python 2.7.10.
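A quick way to confirm which interpreter actually ran inside the activated environment (a small illustrative check):
import sys
print(sys.executable)  # points inside the venv only if the venv's own binary ran
print(sys.version)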
In summary, as virtualenv uses the python executable on the system, when the python executable is run on another system, it will not work. | 1 | 0 | 0 | I am working on a django project on two separate systems, Debian Jessie and Mac El Capitan. The project is hosted on github where both systems will pull from or push to.
However, I noticed that on my Debian, when I run python --version, it gives me Python 3.4.2 but on my Mac, it gives me Python 2.7.10 despite being in the same virtual environment. Moreover, when I run django-admin --version on my Debian, it gives me 1.10 while on my Mac, 1.8.3.
This happens even when I freshly clone the projects from github and run the commands.
Why is it that the virtual environment does not keep the same version of python and django? | Virtualenv gives different versions for different os | 0 | 0 | 0 | 89 |
39,126,411 | 2016-08-24T14:41:00.000 | 1 | 0 | 0 | 0 | python,image,filesystems | 39,126,518 | 1 | true | 0 | 0 | I see multiple solutions:
When you first create your images in the first folder, add a suffix to their name, for instance filexxx.jpg.part, and when they are fully written just rename them, removing the .part.
Then in your watchdog, be sure not to work on files ending with .part.
Alternatively, in your watchdog, test the image file: try to load it with an image library and catch the exceptions. | 1 | 0 | 0 | I'm currently working on a project that adds images to a folder. As they're added they also need to be moved (in groups of four) to a secondary folder, overwriting the images that are already in there (if any). I have it sort of working using watchdog.py to monitor the first folder. When the 'on_created' event fires I take the file path of the newly added image and copy it to the second folder using shutil.copy(), incrementing a counter and using the counter value to rename the image as it copies (so it becomes folder/1.jpg). When the counter reaches 4 it resets to 0 and the most recent 4 images are displayed on a web page. All these folders are in the local filesystem on the same drive.
My problem is that sometimes it seems the event fires before the image is fully saved in the first folder (the images are around 1Mb but vary slightly so I can't check file size) which results in a partial or corrupted image being copied to the second folder. At worst it throws an IOError saying the file isn't even there.
Any suggestions? I'm using OSX 10.11, Python 2.7. The images are all JPEGs. | How can I check a file has been copied fully to a folder before moving it using python | 1.2 | 0 | 0 | 244
39,126,445 | 2016-08-24T14:42:00.000 | 1 | 0 | 0 | 0 | python,django | 39,127,214 | 1 | false | 1 | 0 | I agree with the comments; there are prettier approaches than this.
You could add your code to the __init__.py of your app | 1 | 0 | 0 | I need to run some code every time my application starts. I need to be able to manipulate models, just like I would in actual view code. Specifically, I am trying to hack the built-in User model to support longer usernames, so my code is like this
def username_length_hack(sender, *args, **kwargs):
    model = sender._meta.model
    model._meta.get_field("username").max_length = 254
But I cannot seem to find the right place to do it. I tried adding a class_prepared signal handler in either models.py or app.py of the app that uses the User model (expecting that User will be loaded by the time this app's models are loaded). The post_migrate and pre_migrate handlers only run on the migrate command. Adding code into settings.py seems weird, and besides, nothing is loaded at that point anyway. So far, the only thing that worked was connecting it to a pre_init signal and having it run every time a User instance is spawned. But that seems like a resource drain. I am using Django 1.8. How can I run this on every app load? | Running code on Django application start | 0.197375 | 0 | 0 | 242
39,127,477 | 2016-08-24T15:31:00.000 | 2 | 1 | 0 | 0 | python,finance,back-testing | 49,956,148 | 1 | false | 0 | 0 | That's terribly slow. I run backtests on 350k+ min bars, including multiple signal generations, portfolio optimization, rebalancing, and an execution priority algorithm, in around 40 mins. Pure Python, no pandas, jit, or cython.
IMO, it will depend a lot on the level of sophistication and complexity of your many moving parts. | 1 | 3 | 0 | I'm currently developing an event-driven backtesting engine in Python. I would like to have an idea about how fast a high-speed backtesting engine should be, especially in Python. Right now, I can replay one year of 1 min bar data in about 10 hours. Is it fair to say the speed now is acceptable?
I know there are some open source backtesting engines on Github, like Pipline. I don't really know whether it is event-driven, because I did not play around with it before.
Does anyone have a good idea of how fast a good quality event-driven backtesting engine should be? Thank you so much for your help. | Event Driven Backtesting Engine Speed | 0.197375 | 0 | 0 | 818
39,127,624 | 2016-08-24T15:38:00.000 | 2 | 0 | 0 | 0 | python,unit-testing,continuous-integration,functional-testing | 39,130,155 | 2 | false | 0 | 0 | Here's another option. You could separate different test categories by directory. If you wanted to try this strategy, it may look something like:
python
    - modules
unit
    - pure unit test modules
functional
    - other unit test modules
In your testing pipeline, you can call your testing framework to only execute the desired tests. For example, with Python's unittest, you could run your 'pure unit tests' from within the python directory with
python -m unittest discover --start-directory ../unit
and the functional/other unit tests with
python -m unittest discover --start-directory ../functional
An advantage of this setup is that your tests are easily categorized and you can do any scaffolding or mocked up services that you need in each testing environment. Someone with a little more Python experience might be able to help you run the tests regardless of the current directory, too. | 1 | 3 | 0 | I am working on a project that has many "unit tests" that have hard dependencies that need to interact with the database and other APIs. The tests are a valuable and useful resource to our team, but they just cannot be ran independently, without relying on the functionality of other services within the test environment. Personally I would call these "functional tests", but this is just the semantics already established within our team.
The problem is, now that we are beginning to introduce more pure unit tests into our code, we have a medley of tests that do or do not have external dependencies. These tests can be run immediately after checking out code with no requirement to install or configure other tools. They can also be run in a continuous integration environment like Jenkins.
So my question is, how can I denote which is which for a cleaner separation? Is there an existing decorator within a unit testing library? | How to designate Python unit tests as having database dependency? | 0.197375 | 0 | 0 | 374
39,128,100 | 2016-08-24T16:03:00.000 | 0 | 0 | 0 | 0 | python,database,caching,redis,memcached | 39,128,415 | 2 | false | 1 | 0 | I had this exact question myself, with a PHP project, though. My solution was to use ElasticSearch as an intermediate cache between the application and database.
The trick to this is the ORM. I designed it so that when Entity.save() is called, the entity is first stored in the database, then the complete object (with all references) is pushed to ElasticSearch, and only then is the transaction committed and the flow returned to the caller.
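A rough sketch of that ordering (the original project was PHP; this Python version uses a placeholder db handle, while es.index follows the elasticsearch-py client):
def save(self, db, es):
    with db.transaction():  # db is a placeholder for your DB layer's transaction API
        db.insert(self.table, self.to_row())           # 1. write the row(s)
        es.index(index='entities', id=self.id,         # 2. push the complete object
                 body=self.to_dict(with_references=True))
    # 3. leaving the with-block commits; an ES failure aborts the DB write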
This way I maintained full functionality of a relational database (atomic changes, transactions, constraints, triggers, etc.) and still have all entities cached with all their references (parent and child relations) together with the ability to invalidate individual cached objects.
Hope this helps. | 1 | 2 | 0 | For my app, I am using Flask, however the question I am asking is more general and can be applied to any Python web framework.
I am building a comparison website where I can update details about products in the database. I want to structure my app so that 99% of users who visit my website will never need to query the database, where information is instead retrieved from the cache (memcached or Redis).
I require my app to be realtime, so any update I make to the database must be instantly available to any visitor to the site. Therefore I do not want to cache views/routes/html.
I want to cache the entire database. However, because there are so many different variables when it comes to querying, I am not sure how to structure this. For example, if I were to cache every query and then later need to update a product in the database, I would basically need to flush the entire cache, which isn't ideal for a large web app.
What I would prefer is to cache individual rows within the database. The problem is, how do I structure this so I can flush the cache appropriately when an update is made to the database? Also, how can I map all of this together from the cache?
I hope this makes sense. | How do I structure a database cache (memcached/Redis) for a Python web app with many different variables for querying? | 0 | 1 | 0 | 488 |
39,128,268 | 2016-08-24T16:10:00.000 | 0 | 0 | 1 | 0 | python | 39,132,110 | 2 | false | 0 | 0 | Adding to Edward's answer, I believe the PyEZ RPC calls are implemented using reflection (__call__ method), so today it is not aware of valid RPC calls nor args. The way to make it aware would be to load the Netconf schema dynamically from the device and use that to map the named arg to a tag or an element.
A potential issue from trying to abstract this calling convention from the user is what to do when there is a tag and an element with the same name for the same RPC – not sure if that is the case today or there are rules to prevent this in the schemas, but in that case the user of the call should be able to control what goes in the RPC doc IMHO. | 1 | 1 | 0 | When using RPC calls in PyEZ we add the parameters as named arguments like rpc.get_interface_information(terse="True", interface-name="xe-0/0/0"), however for configuration the options need to be within a dictionary like rpc.get_configuration({"inherit":"inherit", "groups":"groups"})
What's the reason for these differences? | PyEZ RPC options format differ between get_configuration and other calls | 0 | 0 | 0 | 285 |
39,137,476 | 2016-08-25T05:41:00.000 | 0 | 0 | 1 | 0 | python,headless,pyautogui | 42,882,644 | 1 | false | 0 | 0 | It is not possible to run PyAutoGUI in headless mode or on a remote desktop.
This feature is on the roadmap, but there is no timeline or resources dedicated to it. | 1 | 1 | 0 | Is it possible to run PyAutoGUI in headless mode on Windows 7, using Universal Termsrv.dll to create multiple seats? | Is it possible to run PyAutoGUI in headless mode? | 0 | 0 | 0 | 1,761
39,140,566 | 2016-08-25T08:43:00.000 | 0 | 1 | 0 | 0 | php,python,rabbitmq | 64,604,740 | 1 | false | 0 | 0 | Though you already might have solved the problem:
You could use 2 Queues instead of one.
produce -> Q1 -> Direct Exchange -> Q2 -> consume
Then you can dynamically delete the binding (API call "unbind") between the Exchange and Q2. Q2 then drains empty and messages queue up in Q1 until you bind it again after your maintenance.
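A sketch of that unbind/bind toggle with the pika client (exchange, queue and routing key names are placeholders):
import pika  # pip install pika

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()
# pause delivery: nothing routes into Q2 any more, so it drains while Q1 buffers
ch.queue_unbind(queue='Q2', exchange='my_direct_exchange', routing_key='jobs')
# ... migrate the consumers ...
ch.queue_bind(queue='Q2', exchange='my_direct_exchange', routing_key='jobs')  # resume delivery
conn.close()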
I wish there was something like "Pause Queue for x Minutes" to implement a simple retry mechanism. | 1 | 1 | 0 | I'm looking for a way to block message delivery for a moment and reactivate it without losing the messages.
The case is when we need to migrate consumers: I don't want messages to be delivered for, say, 10 minutes. I want to block queue delivery and then reactivate it.
Is there a way to do this? In Python or in PHP?
EDIT:
With this process I don't want to get consumers disconnected. I want it like putting the queue on hold, no message delivered to current consumers and then "reactivate it". | Block Queue delivery on RabbitMQ | 0 | 0 | 0 | 167 |
39,141,642 | 2016-08-25T09:34:00.000 | 1 | 0 | 0 | 0 | python,google-analytics,google-bigquery,google-cloud-platform | 39,172,452 | 1 | true | 0 | 0 | We are in the process of releasing a new feature that can update the schema of the destination table within a load/query job. With autodetect and the new feature you can directly load the new data to the existing table, and the schema will be updated as part of the load job. Please stay tuned. The current ETA is 2 weeks. | 1 | 0 | 0 | Background
I studied and found that BigQuery doesn't accept schemas defined by online tools (which have different formats, even though the meaning is the same).
So the problem I found is loading data (where the number of columns keeps varying and increasing dynamically) into a table which has a fixed schema.
Thoughts
What I could do as a workaround is:
First check if the data being loaded has extra fields.
If it does, a schema mismatch will occur, so first you create a temporary table in BQ and load this data into it using the "autodetect" parameter, which gives me a schema (in a format that BQ accepts for schema files).
Now I can download this schema file and use it to update my existing table in BQ and load it with the appropriate data.
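In bq command-line terms, that workaround looks roughly like this (dataset and table names are placeholders):
bq load --autodetect --source_format=NEWLINE_DELIMITED_JSON mydataset.temp_table data.json
bq show --schema --format=prettyjson mydataset.temp_table > schema.json
bq update mydataset.existing_table schema.json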
Suggestion
Any thoughts on this, if there is a better approach please share. | schema free solution to BigQuery Load job | 1.2 | 1 | 0 | 353 |
39,144,281 | 2016-08-25T11:39:00.000 | 1 | 0 | 1 | 1 | python,file,cmd,ghostscript,ram | 39,147,540 | 1 | false | 0 | 0 | You can't use RAM for the input and output file using the Ghostscript demo code, it doesn't support it. You can pipe input from stdin and out to stdout but that's it for the standard code.
You can use the Ghostscript API to feed data from any source, and you can write your own device (or co-opt the display device) to have the page buffer (which is what the input is rendered to) made available elsewhere. Provided you have enough memory to hold the entire page of course.
Doing that will require you to write code to interface with the Ghostscript shared object or DLL, of course. Possibly the Python library does this; I wouldn't know, not being a Python developer.
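To illustrate the stdin/stdout route mentioned above from Python (a hedged sketch; note that multi-page JPEG output is simply concatenated on stdout, so this is cleanest for a single page):
import subprocess

cmd = ['gs.exe', '-q', '-sDEVICE=jpeg', '-dTextAlphaBits=4', '-r300', '-o', '-', 'a.pdf']
jpeg_bytes = subprocess.check_output(cmd)  # rendered image bytes, never touching disk for output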
I suspect that the pointer from John Coleman is sufficient for your needs though. | 1 | 1 | 0 | I want to run this command from python:
gs.exe -sDEVICE=jpeg -dTextAlphaBits=4 -r300 -o a.jpg a.pdf
Using Ghostscript, to convert a PDF to a series of images. How do I use RAM for the input and output files? Is there something like StringIO that gives you a file path?
I noticed there's a python ghostscript library, but it does not seem to offer much more than the command line | Python write to ram file when using command line, ghostscript | 0.197375 | 0 | 0 | 574
39,149,554 | 2016-08-25T15:44:00.000 | 0 | 0 | 1 | 1 | python,macos,ipython,homebrew | 39,149,676 | 3 | false | 0 | 0 | To transfer all your packages you can use pip to freeze all of your packages installed in ipython and then install them all easily from the file that you put them in.
pip freeze > requirements.txt
then, to install them from the file: pip install -r requirements.txt
I'm not entirely sure if I understood what you're asking so if this isn't what you want to do please tell me. | 2 | 0 | 0 | I installed python via brew, and made it my default python. If I run which python, I obtain /usr/local/bin/python. Also pip is installed via brew, which pip returns /usr/local/bin/pip.
I do not remember how I installed ipython, but I didn't do it via brew, since when I type which ipython, I obtain /opt/local/bin/ipython. Is it the OS X version of ipython?
I installed all libraries on this version of ipython, for example I have matplotlib on ipython but not on python. I do not want to re-install everything again on the brew python, rather continue to install libraries on this version of ipython. How can I install new libraries there? For example, Python Image Library, or libjpeg?
If possible, I would like an exhaustive answer so as to understand my problem, and not just a quick fix tip. | brew python versus non-brew ipython | 0 | 0 | 0 | 2,385
39,149,554 | 2016-08-25T15:44:00.000 | 0 | 0 | 1 | 1 | python,macos,ipython,homebrew | 39,151,146 | 3 | false | 0 | 0 | OK, so I solved it by uninstalling MacPorts (and with it the ipython I was using, which was under /opt/local/bin) and installing ipython via pip. Then I re-installed what I needed (e.g. jupyter) via pip. | 3 | 0 | 0 | I installed python via brew, and made it my default python. If I run which python, I obtain /usr/local/bin/python. Also pip is installed via brew, which pip returns /usr/local/bin/pip.
I do not remember how I installed ipython, but I didn't do it via brew, since when I type which ipython, I obtain /opt/local/bin/ipython. Is it the OS X version of ipython?
I installed all libraries on this version of ipython, for example I have matplotlib on ipython but not on python. I do not want to re-install everything again on the brew python, rather continue to install libraries on this version of ipython. How can I install new libraries there? For example, Python Image Library, or libjpeg?
If possible, I would like an exhaustive answer so to understand my problem, and not just a quick fix tip. | brew python versus non-brew ipython | 0 | 0 | 0 | 2,385 |
39,151,732 | 2016-08-25T17:48:00.000 | 0 | 0 | 0 | 0 | python,rest,api,eloqua | 39,824,955 | 1 | false | 0 | 0 | If the Campaign Canvas has dependencies, that means the Campaign Canvas is explicitly referenced. For example, the dependent Segment or Filter includes a "Clicked Emails from Campaigns" filter criterion that references the Campaign Canvas.
In this example, in order to remove dependencies, you must edit the criteria to not include the Campaign Canvas you want to delete. | 1 | 0 | 0 | I need to delete a big number of canvases and they have filters and segments as dependencies.
I've done an application in Python which sends API calls to get the segments and filters by using a search key but I can't delete them because they are dependencies in canvases.
Is there a way to delete the segments and filters using Eloqua REST API? The segments are also dependencies on some newer canvases that shouldn't be deleted.
Thanks for the help! | Eloqua delete canvas dependencies using REST API | 0 | 0 | 1 | 200 |
39,153,790 | 2016-08-25T19:59:00.000 | 0 | 0 | 0 | 0 | javascript,python,django | 39,154,045 | 1 | false | 1 | 0 | Try opening the page in Chrome and hitting F12 - there's a tonne of developer tools and web page debuggers in there.
For your particular question about loading order, check the Network tab, then hit refresh on your page - it'll show you every file that the browser loads, starting with the HTML in your browsers address bar.
If you're trying to figure out javascript, check out the Sources tab. It even allows you to create break points -very handy for following along with a page is doing. | 1 | 0 | 0 | Ok, this question is going to sound pretty dumb, but I'm an absolute novice when it comes to web development and have been tasked with fixing a website for my job (that has absolutely nothing in the way of documentation).
Basically, I'm wondering if there is any tool or method for tracking the order a website loads files when it is used. I just want to know a very high-level order of the pipeline. The app I've been tasked with maintaining is written in a mix of django, javascript, and HTML (none of which I really know, besides some basic django). I can understand how django works, and I kind of understand what's going on with HTML, but (for instance) I'm at a complete loss as to how the HTML code is calling javascript, and how that information is transfered back to HTML. I wish I could show the code I'm using, but it can't be released publicly.
I'm looking for what amounts to a debugger that will let me step through each file of code, but I don't think it works like that for web development.
Thank you | Determining the Order Files Are Run in a Website Built By Someone Else | 0 | 0 | 0 | 22 |
39,154,611 | 2016-08-25T20:57:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,math,equation,exp | 71,583,338 | 6 | false | 0 | 0 | Just to add, numpy also has np.e | 1 | 48 | 0 | How can I write x.append(1-e^(-value1^2/2*value2^2)) in Python 2.7?
I don't know how to use the power operator and e. | How can I use "e" (Euler's number) and power operation in python 2.7 | 0 | 0 | 0 | 248,925
39,155,391 | 2016-08-25T22:00:00.000 | 0 | 0 | 1 | 0 | python,django,debugging,pycharm | 39,164,574 | 2 | false | 1 | 0 | It's really easy. You can debug your script by pressing Alt+F5 or the bug button in the PyCharm IDE. After that, the debugger handles the execution of the script. Now you can debug line by line with F10, and step into a function or other object by pressing F11. There is also a Watch window where you can trace your variable values while debugging. I really encourage you to search blogs on the internet; there are lots of tutorials in this area. | 1 | 0 | 0 | I have reached the stage in developing my Django project where I need to start debugging my code, as my site is breaking and I don't know why. I'm using Pycharm's IDE to code, and the debugger that comes with it is super intimidating!
Maybe because I am a total newbie to programming (been coding only since May) but I don't really understand how debugging, as a basic concept, works. I've read the Pycharm docs about debugging, but I'm still confused. What is the debugger supposed to do/how is it supposed to interact with your program? What helpful info about the code is debugging supposed to offer?
When I previously thought about debugging I imagined that it would be a way of running through the code line by line, say, and finding out "my program is breaking at this line of code," but "stepping through my code" seems to take me into files that aren't even part of my project (e.g. stepping into my code in admin.py will take me into the middle of a function in widgets.py?) etc. and seems to present lots of extra/confusing info. How do I use debugging productively? How can I use it to debug my Django webapp?
Please help! TIA :) | Can someone explain debugging / Pycharm's debugger in an easy to understand way? | 0.099668 | 0 | 0 | 109 |
39,155,669 | 2016-08-25T22:24:00.000 | -1 | 0 | 1 | 0 | python,windows,python-2.7,pip | 66,812,656 | 9 | false | 0 | 0 | It happens on Windows because you need admin rights to install anything on disk C.
I have the same issue; the Scripts folder was not installed. I would suggest installing it on disk D. | 5 | 9 | 0 | I have Python 2.7.11 installed on my machine which to my understanding should come with pip, however when I check the C:\Python27\Tools\Scripts\ directory there is no pip.exe present.
I have tried completely removing and reinstalling Python 2.7.11 without success. Running the installer pip is set to be installed, but after the install pip is nowhere to be found.
I also have Python 3.4 installed which has pip as expected. Any thoughts? | Python 2.7.11 pip not installed | -0.022219 | 0 | 0 | 24,089 |
39,155,669 | 2016-08-25T22:24:00.000 | 0 | 0 | 1 | 0 | python,windows,python-2.7,pip | 66,454,858 | 9 | false | 0 | 0 | I had the same issue:
Installed Python 27
Tried to use pip, but failed with unrecognized command error
Checked installation: no "C:\Python27\Scripts", only "C:\Python27\Tools\Scripts"
This issue happens only on some versions of Windows.
HOW TO FIX IT:
Uninstall Python 27
Reinstall Python 27, but unselect "pip" feature
Check installation: no "C:\Python27\Scripts" as expected
Start the installer again and use "Change Python"
Set "pip" and "Add Python.exe to Path" features to be installed
Check installation: "C:\Python27\Scripts" is now correctly present
So for some unknown reason, pip is not correctly installed on some versions of Windows if it is installed during default Python 27 setup. To fix this issue, pip must be installed afterwards using the "Change Python" setup. | 5 | 9 | 0 | I have Python 2.7.11 installed on my machine which to my understanding should come with pip, however when I check the C:\Python27\Tools\Scripts\ directory there is no pip.exe present.
I have tried completely removing and reinstalling Python 2.7.11 without success. Running the installer pip is set to be installed, but after the install pip is nowhere to be found.
I also have Python 3.4 installed which has pip as expected. Any thoughts? | Python 2.7.11 pip not installed | 0 | 0 | 0 | 24,089 |
39,155,669 | 2016-08-25T22:24:00.000 | 4 | 0 | 1 | 0 | python,windows,python-2.7,pip | 57,252,115 | 9 | false | 0 | 0 | I encountered the same problem - pip not installed - with python-2.7.16, Win10, installing for 'all users'. It was resolved when I allowed the MSI installer to target the default location (C:\Python27) rather than changing it to under Program Files (x86). I've no clue why this changed anything. | 5 | 9 | 0 | I have Python 2.7.11 installed on my machine which to my understanding should come with pip, however when I check the C:\Python27\Tools\Scripts\ directory there is no pip.exe present.
I have tried completely removing and reinstalling Python 2.7.11 without success. Running the installer pip is set to be installed, but after the install pip is nowhere to be found.
I also have Python 3.4 installed which has pip as expected. Any thoughts? | Python 2.7.11 pip not installed | 0.088656 | 0 | 0 | 24,089 |
39,155,669 | 2016-08-25T22:24:00.000 | 0 | 0 | 1 | 0 | python,windows,python-2.7,pip | 63,997,900 | 9 | false | 0 | 0 | Had the issue where no matter which version of python 2.7 I installed on windows 10 there was no pip.exe generated in the "Scripts" folder.
I solved it by ensuring that the MSI installer file had admin privileges before installing | 5 | 9 | 0 | I have Python 2.7.11 installed on my machine which to my understanding should come with pip, however when I check the C:\Python27\Tools\Scripts\ directory there is no pip.exe present.
I have tried completely removing and reinstalling Python 2.7.11 without success. Running the installer pip is set to be installed, but after the install pip is nowhere to be found.
I also have Python 3.4 installed which has pip as expected. Any thoughts? | Python 2.7.11 pip not installed | 0 | 0 | 0 | 24,089 |