Q_Id
int64
337
49.3M
CreationDate
stringlengths
23
23
Users Score
int64
-42
1.15k
Other
int64
0
1
Python Basics and Environment
int64
0
1
System Administration and DevOps
int64
0
1
Tags
stringlengths
6
105
A_Id
int64
518
72.5M
AnswerCount
int64
1
64
is_accepted
bool
2 classes
Web Development
int64
0
1
GUI and Desktop Applications
int64
0
1
Answer
stringlengths
6
11.6k
Available Count
int64
1
31
Q_Score
int64
0
6.79k
Data Science and Machine Learning
int64
0
1
Question
stringlengths
15
29k
Title
stringlengths
11
150
Score
float64
-1
1.2
Database and SQL
int64
0
1
Networking and APIs
int64
0
1
ViewCount
int64
8
6.81M
44,830,486
2017-06-29T16:24:00.000
1
0
1
0
python,windows,anaconda,uninstallation
44,830,542
2
false
0
0
Because the program isn't actually installed, you can honestly just throw all of the files needed TO install Anaconda into your trash or recycle bin. If you go to "Programs and Features" and you don't see Anaconda, then those other files and folders can just be thrown away.
2
1
0
I recently tried installing Anaconda for my Windows 10 laptop. Unfortunately, this didn't seem to work, as there seems to be no actual Anaconda application on my computer, rather just a collection of folders and files on my desktop. I think this problem can be attributed to me downloading the 32-bit version of Anaconda while my OS is 64-bit, though I am not sure this was the problem. Regardless, I would now like to uninstall Anaconda from my computer. How would I go about uninstalling Anaconda from Windows despite Anaconda not fully downloading and only consisting of various folders on my desktop. I'm not super computer-savvy so typing in commands directly into the console seems a bit intimidating.
Anaconda installation problems - how to uninstall Anaconda (Windows)
0.099668
0
0
2,038
44,830,486
2017-06-29T16:24:00.000
0
0
1
0
python,windows,anaconda,uninstallation
44,958,269
2
false
0
0
There also is a Uninstall-Anaconda.exe directly in your Anaconda folder. But I'm not sure if it does something more than just deleting the folder/files.
2
1
0
I recently tried installing Anaconda for my Windows 10 laptop. Unfortunately, this didn't seem to work, as there seems to be no actual Anaconda application on my computer, rather just a collection of folders and files on my desktop. I think this problem can be attributed to me downloading the 32-bit version of Anaconda while my OS is 64-bit, though I am not sure this was the problem. Regardless, I would now like to uninstall Anaconda from my computer. How would I go about uninstalling Anaconda from Windows despite Anaconda not fully downloading and only consisting of various folders on my desktop. I'm not super computer-savvy so typing in commands directly into the console seems a bit intimidating.
Anaconda installation problems - how to uninstall Anaconda (Windows)
0
0
0
2,038
44,831,726
2017-06-29T17:41:00.000
8
0
1
0
python,list,dictionary,set,tuples
44,831,853
1
true
0
0
A list is a sequence of elements in a specific order. You can access elements with a numerical index, e.g. the_list[3]. The time taken for several operations such as testing if the list contains an element is O(n), i.e. proportional to the length of the list. A tuple is basically an immutable list, meaning you can't add, remove, or replace any elements. A set has no order, but has the advantage over a list that testing if the set contains an element is much faster, almost regardless of the size of the set. It also has some handy operations such as union and intersection. A dictionary is a mapping from keys to values where the keys can be all sorts of different objects, in contrast to lists where the 'keys' can only be numbers. So you can have the_dict = {'abc': 3, 'def': 8} and then the_dict['abc'] is 3. They keys of a dict are much like a set: they have no order and you can test for their existence quickly. The elements of a set and the keys of a dict must be hashable. Numbers, strings, tuples, and many other things are hashable. Lists, sets, and dicts are not hashable.
1
1
0
I have confused with lists, tuples, sets and dictionaries someone give me clear cut idea. Give me the difference from your understanding don't give the text book definitions.
What is the difference between lists,tuples,sets and dictionaries?
1.2
0
0
11,110
44,835,126
2017-06-29T21:16:00.000
2
0
0
0
python,file-handling
44,835,396
3
true
0
0
csv is very inefficient for storing large datasets. You should convert your csv file into a better suited format. Try hdf5 (h5py.org or pytables.org), it is very fast and allows you to read parts of the dataset without fully loading it into memory.
2
1
1
I have a huge file csv file with around 4 million column and around 300 rows. File size is about 4.3G. I want to read this file and run some machine learning algorithm on the data. I tried reading the file via pandas read_csv in python but it is taking long time for reading even a single row ( I suspect due to large number of columns ). I checked few other options like numpy fromfile, but nothing seems to be working. Can someone please suggest some way to load file with many columns in python?
Reading file with huge number of columns in python
1.2
0
0
1,206
44,835,126
2017-06-29T21:16:00.000
3
0
0
0
python,file-handling
44,835,474
3
false
0
0
Pandas/numpy should be able to handle that volume of data no problem. I hope you have at least 8GB of RAM on that machine. To import a CSV file with Numpy, try something like data = np.loadtxt('test.csv', dtype=np.uint8, delimiter=',') If there is missing data, np.genfromtext might work instead. If none of these meet your needs and you have enough RAM to hold a duplicate of the data temporarily, you could first build a Python list of lists, one per row using readline and str.split. Then pass that to Pandas or numpy, assuming that's how you intend to operate on the data. You could then save it to disk in a format for easier ingestion later. hdf5 was already mentioned and is a good option. You can also save a numpy array to disk with numpy.savez or my favorite the speedy bloscpack.(un)pack_ndarray_file.
2
1
1
I have a huge file csv file with around 4 million column and around 300 rows. File size is about 4.3G. I want to read this file and run some machine learning algorithm on the data. I tried reading the file via pandas read_csv in python but it is taking long time for reading even a single row ( I suspect due to large number of columns ). I checked few other options like numpy fromfile, but nothing seems to be working. Can someone please suggest some way to load file with many columns in python?
Reading file with huge number of columns in python
0.197375
0
0
1,206
44,835,358
2017-06-29T21:35:00.000
1
0
1
0
python,list,pandas
60,381,721
2
false
0
0
%who_ls DataFrame This is all dataframes loaded in memory as a list all_df_in_mem = %who_ls DataFrame
1
3
1
I've done a lot of searching and can't find anything related. Is there a built-in function to automatically generate a list of Pandas dataframes that I've created? For example, I've created three dataframes: df1 df2 df3 Now I want a list like: df_list = [df1, df2, df3] so I can iterate through it.
Pandas - List of Dataframe Names?
0.099668
0
0
5,562
44,836,123
2017-06-29T22:49:00.000
1
0
0
0
r,conda,python-3.6,rpy2,libiconv
44,935,654
2
true
0
0
I uninstalled rpy2 and reinstalled with --verborse. I then found ld: warning: ignoring file /opt/local/lib/libpcre.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libpcre.dylib ld: warning: ignoring file /opt/local/lib/liblzma.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/liblzma.dylib ld: warning: ignoring file /opt/local/lib/libbz2.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libbz2.dylib ld: warning: ignoring file /opt/local/lib/libz.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libz.dylib ld: warning: ignoring file /opt/local/lib/libiconv.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libiconv.dylib ld: warning: ignoring file /opt/local/lib/libicuuc.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libicuuc.dylib ld: warning: ignoring file /opt/local/lib/libicui18n.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/libicui18n.dylib ld: warning: ignoring file /opt/local/Library/Frameworks/R.framework/R, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/Library/Frameworks/R.framework/R So I supposed the reason is the architecture incompatibility of the libiconv in opt/local, causing make to fall back onto the outdate libiconv in usr/lib. This is strange because my machine should be running on x86_64 not i386. I then tried export ARCHFLAGS="-arch x86_64" and reinstalled libiconv using port. This resolved the problem.
2
2
1
I would like to use some R packages requiring R version 3.4 and above. I want to access these packages in python (3.6.1) through rpy2 (2.8). I have R version 3.4 installed, and it is located in /Library/Frameworks/R.framework/Resources However, when I use pip3 install rpy2 to install and use the python 3.6.1 in /Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6) as my interpreter, I get the error: Traceback (most recent call last): File "/Users/vincentliu/PycharmProjects/magic/rpy2tester.py", line 1, in from rpy2 import robjects File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/robjects/init.py", line 16, in import rpy2.rinterface as rinterface File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/rinterface/init.py", line 92, in from rpy2.rinterface._rinterface import (baseenv, ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/rinterface/_rinterface.cpython-36m-darwin.so, 2): Library not loaded: @rpath/libiconv.2.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/rinterface/_rinterface.cpython-36m-darwin.so Reason: Incompatible library version: _rinterface.cpython-36m-darwin.so requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 Which first seemed like a problem caused by Anaconda, and so I remove all Anaconda-related files but the problem persists. I then uninstalled rpy2, reinstalled Anaconda and used conda install rpy2 to install, which also installs R version 3.3.2 through Anaconda. I can then change the interpreter to /anaconda/bin/python and can use rpy2 fine, but I couldn't use the R packages I care about because they need R version 3.4 and higher. Apparently, the oldest version Anaconda can install is 3.3.2, so is there any way I can use rpy2 with R version 3.4? I can see two general solutions to this problem. One is to install rpy2 through conda and then somehow change its depending R to the 3.4 one in the system. Another solution is to resolve the error Incompatible library version: _rinterface.cpython-36m-darwin.so requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 After much struggling, I've found no good result with either.
Installing rpy2 to work with R 3.4.0 on OSX
1.2
0
0
1,095
44,836,123
2017-06-29T22:49:00.000
0
0
0
0
r,conda,python-3.6,rpy2,libiconv
53,839,320
2
false
0
0
I had uninstall the version pip installed and install from source python setup.py install on the download https://bitbucket.org/rpy2/rpy2/downloads/. FWIW not using Anaconda at all either.
2
2
1
I would like to use some R packages requiring R version 3.4 and above. I want to access these packages in python (3.6.1) through rpy2 (2.8). I have R version 3.4 installed, and it is located in /Library/Frameworks/R.framework/Resources However, when I use pip3 install rpy2 to install and use the python 3.6.1 in /Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6) as my interpreter, I get the error: Traceback (most recent call last): File "/Users/vincentliu/PycharmProjects/magic/rpy2tester.py", line 1, in from rpy2 import robjects File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/robjects/init.py", line 16, in import rpy2.rinterface as rinterface File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/rinterface/init.py", line 92, in from rpy2.rinterface._rinterface import (baseenv, ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/rinterface/_rinterface.cpython-36m-darwin.so, 2): Library not loaded: @rpath/libiconv.2.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/rinterface/_rinterface.cpython-36m-darwin.so Reason: Incompatible library version: _rinterface.cpython-36m-darwin.so requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 Which first seemed like a problem caused by Anaconda, and so I remove all Anaconda-related files but the problem persists. I then uninstalled rpy2, reinstalled Anaconda and used conda install rpy2 to install, which also installs R version 3.3.2 through Anaconda. I can then change the interpreter to /anaconda/bin/python and can use rpy2 fine, but I couldn't use the R packages I care about because they need R version 3.4 and higher. Apparently, the oldest version Anaconda can install is 3.3.2, so is there any way I can use rpy2 with R version 3.4? I can see two general solutions to this problem. One is to install rpy2 through conda and then somehow change its depending R to the 3.4 one in the system. Another solution is to resolve the error Incompatible library version: _rinterface.cpython-36m-darwin.so requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 After much struggling, I've found no good result with either.
Installing rpy2 to work with R 3.4.0 on OSX
0
0
0
1,095
44,838,519
2017-06-30T04:17:00.000
5
0
0
0
python,django,django-templates,slice
44,838,616
2
true
1
0
slice:"3" and slice:":x" are both same as they will return first 3 elements from the list but if you use slice:"2:x" then it will leave the 2 items from the first of the list and take from 3rd item till the number you mentioned in the x variable, its basically taking a part
1
5
0
I have a myList list with e.g. 5 elements, but I want to slice it in template by using command: {% for item in myList|slice:"3" %} or this command: {% for item in myList|slice:":3" %} What's the difference between slice:"x" and slice:":x"? (I don't have currently access to machine with django installed but I'm curious)
The difference between django slice filter usages
1.2
0
0
544
44,839,204
2017-06-30T05:32:00.000
0
0
1
0
python,django,web-deployment,pythonanywhere
44,839,307
4
false
1
0
Try to type "python3 --version". This can work on linux, but I am not sure whether it works on pythonanywhere
3
2
0
I am trying to deploy my Django application on pythonanywhere through manual-configuration. I selected Python 3.6.When I opened the console and type "python --version" It is showing python 2.7 instead of 3.6. How can I change this? Please help me.
how can I change default python version in pythonanywhere?
0
0
0
3,626
44,839,204
2017-06-30T05:32:00.000
2
0
1
0
python,django,web-deployment,pythonanywhere
44,850,691
4
true
1
0
Python 3.6 is available as python3.6 in a console on PythonAnywhere.
3
2
0
I am trying to deploy my Django application on pythonanywhere through manual-configuration. I selected Python 3.6.When I opened the console and type "python --version" It is showing python 2.7 instead of 3.6. How can I change this? Please help me.
how can I change default python version in pythonanywhere?
1.2
0
0
3,626
44,839,204
2017-06-30T05:32:00.000
2
0
1
0
python,django,web-deployment,pythonanywhere
55,050,163
4
false
1
0
to set your default python version from 2.7 to 3.7 run the command below $ alias python=python3 that's it now check the version $ python --version it should be solved
3
2
0
I am trying to deploy my Django application on pythonanywhere through manual-configuration. I selected Python 3.6.When I opened the console and type "python --version" It is showing python 2.7 instead of 3.6. How can I change this? Please help me.
how can I change default python version in pythonanywhere?
0.099668
0
0
3,626
44,841,470
2017-06-30T07:56:00.000
3
0
1
0
python,windows,python-3.x,windows-10,anaconda
54,505,325
30
false
0
0
I figured out the reason as to why: 1-There seems to be no navigator icon 2-When conducting the above steps of running the "anaconda-navigator" command in prompt (whether cmd or Anaconda) it yields "anaconda navigator is not recognized as an internal or external command" This was very frustrating to me as I'd installed the proper version multiple times with no avail. To solve this problem : in the installation process, there will be an advanced options step with 2 selections one of which is unchecked (the top one). Make sure you check it which will add the navigator to the path of your machine variables. Cheers, Hossam
18
41
0
Anaconda (listed as "Python 3.6.0 (Anaconda 4.3.1 64 bit)" ) is in my programs and features list, but there is seeming no Anaconda Navigator desktop app, as their seems to be no icon on my desktop and I am unable to search for it through "Start". Could this be because I have the 32-bit version of Anaconda downloaded and I have a 64-bit OS (I thought I should do this because Python on my computer was 64-bit) or because I downloaded Anaconda under "users" instead of Desktop. I also downloaded Anaconda twice, if that could be causing some of the problem. I have a Windows 10 laptop, if that is any help.
Anaconda Installed but Cannot Launch Navigator
0.019997
0
0
249,576
44,841,470
2017-06-30T07:56:00.000
0
0
1
0
python,windows,python-3.x,windows-10,anaconda
59,745,764
30
false
0
0
In my case; it was available in the anaconda folder in "All App" from main menu
18
41
0
Anaconda (listed as "Python 3.6.0 (Anaconda 4.3.1 64 bit)" ) is in my programs and features list, but there is seeming no Anaconda Navigator desktop app, as their seems to be no icon on my desktop and I am unable to search for it through "Start". Could this be because I have the 32-bit version of Anaconda downloaded and I have a 64-bit OS (I thought I should do this because Python on my computer was 64-bit) or because I downloaded Anaconda under "users" instead of Desktop. I also downloaded Anaconda twice, if that could be causing some of the problem. I have a Windows 10 laptop, if that is any help.
Anaconda Installed but Cannot Launch Navigator
0
0
0
249,576
44,841,470
2017-06-30T07:56:00.000
4
0
1
0
python,windows,python-3.x,windows-10,anaconda
48,361,910
30
false
0
0
Yet another option which worked in my case on Windows 10: Try uninstalling your previous installation, restart the system and run the installation again. Make sure you don't start any programs before installing Anaconda. You will find the installation finishes without prompting any kind of errors. Type in Anaconda in your Windows 10 Search bar. You will find Anaconda Prompt appear.
18
41
0
Anaconda (listed as "Python 3.6.0 (Anaconda 4.3.1 64 bit)" ) is in my programs and features list, but there is seeming no Anaconda Navigator desktop app, as their seems to be no icon on my desktop and I am unable to search for it through "Start". Could this be because I have the 32-bit version of Anaconda downloaded and I have a 64-bit OS (I thought I should do this because Python on my computer was 64-bit) or because I downloaded Anaconda under "users" instead of Desktop. I also downloaded Anaconda twice, if that could be causing some of the problem. I have a Windows 10 laptop, if that is any help.
Anaconda Installed but Cannot Launch Navigator
0.02666
0
0
249,576
44,841,470
2017-06-30T07:56:00.000
-2
0
1
0
python,windows,python-3.x,windows-10,anaconda
50,551,832
30
false
0
0
I too faced a similar issue when I was not able to find the Anaconda Navigator Desktop app in the start menu. But do not worry , Go to start Menu and Type Anaconda Navigator. Now within the apps menu you will find anaconda navigator with its icon. Click on that. After clicking you will find a command prompt dialog opened and a .exe file runs on your machine. Wait till it completes. The Anaconda Navigator app opens on your machine.
18
41
0
Anaconda (listed as "Python 3.6.0 (Anaconda 4.3.1 64 bit)" ) is in my programs and features list, but there is seeming no Anaconda Navigator desktop app, as their seems to be no icon on my desktop and I am unable to search for it through "Start". Could this be because I have the 32-bit version of Anaconda downloaded and I have a 64-bit OS (I thought I should do this because Python on my computer was 64-bit) or because I downloaded Anaconda under "users" instead of Desktop. I also downloaded Anaconda twice, if that could be causing some of the problem. I have a Windows 10 laptop, if that is any help.
Anaconda Installed but Cannot Launch Navigator
-0.013333
0
0
249,576
44,841,470
2017-06-30T07:56:00.000
0
0
1
0
python,windows,python-3.x,windows-10,anaconda
52,259,184
30
false
0
0
I had a similar issue today where only the prompt was available after installation. Finally solved this by un-installing my regular python installation and then install anaconda(anaconda 3 v5.2.0, with python 3.6).
18
41
0
Anaconda (listed as "Python 3.6.0 (Anaconda 4.3.1 64 bit)" ) is in my programs and features list, but there is seeming no Anaconda Navigator desktop app, as their seems to be no icon on my desktop and I am unable to search for it through "Start". Could this be because I have the 32-bit version of Anaconda downloaded and I have a 64-bit OS (I thought I should do this because Python on my computer was 64-bit) or because I downloaded Anaconda under "users" instead of Desktop. I also downloaded Anaconda twice, if that could be causing some of the problem. I have a Windows 10 laptop, if that is any help.
Anaconda Installed but Cannot Launch Navigator
0
0
0
249,576
44,841,470
2017-06-30T07:56:00.000
-1
0
1
0
python,windows,python-3.x,windows-10,anaconda
53,242,842
30
false
0
0
On windows 10, I faced the same error - only Anaconda Prompt was showing in the startup menu. What I did is i re-installed Anaconda and selected install for all users of the pc (in my initial installation I have installed only for current user).
18
41
0
Anaconda (listed as "Python 3.6.0 (Anaconda 4.3.1 64 bit)" ) is in my programs and features list, but there is seeming no Anaconda Navigator desktop app, as their seems to be no icon on my desktop and I am unable to search for it through "Start". Could this be because I have the 32-bit version of Anaconda downloaded and I have a 64-bit OS (I thought I should do this because Python on my computer was 64-bit) or because I downloaded Anaconda under "users" instead of Desktop. I also downloaded Anaconda twice, if that could be causing some of the problem. I have a Windows 10 laptop, if that is any help.
Anaconda Installed but Cannot Launch Navigator
-0.006667
0
0
249,576
44,841,470
2017-06-30T07:56:00.000
2
0
1
0
python,windows,python-3.x,windows-10,anaconda
54,160,061
30
false
0
0
I faced the same problem on Windows 10. As soon as I cleared my Path variable from edit environment variables option, the icon started to appear. It was occurring because I had previously installed python 3.6.1 on my computer and added it to my path variable as C:\Python36;C:\Python36\DLL; and so on. There isn't any need to uninstall Anaconda Navigator and start from scratch if you have correctly followed the steps mentioned at the documentation for it.
18
41
0
Anaconda (listed as "Python 3.6.0 (Anaconda 4.3.1 64 bit)" ) is in my programs and features list, but there is seeming no Anaconda Navigator desktop app, as their seems to be no icon on my desktop and I am unable to search for it through "Start". Could this be because I have the 32-bit version of Anaconda downloaded and I have a 64-bit OS (I thought I should do this because Python on my computer was 64-bit) or because I downloaded Anaconda under "users" instead of Desktop. I also downloaded Anaconda twice, if that could be causing some of the problem. I have a Windows 10 laptop, if that is any help.
Anaconda Installed but Cannot Launch Navigator
0.013333
0
0
249,576
44,841,470
2017-06-30T07:56:00.000
1
0
1
0
python,windows,python-3.x,windows-10,anaconda
54,844,122
30
false
0
0
Try restarting the system! You will be able to find the navigator once you restart the system after installation.
18
41
0
Anaconda (listed as "Python 3.6.0 (Anaconda 4.3.1 64 bit)" ) is in my programs and features list, but there is seeming no Anaconda Navigator desktop app, as their seems to be no icon on my desktop and I am unable to search for it through "Start". Could this be because I have the 32-bit version of Anaconda downloaded and I have a 64-bit OS (I thought I should do this because Python on my computer was 64-bit) or because I downloaded Anaconda under "users" instead of Desktop. I also downloaded Anaconda twice, if that could be causing some of the problem. I have a Windows 10 laptop, if that is any help.
Anaconda Installed but Cannot Launch Navigator
0.006667
0
0
249,576
44,841,470
2017-06-30T07:56:00.000
-2
0
1
0
python,windows,python-3.x,windows-10,anaconda
69,821,480
30
false
0
0
From the Start menu select Anaconda Prompt. type anaconda-navigator. If Anaconda is installed properly, Anaconda Navigator will open.
18
41
0
Anaconda (listed as "Python 3.6.0 (Anaconda 4.3.1 64 bit)" ) is in my programs and features list, but there is seeming no Anaconda Navigator desktop app, as their seems to be no icon on my desktop and I am unable to search for it through "Start". Could this be because I have the 32-bit version of Anaconda downloaded and I have a 64-bit OS (I thought I should do this because Python on my computer was 64-bit) or because I downloaded Anaconda under "users" instead of Desktop. I also downloaded Anaconda twice, if that could be causing some of the problem. I have a Windows 10 laptop, if that is any help.
Anaconda Installed but Cannot Launch Navigator
-0.013333
0
0
249,576
44,841,470
2017-06-30T07:56:00.000
0
0
1
0
python,windows,python-3.x,windows-10,anaconda
54,540,532
30
false
0
0
Uninstall your Anaconda, delete the folder where it was. Then reinstall it.
18
41
0
Anaconda (listed as "Python 3.6.0 (Anaconda 4.3.1 64 bit)" ) is in my programs and features list, but there is seeming no Anaconda Navigator desktop app, as their seems to be no icon on my desktop and I am unable to search for it through "Start". Could this be because I have the 32-bit version of Anaconda downloaded and I have a 64-bit OS (I thought I should do this because Python on my computer was 64-bit) or because I downloaded Anaconda under "users" instead of Desktop. I also downloaded Anaconda twice, if that could be causing some of the problem. I have a Windows 10 laptop, if that is any help.
Anaconda Installed but Cannot Launch Navigator
0
0
0
249,576
44,841,470
2017-06-30T07:56:00.000
0
0
1
0
python,windows,python-3.x,windows-10,anaconda
60,499,128
30
false
0
0
For people from Brazil There is a security software called Warsaw (used for home banking) that must be uninstalled! After you can install it back again. After thousand times trying, installing, uninstalling, cleanning-up the regedit that finally solved the problem.
18
41
0
Anaconda (listed as "Python 3.6.0 (Anaconda 4.3.1 64 bit)" ) is in my programs and features list, but there is seeming no Anaconda Navigator desktop app, as their seems to be no icon on my desktop and I am unable to search for it through "Start". Could this be because I have the 32-bit version of Anaconda downloaded and I have a 64-bit OS (I thought I should do this because Python on my computer was 64-bit) or because I downloaded Anaconda under "users" instead of Desktop. I also downloaded Anaconda twice, if that could be causing some of the problem. I have a Windows 10 laptop, if that is any help.
Anaconda Installed but Cannot Launch Navigator
0
0
0
249,576
44,841,470
2017-06-30T07:56:00.000
0
0
1
0
python,windows,python-3.x,windows-10,anaconda
66,907,772
30
false
0
0
First Run This Command conda config --set auto_activate_base True Then Run This Command anaconda-navigator.
18
41
0
Anaconda (listed as "Python 3.6.0 (Anaconda 4.3.1 64 bit)" ) is in my programs and features list, but there is seeming no Anaconda Navigator desktop app, as their seems to be no icon on my desktop and I am unable to search for it through "Start". Could this be because I have the 32-bit version of Anaconda downloaded and I have a 64-bit OS (I thought I should do this because Python on my computer was 64-bit) or because I downloaded Anaconda under "users" instead of Desktop. I also downloaded Anaconda twice, if that could be causing some of the problem. I have a Windows 10 laptop, if that is any help.
Anaconda Installed but Cannot Launch Navigator
0
0
0
249,576
44,841,470
2017-06-30T07:56:00.000
0
0
1
0
python,windows,python-3.x,windows-10,anaconda
66,932,291
30
false
0
0
Turn off your internet connection, then open your terminal and type anaconda-navigator or type anaconda prompt in the search bar and double click on the anaconda prompt. If anaconda is opened, then you can turn on your Wi-Fi.
18
41
0
Anaconda (listed as "Python 3.6.0 (Anaconda 4.3.1 64 bit)" ) is in my programs and features list, but there is seeming no Anaconda Navigator desktop app, as their seems to be no icon on my desktop and I am unable to search for it through "Start". Could this be because I have the 32-bit version of Anaconda downloaded and I have a 64-bit OS (I thought I should do this because Python on my computer was 64-bit) or because I downloaded Anaconda under "users" instead of Desktop. I also downloaded Anaconda twice, if that could be causing some of the problem. I have a Windows 10 laptop, if that is any help.
Anaconda Installed but Cannot Launch Navigator
0
0
0
249,576
44,841,470
2017-06-30T07:56:00.000
0
0
1
0
python,windows,python-3.x,windows-10,anaconda
54,260,187
30
false
0
0
I also have the issue on windows when i am unable to find the anaconda-navigator in start menu. First you have to check anaconda-navigator.exe file in your anaconda folder if this file is present it means you have installed it properly otherwise there is some problem and you have to reinstall it. Before reinstalling this points to be noticed 1) You have to uninstall all previous python folder 2) Check you environment variable and clear all previous python path After this install anaconda your problem will be resolved if not tell me the full error i will try to solve it
18
41
0
Anaconda (listed as "Python 3.6.0 (Anaconda 4.3.1 64 bit)" ) is in my programs and features list, but there is seeming no Anaconda Navigator desktop app, as their seems to be no icon on my desktop and I am unable to search for it through "Start". Could this be because I have the 32-bit version of Anaconda downloaded and I have a 64-bit OS (I thought I should do this because Python on my computer was 64-bit) or because I downloaded Anaconda under "users" instead of Desktop. I also downloaded Anaconda twice, if that could be causing some of the problem. I have a Windows 10 laptop, if that is any help.
Anaconda Installed but Cannot Launch Navigator
0
0
0
249,576
44,841,470
2017-06-30T07:56:00.000
1
0
1
0
python,windows,python-3.x,windows-10,anaconda
71,910,521
30
false
0
0
Make sure to run the installer as admin In my case (Windows 11) it was a permission problem. Make sure to run the installer as administrator. You can check this by modification of the path in the installer process. Further steps to handle the problem: Uninstall your actual anaconda version. Delete old Data found in: C:\Users\YourUsername\AppData\Roaming C:\Users\YourUsername (In my case it was called .anaconda and .conda folder) Restart PC Install Anaconda as admin
18
41
0
Anaconda (listed as "Python 3.6.0 (Anaconda 4.3.1 64 bit)" ) is in my programs and features list, but there is seeming no Anaconda Navigator desktop app, as their seems to be no icon on my desktop and I am unable to search for it through "Start". Could this be because I have the 32-bit version of Anaconda downloaded and I have a 64-bit OS (I thought I should do this because Python on my computer was 64-bit) or because I downloaded Anaconda under "users" instead of Desktop. I also downloaded Anaconda twice, if that could be causing some of the problem. I have a Windows 10 laptop, if that is any help.
Anaconda Installed but Cannot Launch Navigator
0.006667
0
0
249,576
44,841,470
2017-06-30T07:56:00.000
-1
0
1
0
python,windows,python-3.x,windows-10,anaconda
60,691,959
30
false
0
0
What finally worked for me was: Uninstalling Anaconda Deleting all files that has "conda" in them - most of them should be located in: C:\Users\Admin Delete especially the "condarc" file. Reboot Installed 32-bit installer (even though my system is 64-bit) and reboot Finally worked. I have not yet re-tried with 64-bit installer, since I have a time critical project, but will do when again I have spare time. P.S. what broke Anaconda for me was a blue screen I got while updating Anaconda. I guess it did not clear all old files and this broke the new installs.
18
41
0
Anaconda (listed as "Python 3.6.0 (Anaconda 4.3.1 64 bit)" ) is in my programs and features list, but there is seeming no Anaconda Navigator desktop app, as their seems to be no icon on my desktop and I am unable to search for it through "Start". Could this be because I have the 32-bit version of Anaconda downloaded and I have a 64-bit OS (I thought I should do this because Python on my computer was 64-bit) or because I downloaded Anaconda under "users" instead of Desktop. I also downloaded Anaconda twice, if that could be causing some of the problem. I have a Windows 10 laptop, if that is any help.
Anaconda Installed but Cannot Launch Navigator
-0.006667
0
0
249,576
44,841,470
2017-06-30T07:56:00.000
0
0
1
0
python,windows,python-3.x,windows-10,anaconda
57,794,408
30
false
0
0
This is what I did Reinstall anacoda with ticked first check box Remember to Restart
18
41
0
Anaconda (listed as "Python 3.6.0 (Anaconda 4.3.1 64 bit)" ) is in my programs and features list, but there is seeming no Anaconda Navigator desktop app, as their seems to be no icon on my desktop and I am unable to search for it through "Start". Could this be because I have the 32-bit version of Anaconda downloaded and I have a 64-bit OS (I thought I should do this because Python on my computer was 64-bit) or because I downloaded Anaconda under "users" instead of Desktop. I also downloaded Anaconda twice, if that could be causing some of the problem. I have a Windows 10 laptop, if that is any help.
Anaconda Installed but Cannot Launch Navigator
0
0
0
249,576
44,841,470
2017-06-30T07:56:00.000
14
0
1
0
python,windows,python-3.x,windows-10,anaconda
55,661,947
30
false
0
0
How I solved this issue: 1. Be connected to the internet. 2. Open the Anaconda Prompt (looks like a regular command window). If you installed the .exe in your /name/user/ location you should be fine, if not navigate to it. Then start an environment. conda info --envs Then run conda install -c anaconda anaconda-navigator Press y when prompted (if prompted). It will being downloading the packages needed. Then run your newly installed Anaconda Navigator anaconda-navigator It should start, and also appear in your regular windows 10 apps list.
18
41
0
Anaconda (listed as "Python 3.6.0 (Anaconda 4.3.1 64 bit)" ) is in my programs and features list, but there is seeming no Anaconda Navigator desktop app, as their seems to be no icon on my desktop and I am unable to search for it through "Start". Could this be because I have the 32-bit version of Anaconda downloaded and I have a 64-bit OS (I thought I should do this because Python on my computer was 64-bit) or because I downloaded Anaconda under "users" instead of Desktop. I also downloaded Anaconda twice, if that could be causing some of the problem. I have a Windows 10 laptop, if that is any help.
Anaconda Installed but Cannot Launch Navigator
1
0
0
249,576
44,843,175
2017-06-30T09:30:00.000
0
0
0
0
python,random-forest,h2o
44,852,474
1
false
0
0
you could download and take a look at the POJO which lists all the thresholds used for the model h2o.download_pojo(model, path=u'', get_jar=True, jar_name=u'')
1
0
1
I have an h2o random forest in Python. How to extract for each tree the threshold of each features ? My aim is to implement this random forest in c++ Thanks !
How to get the random forest threshold from an h2o random forest object
0
0
0
235
44,843,864
2017-06-30T10:05:00.000
0
0
0
1
python,jenkins,flask,celery
44,846,900
2
false
1
0
Couple of points below to consider to compare between celery and jenkins. Celery is specifically designed and built for running resource intensive tasks in background whereas jenkins is a more general tool for automation. jenkins is built on java so native integration is there although plugins are available whereas celery is built with python so you can directly program the tasks in python and send it to celery or just call your shell tasks from python. Message queuing - Again jenkins does not have builtin support for message brokers so queuing might be difficult for you. Celery uses rabbitmq as default to queue the tasks so your tasks never gets lost. Celery also provide simple callbacks so when task is completed you can run some function after it. Now if you ask about cpu comsumption then celery is not at all heavy
1
0
0
I have a flask application where i submit tasks to celery(worker) to execute it. so that i can get the webpage back after submitting. Can i achieve same if i submit the task to Jenkins instead? Just wanted an opinion why would i use celery when i can ask Jenkins to schedule/execute the job through Jenkins API ? still get my webpage back. I may be wrong with my approach but anyone who can shed light on this would really appreciate. The Main aim is that user submits the form which actually is task to execute and after hitting submit task detachs from web , reloads the form. Meanwhile task runs in background which celery does it efficiently but can it be done via jenkins. Thanks
Flask async job/task submit to Celery or Jenkins
0
0
0
1,256
44,849,883
2017-06-30T15:21:00.000
3
0
1
0
python,jupyter
45,017,248
1
false
0
0
The jupyter console command will provide you with an interpreter environment that you can experiment with code running within the Jupyter environment outside of a notebook. It's not exactly what you're looking for but may provide a better environment for testing and developing code that you can then paste into the appropriate notebook.
1
1
0
I just started using jupyter for a python project. I constantly find myself adding an extra cell just to perform some basic try&error debugging. This way I omit the whole code of the cell is being executed but it still doesn't feel like the right way to do it. Does Jupyter provide something like a static kernel terminal, for example always visible at the bottom of the screen, where I can simply paste code and execute runtime variables? By the way: I did search but didn't find anything looking for static console, and terminal. Maybe I'm just looking in the wrong direction. Thanks!
Debug window in jupyter?
0.53705
0
0
138
44,850,538
2017-06-30T15:55:00.000
2
0
0
1
python,google-app-engine,google-cloud-datastore
44,851,226
1
false
1
0
The logs are telling you exactly what the problem is. The backup you did today at 9:27AM already has the name "datastore_backup_2017_06_30". To do another, it must have a unique name. Try adding a time to the backup filename, or change it to "datastore_backup_2017_06_30_2".
1
0
0
Been trying for the last two hours and it keeps telling me that the job already exists: From the logs: Backup "datastore_backup_2017_06_30" already exists. (/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/datastore_admin/backup_handler.py:853) Traceback (most recent call last): File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/datastore_admin/backup_handler.py", line 839, in _ProcessPostRequest raise BackupValidationError('Backup "%s" already exists.' % backup) BackupValidationError: Backup "datastore_backup_2017_06_30" already exists. However, there is not a job running as far as I can tell. The last time I did one was this morning: Started: June 30, 2017, 9:27 a.m. Completed: June 30, 2017, 9:28 a.m. Anyone hd similar or have a solution
Can't do manual backup of ndb datastore from datastore admin
0.379949
0
0
19
44,850,709
2017-06-30T16:03:00.000
1
0
1
0
python,python-3.x
44,850,852
3
false
0
0
In Python you can represent types as objects themselves. So yes, they're type objects. For every type there is an object representing that type, like int. It's an instance of the type itself. It's like .getClass() and Class<?> objects in Java: "some string".getClass() == String.class. Java doesn't treat functions as first class values so there's types like Callable, Consumer<T>, etc., but in Python there's a specific type for functions, and with that a specific object representing that type.
1
1
0
When I read Python in a Nutshell, I saw a "type object", "class object", "function object", "module object", .... Does a "xxx object" mean an instance of type "xxx"? For example, does a type object mean an instance of type type? For example, is int a type object? is a specific class a type object? does a class object mean an instance of what? Is a specific class a class object? For example, A class is a Python object with several characteristics: • You can call a class object as if it were a function. The call, often known as instantiation, returns an object known as an instance of the class; the class is known as the type of the instance. • A class can inherit from other classes, meaning it delegates to other class objects the lookup of attributes that are not found in the class itself. does a function object mean an instance of types.FunctionType? Is a specific function a function object? does a module object mean an instance of types.ModuleType? Is a specific module a module object? is there an int object? If yes, does it mean an integer value (e.g. 12) of type int? Thanks.
Meaning of "a type/class/function/module/... object"?
0.066568
0
0
96
44,851,959
2017-06-30T17:24:00.000
0
1
0
0
python,scapy
44,997,621
1
true
1
0
You can directly answer HTTP requests to pages different to that specific webpage with HTTP redirections (e.g. HTTP 302). Moreover, you should only route packets going to the desired webpage and block the rest (you can do so with a firewall such as iptables).
1
0
0
I have built a MITM with python and scapy.I want to make the "victim" device be redirected to a specific page each time it tried to access a website. Any suggestions on how to do it? *Keep in mind that all the traffic from the device already passes through my machine before being routed.
Python : Redirecting device with MITM
1.2
0
1
141
44,852,137
2017-06-30T17:37:00.000
1
0
0
0
python,numpy,tensorflow,tensor
44,855,284
2
false
0
0
You can either provide the value of the datatype you want your resulting tensor to be or to cast a tensor afterwards. tf.fill((3, 3), 0.0) # will be a float 32 tf.cast(tf.fill((3, 3)), tf.float32) # also float 32 The first one is better because you use less operations in the graph
1
1
1
I am trying to use the tf.fill() method to create a tensor of different data types(float16,float32,float64) similar to what you can do with numpy.full(). would tf.constant() be a suitable substitution? or should I create my fill values to be of the data type I want them to be then plug it into the value holder inside tf.fill()
using tensorflow fill method to create a tensor of certain datatype
0.099668
0
0
2,486
44,853,206
2017-06-30T18:58:00.000
3
0
1
0
python-3.x,ipython,pycharm
51,451,407
2
false
0
0
Go to run --> edit configurations... towards the button you'll see a checkbox that says "run with python console" make sure its checked
2
2
0
By default, all scripts in PyCharm seem to execute in a separate python interpreter called "Run", which, as far as I can make out, is pretty much independent of the IPython console running alongside. Now, to execute any snippet of the script after the whole thing has been run, I can copy-paste into the Run pane, but this is not ideal as it is not an actual python/ipython console. If I want to execute in the console, I will need to run the whole thing again inside the console (and not just a snippet) because the console doesn't seem to recognize/store any of the variables when it was run, which is tedious. I've searched for a solution, but the closest I got was to enable "show command line afterwords" in the Run Configurations. This just seems to throw up an error on the lines of "file not found", which makes no sense. I'm running my script through SSH into a remote server, if that helps.
PyCharm: Executing the script in the console rather than 'run'
0.291313
0
0
1,565
44,853,206
2017-06-30T18:58:00.000
0
0
1
0
python-3.x,ipython,pycharm
44,854,484
2
false
0
0
try to edit the run config, via run -> edit configuration. And check the path of the working directory or the script directory.
2
2
0
By default, all scripts in PyCharm seem to execute in a separate python interpreter called "Run", which, as far as I can make out, is pretty much independent of the IPython console running alongside. Now, to execute any snippet of the script after the whole thing has been run, I can copy-paste into the Run pane, but this is not ideal as it is not an actual python/ipython console. If I want to execute in the console, I will need to run the whole thing again inside the console (and not just a snippet) because the console doesn't seem to recognize/store any of the variables when it was run, which is tedious. I've searched for a solution, but the closest I got was to enable "show command line afterwords" in the Run Configurations. This just seems to throw up an error on the lines of "file not found", which makes no sense. I'm running my script through SSH into a remote server, if that helps.
PyCharm: Executing the script in the console rather than 'run'
0
0
0
1,565
44,856,214
2017-07-01T00:04:00.000
6
0
0
1
python,airflow,apache-airflow
44,878,992
4
false
0
0
In my understanding, AIRFLOW_HOME should link to the directory where airflow.cfg is stored. Then, airflow.cfg can apply and set the dag directory to the value you put in it. The important point is : airflow.cfg is useless if your AIRFLOW_HOME is not set
4
35
0
I have defined a DAG in a file called tutorial_2.py (actually a copy of the tutorial.py provided in the airflow tutorial, except with the dag_id changed to tutorial_2). When I look inside my default, unmodified airflow.cfg (located in ~/airflow), I see that dags_folder is set to /home/alex/airflow/dags. I do cd /home/alex/airflow; mkdir dags; cd dags; cp [...]/tutorial_2.py tutorial_2.py. Now I have a dags folder matching the path set in airflow.cfg , containing the tutorial_2.py file I created earlier. However, when I run airflow list_dags, I only get the names corresponding with the default, tutorial DAGs. I would like to have tutorial_2 show up in my DAG list, so that I can begin interacting with. Neither python tutorial_2.py nor airflow resetdb have caused it to appear in the list. How do I remedy this?
How to add new DAGs to Airflow?
1
0
0
34,425
44,856,214
2017-07-01T00:04:00.000
17
0
0
1
python,airflow,apache-airflow
44,856,401
4
true
0
0
I think the reason for this is because you haven't exported AIRFLOW_HOME. Try doing: AIRFLOW_HOME="/home/alex/airflow/dags" airflow list_dags. If that's not working than do two steps export AIRFLOW_HOME="/home/alex/airflow/dags" airflow list_dags I believe this should work. Give it a go?
4
35
0
I have defined a DAG in a file called tutorial_2.py (actually a copy of the tutorial.py provided in the airflow tutorial, except with the dag_id changed to tutorial_2). When I look inside my default, unmodified airflow.cfg (located in ~/airflow), I see that dags_folder is set to /home/alex/airflow/dags. I do cd /home/alex/airflow; mkdir dags; cd dags; cp [...]/tutorial_2.py tutorial_2.py. Now I have a dags folder matching the path set in airflow.cfg , containing the tutorial_2.py file I created earlier. However, when I run airflow list_dags, I only get the names corresponding with the default, tutorial DAGs. I would like to have tutorial_2 show up in my DAG list, so that I can begin interacting with. Neither python tutorial_2.py nor airflow resetdb have caused it to appear in the list. How do I remedy this?
How to add new DAGs to Airflow?
1.2
0
0
34,425
44,856,214
2017-07-01T00:04:00.000
1
0
0
1
python,airflow,apache-airflow
55,957,328
4
false
0
0
The issue is that you might have two airflow configs existing in your directories, so check for /root/airflow/dags and if yes you require to change the dags_folder path in both airflow.cfg s
4
35
0
I have defined a DAG in a file called tutorial_2.py (actually a copy of the tutorial.py provided in the airflow tutorial, except with the dag_id changed to tutorial_2). When I look inside my default, unmodified airflow.cfg (located in ~/airflow), I see that dags_folder is set to /home/alex/airflow/dags. I do cd /home/alex/airflow; mkdir dags; cd dags; cp [...]/tutorial_2.py tutorial_2.py. Now I have a dags folder matching the path set in airflow.cfg , containing the tutorial_2.py file I created earlier. However, when I run airflow list_dags, I only get the names corresponding with the default, tutorial DAGs. I would like to have tutorial_2 show up in my DAG list, so that I can begin interacting with. Neither python tutorial_2.py nor airflow resetdb have caused it to appear in the list. How do I remedy this?
How to add new DAGs to Airflow?
0.049958
0
0
34,425
44,856,214
2017-07-01T00:04:00.000
1
0
0
1
python,airflow,apache-airflow
69,834,752
4
false
0
0
I might be using the latest airflow, the command has changed. What works for me is: export AIRFLOW_HOME="~/airflow" Then run airflow dags list
4
35
0
I have defined a DAG in a file called tutorial_2.py (actually a copy of the tutorial.py provided in the airflow tutorial, except with the dag_id changed to tutorial_2). When I look inside my default, unmodified airflow.cfg (located in ~/airflow), I see that dags_folder is set to /home/alex/airflow/dags. I do cd /home/alex/airflow; mkdir dags; cd dags; cp [...]/tutorial_2.py tutorial_2.py. Now I have a dags folder matching the path set in airflow.cfg , containing the tutorial_2.py file I created earlier. However, when I run airflow list_dags, I only get the names corresponding with the default, tutorial DAGs. I would like to have tutorial_2 show up in my DAG list, so that I can begin interacting with. Neither python tutorial_2.py nor airflow resetdb have caused it to appear in the list. How do I remedy this?
How to add new DAGs to Airflow?
0.049958
0
0
34,425
44,856,964
2017-07-01T02:56:00.000
1
0
0
0
python,machine-learning,tensorflow,neural-network
44,857,779
1
true
0
0
The tf.contrib.learn.DNNClassifier class has a method called predict_proba which returns the probabilities belonging to each class for the given inputs. Then you can use something like, tf.round(prob+thres) for binary thresholding with the custom parameter thres.
1
0
1
I'm working on a binary classification problem and I'm using the tf.contrib.learn.DNNClassifier class within TensorFlow. When invoking this estimator for only 2 classes, it uses a threshold value of 0.5 as the cutoff between the 2 classes. I'd like to know if there's a way to use a custom threshold value since this might improve the model's accuracy. I've searched all around the web and apparently there isn't a way to do this. Any help will be greatly appreciated, thank you.
Using a custom threshold value with tf.contrib.learn.DNNClassifier?
1.2
0
0
476
44,857,149
2017-07-01T03:47:00.000
0
1
1
0
python,dvd,dvd-burning,cddvd
44,857,380
1
false
0
0
Long story short, anything is possible when it comes to computers. Long story, unless you can find someone who has developed the code to interface with the drivers specific to the computer and such than it would take a lot of work to do. If I understand what you want to do it would likely end up becoming a whole program, which is what you might want to look for. If you are having a lot of issues with LightScribe (which appears to have a free version) consider running a VM and installing it on there as it may be potentially more stable or simply use another computer.
1
0
0
Is it possible to control the path etched by the laser and the speed of the motor through python? I'm trying to not use a microcontroller or disassemble the drive. I'm essentially trying to do LightScribe on the bottom of a DVD without LightScribe. It's a long story.
Controlling DVD Drive using Python
0
0
0
281
44,857,970
2017-07-01T06:19:00.000
3
0
0
0
python,random,pixel
44,858,027
4
false
0
0
I'd suggest making a list of coordinates of all non-zero pixels (by checking all pixels in the image), then using random.shuffle on the list and taking the first 100 elements.
1
4
1
I have a binary image of large size (2000x2000). In this image most of the pixel values are zero and some of them are 1. I need to get only 100 randomly chosen pixel coordinates with value 1 from image. I am beginner in python, so please answer.
how to get random pixel index from binary image with value 1 in python?
0.148885
0
0
3,545
44,859,948
2017-07-01T10:35:00.000
0
0
1
0
python-3.x,ubuntu,pip,virtualenv
44,860,049
2
true
0
0
If you created the virtual environment with --system-site-packages, the virtual environment has access to the global site-packages modules. You need to re-create the virtual environment without --system-site-packages option if you don't want that.
1
2
0
In my python program (run by a virtualenv using python3.5), I need to use the Pillow library to process an image. ImportError: No module named 'Pillow' tells me that Pillow is not installed in the virtualenv. But, when I run pip install Pillow, I get back: Requirement already satisfied: Pillow in /usr/lib/python3/dist-packages If the pip I am using is from the virtualenv, then why is it looking in /usr/lib/python3/dist-packages to check if the package is already installed? Just to make sure, I run type python and type pip to confirm that these 2 programs are from my virtualenv, and they are: python is hashed (/home/nelson/.virtualenvs/MainEnv/bin/python) pip is hashed (/home/nelson/.virtualenvs/MainEnv/bin/pip) sudo was not used when creating the virtualenv (I know because this had already caused problems for me) or when trying to pip install; so where is the flaw in this logic? How can I install Pillow in my virtualenv / How can I import Pillow?
Confusion with virtualenvs and Python packages
1.2
0
0
180
44,860,570
2017-07-01T11:49:00.000
1
0
0
0
multithreading,python-3.x,selenium-webdriver,multiprocessing
44,864,054
1
false
1
0
The main factor in CPython for choosing between Threads of Processes is based on your type of workload. If you have a I/O bound type of workload, where most of your application time is spent waiting for data to come in or to go out, then your best choice is using Threads. If, instead, your application is spending great time using the CPU, then Processes are your tool of choice. This is due to the fact that, in CPython (the most commonly used interpreter) only one Thread at a time can make use of the CPU cores. For more information regarding this limitation just read about the Global Interpreter Lock (GIL). There is another advantage when using Processes which is usually overlooked: Processes allow to achieve a greater degree of isolation. This means that if you have some unstable code (in your case could be the scraping logic) which might hang or crash badly, encapsulating it in a separate Process allows your service to detect the anomaly and recover (kill the Process and restart it).
1
0
0
I need to use selenium for a scraping job with a heap javascript generated webpages. I can open several instances of the webdriver at a time and pass the websites to the instances using queue. It can be done in multiple way.s though. I've experimented with both the Threading module and the Pool- and Process-ways from the multiprocessing module. All work and will do the job quite fast. This leaves me wondering: Which module is generalky prefered in a situation like this?
Threading or multiprocessing for webscraping with selenium
0.197375
0
1
632
44,861,989
2017-07-01T14:24:00.000
1
0
0
0
python,excel,pandas
44,862,175
2
false
0
0
This can not be done in pandas. You will need to use other library to read the xlsx file and determine what columns are white. I'd suggest using openpyxl library. Then your script will follow this steps: Open xlsx file Read and filter the data (you can access the cell color) and save the results Create pandas dataframe Edit: Switched xlrd to openpyxl as xlrd is no longer actively maintained
1
5
1
I have an xlsx file, with columns with various coloring. I want to read only the white columns of this excel in python using pandas, but I have no clues on hot to do this. I am able to read the full excel into a dataframe, but then I miss the information about the coloring of the columns and I don't know which columns to remove and which not.
Reading an excel with pandas basing on columns' colors
0.099668
0
0
7,328
44,862,432
2017-07-01T15:12:00.000
0
1
0
1
python,apache,.htaccess,httpd.conf,macos-sierra
44,862,745
3
false
1
0
1, 2. Ok. Are .htaccess files allowed in /etc/apache2/httpd.conf ? <Directory "widget.py"> — I think you want <File "widget.py">. Look into error log — what is the error?
2
1
0
I am trying to create a python script (widget.py) that will get a JSON feed from an arbitrary resource. I can't get the python script to execute on localhost. Here are the steps I have followed: In etc/apache2/httpd.conf I enabled LoadModule cgi_module libexec/apache2/mod_cgi.so Restarted Apache sudo apachectl restart Added a .htaccess file to my directory: <Directory "widget.py"> Options +ExecCGI AddHandler cgi-script .py </Directory> NOTE: I will eventually need to deploy this on a server where I won't have access to the apache2 directory. Navigated to http://localhost/~walter/widget/widget.py I get a 500 server error. Log file contents: [Sat Jul 01 08:51:00.922413 2017] [core:info] [pid 75403] AH00096: removed PID file /private/var/run/httpd.pid (pid=75403) [Sat Jul 01 08:51:00.922446 2017] [mpm_prefork:notice] [pid 75403] AH00169: caught SIGTERM, shutting down AH00112: Warning: DocumentRoot [/usr/docs/dummy-host.example.com] does not exist AH00112: Warning: DocumentRoot [/usr/docs/dummy-host2.example.com] does not exist [Sat Jul 01 08:51:01.449227 2017] [mpm_prefork:notice] [pid 75688] AH00163: Apache/2.4.25 (Unix) PHP/5.6.30 configured -- resuming normal operations [Sat Jul 01 08:51:01.449309 2017] [core:notice] [pid 75688] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND' Do I need to enable cgi in /etc/apache2/users/walter/http.conf? Should I?
Executing python script on Apache in MacOS Sierra
0
0
0
3,573
44,862,432
2017-07-01T15:12:00.000
4
1
0
1
python,apache,.htaccess,httpd.conf,macos-sierra
44,865,681
3
false
1
0
Got it to work. Here are the steps that I followed: In etc/apache2/httpd.conf I uncommented: LoadModule cgi_module libexec/apache2/mod_cgi.so Restarted Apache sudo apachectl restart List itemAdded a .htaccess file to my directory with the following contents: Options ExecCGI AddHandler cgi-script .py Order allow,deny Allow from all Added #!/usr/bin/env python to the top of my python script In terminal enabled execution of the python script using: chmod +x widget.py
2
1
0
I am trying to create a python script (widget.py) that will get a JSON feed from an arbitrary resource. I can't get the python script to execute on localhost. Here are the steps I have followed: In etc/apache2/httpd.conf I enabled LoadModule cgi_module libexec/apache2/mod_cgi.so Restarted Apache sudo apachectl restart Added a .htaccess file to my directory: <Directory "widget.py"> Options +ExecCGI AddHandler cgi-script .py </Directory> NOTE: I will eventually need to deploy this on a server where I won't have access to the apache2 directory. Navigated to http://localhost/~walter/widget/widget.py I get a 500 server error. Log file contents: [Sat Jul 01 08:51:00.922413 2017] [core:info] [pid 75403] AH00096: removed PID file /private/var/run/httpd.pid (pid=75403) [Sat Jul 01 08:51:00.922446 2017] [mpm_prefork:notice] [pid 75403] AH00169: caught SIGTERM, shutting down AH00112: Warning: DocumentRoot [/usr/docs/dummy-host.example.com] does not exist AH00112: Warning: DocumentRoot [/usr/docs/dummy-host2.example.com] does not exist [Sat Jul 01 08:51:01.449227 2017] [mpm_prefork:notice] [pid 75688] AH00163: Apache/2.4.25 (Unix) PHP/5.6.30 configured -- resuming normal operations [Sat Jul 01 08:51:01.449309 2017] [core:notice] [pid 75688] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND' Do I need to enable cgi in /etc/apache2/users/walter/http.conf? Should I?
Executing python script on Apache in MacOS Sierra
0.26052
0
0
3,573
44,863,610
2017-07-01T17:18:00.000
1
0
0
0
python,matplotlib,latex
44,864,201
1
true
0
0
I was importing the seaborn package after setting the matplotlib rcParams, which overwrote values such as the font family. Calling rcParams.update(params) after importing seaborn fixes the problem.
1
0
1
I am using matplotlib.rc('text', usetex=True); matplotlib.rc('font', family='serif') to set my font to serif with LaTeX. This works for the tick labels, however the plot title and axis lables are typeset using the sans-serif CMS S 12 computer modern variant. From what I have found on the web, most people seem to have trouble using the sans-serif font. For me the opposite is the case, I cannot get the serif font to work properly. I have tried a hacky solution of setting the sans-serif font to Computer Modern, which unfortunately does not work either.
Matplotlib is incorrectly rendering axis labels in sans-serif when using LaTeX
1.2
0
0
184
44,866,952
2017-07-02T02:23:00.000
1
0
0
1
python,linux
44,866,968
1
false
0
0
So long as the state is not shared in any way between the different interpreters executing the scripts (re each user running the script gets a different Python interpreter process), there should be no problem. However if there is some shared context (such as a log file each process is simultaneously reading/writing from assuming mutual exclusivity), you will very likely have trouble. The trouble could be mitigated in many ways whether through mutexes or other synchronized access.
1
0
0
I have Python scripts in Linux Server. I have multiple scripts in directory example /home/python/scripts all users use same username "python" to login linux server. If multiple users are run same script is there any issues? Like if one user start execute one script before finishing this script another user also started same script. Is variable got overwrite ? What is best way to handle this kind of things.
Python script to run same script by multiple users
0.197375
0
0
917
44,869,453
2017-07-02T09:44:00.000
0
0
0
0
python,server,broker
44,876,882
1
false
0
0
Yes. I should be possible if you have control over application layer. When opc ua layer gives control to read or write a variable node, just before returning control to opc ua layer do all the necessary action to publish data using any broker. I hope this will help you to solve problem.
1
0
0
Is there any way to combine an OPC-UA Server with a Mosquito Broker? I would like to start a server (OPC UA) which can publish different variables etc. which will be handled by a Broker (Mosquito). Therefore I would like to use the "Python FreeOpcUa" implementation and the official Mosquito Broker.
Is it possible to combine "Python FreeOpcUa" with "Mosquito Broker"?
0
0
0
53
44,869,606
2017-07-02T10:05:00.000
0
1
0
0
python,web-scraping,codeship
45,087,672
1
false
0
0
Just noticed your question here on SO and figured I'd answer it here as well as your support ticket (so others can see the general answer). We're actively working on a brand new API that will allow access to both Basic and Pro projects. The target for general availability is currently at the beginning of Aug '17. We'll be having a closed beta before then (where you're on the list)
1
0
0
I couldn't find any API to configure my codeship project :( I have multiple repositories, and would like to allow the following steps programatically: List repositories of my team create a new repo pipeline (for each microservice I have a seperate repo) Edit pipeline steps scripts for newly created project (and for multiple projects "at once") Customize a project's environment variables delete an existing project Is there any way to do it? How do I authenticate? Even if I have to do by recording network curls - is there a better way to authenticate other than pasting an existing cookie I copy from my own browsing? OAuth, user-password as header, etc.? I'm trying to write a python bot to do it, but will take any example code available!
configuring codeship via code
0
0
1
30
44,870,168
2017-07-02T11:11:00.000
0
0
0
0
beautifulsoup,python-3.6
44,872,494
1
false
1
0
Just noticed - you're importing from BeautifulSoup. Try from bs4 import *
1
0
0
I can import BeautifulSoup using Python 2.7 but not when I try using Python 3.6, even though BeautifulSoup claims to work on both? Sorry this is my first question so apologies if it's trivial or if I haven't used the proper conventions. from BeautifulSoup import * Traceback (most recent call last): File "", line 1, in File "/Users/tobiecusson/Desktop/Python/Ch3-PythonForWebData/AssignmentWeek4/BeautifulSoup.py", line 448 raise AttributeError, "'%s' object has no attribute '%s'" % (self.class.name, attr) ^ SyntaxError: invalid syntax from BS4 import BeautifulSoup Traceback (most recent call last): File "", line 1, in File "/Users/tobiecusson/Desktop/Python/Ch3-PythonForWebData/AssignmentWeek4/BS4.py", line 448 raise AttributeError, "'%s' object has no attribute '%s'" % (self.class.name, attr) ^ SyntaxError: invalid syntax
Python 3.6 Beautiful Soup - Attribute Error
0
0
1
289
44,870,294
2017-07-02T11:26:00.000
6
0
0
0
python,macos,selenium,webdriver,selenium-chromedriver
53,889,254
2
false
0
0
You might need to install it with : brew cask install chromedriver or brew install chromedriver and then do which chromedriver You will get the relevant path.
1
2
0
I am trying to get selenium to use chromedriver on mac. I have downloaded the mac version of chromedriver and added it to the same folder as my python file. I am then using: driver = webdriver.Chrome() however it doesn't seem to be opening. This works fine in windows but just not working on mac. anyone got any ideas? Thanks
Selenium chromedriver in relative path for mac and python
1
0
1
15,055
44,871,312
2017-07-02T13:26:00.000
1
0
1
0
python-3.x,anaconda,spyder
44,871,723
1
false
0
0
(Spyder developer here) Please use the Variable Explorer to visualize Numpy arrays and Pandas DataFrames. That's its main purpose.
1
0
1
Hi on running a code in the console I am getting the display as: runfile('C:/Users/DX/Desktop/me template/Part 1 - Data Preprocessing/praCTICE.py', wdir='C:/Users/DX/Desktop/me template/Part 1 - Data Preprocessing') and on viewing a small matrix it is showing up as array([['France', 44.0, 72000.0], ['Spain', 27.0, 48000.0], ['Germany', 30.0, 54000.0], ..., ['France', 48.0, 79000.0], ['Germany', 50.0, 83000.0], ['France', 37.0, 67000.0]], dtype=object) Even though the matrix is pity small how to change it to get the default view when i cun my codes in ipython console I installed the latest version of anaconda
In spyder how to get back default view of running a code in Ipython console
0.197375
0
0
272
44,873,156
2017-07-02T16:56:00.000
1
0
0
0
python,tensorflow,language-model,sequence-to-sequence,perplexity
44,875,134
1
true
0
0
This does not make a lot of sense to me. Perplexity is calculated as 2^entropy. And the entropy is from 0 to 1. So your results which are < 1 do not make sense. I would suggest you to take a look at how your model calculate the perplexity because I suspect there might be an error.
1
0
1
In Tensorflow, I'm getting outputs like 0.602129 or 0.663941. It appears that values closer to 0 imply a better model, but it seems like perplexity is supposed to be calculated as 2^loss, which implies that loss is negative. This doesn't make any sense.
How can the perplexity of a language model be between 0 and 1?
1.2
0
0
298
44,878,653
2017-07-03T05:59:00.000
0
0
1
0
python,process
44,878,817
1
false
0
0
Short answer: it runs on a different process. When you instantiate a Process object, after calling the method start() the current process is going to fork and the new process is going to execute the method run(). If you have a system with multiple cores, you might indeed take advantage of this parallelism. Note: the difference between the classes Pool and Process is that Pool spawns multiple worker processes at once, whereas Process only spawns a single process; however, both use processes that are distinct from your main process.
1
0
0
I am performing multiprocessing in Python. In Python, there are two classes, Pool and Process, to perform multiprocessing. The class Pool executes the processes on multiple cores depending on the availability of the cores. I wanted to know whether the Process class executes the processes in parallel on multiple cores or on a single core??
Does process package from multiprocessing in python executes the processes on multiple cores?
0
0
0
42
44,881,128
2017-07-03T08:40:00.000
0
1
0
0
python,shell,flask
44,884,067
3
false
1
0
Installing wheel using: pip install wheel works, not sure what influenced it.
1
0
0
i have a problem, if you have any idea how it may be fixed, please help) I have a project, which run by the .sh file, and this .sh refers to a python file.py which uses the Flask for html-reports. When i run the file.py (for simplify with 'hello world'), but when i run the .sh file, it writes the error "ImportError: No module named 'flask'". I installed the virtualenv for file.py's folder and .sh folder. Please, give an advice for fix it. Thank's.
Run Flask from run.sh
0
0
0
986
44,882,144
2017-07-03T09:31:00.000
0
1
0
1
python,python-2.7,shell
44,884,687
2
false
0
0
Is there any python/Shell script to make memory 100% usage for 20 minutes. To be technical, we need to be precise. 100% usage of the whole memory by a single process isn't technically possible. Your memory is shared with other processes. The fact that the kernel is in-memory software debunks the whole idea. Plus, a process might start another process, say you run Python from the shell, now you have two processes (the shell and Python) each having their own memory areas. If you mean by that a process that can consume most of ram space, then yes that's not impossible.
1
0
0
Is there any python/Shell script to make memory 100% usage for 20 minutes. Memory size is very big 4TB. Operating System Linux. Python version 2.7
shell/Python script to utilize memory 100% for 20 mins
0
0
0
259
44,882,981
2017-07-03T10:13:00.000
0
0
0
1
python,c
44,883,166
2
false
0
0
What previous answers miss is your question about how to trigger at a specific time, e.g. 9pm. You can use cron for this. It's a *nix utility that executes arbitrary commands on a time schedule you specify. You could set cron to run your C program directly, or to launch a Python wrapper that then starts your C executable. To get started: type crontab -e and follow the examples.
1
0
0
I have a .c file and this file has to be executed only at a particular time interval say 9pm. So, please let me know if there is any possibility of achieving this via python scripting.
Python script to execute a C program at particular time interval
0
0
0
140
44,883,116
2017-07-03T10:19:00.000
0
0
0
0
python,queue,multiprocessing,pytables
44,884,458
1
false
0
0
It's really hard to help you without code. But I just think if you want to find "thin" places in your code you have to write log of it. As I understood one iteration of your worker has to create 268 Series that are made as columns in final dataframe. If these Series are the same shape, then it seems that the issue in queue-worker — and you have to log it in all steps that you can.
1
0
1
Before I state my question, let me put my constraint - I can't post the code as it is related to my job and they don't allow it. So this is just a survey query to see if somebody has seen similar issues. I have a python multiprocessing set up where the workers do the work and put the result in a queue. A special writer worker then accumulates the results from the queue. These results are simple pandas Series. The accumulator puts the results into a pandas dataframe and writes it to a pytable on the disk. The issue is that I randomly see that sometimes a few results are missing in the dataframe, e.g. out of 268 expected columns I will get 267. This has happened around 10 out of 80 times in the last three months. The cure is - simply rerun the code (which means recalculate everything) and it works 100% the second time. I have ensured that there is no error in the calculations, so my guess is that it is related to multiprocessing or pytable data writing. Any hints are appreciated. Sorry for not being able to put the code.
python multiprocessing (using pytable) misses some results from the queue in the final output
0
0
0
54
44,884,515
2017-07-03T11:34:00.000
0
0
1
0
python,pyinstaller,praw
44,884,826
4
false
0
0
I'll recommend to look at pyenv or virtualenv. Activate these env's and install the praw module here. This should work.
1
0
0
I wanted to pack my script using pyInstaller. I run pyinstaller file.py -F, file is created successfully, but when running I get ImportError: No module named 'praw'. So I created new file containing only import praw and run pyinstaller file.py -F --hidden-import=praw but still get the same error when running. I was unable to find anything similar, most issues were solved by using --hidden-import. Any ideas on how it can be solved? EDIT: praw is installed inside virtual environment and running the script directly works as expected.
pyInstaller: ImportError: No module named 'praw'
0
0
0
1,128
44,892,009
2017-07-03T18:32:00.000
0
1
0
0
python,google-sheets,google-api
51,015,293
1
false
0
0
I'm assuming you have some python asset that you would like to use to modify data on Google Sheets. If that's the case, then I think your best option is to use Google APIs to access Google Sheets from python. You can use any of the following two tutorials by Twilio to achieve that. If you want to add the python asset into Google sheets (Ex. as a custom function), then it will be much easier to rewrite it in JS.
1
3
0
What is the easiest way to run a python script from google sheets, or alternatively execute the python script when the google sheets opens?
Execute python script from within google sheets
0
0
0
2,539
44,892,762
2017-07-03T19:30:00.000
0
0
1
0
python-2.7,pycharm,anaconda,python-3.5
44,929,820
2
false
0
0
Thank you for your answers and help. After I've completely removed Anaconda and all the side packages, and reinstalled everything from scratch, the method I mention in the question worked fine and without any setbacks. Although it's frustrating to reinstall everything from scratch, it solved the problem. For some reason, changing the Path backfired and nothing worked at all after that and even when I tried to restore the previous Path, it wouldn't work anymore. So if anyone has the same issue when mentioned above doesn't work, all I can suggest is to fully reinstall the program.
1
0
0
I've been trying to set a virtual environment for some time now, but with no success. I read many topics regarding this manner, but couldn't find a solution for my problem. I am running: Windows 7 Pycharm Community Edition 2017.1.4 Anaconda 2 Python 2.7.13 I am trying to set up an environment of Python 3.5.3 - Anaconda I used this command: conda create -n py35 python=3.5 anaconda (also tried : conda create -n py35 python=3.5.3 anaconda) same result via PyCharm terminal. It does seem that it sets up an environment and I am able to switch between them in the terminal, but when I try to add it as a local Interpreter I keep receiving a message: Cannot set up a python SDK at Python 3.5.3 (C:\Users\Cossack\Anaconda2\envs\py35\python.exe) (C:\Users\Cossack\Anaconda2\envs\py35\python.exe). The SDK seems invalid. I also tried creating conda Env via PyCharm settings, but same error appears. I tried creating environment for python 3.6 just to see if that was the problem source, but again, the same error appeared. I really need both Python 2.7 and 3.5, but I keep getting errors which I have no Idea on how to solve. Thank you very much in advance, hope that someone can help me in solving this issue.
Pycharm - Anaconda2 - Windows7 - py35 environment - SDK error
0
0
0
1,000
44,894,250
2017-07-03T21:47:00.000
2
0
0
0
python,math,statistics,volatility
44,894,389
1
true
0
0
You could use the standard deviation of the list divided by the mean of the list. Those measures have the same units so their quotient will be a pure number, without a unit. This scales the variability (standard deviation) to the size of the numbers (mean). The main difficulty with this is for lists that have both positive and negative numbers, like your List B. The mean could end up being an order of magnitude less that the numbers, exaggerating the stability measure. Worse, the mean could end up being zero, making the measure undefined. I cannot think of any correction that would work well in all cases. The "stability" of a list with both positive and negative numbers is very doubtful and would depend on the context, so I doubt that any general stability measure would work well in all such cases. You would need a variety for different situations.
1
2
1
Wonder if anyone can help. I have a set of lists of numbers, around 300 lists in the set, each list of around 200 numbers. What I wish to calculate is the "relative stability" of each list. For example: List A: 100,101,103,99,98 - the range is v small - so stable. List B: 0.3, 0.1, -0.2, 0.1 - again, v small range, so stable. List C: 0.00003, 0.00002, 0.00007, 0.00008 - stable. Now, I could use standard deviation - but the values returned by the standard deviation will be relative to the values within each list. So std for list C would be tiny in comparison to std for list A - and therefore numerically not give me a comparable measure of stability/volatility enabling me to meaningfully ask: if list A more or less stable than list C? So, I wondered if anyone had any suggestions for a measure that will be comparable across such lists? Many thanks for any help. R
Measure Volatility or Stability Of Lists of Floating Point Numbers
1.2
0
0
361
44,899,119
2017-07-04T07:04:00.000
0
0
0
0
python,pandas
44,899,478
3
false
0
0
Instead of df1.merge(...) try: pd.merge(left=df1, right=df2, on ='e', how='inner')
1
1
1
Sorry I have a very simple question. So I have two dataframes that look like Dataframe 1: columns: a b c d e f g h Dataframe 2: columns: e ef I'm trying to join Dataframe 2 on Dataframe 1 at column e, which should yield columns: a b c d e ef g h or columns: a b c d e f g h ef However: df1.merge(df2, how = 'inner', on = 'e') yields a blank dataframe when I print it out. 'outer' merge only extends the dataframe vertically (like using an append function). Would appreciate some help thank you!
Merging 2 dataframes on Pandas
0
0
0
87
44,902,885
2017-07-04T10:02:00.000
0
0
1
1
python,azure,pyspark,jupyter-notebook,azure-hdinsight
44,903,656
3
false
0
0
Have you tried installing using pip? In some cases where you have both Python 2 and Python 3, you have to run pip3 instead of just pip to invoke pip for Python 3.
1
1
1
I would like to install python 3.5 packages so they would be available in Jupyter notebook with pyspark3 kernel. I've tried to run the following script action: #!/bin/bash source /usr/bin/anaconda/envs/py35/bin/activate py35 sudo /usr/bin/anaconda/envs/py35/bin/conda install -y keras tensorflow theano gensim but the packages get installed on python 2.7 and not in 3.5
how to install python package on azure hdinsight pyspark3 kernel?
0
0
0
2,656
44,905,197
2017-07-04T11:48:00.000
-1
0
0
0
python-2.7,scapy,pcap
62,105,352
2
false
0
0
As you can read everywhere Scape is SUPER slow for pcap files >10MB. I made a couple of tools always using dpkt. Just to give you an idea. Having a 670MB file using dpkt splitting the specific bytes I need compared to full fletched scapy (reading raw packages (the fastest solution with scapy)) is: dpkt needs 150s (6GB ram), scapy >750s (>10GB)
1
0
0
I am trying to build a new layer using scapy. I want to read a pcap file and store each packet in an array so that I can access all the information inside the packet using the index of the array.
Is there a way I can read packets from a pcap file into an array?
-0.099668
0
0
982
44,905,309
2017-07-04T11:53:00.000
1
0
1
0
python,type-conversion
44,905,485
2
false
0
0
From the design of the language, I have the impression so far that it values flexibility more than purity and therefore the above behavior seems kind of non-typical. Your impression is wrong. Python is dynamically-typed, but also strongly-typed, meaning that the type of an object is always fixed. If you need a different type of object, you have to compare it one way or the other. The second question is legitimate, though – it would have been possible. But explicit is better than implicit.
1
1
0
This is probably a bad title but what I mean by it is the following: Since Python is dynamically-typed, why do certain functions insist on being used with certain data types only and do not handle the conversion themselves internally if necessary? What problems could that lead to? Examples: startswith(): Takes both strings and tuples as arguments. Why not lists too? And if there is a good reason for that (which is probably the case), why does it not simply convert them internally? join(): Only accepts containers of string items. Why does it not simply do a [str(x) for x in whatever]? From the design of the language, I have the impression so far that it values flexibility more than purity and therefore the above behavior seems kind of non-typical.. Another example comes from @VPfB "IPv4 address (host, port) must be tuple - not a list - in socket library.". Can it be that tuples are prefered due to their immutability? Is there a case where a tuple is denied and only a list accepted? Preferably part from the standard Python distribution and its modules.
Why isn't Python fexlibe when it comes data-types of function arguments?
0.099668
0
0
66
44,907,749
2017-07-04T13:49:00.000
0
0
1
1
python
44,908,141
2
true
0
0
Found a solution to my problem... I needed to use: set path=%path%;C:\Python27 This switched interpreter. Cheers
1
0
0
I've been developing a set of scripts in PyCharm using C:/Python27/python.exe interpreter... I'm creating a batch file and then running all of the scripts through this file from the cmd shell however the shell is not recognizing most modules because it is using the wrong interpreter (path for anaconda instead)... How do I change the shell to use the C:/Python27/python.exe interpreter as default all the time instead? I've tried looking this up but it all points to just adding the interpreter path, which I have... but the shell still uses the anaconda interpreter. Any help appreicated
Choosing specific python interpreter in the shell
1.2
0
0
185
44,911,066
2017-07-04T16:59:00.000
2
0
0
0
python,postgresql,azure,psycopg2
44,915,875
1
true
1
0
You don't need the specific pg_config from the target database. It's only being used to compile against libpq, the client library for PostgreSQL, so you only need the matching PostgreSQL client installed on your local machine. If you're on Windows I strongly advise you to install a pre-compiled PostgreSQL. You can just install the whole server, it comes with the client libraries. If you're on Linux, you'll probably need the PostgreSQL -devel or -dev package that matches your PostgreSQL version.
1
0
0
Trying to install a postgresql database which resides on Azure for my python flask application; but the installation of psycopg2 package requires the pg_config file which comes when postgresql is installed. So how do I export the pg_config file from the postgresql database which also resides on azure? Is pg_config all psycopg2 need for a successful installation?
How to retrieve the pg_config file from Azure postgresql Database
1.2
1
0
164
44,913,520
2017-07-04T20:17:00.000
1
0
0
0
wxpython
44,931,999
1
true
0
1
The GIL is automatically released and reacquired whenever a call is made to a wx C++ function or method. This is probably a bit of overkill as there are a number of things that are quick and won't ever block and it might be more efficient to not release/reacquire the GIL, but this approach ensures that we don't miss doing it for something that will block. In addition, any time there is an event dispatched to a Python event handler, or when a virtual method overridden in Python is called, or etc. then the GIL is acquired before entry to that code and released again afterwards.
1
0
0
I was wondering the interaction that wxPython has with running things on separate cores and if wxPython releases the GIL when running. I've assumed that when I kick off some command which takes a while that wxPython starts up the wxWidgets c++ code which runs on multiple cores and somehow releases the GIL until the task is finished and then comes back. However I have nothing to base this on. Does this ever happen? What is actually going on under the hood?
Does wxpython run on separate process and release the GIL (Global Interpreter Lock)?
1.2
0
0
59
44,914,376
2017-07-04T21:49:00.000
3
0
0
1
python,bash,python-2.7
44,914,400
1
true
0
0
Use os.getcwd - returns current working directory. Requires import os. Also its slightly unclear, what exactly do you want. You dont really need current working directory in python - every relative path in python will be interpreter as if starting in current directory. So simply use relative paths (in linux those are ones NOT starting with /) - this will work as long as you dont change current directory (which is a bad habbit anyway).
1
0
0
I'm on linux, I wrote a python script that is located in lets say /home/python/main.py. Every time I run it, it asks for a path using raw_input. I run it from a different place every time. say I run it from /home/test like this python /home/python/main.py from terminal. How can I give it the path I'm currently in, if possible. I don't want to hardcode paths. So I want to give it the path from which I'm using the terminal. I don't want to do pwd and copy paths, I'm wondering if there's something like ~ which always points to home directory. Something similar which points to the users current directory.
How to find the user current path?
1.2
0
0
44
44,915,500
2017-07-05T00:27:00.000
1
0
0
0
python,scikit-learn,regression,polynomials
47,966,038
1
false
0
0
Your question is ill defined. If you want, say, 14 features of 34 possible, which 14 should that be? In your place, I would generate a redundant number of features and then would use a feature selection algorithm. It would be a sparse model (like Lasso) or a feature elemination algorithm (like RFE).
1
1
1
I've been using scikit learn, very neat. Fitting a polynomial curve through points (X-Y) is pretty easy. If I have N points, I can chose a polynomial of order N-1 and the curve will fit the points perfectly. If my X vector has several dimension, in scikit-learn I can build a pipeline with a PolynomialFeatures and a LinearRegression. Where I am stuck is having a bit more flexibility with the numbers of features created by PolynomialFeatures (which is not an input); I'd like to be able to specify the total amount of features, with the end goal to have a polynomial that goes through all the points. E.g. in 3D (X has 3 columns), if I have 27 points (square matrix of 3X3X3); I'd like to limit the number of features to 27. (PolynomialFeatures does have an attribute of powers_, but it can't be set. Exploring the source also does not seem to show anything specific). Any ideas? Cheers, N
Python - Fitting a polynomial (multi-dimension) through X points
0.197375
0
0
300
44,916,352
2017-07-05T02:55:00.000
1
0
1
0
python,function
44,916,473
1
false
0
0
If you do a = f() a become what the function f returns. It can be anything, even a function too as long as f() returns a function. If you do a = f and f is a function already defined , then a always be a function as long as you don't re-define a.
1
0
0
Assume f() is defined. In the statement a = f(), a is always a function. Why it is wrong?
Assume f() is defined. In the statement a = f(), a is always a function.
0.197375
0
0
3,467
44,920,110
2017-07-05T07:50:00.000
0
1
1
0
python,python-2.7,python-3.x
45,315,275
1
true
0
0
Seems that pytraj was linked to the wrong libcpptraj Try ldd /home/supriyo/software/amber16/lib/python2.7/site-packages/pytraj/cpp_options.so and delete the libcpptraj.so from that output. It's better to open and issue in github.com/amber-md/pytraj
1
0
0
I am getting the following error when I am tring to import pytaj in python. ImportError: /home/supriyo/software/amber16/lib/python2.7/site-packages/pytraj/cpp_options.so: undefined symbol: _Z15SupressErrorMsgb?
How do I import pytraj in Python?
1.2
0
0
277
44,923,993
2017-07-05T10:42:00.000
0
0
0
0
python,python-3.x,tensorflow,gpu
44,964,036
1
true
0
0
Fresh install is the key but there are some important points: 1. Install CUDA 8.0 toolkit 2. Install cuDNn 5.1 version (not 6.0) 3. Install from source(bazel) and configure to use ensorflow with CUDA Above steps worked for me! Hope it helps anyone.
1
0
1
I am trying to use tensorflow with gpu and installed CUDA 8.0 toolkit and cuDNn v5.1 libraries as described in nvidia website. But when I try to import tensorflow as module in python3.5, it does not load cuDNn libraries (outputs nothing, just loads tensorflow module). And I do not observe speed in processing (same speed I obtained when I use CPU) with GPU.
Tensorflow GPU cuDNn: How do I load cuDDN libraries?
1.2
0
0
784
44,925,309
2017-07-05T11:45:00.000
5
0
1
0
python
44,925,605
1
true
0
0
The syntax ./hello.py is typically used on Unix-like systems (including Linux and OSX); it requires two things: that hello.py has proper rights (execute bit set) that the first line of hello.py is #!/usr/bin/python (or similar, depending on location of your Python interpreter) The other form - python hello.py - does not have such requirements.
1
1
0
Say Suppose, the name of my python script file is hello. How the script can be executed ? Sometimes, I see that most of the python scripts are executed by (python hello.py) and sometimes (./hello.py). Which one of these executing is true? If both are same, why it is mentioned as different commands?
How to run python script?
1.2
0
0
142
44,925,443
2017-07-05T11:51:00.000
0
0
1
0
python,windows,python-2.7,pip,paramiko
44,925,521
2
false
0
0
Can't you just copy the source and execute "pip install ." in the directory ?
1
1
0
I have a machine which is a Windows server, where installation by downloading packages from internet is prohibited. Before that I tried to set up Python 2.7 on my windows machine with internet.I have downloaded and installed pysftp, paramiko, bcrypt, cryptography, pyasn, PyNaCl etc. and also have installed Microsoft visual c++ which was required for pyasn. I have updated pip to 9.0.1 as well. But when I tried to set up on another Windows machine with the help of all packages (unzipped and copied to that machine),installation failed always. I have tried with python setup.py install and pip install So can we install packages without internet connection? Please help me on this. Thanks and regards, Shreeram
How to install packages in Python 2.7 without internet connections in Windows machine?
0
0
0
4,055
44,926,214
2017-07-05T12:24:00.000
0
0
1
1
python,command-line,installation,atom-editor
44,926,753
2
false
0
0
press ctrl+, to open your preferences go to install search for script press install
1
1
0
I struggle with a very basic problem in the Atom editor. I simply want to run a python file but am unable to just open a command line or console. In other forums I read, I need to install Script. I tried to do it in Packages -> Settings View. But if I type 'script' in the install packages field, only this response appears: Searching for “script” failed.Show output… How can I make Atom to find this packages? Thx
Installing Script in Atom
0
0
0
4,217
44,928,429
2017-07-05T14:02:00.000
2
0
1
0
python,regex
44,928,605
2
false
0
0
Just use the regular expression for "in a list of values", and return False if there's a match.
1
0
0
I want to search for a string that doesn't exist in a list of strings by using regex. Is it possible to make a regex for that case without using negative lookahead?
Regular expression for "not in a list of values" without negative lookahead
0.197375
0
0
91
44,931,415
2017-07-05T16:20:00.000
14
0
1
0
python,setuptools,packaging,setup.py
44,931,563
2
false
0
0
python setup.py egg_info will write a package_name.egg-info/requires.txt file which contains the dependencies you want.
1
9
0
I have a third-party package which has a setup.py file that calls setup() in the standard way, passing test_requires, install_requires and extras_require. (It does not use a requirements.txt file.) I am running a Windows machine (on Appveyor) and pip install is notoriously poor on Windows with some of the packages. I would like to use Conda. It seems to me, the ideal way to proceed is: Ask setup.py to list the dependencies it needs, without taking any action. Pass that list to conda to install. Call setup.py with the install or test command, confident that it will check its requirements, and not find anything it needs to install. I thought python setup.py --requires might do the trick, but it is poorly documented and is returning nothing. If this a reasonable approach? If so, is there a way of asking setup.py to evaluated its dependencies, and list them without installing them.
How can I ask setup.py to list dependencies?
1
0
0
4,508
44,932,495
2017-07-05T17:24:00.000
1
0
1
0
python,file,oop,pickle
44,952,297
1
false
0
0
Text files don't have meta data in the same way that a jpg file does. A jpeg file is specifically designed to have ways of including meta data as extra structured information in the image. Text files aren't: every character in the text file is generally displayed to the user. Similarly, every thing in a CSV file is part of one cell in the table represented by the file. That said, there are some things similar to text file metadata that have existed or exist over the years that might give you some ideas. I don't think any of these is ideal, but I'll give some examples to give you an idea how complex the area of meta data is and what people have done in similar situations. Some filesystems have meta data associated with each file that can be extended. As an example, NTFS has streams; HFS and HFSplus have resource forks or other attributes; Linux has extended attributes on most of its filesystems. You could potentially store your pickle information in those filesystem metadata. There are disadvantages. Some filesystems don't have this meta data. Some tools for copying and manipulating files will not recognize (or intentionally strip) meta data. You could have a .txt file and a .pcl file, where the .txt file contains your text representation and the .pkl file contained the other information. Back in the day, some DOS programs would stop reading a text file at a DOS EOF (decimal character 26). I don't think anything behaves like that, but it's an example that there are file formats that allowed you to end the file and then still have extra data that programs could use. With a format like HTML or an actual spreadsheet instead of CSV, there are ways you could include things in meta data easily.
1
0
0
I am writing a function that is supposed to store a text representation of a custom class object, cl I have some code that writes to a file and takes the necessary information out of cl. Now I need to go backwards, read the file and return a new instance of cl. The problem is, the file doesn't keep all of the important parts of cl because for the purpose of this text document parts of it are unnecessary. A .jpg file allows you to store meta data like shutter speed and location. I would like to store the parts of cl that are not supposed to be in the text portion in the meta data of a .txt or .csv file. Is there a way to explicitly write something to the metadata of a text file in Python? Additionally, would it be possible to write the byte-code .pkl representation of the entire object in the metadata?
Store pkl / binary in MetaData
0.197375
0
0
250
44,937,003
2017-07-05T22:30:00.000
1
0
0
0
python,mysql,pymysql
44,937,097
2
false
0
0
The problem here is fact a minor mistake. Thanks to @Asad Saeeduddin, when I try to use print cursor._last_executed to check what has happened. I found that what is in fact executed is SELECT * FROM articles WHERE 'title' LIKE '%steven%', look the quotation mark around the title, that's the reason why I got empty set. So always remember the string after formatting will have a quotation around
1
1
0
What I want is execute the sql select * from articles where author like "%steven%". For the sake of safety, i used like this way : cursor.execute('select * from articles where %s like %s', ('author', '%steven%') Then the result is just empty, not get a syntax error, but just empty set. But I am pretty sure there is some thing inside, I can get result use the first sql. Is there anything run with my code ?
Could not format sql correctly in pymysql
0.099668
1
0
221
44,937,573
2017-07-05T23:35:00.000
3
0
0
0
python,pandas
44,937,815
1
true
0
0
When you perform a groupby/agg operation, it is natural to think of the result as a mapping from the groupby keys to the aggregated scalar values. If we were using plain Python, a dict would be the natural data structure to hold such a mapping from keys to values. Since we are using Pandas, a Series is the natural data structure. Its index would hold the keys, and the Series values would be the aggregrated scalars. If there is more than one aggregated value for each key, then the natural data structure to use would be a DataFrame. The advantage of holding the keys in an index rather than a column is that looking up values based on index labels is an O(1) operation, whereas looking up values based on a value in a column is an O(n) operation. Since the result of a groupby/agg operation fits naturally into a Series or DataFrame with groupby keys as the index, and since indexes have this special fast lookup property, it is better to return the result in this form by default.
1
1
1
When working with groupby on a pandas DataFrame instance, I have never not used either as_index=False or reset_index(). I cannot actually think of any reason why I wouldn't do so. Because my behavior is not the pandas default (indeed, because the groupby index exists at all), I suspect that there is some functionality of pandas that I am not taking advantage of. Can anyone describe cases where it would be advantageous to not reset the index?
What are use cases for *not* resetting a groupby index in pandas
1.2
0
0
40
44,938,099
2017-07-06T00:46:00.000
2
0
1
0
python
44,938,127
1
true
0
0
Multiplication is defined for sequences in general, not just lists, regardless of the type of the contained values. Sure, [1] * 5 -> [5] makes sense for a list of int, but it's nonsensical for a list of str, or a str by itself. They wanted a generic sequence handler that worked for sequences of all types, so they defined it as repeated concatenation, rather than element-wise arithmetic. As juanpa mentions in the comments, this makes for a consistent definition: seq + seq is concatenation, so seq * int is just seq + seq repeated int times (well, seq is repeated int times, with four virtual concatenations), the same way inta * intb is just inta + inta repeated intb times. If you want element-wise arithmetic, take a look at numpy arrays.
1
0
0
I know there is something obvious that I am missing but I would rather clear my confusion than to mug it up. why does the following code: [1]*5 return [1,1,1,1,1] and not [5] or [[1],[1],[1],[1],[1]]
multiplication for lists in python
1.2
0
0
42
44,938,737
2017-07-06T02:12:00.000
2
0
0
0
python-3.x,computer-vision,opencv3.0,dlib
46,657,731
2
false
0
0
Dear past version of self. With Deep learning; Convolutional Neural Networks, This becomes a trivial problem. You can retrain Google's inception V3 Model to classify faces into the the 5 face shapes you mentioned in your question. With Just 500 training images you can attain an accuracy of 98%.
1
0
1
Is there an OpenCV-python, dlib or any python 3 implementation of a face shape detector (diamond, oblong, square)?
Is there a python implementation of a Face shape detector?
0.197375
0
0
2,005
44,939,210
2017-07-06T03:16:00.000
0
0
0
0
python,machine-learning,tensorflow,neural-network,random-forest
44,950,403
2
false
0
0
A useful rule for beginning training models, is not to begin with the more complex methods, for example, a Linear model, which you will be able to understand and debug more easily. In case you continue with the current methods, some ideas: Check the initial weight values (init them with a normal distribution) As a previous poster said, diminish the learning rate Do some additional checking on the data, check for NAN and outliers, the current models could be more sensitive to noise. Remember, garbage in, garbage out.
1
1
1
I am using TensorFlow for training model which has 1 output for the 4 inputs. The problem is of regression. I found that when I use RandomForest to train the model, it quickly converges and also runs well on the test data. But when I use a simple Neural network for the same problem, the loss(Random square error) does not converge. It gets stuck on a particular value. I tried increasing/decreasing number of hidden layers, increasing/decreasing learning rate. I also tried multiple optimizers and tried to train the model on both normalized and non-normalized data. I am new to this field but the literature that I have read so far vehemently asserts that the neural network should marginally and categorically work better than the random forest. What could be the reason behind non-convergence of the model in this case?
TensorFlow RandomForest vs Deep learning
0
0
0
2,520
44,939,999
2017-07-06T04:46:00.000
1
0
1
1
linux,python-2.7,centos7
44,940,122
1
false
0
0
If you set Python 2.7.13 to your PATH and not 2.7.5, the used Python should be 2.7.13. Or you can try to set the PYTHONPATH variable
1
0
0
I have a centos 7 machine , which has 2 python versions , python giver sveriosn 2.7.5 and python2.7 givers version . 2.7.13. I want to make 2.7.13 as default version, such that when I check python --version it gives 2.7.13 and not 2.7.5 . I have added both to PATH.
Different python versions in centos
0.197375
0
0
97
44,940,094
2017-07-06T04:56:00.000
0
0
1
1
python,python-2.7
44,940,434
3
false
0
0
Your path is bugged: C:\Program Files\Git\cmd;"C:\Windows;C:\Windows\System32;C:\Python27";C‌​:\Python27\Scripts should not have those random " characters.
2
0
0
I am very new to python, but it seems like I should be able to activate the python environment by simply typing python. Unfortunately, I keep getting the error "'python' is not recognized as an internal or external command, operable program or batch file." I can access python through it's path at C:\Python27\python, which works fine, but I can't seem to get the shortcut to work. Details: I manually installed Python 2.7, and I run Windows 10. I have tried going to the Advanced System Settings to add the PATH manually, but with no result. None of the tutorials or help articles have suggestions for what to do if adding it manually fails. Is there any way I can fix this? It seems like a small thing, but it's really bugging me.
Unable to add python to PATH
0
0
0
381
44,940,094
2017-07-06T04:56:00.000
1
0
1
1
python,python-2.7
44,940,481
3
true
0
0
Your path configuration is incorrect; your path should look like this: C:\ProgramData\Oracle\Java\javapath;C:\WINDOWS\system32;C:\‌​WINDOWS;C:\WINDOWS\S‌​ystem32\Wbem;C:\WIND‌​OWS\System32\Windows‌​PowerShell\v1.0\;C:\‌​Program Files (x86)\MiKTeX 2.9\miktex\bin\;C:\Program Files\jEdit;C:\Program Files\Git\cmd;C:\Python27\;C‌​:\Python27\Scripts;C‌​:\Users\Maria\AppDat‌​a\Local\Microsoft\Wi‌​ndowsApps; After changing the path, make sure you restart the command prompt or any other application that needs to use Python (or you can just restart the computer).
2
0
0
I am very new to python, but it seems like I should be able to activate the python environment by simply typing python. Unfortunately, I keep getting the error "'python' is not recognized as an internal or external command, operable program or batch file." I can access python through it's path at C:\Python27\python, which works fine, but I can't seem to get the shortcut to work. Details: I manually installed Python 2.7, and I run Windows 10. I have tried going to the Advanced System Settings to add the PATH manually, but with no result. None of the tutorials or help articles have suggestions for what to do if adding it manually fails. Is there any way I can fix this? It seems like a small thing, but it's really bugging me.
Unable to add python to PATH
1.2
0
0
381
44,942,264
2017-07-06T07:23:00.000
0
0
0
0
python,mqtt,paho
63,683,554
3
false
0
0
When sending the initial MQTT CONNECT message from a client, you can supply an optional "keep-alive" value. This value is a time interval, measured in seconds, during which the broker expects a client to send a message, such as a PUBLISH message. If no message is sent from the client to the broker during the interval, the broker automatically closes the connection. Note that the keep-alive value you specify is multiplied by 1.5, so setting a 10-minute keep-alive actually results in a 15 minute interval.
1
5
0
Currently developing something like "smart home" and I have few different devices in my home. All of them connected to OpenHab via MQTT. I'm using Paho MQTT library (Python) for my purposes. Generally, MQTT has "keepalive" property. This property describes how much time my client will be connected (AFAIK it sends the ping to the server) to MQTT server when there are no updates on the subscribed topic. But here I have a huge problem. Needed topic could be updated once per hour or even once per few days/months. Let's say that this is indoor alarm. How can I avoid that keepalive timeout or ignore that field? Could it be unlimited?
Unlimited keepalive in MQTT
0
0
0
14,027
44,942,445
2017-07-06T07:31:00.000
1
0
0
1
python,google-cloud-platform,google-cloud-dataflow,apache-beam
44,953,748
1
true
1
0
Groups cannot be split behind the scenes, so using a GroupByKey should work. In fact, this is a requirement since each individual element must be processed on a single machine and after a GroupByKey all values with a given key are part of the same element. You will likely want to assign random keys. Keep in mind that if there are too many values with a given key it may also be difficult to pass all of those values to your program -- so you may also want to limit how many of the values you pass to the program at a time and/or adjust how you assign keys. One trick for assigning random keys is to generate the random number in start bundle (say 1 to 1000) and then in process element just increment this and wrap 1001 to 1000. This avoids generating a random number for every element, and still ensures a good distribution of keys. You could create a PTransform for both this logic (divide a PCollection<T> into PCollection<List<T>> chunks for processing), and that would be potentially reusable in similar situations.
1
0
0
I'm currently working on a larger Apache Beam pipeline with the Python API which reads data from BigQuery and in the end writes it back to another BigQuery task. One of the transforms needs to use a binary program to transform the data, and for that it needs to load a 23GB file with binary lookup data. So starting and running the program takes a lot of overhead (takes about 2 minutes to load/run each time) and RAM, and it wouldn't make sense to start it up for just a single record. Plus the 23GB file would need to be copied locally from Cloud Storage every time. The workflow for the binary would be: Copy 23GB file from cloud storage if it's not there already Save records to a file run the binary with call() read the output of the binary and return it The amount of records the program can process at a time is basically unlimited, so it would be nice to get a somewhat-distributed Beam Transform, where I could specify a number of records to be processed at once (say 100'000 at a time), but still have it distributed so it can run it for 100'000 records at a time on multiple nodes. I don't see Beam supporting this behaviour, it might be possible to hack something together as a KeyedCombineFn operation that collects records based on some split criterion/key and then runs the binary in the merge_accumulators step over the accumulated records. But this seems very hackish to me. Or is it possible to GroupByKey and process groups as batches? Does this guarantee that each group is processed at once, or can groups be split behind the scenes by Beam? I also saw there's a GroupIntoBatches in the Java API, which sounds like what I'd need, but isn't available in the Python SDK as far as I can tell. My two question are, what's the best way (performance-wise) to achieve this use-case in Apache Beam, and if there isn't a good solution, is there some other Google Cloud service that might be better suited that could be used like Beam --> Other Service --> Beam ?
Batch Processing in Apache Beam with large overhead
1.2
0
0
1,299
44,944,636
2017-07-06T09:14:00.000
1
0
0
0
python,selenium,selenium-webdriver,selenium-chromedriver
44,944,883
3
false
0
0
chromedriver.exe must be in python path, probably now python expects that driver exists in "D:\Selenium\Chrome\chromedriver.exe" but it does not. You could try add chromedriver.exe path to windows enviroment path variable, or add path to os.path in python, or add driver to folder of python script.
1
0
0
I want to use Selenium on Python but I have an alert message: driver-webdriver.Chrome("D:\Selenium\Chrome\chromedriver.exe") NameError: name 'driver' is not defined I have installed the Chrome Driver, what else must I do ?
"Driver is not defined" Python/Selenium
0.066568
0
1
20,826
44,945,622
2017-07-06T09:55:00.000
0
0
0
0
python,django,django-admin,django-apps
45,302,632
1
false
1
0
You can override admin/base_site.html and write your own version of {% block sidenav_header %}
1
1
0
I have an app other in django. I have models like PaymentMethods , Currencies , ReasonCodes , DeviceInfo etc in this app.. I want to display few models under a different heading. How can I do this? Do we have to use a library? Tried using django-modeladmin-reorder library. But it doesnt work. Note : I am using Django 1.8 and a library django-material for the admin interface. Eg of what I would want. Other is my app name. It has 4 models under it. PaymentMethods, Currencies, ReasonCodes , DeviceInfo Now in the default admin, all the 4 models are displayed under other app. I need to display like this, verbose name of other will be 'payments' and under this i would want PaymentMethods, Currencies and another section named 'device info' and under this, ReasonCodes , DeviceInfo should be shown.
Display models of same app in different app django
0
0
0
525
44,945,850
2017-07-06T10:05:00.000
0
0
1
0
python,machine-learning,data-science
44,947,979
2
false
0
0
as suggested by @DavidG, the following solution worked: Download the whl file use cmd window and go to the download folder and then install like below: C:\Users\XXXXXXXX>cd C:\Users\XXXXXXXX\Documents\Python Packages C:\Users\XXXXXXXX\Documents\Python Packages>pip install numpy-1.13.0+mkl-cp36-cp36m-win32.whl Processing c:\users\XXXXXXXX\documents\python packages\numpy-1.13.0+mkl-cp36-cp 36m-win32.whl Installing collected packages: numpy Found existing installation: numpy 1.13.0 Uninstalling numpy-1.13.0: Successfully uninstalled numpy-1.13.0 Successfully installed numpy-1.13.0+mkl C:\Users\XXXXXXXX\Documents\Python Packages>pip install scipy-0.19.1-cp36-cp36m -win32.whl Processing c:\users\XXXXXXXX\documents\python packages\scipy-0.19.1-cp36-cp36m- win32.whl Requirement already satisfied: numpy>=1.8.2 in c:\users\XXXXXXXX\appdata\local\ programs\python\python36-32\lib\site-packages (from scipy==0.19.1) Installing collected packages: scipy Successfully installed scipy-0.19.1 C:\Users\XXXXXXXX\Documents\Python Packages>
1
0
1
How to download necessary python packages for data analysis (e.g. pandas,scipy,numpy etc) and machine learning packages (sci-kit learn for starter, tensorflow for deeplearning if possible etc) without using github or anaconda? Our client has permitted us to install python 3.6 and above (32-bit) in our terminals for data analysis and machine learning projects but we cannot access github due to security restrictions and also cannot download anaconda bundle. Please provide suitable weblinks and instructions.
Installing data science packages to vanilla python
0
0
0
931
44,946,737
2017-07-06T10:44:00.000
1
0
0
0
python-2.7,tensorflow,windows-10,keras,coreml
45,007,258
1
true
0
0
A non-optimal solution (the only one I found) in my opinion, is to install a Linux virtual machine. I used VitualBox for it. Then, it is very easy to download Anaconda and Python 2, as well as the right versions of the packages. For example, you can download Tensorflow 1.1.0 using the following command $ pip install -I tensorflow==1.1.0.
1
0
1
I am currently working on an artificial neural network model with Keras for image recognition and I want to convert it using CoreML. Unfortunately, I have been working with Python3 and CoreML only works with Python 2.7 at the moment. Moreover, Tensorflow for Python 2.7 does not seem to be supported by Windows... So my only hope is to find a way to install it. I saw some tips using Docker Toolbox but I did not catch it and I failed when trying this solution, even though it looks like the only thing that works. So, is there any quite simple way to install Tensorflow for Python 2.7 on Windows 10? Thank you very much!
Installing Tensorflow for Python 2.7 for Keras and CoreML conversion on Windows 10
1.2
0
0
1,130