Dataset schema (one row per question-answer pair):

Column                              Type             Min    Max
Q_Id                                int64            337    49.3M
CreationDate                        string (length)  23     23
Users Score                         int64            -42    1.15k
Other                               int64            0      1
Python Basics and Environment       int64            0      1
System Administration and DevOps    int64            0      1
Tags                                string (length)  6      105
A_Id                                int64            518    72.5M
AnswerCount                         int64            1      64
is_accepted                         bool             2 classes
Web Development                     int64            0      1
GUI and Desktop Applications        int64            0      1
Answer                              string (length)  6      11.6k
Available Count                     int64            1      31
Q_Score                             int64            0      6.79k
Data Science and Machine Learning   int64            0      1
Question                            string (length)  15     29k
Title                               string (length)  11     150
Score                               float64          -1     1.2
Database and SQL                    int64            0      1
Networking and APIs                 int64            0      1
ViewCount                           int64            8      6.81M
44,464,654
2017-06-09T18:35:00.000
0
1
0
0
python-3.x,python-requests,python-unittest
44,464,795
1
false
0
0
An example of the code you're talking about (with proprietary stuff removed, of course) might help clarify. Is the variable self.session on the test class itself, rather than on the instance? That sounds as if it might end up leaking state between your tests; attaching it to the instance might help. Beyond that, I generally think it makes sense to move as much out of setUp methods as possible. Authentication is an important part of your test, and it should probably be done alongside all the other logic.
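A minimal sketch of that suggestion, assuming a requests-based client: the session is created fresh in setUp and bound to the test instance (self), so nothing leaks between tests. The httpbin URL and credentials are stand-ins, not the asker's real API.

    import unittest
    import requests

    class ApiClientTest(unittest.TestCase):
        def setUp(self):
            # Fresh session per test, attached to the instance rather than
            # the class, so auth state cannot leak between tests.
            self.session = requests.Session()
            self.session.auth = ("user", "passwd")   # hypothetical credentials

        def tearDown(self):
            self.session.close()

        def test_request_is_authenticated(self):
            resp = self.session.get("https://httpbin.org/basic-auth/user/passwd")
            self.assertEqual(resp.status_code, 200)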
1
0
0
I'm working on a unit tests for an API client class. There is a class variable self.session that is supposed to hold the session. In my setup method for my test I create a new instance of the client class and then call its authenticate method. However when the tests themselves go to send requests using this object they all return 401 forbidden errors. If I move the authenticate call (but not the creation of the class) into the tests and out of setup everything works great, but I understand that that defeats the purpose of setup().
Lifetime of Request Sessions in Unit Testing
0
0
1
27
44,466,061
2017-06-09T20:15:00.000
0
1
0
0
php,python
44,466,115
2
false
1
0
You have to expose the data to the Internet. Either create an upload script in PHP that will receive data whenever your Python script captures a new weight (you probably don't want this approach), or write a PHP script that will execute the Python script, take its output and send it back to you. You can then reload the whole page or use JavaScript to update the input field. Hope this helps; comment if you have any questions.
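A sketch of the Python side of the first approach, assuming a hypothetical upload.php endpoint that stores the latest reading for the page's JavaScript to read back:

    import requests

    def send_weight(weight: float) -> None:
        # Post the latest scale reading to the (hypothetical) PHP upload
        # script; the page can then poll it and fill the input field.
        requests.post("https://example.com/upload.php",
                      data={"weight": weight}, timeout=5)

    send_weight(72.4)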
1
0
0
Is it possible to send the result of a Python script to the input field of a PHP page? I am using a PYTHON script to capture weight data from a scale. I would like to use that data from the python script, placing it into the input field of a PHP page. The name of the input field on the page will remain static, as will the name of the page. To initiate the process (from python script to the input field) I would use an on click command or something similar. I am very new to python and very much appreciate any help. Bob
Send output to a PHP input field
0
0
0
140
44,466,855
2017-06-09T21:20:00.000
6
0
0
0
python,pyautogui
44,466,895
1
true
0
0
Windows 10 has made this more difficult. If the application you are trying to automate is running as Admin, you cannot control it with a program running as a regular user. Try running your Python program as administrator. But yes, it is possible for a program to distinguish between real mouse events and simulated ones, and if it is a highly sensitive program, they may have done so. Or, if it is a video game, they may just poll the hardware directly and ignore Windows messages. EDIT: Also, many applications want more than a "click" message: they want the mouseenter/mousemove/mousedown/mouseup sequence. If you don't have all of those messages being sent, it won't register as a "click". pyautogui.click should simulate it properly, but if you experiment with the app and look at how it responds (click the mouse without releasing: what happens?) you might be able to improve the simulation. Maybe put a delay between pyautogui.mouseDown() and pyautogui.mouseUp(). But my hunch is the app is running as a different user than the Python script.
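A sketch of the down/up idea from the answer; the coordinates are placeholders:

    import time
    import pyautogui

    # Move first so the app sees mouseenter/mousemove events, then send an
    # explicit press/release pair with a short hold in between, in case the
    # target ignores instantaneous synthetic clicks.
    x, y = 500, 300                       # placeholder coordinates
    pyautogui.moveTo(x, y, duration=0.2)
    pyautogui.mouseDown(x, y)
    time.sleep(0.1)
    pyautogui.mouseUp(x, y)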
1
2
0
I was working on a project where I wanted to automate a GUI using python, but the windows program I was trying to automate does not respond to pyautogui mouse clicks. Is it possible that the company that made this application intentionally blocked windows API events? The particular program I am trying to automate is Blackbaud's Raiser's Edge. I am asking because I am planning on potentially modding a mouse with a raspberry pi to control mouse clicks and then SSH to it from my computer if there is no other work-around for this issue.
Application not responding to Windows API events?
1.2
0
0
1,736
44,468,527
2017-06-10T01:04:00.000
0
0
1
0
python,amazon-web-services
44,469,173
1
true
1
0
Much of what you are asking depends upon your use-case. For example, if you have work continually arriving then you will need capacity continually available. However, if it is more batch-oriented then you could start/stop capacity and even use the new Amazon Batch service that can allocate resources when needed and remove them when jobs are finished. Some things to note: You can change the Instance Type when an instance is stopped, so your t2.micro instance can be changed to a large instance (e.g. m4.xlarge) by stopping it, changing the instance type and starting it again. The t2.micro Instance Type is actually extremely powerful when CPU burst credits are available, but it is limited in capability when all CPU credits are consumed: a good machine for bursty workloads, but not continual workloads. Spot instances are great, but please note that they will be terminated if the Spot Price goes above your bid price. Prices vary by region, by Availability Zone within a region and by instance type, so you could launch instances with a variety of attributes (different AZs, different instance types) that would make it unlikely that you would lose all capacity at the same time. Spot can save considerable costs and is well worth investigating. Take a look at Amazon EMR, which runs Hadoop to provide parallel processing across a cluster of instances. Stop instances when you don't need them; that's the best way to get good value!
1
0
0
I am new to Amazon Web Services (AWS) and I am using the free-tier t2.micro right now (1 CPU and 1 GB memory). I'm doing some backtesting/simulation work, and the free tier seems quite inadequate; pretty slow, actually. Thus I'm thinking of options which will help me run my code at a faster speed for a few hours. Option 1: I can buy a CPU-optimized / higher-memory instance (4 cores and 4 GB RAM, for example), make an image of my t2.micro and run my stuff on the new one. It will be expensive if I keep it running, though, so I would need to "stop" the instance when I am not working (or nothing is running) to reduce the cost. Option 2: I can buy spot instances. I am not sure how to use the CPU and RAM of a spot instance from my existing t2.micro. Can I create a temporary grid/cluster where my head node runs on my t2.micro but the compute node is the spot instance (higher CPU and RAM), so that all my calculations use the spot instance? My question: is Option 2 possible? I program everything in Python, and I already have all the relevant software/Python IDEs installed on my t2.micro instance. Is there any existing grid/cluster software I can use right now? I don't know C++, C#, Java etc.; I know only Python and R, so any programming needed to build the grid/cluster must use Python :) Thank you in advance.
Cluster with t2.micro instances in AWS
1.2
0
0
582
44,469,313
2017-06-10T03:50:00.000
0
0
1
0
python,pandas,ipython-notebook
59,876,018
15
false
0
0
This will also work, provided you strip the thousands separator as well and convert to float (the values have cents, so astype(int) would fail): dframe.amount.str.replace("[$,]", "", regex=True).astype(float)
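A fuller sketch of the conversion, assuming strings like "$3,092.44", where both the dollar sign and the thousands separators need stripping:

    import pandas as pd

    df = pd.DataFrame({"amount": ["$3,092.44", "$1,000.00", "$12.50"]})

    # Strip "$" and "," then convert; use float because the values carry
    # cents. Cast to int afterwards only if truncation is intended.
    df["amount"] = df["amount"].str.replace("[$,]", "", regex=True).astype(float)
    print(df["amount"].dtype)   # float64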
1
4
1
I have a column called amount which holds values that look like this: $3,092.44. When I do dataframe.dtypes it returns this column as an object. How can I convert this column to type int?
Price column object to int in pandas
0
0
0
12,214
44,471,853
2017-06-10T09:42:00.000
3
0
0
0
python,numpy,matrix,octave
44,471,880
2
true
0
0
zeros(n,1) works well for me in Octave.
1
0
1
How can we create an array with n elements? The zeros function can only create arrays of dimension greater than or equal to 2: zeros(4), zeros([4]) and zeros([4 4]) all create a 2D zero matrix of dimensions 4x4. I have code in Python where I have used numpy.zeros(n); I wish to do something similar in Octave.
Creating 1D zero array in Octave
1.2
0
0
7,005
44,471,991
2017-06-10T09:58:00.000
0
0
1
0
anaconda,jupyter-notebook,python-3.6,data-science
56,126,772
1
false
0
0
I uninstalled the 64-bit software, removed the shortcuts, installed the 32-bit software and ran the command below at the Anaconda prompt: (base) PS C:\anaconda3\Scripts> conda install -c anaconda anaconda-navigator. Once the updates were done, the Navigator client opened with no issues.
1
4
0
I downloaded Anaconda 3.6, but when I try "conda update conda" or try opening Jupyter Notebook, it shows "Failed to create process". Please help!
How do I resolve "Failed to create process" in Anaconda?
0
0
0
2,487
44,473,829
2017-06-10T13:28:00.000
0
0
1
0
python,pycharm
44,473,989
1
false
0
0
Unless there is a specific requirement to use urllib, I suggest you use python-requests. Both do the same job.
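A minimal sketch of the requests equivalent of urllib.request.urlopen:

    import requests

    resp = requests.get("https://example.com", timeout=10)
    resp.raise_for_status()    # raise on 4xx/5xx instead of failing silently
    print(resp.text[:200])     # first 200 characters of the body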
1
0
0
I am trying to import urllib.request in Python, but when I try to do so, I get the following error: ImportError: No module named setuptools. I tried 'sudo apt-get install -y python-setuptools', but after doing so I still get the same error. I am using PyCharm and my Python version is 2.7.12+.
ImportError: No module named setuptools
0
0
1
439
44,475,555
2017-06-10T16:25:00.000
0
0
1
0
python-2.7,python-3.x
44,475,561
1
true
0
0
exception RuntimeError: Raised when an error is detected that doesn't fall into any of the other categories. The associated value is a string indicating what precisely went wrong.
1
0
0
When I run my program I get this error; what does it mean? RuntimeError: illegal event name
Runtime Error illegal event name python
1.2
0
0
41
44,479,568
2017-06-11T01:25:00.000
0
0
0
1
python,unzip
52,304,543
2
false
0
1
I just ran into the same thing and figured out what's causing it. It turns out that if a zip file has a zip comment attached, the comment will be shown, along with a prompt that hangs your script. Passing -q to unzip will avoid showing the comment and any hangs, though you will lose the list of files being unzipped too. I haven't figured out how to suppress just the comment and not the rest of what unzip prints.
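A sketch of driving unzip non-interactively from the script with -q, as described; Python's own zipfile module is another way to sidestep the prompt entirely:

    import subprocess

    def unzip_apk(path: str, dest: str) -> None:
        # -q suppresses the comment (and the file listing), -o overwrites
        # without asking, so nothing stops to wait for a keypress.
        subprocess.run(["unzip", "-q", "-o", path, "-d", dest], check=True)

    unzip_apk("sample.apk", "extracted/")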
1
1
0
I want to write a Python script to automatically unzip lots of apk files and then do static analysis. However, when I unzip some apk files, unzip prompts "Press 'Q' to quit, or any other key to continue". Because it's a script and I haven't pressed any key, the script hangs. Is there any command option that can solve this problem, or do I have to handle it in Python? Thanks in advance :D
Unzip prompted "Press 'Q' to quit, or any other key to continue"
0
0
0
292
44,481,443
2017-06-11T07:35:00.000
1
0
1
0
python,beautifulsoup,pip,python-idle
44,481,491
1
true
0
0
It's not beautifulsoup or Beautifulsoup. Try this: from bs4 import BeautifulSoup
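A quick check that the install is visible: the package name is beautifulsoup4, but the import goes through bs4.

    from bs4 import BeautifulSoup

    soup = BeautifulSoup("<p>hello</p>", "html.parser")
    print(soup.p.text)   # hello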
1
1
0
Every single dependency I try to install results in a ton of troubleshooting before I can get IDLE to recognise that it's installed. I'm on a Mac, and I'm using the Terminal's pip install. I also have two versions of IDLE – one for Python 2, the other for 3. pip install says "beautifulsoup" is installed; pip3 install says "beautifulsoup" is installed; and yet it doesn't appear that IDLE knows that it is. I've tried "import beautifulsoup4", "import beautifulsoup", "from bs4 import beautifulsoup"... Why is this happening for every dependency I install?
How to avoid "no module installed" in Python 3
1.2
0
0
69
44,481,930
2017-06-11T08:41:00.000
-1
0
1
0
python,python-2.7,elementtree,raspberry-pi3
44,481,957
2
false
0
0
It is the best way to open an XML file. It reads everything that is in the file in one go, so there is no opening or closing left for you to do.
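A sketch of both styles, assuming a local data.xml: letting ET.parse manage the file itself, versus handing it an already-open file object whose lifetime you control.

    import xml.etree.ElementTree as ET

    # ET.parse(path) opens, reads and closes the file by itself.
    tree = ET.parse("data.xml")

    # For explicit control, pass a file object instead; it is closed
    # when the with-block exits.
    with open("data.xml", "rb") as f:
        tree = ET.parse(f)

    print(tree.getroot().tag)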
1
0
0
I am using ET.parse(path) to parse an XML file and read from it. Does ET.parse automatically close the XML file after opening it? Is it a safe way to access the file for reading?
Does ET.parse automatically open and close a XML file?
-0.099668
0
1
1,137
44,484,082
2017-06-11T12:51:00.000
0
1
1
0
python,module,root,gpio,google-assistant-sdk
69,848,425
4
false
0
0
I ended up just installing the Python package with sudo and it worked fine. In my case it was sudo pip3 install findpi, then executed as sudo findpi, and it worked.
1
7
0
I've got a script that uses the Google Assistant Library and has to import some modules from there. I figured out that this only works in a Python virtual environment, which is really strange. In the same folder I've got a script which uses the GPIO pins and has to run as root. They interact with each other, so when I start the GPIO script, the Assistant script is also started. But for some reason the modules there can't be imported when the script is started as root. Does anybody know something about this?
Python can't find module when started with sudo
0
0
0
9,151
44,487,269
2017-06-11T18:29:00.000
0
0
0
0
java,python,stanford-nlp
44,512,629
1
false
0
0
If you are using the command line you can use -outputFormat text to get a human readable version or -outputFormat json to get a json version. In Java code you can use edu.stanford.nlp.pipeline.StanfordCoreNLP.prettyPrint() or edu.stanford.nlp.pipeline.StanfordCoreNLP.jsonPrint() to print out an Annotation.
1
0
1
Using the Stanford NLP, I want my text to go through lemmatization and coreference resolution. So for an input.txt: "Stanford is located in California. It is a great University, founded in 1891." I would want the output.txt: "Stanford be located in California. Stanford be a great University, found in 1891." I am also looking to get a table where the first column consists of the name-entities that were recognized in the text, and the second column is the name class they were identified as. Thus, for the example sentence above, it would be something like:

1st Column     2nd Column
Stanford       Location, Organization
California     Location

Thus, in the table, the name-entities would occur only once. There's nothing I was able to find online about manipulating the default xml output or making direct changes to the input text file using the NLP. Could you give me any tips on how to go about this?
Stanford NLP Output Formatting
0
0
0
784
44,487,963
2017-06-11T19:37:00.000
0
0
1
0
python,command-line,zeromq,importerror
44,570,836
2
false
0
0
I actually solved it by using the -m option: python -m ... I would like to know why that worked.
1
0
0
Well, the subject says it all. I can run the same file in PyCharm without problems; putting import zmq in a file reproduces it. I never had a problem with zmq before running the program from the command line.
Why do I get an import error for zmq but only from command line?
0
0
0
323
44,488,174
2017-06-11T19:57:00.000
4
0
1
0
python,windows,python-3.x,portability
44,488,581
2
false
0
0
Please correct me if I understood it wrong. I think there are at least two ways to do it. Suppose you have one portable_run.py script you want to run everywhere from a flash disk. First, make an exe file, with pyinstaller for example; you get an exe file like portable_run.exe, and on the target Windows system all you need to do is run it directly: portable_run.exe. Second, use a portable Python distribution like winpython or python-xy; you just need to copy this portable distribution onto the flash disk together with your portable_run.py, and to run it on the target system: flashdisk/path-of-winpython/python portable_run.py. Hopefully this gives you some ideas.
1
10
0
I want to make a portable app that would have some code plus a Python executable, and that would run on any Windows machine even if Python is not installed. I would like it to be Python 3.6, with only pip and setuptools installed. EDIT: concerning the duplicate, not quite. I don't want to compile the code. I wanted to give them .py files but realized that Windows won't have Python installed by default. I want something that can be carried on a flash drive but will run my code from source, not from a binary.
How to make python portable?
0.379949
0
0
10,780
44,488,349
2017-06-11T20:16:00.000
2
0
1
1
python,python-3.x,anaconda,conda
68,669,748
5
false
0
0
I had a similar problem when using cmd. From your command prompt, go to C:\Users\zkdur\anaconda3\Scripts and try conda init --help and conda init --verbose. After that, restart your command prompt and conda will be working.
2
9
0
I tried conda install -c conda-forge requests-futures=0.9.7 but it failed with "conda is not recognized as an internal or external command". C:\Users\user_name\Anaconda3\Scripts has been added to Path in the environment variables, under both user and system variables. I installed Python 3.5 as well and it is on Path; I am using Win10 x64. How do I fix the issue?
Windows 10 conda is not recognized as an internal or external command
0.07983
0
0
39,426
44,488,349
2017-06-11T20:16:00.000
0
0
1
1
python,python-3.x,anaconda,conda
69,712,123
5
false
0
0
After installing Anaconda on windows 10, you can use Anaconda prompt from start menu to activate a conda enabled terminal window.
2
9
0
I tried conda install -c conda-forge requests-futures=0.9.7 but it failed with "conda is not recognized as an internal or external command". C:\Users\user_name\Anaconda3\Scripts has been added to Path in the environment variables, under both user and system variables. I installed Python 3.5 as well and it is on Path; I am using Win10 x64. How do I fix the issue?
Windows 10 conda is not recognized as an internal or external command
0
0
0
39,426
44,490,309
2017-06-12T01:38:00.000
1
0
1
0
python,anaconda,jupyter-notebook
44,759,932
1
true
0
0
just check "Bypass proxy server for local addresses" in proxy settings.
1
1
0
On my Windows 7 64-bit, I installed Anaconda3 v4.4.0 in C:\Anaconda3. Now, after launching Anaconda Navigator, I can't launch jupyter notebook from there. What should I do? I installed Anaconda3 using an admin account, and then switched to a normal user account to use it. I'm not sure if this could have any effect.
Can't open jupyter notebook from anaconda
1.2
0
0
696
44,491,067
2017-06-12T03:35:00.000
0
0
1
0
python,pandas,dataframe,indexing
50,835,528
2
false
0
0
A simple way would be to use slicing with iloc. All but the last column: df.iloc[:, :-1]. All but the first column: df.iloc[:, 1:].
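A sketch of the asker's end goal on a toy frame: loop over every column except the last and compare it element-wise with the last column.

    import pandas as pd

    df = pd.DataFrame({1: [1, 2, 3], 2: [4, 2, 3], 3: [1, 2, 9]})

    last = df.iloc[:, -1]
    for col in df.columns[:-1]:        # all but the last column
        matches = df[col] == last     # row-wise comparison
        print(col, matches.tolist())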
1
4
1
Let's say I have a pandas dataframe df where the column names are the corresponding indices, so 1, 2, 3, ..., len(df.columns). How do I loop through all but the last column, i.e. stopping one before len(df.columns)? My goal is ultimately to compare the corresponding element in each row, for each of the columns, with that of the last column. Any code would be helpful! Thank you!
How to loop over all but last column in pandas dataframe + indexing?
0
0
0
5,300
44,497,733
2017-06-12T10:58:00.000
2
0
1
0
python,conditional-statements,eval
44,498,023
3
true
0
0
As @scotty3785 mentioned, you need to create a separate function that checks the input for the operations you need. Then you pass the input to ast.literal_eval(node_or_string). I would avoid using eval() at all.
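Note that ast.literal_eval only accepts literals, so a condition like gzip == True would be rejected outright. A common pattern for condition-only input is to parse the string and whitelist the AST node types before evaluating; a sketch of that idea (assumes Python 3.8+, where literals parse as ast.Constant):

    import ast

    ALLOWED = (ast.Expression, ast.BoolOp, ast.And, ast.Or, ast.Compare,
               ast.Name, ast.Load, ast.Constant, ast.Eq, ast.NotEq,
               ast.Gt, ast.GtE, ast.Lt, ast.LtE)

    def safe_condition(expr: str, variables: dict) -> bool:
        tree = ast.parse(expr, mode="eval")
        for node in ast.walk(tree):
            if not isinstance(node, ALLOWED):
                raise ValueError(f"disallowed syntax: {type(node).__name__}")
        # No builtins; only the caller-supplied variables are visible.
        return eval(compile(tree, "<condition>", "eval"),
                    {"__builtins__": {}}, variables)

    print(safe_condition('gzip == True or msg == "Hello!"',
                         {"gzip": False, "msg": "Hello!"}))   # True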
1
0
0
I have text input in this form from a textbox: gzip == True; gzip == False and count >= 100; gzip == True or msg == "Hello!". I use eval() to get the result of the condition. However, there are the obvious security concerns with eval, like code injection. Is there any way I can limit it to conditions? I don't need it for anything else.
python eval() to only accept conditions
1.2
0
0
253
44,500,275
2017-06-12T13:07:00.000
0
1
1
0
python
44,500,328
2
false
0
0
There isn't anything built-in. You'd have to implement it yourself, or use a third-party library.
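One third-party option along these lines is mpmath, which ships arbitrary-precision complex numbers; a small sketch:

    from mpmath import mp, mpc, sqrt

    mp.dps = 50                  # 50 significant decimal digits
    z = mpc("1.1", "2.2")        # complex value built from decimal strings
    print(z * z)
    print(sqrt(mpc(-1, 0)))      # i, at full working precision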
1
6
0
Is there a way to manipulate complex numbers with more than floating-point precision in Python? For example, to get better precision on real numbers I can easily use the Decimal module; however, it doesn't appear to work with complex numbers.
Decimal module and complex numbers in Python
0
0
0
1,543
44,500,526
2017-06-12T13:20:00.000
0
0
0
0
python,machine-learning,tensorflow,keras,autoencoder
44,530,760
1
false
0
0
The dataset I used was a single .mat file, created using scipy's savemat and loaded with loadmat. It was created on my MacBook and distributed via scp to the other machines. It turned out that the issue was with this .mat file (I do not know exactly what, though). I have switched away from the .mat file and everything is fine now.
1
2
1
I have written a Variational Auto-Encoder in Keras using Tensorflow as backend. As optimizer I use Adam, with a learning rate of 1e-4 and batch size 16. When I train the net on my Macbook's CPU (Intel Core i7), the loss value after one epoch (~5000 minibatches) is a factor 2 smaller than after the first epoch on a different machine running Ubuntu. For the other machine I get the same result on both CPU and GPU (Intel Xeon E5-1630 and Nvidia GeForce GTX 1080). Python and the libraries I'm using have the same version number. Both machines use 32 bit floating points. If I use a different optimizer (eg rmsprop), the significant difference between machines is still there. I'm setting np.random.seed to eliminate randomness. My net outputs logits (I have linear activation in the output layer), and the loss function is tf.nn.sigmoid_cross_entropy_with_logits. On top of that, one layer has a regularizer (the KL divergence between its activation, which are params of a Gaussian distribution, and a zero mean Gauss). What could be the cause of the major difference in loss value?
Tensorflow major difference in loss between machines
0
0
0
203
44,504,140
2017-06-12T16:16:00.000
0
0
0
0
python,padding,cntk
45,828,029
1
false
0
0
There is a new pad operation (in master; will be released with CNTK 2.2) that supports reflect and symmetric padding.
1
1
1
In the cntk.layers package we have the option to do zero padding: pad (bool or tuple of bools, defaults to False) – if False, then the filter will be shifted over the "valid" area of input, that is, no value outside the area is used. If pad=True, on the other hand, the filter will be applied to all input positions, and positions outside the valid region will be considered to contain zero. Use a tuple to specify a per-axis value. But how can I use other types of padding, like reflect or symmetric padding? Is it possible to integrate my own padding criterion into cntk.layers? I'm a beginner in CNTK and really grateful for any help.
CNTK & Python: How to do reflect or symmetric padding instead of zero padding?
0
0
0
177
44,506,675
2017-06-12T18:47:00.000
0
0
1
0
python,python-import,arcpy
44,506,760
2
false
0
0
Go inside the environment that has arcpy, look for the environment variable PYTHONPATH, and just add that path to the PYTHONPATH in your new environment.
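The same idea can be done at runtime by extending sys.path with the directory that holds the arcpy package; the install path below is a hypothetical example, so check where your ArcGIS actually lives:

    import sys

    # Hypothetical ArcGIS install location; adjust to your version/path.
    sys.path.append(r"C:\Program Files (x86)\ArcGIS\Desktop10.5\arcpy")

    import arcpy   # resolves if the appended directory is correct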
1
0
0
We have software called ArcGIS that comes with a Python environment, which has a library called arcpy. When you execute the python.exe from that environment, it imports arcpy with no issue. But I needed to create another Python environment that contains the same library, and I just couldn't find anything named arcpy in the environment's folders. I even copied the whole Lib folder from the original environment to the one I'm trying to create, but it still won't import arcpy. I know this is kind of a shot in the dark, as it is a proprietary library and I can't share much info, but does anyone know what it could be? It seems they use Anaconda too.
What other ways can python look for modules?
0
0
0
62
44,508,145
2017-06-12T20:25:00.000
0
0
1
1
python,django
44,508,252
1
false
0
0
Make sure you're spelling the file name correctly. maanage.py != manage.py
1
0
0
Whenever I use python manage.py runserver in Windows PowerShell it causes Python to crash: a "Python has stopped working" window pops up. Any idea why? I've tried rebooting, which caused a different type of error: python.exe: can't open file '.\maanage.py': [Errno 2] No such file or directory. I created another project and tried runserver, and again it caused the first error. All installation commands ran smoothly, so why am I facing this error?
Python localhost server
0
0
0
39
44,508,254
2017-06-12T20:32:00.000
-1
0
1
0
python,memory-management
64,046,834
3
false
0
0
Try updating your Python from 32-bit to 64-bit. Simply type python in the command line and you will see which build yours is. The memory available to 32-bit Python is very limited.
2
20
0
I am currently using a function making extremely long dictionaries (used to compare DNA strings) and sometimes I'm getting MemoryError. Is there a way to allot more memory to Python so it can deal with more data at once?
Increasing memory limit in Python?
-0.066568
0
0
69,022
44,508,254
2017-06-12T20:32:00.000
25
0
1
0
python,memory-management
44,508,341
3
false
0
0
Python doesn’t limit memory usage on your program. It will allocate as much memory as your program needs until your computer is out of memory. The most you can do is reduce the limit to a fixed upper cap. That can be done with the resource module, but it isn't what you're looking for. You'd need to look at making your code more memory/performance friendly.
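A sketch of the fixed upper cap mentioned above, using the resource module (Unix-only):

    import resource

    ONE_GIB = 1024 ** 3

    # Cap this process's address space at 1 GiB; allocations beyond it
    # raise MemoryError instead of dragging the machine into swap.
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (ONE_GIB, hard))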
2
20
0
I am currently using a function making extremely long dictionaries (used to compare DNA strings) and sometimes I'm getting MemoryError. Is there a way to allot more memory to Python so it can deal with more data at once?
Increasing memory limit in Python?
1
0
0
69,022
44,508,737
2017-06-12T21:08:00.000
0
0
0
1
python,tornado
44,511,338
1
false
1
0
Tornado's HTTP clients do not currently provide this information. Instead, you can pass follow_redirects=False and handle and record redirects yourself.
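A sketch of that approach: disable automatic redirects and record each hop yourself (assumes absolute Location headers; httpbin.org is just a test endpoint):

    from tornado.httpclient import AsyncHTTPClient
    from tornado.ioloop import IOLoop

    async def fetch_with_history(url, max_hops=10):
        client = AsyncHTTPClient()
        history = []
        for _ in range(max_hops):
            resp = await client.fetch(url, follow_redirects=False,
                                      raise_error=False)
            history.append((resp.code, url))
            if resp.code in (301, 302, 303, 307, 308):
                url = resp.headers["Location"]   # next hop
            else:
                break
        return history

    print(IOLoop.current().run_sync(
        lambda: fetch_with_history("http://httpbin.org/redirect/2")))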
1
0
0
I'm using Python Tornado to perform asynchronous requests to crawl certain websites, and one of the things I want to know is whether a URL results in a redirect, and what its initial status code is (301, 302, 200, etc.). However, right now I can't figure out a way to find that information from a Tornado response. I know a requests response object has a history attribute which records the redirect history; is there something similar for Tornado?
Is there a way to get the redirect history from a Tornado response?
0
0
1
65
44,508,869
2017-06-12T21:17:00.000
1
0
1
0
python,python-3.x,machine-learning,anaconda
44,509,630
2
true
0
0
When you install Anaconda Python, it installs into its own area and won't conflict with an existing Python installation. If you already have additional Python packages installed, you will need to reinstall them for the new Python installation, preferably using a Python virtual environment. You can't reuse a virtual environment created from an existing Python installation; you would need to create a new one against Anaconda Python if you are already using one. If your own personal code works with Python 3.5, it will likely work with Python 3.6 with no problems. So, with the above caveats about reinstalling additional Python packages, there shouldn't be any reason why you couldn't use Anaconda Python 3.6.
1
1
0
I have the 3.5 version of python. I want to install Anaconda, but it says on the Anaconda website the latest version of it is for Python 3.6. My question is could I still use the packages for Python 3.5, or should I install Python 3.6?
Can the latest Anaconda package for Python 3.6 work for Python 3.5?
1.2
0
0
1,392
44,510,246
2017-06-12T23:38:00.000
1
0
0
1
python,docker,dockerfile,docker-swarm
44,510,805
1
false
0
0
There are a few things to consider: Where is python installed in the container and what version? Does that match your dev environment? Look at your dockerfile - what is your working directory? Did you set one? Perhaps, you are running your python code from one directory, but trying to import a module from another. Is your PYTHONPATH set in your container? Have you installed the modules in the container that you're attempting to use? Perhaps with a requirements.txt file or manually? If so, are you executing your python code with the same python version/path that you installed the modules with? Are you using a virtual environment? Has it been sourced? What user is your container running as? Does it have access to the python modules? You may need to chown the site-packages path or run as a different user or root.
1
0
0
How can I read files within a Python module while executing in Docker? I have a Python module which I import in my code. Normally, in order to fetch the path of the module, one can do <<module_name>>.__path__; however, this approach does not work in Docker but works locally. Is there a common way in which I can read files from the module in Docker as well as locally?
how to read files from a python module inside docker
0.197375
0
0
789
44,512,968
2017-06-13T05:25:00.000
13
0
1
0
ipython,ipython-notebook
48,066,889
2
false
0
0
Run jupyter notebook --port=8090, changing 8090 to the port you want.
2
9
0
I would like to have multiple instances of ipython notebook running on different ports for the same user. Is it possible? Something like a list of ports for 'NotebookApp.port' with a default one.
ipython notebook multiple instances on different ports
1
0
0
6,549
44,512,968
2017-06-13T05:25:00.000
8
0
1
0
ipython,ipython-notebook
44,513,056
2
true
0
0
Just run jupyter notebook a second time; it will automatically select another port to use.
2
9
0
I would like to have multiple instances of ipython notebook running on different ports for the same user. Is it possible? Something like a list of ports for 'NotebookApp.port' with a default one.
ipython notebook multiple instances on different ports
1.2
0
0
6,549
44,513,019
2017-06-13T05:29:00.000
3
0
1
1
python,centos,virtualenv,rpm
44,519,576
2
true
0
0
By default virtual environments don't access modules in site-packages. You either need to allow such access (toggleglobalsitepackages in virtualenvwrapper) or recreate your virtualenv allowing such access with the option --system-site-packages.
1
0
0
I have a python application that's run inside a virtualenv on CentOS. This application needs a python library that's distributed and installed as an rpm. When the application runs I just get no module named .... I've verified that the rpm is installed correctly, and I've also installed the rpm in the site-packages directory of the virtualenv but that didn't help. What is the correct way to install an rpm so that an application running in a virtual environment has access to it?
Application can't find python library installed as rpm
1.2
0
0
599
44,514,898
2017-06-13T07:25:00.000
0
0
1
0
python-3.x,dictionary,nltk
44,608,170
1
true
0
0
I hope this is what you are looking for https://github.com/sujitpal/nltk-examples/tree/master/src/cener
1
0
1
I am new to Python. I have to build a chatbot using Python nltk. My use case and expected output are as follows: I have a custom dictionary of some categories (shampoo, hair, lipstick, face wash), some brands (lakme, l'oreal, matrix) and some entities (hair concern: dandruff, hair falling out; hair type: oily hair, dry hair; skin type: fair skin, dark skin, dusky skin; etc.). For an input like "I want to buy shampoo for hair falling out and dry hair" or "Show me best lipsticks for fair skin and office wear", how do I extract the values by category (category: shampoo; hair concern: hair falling out; hair type: dry hair)? I am using Python nltk.
How to tokenize and tag those tokenized strings from my own custom dictionary using python nltk?
1.2
0
0
203
44,515,532
2017-06-13T07:58:00.000
2
0
0
0
python,tensorflow
44,518,795
1
true
0
0
Let's suppose that you have images, an [n, W, H] numpy nd-array, in which n is the number of images and W and H are the width and the height of the images. Convert images to a tensor, in order to be able to use tensorflow functions: tf_images = tf.constant(images). Convert tf_images to the image data format used by tensorflow (thus from [n, W, H] to [n, H, W]): tf_images = tf.transpose(tf_images, perm=[0,2,1]). In tensorflow, every image has a depth channel; thus, although you're using grayscale images, we have to add a depth=1 channel at the end: tf_images = tf.expand_dims(tf_images, -1). Now you can use tf.image.resize_image_with_crop_or_pad to resize the batch (which now has a shape of [n, H, W, 1], a 4-d tensor): resized = tf.image.resize_image_with_crop_or_pad(tf_images, height, width)
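The steps above, consolidated into one runnable sketch (assumes TensorFlow 1.x, where this op is tf.image.resize_image_with_crop_or_pad):

    import numpy as np
    import tensorflow as tf   # TF 1.x assumed

    images = np.random.rand(5, 640, 480).astype(np.float32)    # [n, W, H]

    tf_images = tf.constant(images)
    tf_images = tf.transpose(tf_images, perm=[0, 2, 1])        # -> [n, H, W]
    tf_images = tf.expand_dims(tf_images, -1)                  # -> [n, H, W, 1]
    resized = tf.image.resize_image_with_crop_or_pad(tf_images, 240, 320)

    with tf.Session() as sess:
        print(sess.run(resized).shape)   # (5, 240, 320, 1)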
1
1
1
I want to call tf.image.resize_image_with_crop_or_pad(images, height, width) to resize my input images. As my input images are all 2-d numpy arrays of pixels, while the image input of resize_image_with_crop_or_pad must be a 3-d or 4-d tensor, this causes an error. What should I do?
Tensorflow resize_image_with_crop_or_pad
1.2
0
0
2,981
44,515,769
2017-06-13T08:09:00.000
2
0
1
0
python,anaconda,conda,data-science
48,477,898
16
false
0
0
If you don't want to add Anaconda to the PATH environment variable and you are using Windows, try this: open cmd and change to your installation folder, which is something like C:\Users\your_home_folder\Anaconda3\Scripts. Test Anaconda, for example by typing conda --version. Update Anaconda with conda update conda, conda update --all or conda update anaconda. Update Spyder with conda update qt pyqt and conda update spyder.
8
209
0
I installed Anaconda3 4.4.0 (32 bit) on my Windows 7 Professional machine and imported NumPy and Pandas on Jupyter notebook so I assume Python was installed correctly. But when I type conda list and conda --version in command prompt, it says conda is not recognized as internal or external command. I have set environment variable for Anaconda3; Variable Name: Path, Variable Value: C:\Users\dipanwita.neogy\Anaconda3 How do I make it work?
'Conda' is not recognized as internal or external command
0.024995
0
0
525,921
44,515,769
2017-06-13T08:09:00.000
7
0
1
0
python,anaconda,conda,data-science
46,250,881
16
false
0
0
If you have a newer version of the Anaconda Navigator, open the Anaconda Prompt program that came in the install. Type all the usual conda update/conda install commands there. I think the answers above explain this, but I could have used a very simple instruction like this. Perhaps it will help others.
8
209
0
I installed Anaconda3 4.4.0 (32 bit) on my Windows 7 Professional machine and imported NumPy and Pandas on Jupyter notebook so I assume Python was installed correctly. But when I type conda list and conda --version in command prompt, it says conda is not recognized as internal or external command. I have set environment variable for Anaconda3; Variable Name: Path, Variable Value: C:\Users\dipanwita.neogy\Anaconda3 How do I make it work?
'Conda' is not recognized as internal or external command
1
0
0
525,921
44,515,769
2017-06-13T08:09:00.000
8
0
1
0
python,anaconda,conda,data-science
56,908,068
16
false
0
0
In addition to adding C:\Users\yourusername\Anaconda3 and C:\Users\yourusername\Anaconda3\Scripts, as recommended by Raja (above), also add C:\Users\yourusername\Anaconda3\Library\bin to your path variable. This will prevent an SSL error that is bound to happen if you're performing this on a fresh install of Anaconda.
8
209
0
I installed Anaconda3 4.4.0 (32 bit) on my Windows 7 Professional machine and imported NumPy and Pandas on Jupyter notebook so I assume Python was installed correctly. But when I type conda list and conda --version in command prompt, it says conda is not recognized as internal or external command. I have set environment variable for Anaconda3; Variable Name: Path, Variable Value: C:\Users\dipanwita.neogy\Anaconda3 How do I make it work?
'Conda' is not recognized as internal or external command
1
0
0
525,921
44,515,769
2017-06-13T08:09:00.000
2
0
1
0
python,anaconda,conda,data-science
54,809,641
16
false
0
0
I have Windows 10 64-bit and this worked for me; the solution applies to both distributions (Anaconda/Miniconda). First, try to uninstall the Anaconda/Miniconda installation that is causing the problem. After that, delete the '.anaconda' and '.conda' folders from 'C:\Users\'. If you have any antivirus software installed, try to exclude all the folders and subfolders inside 'C:\ProgramData\Anaconda3\' from behaviour detection, virus detection, DNA scan, suspicious-files scan and any other virus-protection mode. (Note: 'C:\ProgramData\Anaconda3' is the default installation folder; you can change it, just use your chosen path at the installation-destination prompt while installing Anaconda.) Now install Anaconda with admin privileges. Set the installation path as 'C:\ProgramData\Anaconda3', or specify your custom path; just remember it should not contain any whitespace and should be excluded from virus detection. At "Advanced Installation Options" you can check "Add Anaconda to my PATH environment variable (optional)" and "Register Anaconda as my default Python 3.6". Install with the remaining default settings and click Finish when done. Restart your computer. Now open a command prompt or the Anaconda prompt and check the installation with conda list. If you get a package list, then Anaconda/Miniconda was successfully installed.
8
209
0
I installed Anaconda3 4.4.0 (32 bit) on my Windows 7 Professional machine and imported NumPy and Pandas on Jupyter notebook so I assume Python was installed correctly. But when I type conda list and conda --version in command prompt, it says conda is not recognized as internal or external command. I have set environment variable for Anaconda3; Variable Name: Path, Variable Value: C:\Users\dipanwita.neogy\Anaconda3 How do I make it work?
'Conda' is not recognized as internal or external command
0.024995
0
0
525,921
44,515,769
2017-06-13T08:09:00.000
3
0
1
0
python,anaconda,conda,data-science
59,958,555
16
false
0
0
This problem arose for me when I installed Anaconda multiple times. I was careful to do an uninstall but there are some things that the uninstall process doesn't undo. In my case, I needed to remove a file Microsoft.PowerShell_profile.ps1 from ~\Documents\WindowsPowerShell\. I identified that this file was the culprit by opening it in a text editor. I saw that it referenced the old installation location C:\Anaconda3\.
8
209
0
I installed Anaconda3 4.4.0 (32 bit) on my Windows 7 Professional machine and imported NumPy and Pandas on Jupyter notebook so I assume Python was installed correctly. But when I type conda list and conda --version in command prompt, it says conda is not recognized as internal or external command. I have set environment variable for Anaconda3; Variable Name: Path, Variable Value: C:\Users\dipanwita.neogy\Anaconda3 How do I make it work?
'Conda' is not recognized as internal or external command
0.037482
0
0
525,921
44,515,769
2017-06-13T08:09:00.000
2
0
1
0
python,anaconda,conda,data-science
60,249,412
16
false
0
0
I have just launched anaconda-navigator and run the conda commands from there.
8
209
0
I installed Anaconda3 4.4.0 (32 bit) on my Windows 7 Professional machine and imported NumPy and Pandas on Jupyter notebook so I assume Python was installed correctly. But when I type conda list and conda --version in command prompt, it says conda is not recognized as internal or external command. I have set environment variable for Anaconda3; Variable Name: Path, Variable Value: C:\Users\dipanwita.neogy\Anaconda3 How do I make it work?
'Conda' is not recognized as internal or external command
0.024995
0
0
525,921
44,515,769
2017-06-13T08:09:00.000
1
0
1
0
python,anaconda,conda,data-science
66,704,439
16
false
0
0
If you use Chocolatey, conda is in C:\tools\Anaconda3\Scripts.
8
209
0
I installed Anaconda3 4.4.0 (32 bit) on my Windows 7 Professional machine and imported NumPy and Pandas on Jupyter notebook so I assume Python was installed correctly. But when I type conda list and conda --version in command prompt, it says conda is not recognized as internal or external command. I have set environment variable for Anaconda3; Variable Name: Path, Variable Value: C:\Users\dipanwita.neogy\Anaconda3 How do I make it work?
'Conda' is not recognized as internal or external command
0.012499
0
0
525,921
44,515,769
2017-06-13T08:09:00.000
43
0
1
0
python,anaconda,conda,data-science
44,517,592
16
false
0
0
I found the solution. Variable value should be C:\Users\dipanwita.neogy\Anaconda3\Scripts
8
209
0
I installed Anaconda3 4.4.0 (32 bit) on my Windows 7 Professional machine and imported NumPy and Pandas on Jupyter notebook so I assume Python was installed correctly. But when I type conda list and conda --version in command prompt, it says conda is not recognized as internal or external command. I have set environment variable for Anaconda3; Variable Name: Path, Variable Value: C:\Users\dipanwita.neogy\Anaconda3 How do I make it work?
'Conda' is not recognized as internal or external command
1
0
0
525,921
44,516,089
2017-06-13T08:27:00.000
3
0
1
0
python,cherrypy
44,516,368
1
true
0
0
I think I found it... sorry for posting too soon. I had initially added the overruling decorator, @cherrypy.config(**{'tools.json_out.on': False}), in the wrong place. It needed to be placed before the other decorators of the method (@myservice.expose in my code). Now it works. Hope this info will help somebody else in the future.
1
0
0
I have a CherryPy service replying with JSON responses. For this I've implemented the @cherrypy.tools.json_out() decorator at the top of my class. I now have one method in the class that needs to respond with an image. The method sets cherrypy.response.headers['Content-Type'] to the image's mime-type. It doesn't work while the tools.json_out() decorator is in my code; without the decorator it works (but then I would need to implement all my other methods differently). I tried switching tools.json_out() off in the CherryPy config with a decorator on the method, but that doesn't overrule it. What is the problem with my approach? Thank you for any pointers.
How to switch cherrypy.tools.json_out() off for one method in class?
1.2
0
0
575
44,516,289
2017-06-13T08:36:00.000
5
0
1
0
python,generator,yield
44,516,469
4
false
0
0
Neither while nor for are themselves generators or iterators. They are control constructs that perform iteration. Certainly, you can use for or while to iterate over the items yielded by a generator, and you can use for or while to perform iteration inside the code of a generator. But neither of those facts make for or while generators.
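A small illustration of the distinction: the generator is the object returned by a function that yields; for and while merely drive it or appear inside it.

    def countdown(n):
        # Generator function: calling it returns a generator object.
        while n > 0:          # `while` is just a control construct here
            yield n
            n -= 1

    gen = countdown(3)        # nothing has executed yet
    for value in gen:         # `for` consumes the generator; it isn't one
        print(value)          # 3, 2, 1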
2
6
0
In an interview, the interviewer asked me about generators being used in Python. I know a generator is like a function which yields values instead of returning them. So can anyone tell me: is a for/while loop an example of a generator?
is for/while loop from python is a generator
0.244919
0
0
3,488
44,516,289
2017-06-13T08:36:00.000
0
0
1
0
python,generator,yield
44,517,144
4
false
0
0
for and while are loop structures, and you can use them to iterate over generators. You can take certain elements of a generator by converting it to a list.
2
6
0
In an interview, the interviewer asked me about generators being used in Python. I know a generator is like a function which yields values instead of returning them. So can anyone tell me: is a for/while loop an example of a generator?
is for/while loop from python is a generator
0
0
0
3,488
44,517,122
2017-06-13T09:13:00.000
0
0
0
0
python,runtime-error,cntk
44,535,536
1
false
0
0
This line cloneModel.parameters[0] = cloneModel.parameters[0]*4 tries to replace the first parameter with an expression (a CNTK graph) that multiplies the parameter by 4. I don't think that's the intent here. Rather, you want to do the above on the .value attribute of the parameter. Try this instead: cloneModel.parameters[0].value = cloneModel.parameters[0].value*4
1
1
1
I have trained a model in CNTK. Then I clone it and change some parameters; when I try to test the quantized model, I get RuntimeError: Block Function 'softplus: -> Unknown': Inputs 'Constant('Constant70738', [], []), Constant('Constant70739', [], []), Parameter('alpha', [], []), Constant('Constant70740', [], [])' of the new clone do not match the cloned inputs 'Constant('Constant70738', [], []), Constant('Constant70739', [], []), Constant('Constant70740', [], []), Parameter('alpha', [], [])' of the clonee Block Function. I have no idea what this error means or how to fix it. Do you have any ideas? P.S. I clone and edit the model by doing clonedModel = model.clone(cntk.ops.CloneMethod.clone) cloneModel.parameters[0].value = cloneModel.parameters[0].value*4 then when I try to use cloneModel I get that error above.
CNTK: The new clone do not match the cloned inputs of the clonee Block Function
0
0
0
115
44,517,306
2017-06-13T09:20:00.000
3
0
1
0
python,pypi
60,975,048
3
false
0
0
Yes, you can re-upload the package with the same name. I faced a similar issue; what I did was increase the version number in setup.py and delete the folders generated by running python setup.py sdist, i.e. dist and your_package_name.egg-info, then run python setup.py sdist again to make the package ready for upload. I think pypi tracks the upload from the folders generated by sdist (dist and your_package_name.egg-info), so you have to delete them.
1
16
0
I uploaded a package to pypi, but I ran into some trouble afterwards, so I deleted it completely and tried to re-upload. However, I get this error after uploading again: HTTP Error 400: This filename has previously been used, you should use a different version. It seems pypi can track the upload activity: I deleted the project and the account and uploaded again, but I can still see the previous record. Why? How can I solve the problem?
How can I re-upload package to pypi?
0.197375
0
0
8,113
44,520,153
2017-06-13T11:23:00.000
2
0
1
0
python
44,520,234
2
true
0
0
This code is usually called global-scope code, module-level code or top-level code. There is no single official name, but programmers will understand what you mean if you use any of these.
1
1
0
The title pretty much says it. I am wondering what I should call code that is just sitting in a Python file, not in any function at all. For some context - in the particular module I am concerned with, there are some functions with definitions that have been defined, but there is also some code sitting at the end that is executed whenever the module is imported. What is this code called?
Is there a name for Python code which is not in a function?
1.2
0
0
220
44,521,638
2017-06-13T12:31:00.000
1
0
0
0
python,excel,python-2.7,xlsxwriter
44,523,601
1
false
0
0
This is a standard Excel warning to alert users to the fact that repeated and adjacent formulas are different since that may be an error. It isn't possible to turn off this warning in XlsxWriter.
1
0
0
In my file there are MAX and MIN formulas in a row. Sample:

CELLS: | A   | B   | C   | D   | E   | F   | G   | H   |
ROW:   | MAX | MIN | MIN | MAX | MIN | MIN | MAX | MIN | MIN

If the Excel sheet is opened, a green triangle is displayed with the warning message "Inconsistent Formula".
How to Ignore "Inconsistent Formula" warning showing in generated .xlsx file using the python xlsxwriter?
0.197375
1
0
523
44,526,642
2017-06-13T16:10:00.000
0
0
0
0
python-2.7,nonlinear-functions
44,526,860
1
false
0
0
It looks more like a math problem to me, since you ask "how to start". You know that a function's plot is just a lot of points (x, y) where y = f(x), and for any two points (not vertically aligned) there is an infinity of second-degree functions (parabolas) going through them; they are given by y = ax^2 + bx + c. You want the parabola to go through your 2 points, so you can substitute x and y for each of the 2 points; that will give you 2 equations (where a, b and c are the unknowns). Then you can add a random point (I would suggest on the y-axis: (0, r)). This will give you a third equation. With these 3 equations, solve for a, b and c (in terms of r). Now, for any value of r, you will have some a, b and c that define a parabola going through your 2 known points. Once you understand how to solve this math problem, the Python part is completely independent.
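A sketch of that recipe with numpy, using the asker's two points and a random third point (0, r):

    import numpy as np

    # Fit y = a*x^2 + b*x + c through the two known points plus a random
    # y-intercept (0, r), then solve the 3x3 linear system for a, b, c.
    p1, p2 = (3.0, 3.2), (7.0, 4.59)
    r = np.random.uniform(-5, 5)

    A = np.array([[p1[0]**2, p1[0], 1.0],
                  [p2[0]**2, p2[0], 1.0],
                  [0.0,      0.0,   1.0]])   # the point (0, r)
    y = np.array([p1[1], p2[1], r])

    a, b, c = np.linalg.solve(A, y)
    print(f"y = {a:.4f}x^2 + {b:.4f}x + {c:.4f}")
    # Check: both given points satisfy the parabola (up to float error).
    for x0, y0 in (p1, p2):
        assert abs(a*x0**2 + b*x0 + c - y0) < 1e-9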
1
0
1
I have two given points (3.0, 3.2) and (7.0, 4.59) . My job here is very simple but I don't even know how to start. I just need to plot 4 nonlinear functions that go through these two points. Did somebody have a similar problem before? How does one even start?
Generate a random nonlinear function going through given points in python
0
0
0
591
44,528,223
2017-06-13T17:43:00.000
3
0
1
0
python,python-2.7,python-3.x,pip,packages
44,571,919
1
false
0
0
pip3 install and python3 -m pip install both work perfectly and don't have any impact on Python 2. You can have as many Pythons on your system as you want; I, for one, have Python 2.7, 3.4, 3.5 and 3.6. To distinguish different versions of pip I use versioned names: pip3.4 install. And of course I use virtual environments and virtualenvwrapper quite intensively.
1
2
0
I have been using Python 2.7 for a while now and installing packages using pip install without any issue. I just started using python 3 for a certain code and realized how confusing having different versions of Python can get. I have Fedora 25, the default Python version is 2.7.13 and the default Python 3 version is Python 3.5.3, I want to be able to use python 2.7 and python 3, my general question is: What are the best practices when installing packages for both Python 2 and Python 3 on one machine? As I mentioned using pip install in Python 2.7 works fine, but what about Python 3? I can: use pip3 install use python3 -m pip install Which one should I use and how does it affect the python 2 version of the module? pip3 is not installed on Fedora 25, which raises a new question: how should I install it? as I understand I can: use dnf install python3-pip (it is unclear if that actually works when pip for Python 2.7 is installed) use python3 get-pip.py Finally, would it be a good idea to create a Python 2 and a Python 3 virtual environment to address this issue? From what I have read on the internet there does not seem to be a clear consensus on these questions, I hope this thread will clarify.
Package management for coexisting Python 2 and 3
0.53705
0
0
170
44,530,305
2017-06-13T19:49:00.000
1
0
1
1
python,docker,pip,containers
44,545,311
1
true
0
0
The problem was that the Docker process had trouble connecting to the internet, so installing pip manually produced errors. Solution process: restarting the Docker daemon did not work; restarting the computer solved the problem.
1
1
0
So I tried to install pip inside a Docker container by first copying the installation file via docker cp get-pip.py dock:get-pip.py, then going into the container with docker exec -it 58 bash. When I then run python get-pip.py, I get the following error: Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',)': /simple/pip/
Error Installing pip Inside Docker Container
1.2
0
0
1,840
44,530,668
2017-06-13T20:12:00.000
1
1
0
0
python,opencv,raspberry-pi3
44,534,024
1
true
0
0
I tried ORB on a Raspberry Pi and got around 5 FPS at 640x480, I think, on a single thread; you could probably get up to at least 15-20 FPS with threads. You're better off using ORB on something like a Raspberry Pi. I doubt you can get good FPS using SIFT/SURF.
1
1
0
Does anyone know if the raspberry pi 3 is powerful enough to run the SIFT or SURF algorithms for a real-time app (traffic signs recognition) or should I look for something else ?
SIFT SURF on raspberry
1.2
0
0
509
44,532,112
2017-06-13T21:46:00.000
0
0
1
1
python,python-2.7,python-3.x,rodeo
50,547,525
1
false
0
0
I got this error attempting to run a source file with more than 500 lines (code, comments and empty lines) in Rodeo 2.5.2. I am not sure what the actual maximum number of lines allowed is (if there is one), but removing some comments, and thus reducing the total number of lines to 514, allowed that file to run.
1
4
0
I'm a complete noob in Python, so please forgive this question if it's completely stupid. I had Canopy 1.4.7 installed on my system, working with Python 2.7. I just upgraded to Canopy 2.1.2, with Python 3.5. I'd been using Rodeo 2.5.2 as my environment. It worked like a charm with 1.4.7, but since the upgrade, I haven't been able to get it to work. All I get is a message saying "Unable to execute." The Rodeo terminal then has to be restarted. As a matter of fact, any code input doesn't work. I tried to put code into the Rodeo terminal; it doesn't even register the input. I can't press "Enter," nothing happens. I tried to install a package; nothing happened. I've tried reinstalling both Canopy and Rodeo, but to no effect. I've also tried turning it off and on again (thanks, Roy). Mind you, I tried the same codes in the Canopy environment, and they worked fine. So I'm assuming it's an issue in Rodeo.
"Unable to execute" message in Rodeo with Python 3.5
0
0
0
942
44,532,770
2017-06-13T22:46:00.000
0
0
0
0
python,tkinter
44,532,978
1
false
0
0
Tkinter does not provide a way to get at information about the decorations added by the window manager or os. You will have to find some platform-specific method of getting that information.
1
0
0
I am computing the width of a Toplevel window based on the title and the text inside. For the text inside, there is no problem, since I am the one who set the font property of the text and so I can use the measure method of the font. In contrast, there does not seem to be a way to set the font of the title. But how can I at least get the font that is being used?
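For the measuring half the asker already does, a minimal tkinter.font sketch (the title-bar font itself stays OS-controlled, as the answer says):

    import tkinter as tk
    import tkinter.font as tkfont

    root = tk.Tk()
    body_font = tkfont.Font(family="Helvetica", size=12)
    print(body_font.measure("Some text inside the Toplevel"))   # width in px
    root.destroy()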
How do I find out the font of the title of Toplevel?
0
0
0
57
44,533,383
2017-06-14T00:07:00.000
2
0
0
0
python,python-3.x,tkinter,key-bindings
44,533,510
1
true
0
1
You cannot do what you want with tkinter. Tkinter bindings only work within windows created by tkinter.
1
1
0
In Python Tkinter, I have successfully made a keybind but it only works when I am clicked into the tkinter window. I want to be able to use the keybinds even when I am interacting with other programs even when they are full screen. (I am making an auto clicker and it is not possible to open the tkinter window and then click the key when you are mid game.)
In Python Tkinter, I have successfully made a keybind but it only works when I am clicked into the tkinter window.
1.2
0
0
125
44,539,192
2017-06-14T08:15:00.000
0
0
0
1
google-app-engine,google-cloud-platform,google-app-engine-python,app-engine-flexible
44,711,051
2
false
1
0
There is an even easier way to do this that doesn't require creating a separate service :) Since you are only developing your second application, you do not need to set it as the default version that serves traffic to the public; you only need to access it yourself (or perhaps whoever you give the URL to). The only real drawback is that you will not be able to access this version at the default URL, usually <project>.appspot.com. Therefore, you can simply deploy your flexible App Engine project, but make sure you don't give it any traffic share or promote it to the main version! You can then access your version simply by clicking on it in the Google Cloud console, or by visiting http://<version>-dot-<project>.appspot.com/ (it was not possible to access individual versions like this on flexible for quite some time, but this has now been resolved).
1
0
0
I have a standard App Engine app currently running. I am currently developing another Flask app which will use the flexible runtime. I am wondering: is it possible for me to have both apps in the same project?
Is it possible to have both appengine fleixble and standard in single project
0
0
0
460
44,547,072
2017-06-14T14:04:00.000
0
1
1
0
python,eclipse,github,pydev,egit
44,783,503
1
false
0
0
On git, you always work with the repo as a whole (even if you see only a part of it in Eclipse). So, to do what you want, you have to actually create a new repo, copy in the sources you want, and then push from there (there are ways to do that with git while preserving the history too, if that's important to you). You might want to take a look at git submodules as well...
1
0
0
So I have a few packages that I have made and I want to share them with my friends by putting them in separate GitHub repositories. I know how to make a project in Eclipse, I already have my packages in the project, and I have also cloned the empty GitHub repository to my local computer. When I connect the project to the local repository and push it to GitHub, it copies the complete project into the repository, but I want only the packages to be copied. I.e., right now it's githubrepository/pythonproject/pythonpackage, but I want it to be githubrepository/pythonpackage. Can someone suggest a link or some way to solve this? Am I making a mistake?
How do I properly share my Python Packages using Eclipse+pydev+egit?
0
0
0
21
44,548,176
2017-06-14T14:53:00.000
4
0
0
0
python,user-interface,tkinter,screen-resolution
65,176,022
1
false
0
1
What I usually do is import the ctypes module and call ctypes.windll.shcore.SetProcessDpiAwareness(True). This makes the window render at a higher quality. Hope it works for you!
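A minimal sketch of that fix, assuming Windows 8.1 or newer (where the shcore DLL exists); the call has to happen before the Tk root window is created:

```python
import ctypes
import tkinter as tk

# Tell Windows this process is DPI-aware so Tk windows aren't bitmap-scaled
ctypes.windll.shcore.SetProcessDpiAwareness(1)  # 1 = system DPI aware

root = tk.Tk()
tk.Label(root, text="Sharp text on a 4K display").pack()
root.mainloop()
```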
1
4
0
On Mac, all widgets and canvas items appear of high quality on Retina display. However, on Windows 4K display, Tkinter has poor quality, and renders unnecessarily badly (pixelated) as if from 2009. How do I fix the quality of Tkinter on Windows 10? I have tried using scaling, but this just makes all sorts of elements all sorts of different sizes.
how to fix the low quality of tkinter render
0.664037
0
0
1,352
44,550,004
2017-06-14T16:21:00.000
1
0
0
0
python,nltk,corpus
44,553,514
2
false
0
0
Rethink your approach. Any collection of English texts will have a "long tail" of words that you have not seen before. No matter how large a dictionary you amass, you'll end up removing words that are perfectly good English. And to what purpose? Leave them in; they won't spoil your classification. If your goal is to remove non-English text, do it at the sentence or paragraph level using a statistical approach, e.g. ngram models. They work well and need minimal resources.
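As one illustration of that statistical approach (an addition, not part of the original answer), the langdetect package wraps an ngram-based language identifier; a sketch, assuming it has been installed with pip:

```python
from langdetect import detect

paragraphs = [
    "I posted this on Facebook and Instagram yesterday.",
    "Ceci n'est pas un paragraphe anglais.",
]

# Keep only paragraphs identified as English; rare English words don't matter
english_only = [p for p in paragraphs if detect(p) == "en"]
print(english_only)
```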
1
0
1
I'm building a text classifier that will classify text into topics. In the first phase of my program as a part of cleaning the data, I remove all the non-English words. For this I'm using the nltk.corpus.words.words() corpus. The problem with this corpus is that it removes 'modern' English words such as Facebook, Instagram etc. Does anybody know another, more 'modern' corpus which I can replace or union with the present one? I prefer nltk corpus but I'm open to other suggestions. Thanks in advance
Find 'modern' nltk words corpus
0.099668
0
0
507
44,550,304
2017-06-14T16:39:00.000
1
0
0
1
python-3.x,subprocess
44,550,449
1
true
0
0
This is wall-clock time (real), not time spent in either userland (user) or the kernel (system). You can test this yourself by running a process such as sleep 60, which uses almost no user or system time at all, and observing that it still times out.
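A quick sketch of that experiment: sleep consumes essentially no user or system time, yet the timeout still fires after one second of wall-clock time:

```python
import subprocess

try:
    subprocess.check_output(["sleep", "60"], timeout=1)
except subprocess.TimeoutExpired:
    print("timed out after 1s of real time, despite ~0s user/system time")
```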
1
0
0
Which time measurement is used for the timeout by the Python 3 subprocess module on UNIX/Linux OSes? UNIX like OSes report 3 different times for process execution: real, user, and system. Even with processes that will be alive for only a few milliseconds the real time is often several hundred percent longer than the user and system time. I'm making calls using subprocess.call() and subprocess.check_output() with the timeout set to a quarter of a second for processes that the time utility reports taking 2-18 milliseconds for the various times reported. There is no problem and my enquiry is purely out of interest.
Which 'time' is used for the timeout by the subprocess module on UNIX/Linux OSes?
1.2
0
0
56
44,550,574
2017-06-14T16:53:00.000
10
0
1
0
python,jupyter-notebook,markdown
52,961,020
2
false
0
0
Just double click on the markdown cell. Edit what you want to and Run. It will reflect the changes. Save your notebook.
2
16
0
I got a strange problem with jupyter-notebook. I was practicing with notebook which has markdown and code cells. When I save and reopen the notebook, I can edit code cells but not the markdown cells. Attempts: reload the page. Make the notebook trusted. Try to change the cell type from markdown to code or raw but still can not edit. NOTE: I can delete some letters on markdown but I can not add any letters. Also if I hit enter it will create new lines, but I can not write anything there. Question How can we edit the markdown cell of a jupyter-notebook ?
How to edit markdown cell in jupyter-notebook ( Could not edit markdown cell in Jupyter notebook)
1
0
0
19,542
44,550,574
2017-06-14T16:53:00.000
14
0
1
0
python,jupyter-notebook,markdown
44,598,308
2
true
0
0
In case anybody else encounters the same problem, I am keeping this question and my solution to it instead of deleting the question. What I did is: a) First go to the markdown cell. b) Double click the cell; now we can only delete letters, not edit. c) Go to command mode (press Esc) and come back to edit mode again (Enter). d) Now we can edit the markdown cell. This solved my problem.
2
16
0
I got a strange problem with jupyter-notebook. I was practicing with notebook which has markdown and code cells. When I save and reopen the notebook, I can edit code cells but not the markdown cells. Attempts: reload the page. Make the notebook trusted. Try to change the cell type from markdown to code or raw but still can not edit. NOTE: I can delete some letters on markdown but I can not add any letters. Also if I hit enter it will create new lines, but I can not write anything there. Question How can we edit the markdown cell of a jupyter-notebook ?
How to edit markdown cell in jupyter-notebook ( Could not edit markdown cell in Jupyter notebook)
1.2
0
0
19,542
44,553,860
2017-06-14T20:08:00.000
2
0
0
0
python,html,selenium,xpath
44,554,005
1
false
0
0
It's within an iframe, so you need to have Selenium switch to it: driver.switch_to.frame('auth-frame'). Once you do that, you should be able to locate the element by id or XPath.
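A sketch of that flow; the frame name 'auth-frame' is taken from the answer, while the wait timeout and the rest of the setup are illustrative:

```python
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.icloud.com/")

# Wait until the login iframe is available, then switch into it
WebDriverWait(driver, 10).until(
    EC.frame_to_be_available_and_switch_to_it("auth-frame")
)
driver.find_element_by_id("appleId").send_keys("user@example.com")
```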
1
1
0
I'm new to Python and I'm trying to make a program that involves me logging into Gmail and iCloud, using Selenium. I've completed the Gmail part, so I know I'm not completely off track, but I can't seem to surmount the error that occurs when I try to locate the login/password fields on the iCloud website. I keep getting: NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id="appleId"]"} I've attempted to use WebDriverWait, and I've done waits for like 30 seconds just to see if timing was the issue, but I keep getting the same error even if I try to locate the login/password fields using Xpath, ID name, CSS selector, etc.
iCloud Website - NoSuchElementException: Unable to Locate Element
0.379949
0
1
192
44,554,135
2017-06-14T20:25:00.000
0
0
0
0
python
44,554,231
1
false
0
0
You can use Pandas to import a CSV file with the pd.read_csv function.
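A sketch putting the pieces together; the file and column are placeholders, and pd.read_excel would work the same way for a true .xlsx file:

```python
import numpy as np
import pandas as pd

df = pd.read_csv("values.csv", header=None)  # a single unnamed column
x = df[0].values                             # numpy array of the values

autocorr = np.correlate(x, x, mode="full")
print(autocorr)
```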
1
0
1
I have a 1 column excel file. I want to import all the values it has in a variable x (something like x=[1,2,3,4.5,-6.....]), then use this variable to run numpy.correlate(x,x,mode='full') to get autocorrelation, after I import numpy. When I manually enter x=[1,2,3...], it does the job fine, but when I try to copy paste all the values in x=[], it gives me a NameError: name 'NO' is not defined. Can someone tell me how to go around doing this?
Import a column from excel into python and run autocorrelation on it
0
0
0
57
44,555,485
2017-06-14T22:06:00.000
1
0
0
0
python,scikit-learn,multiprocessing,pyspark,cluster-computing
55,951,963
1
false
0
0
True, Spark does have the limitations you have mentioned, that is, you are bound to the functional Spark world (Spark MLlib, DataFrames, etc.). However, what it provides over other multiprocessing tools/libraries is the automatic distribution, partitioning, and rescaling of parallel tasks. Scaling and scheduling Spark code becomes an easier task than having to program custom multiprocessing code to respond to larger amounts of data and computation.
1
6
1
A newbie question, as I get increasingly confused with pyspark. I want to scale an existing python data preprocessing and data analysis pipeline. I realize if I partition my data with pyspark, I can't treat each partition as a standalone pandas data frame anymore, and need to learn to manipulate with pyspark.sql row/column functions, and change a lot of existing code, plus I am bound to spark mllib libraries and can't take full advantage of more mature scikit-learn package. Then why would I ever need to use Spark if I can use multiprocessing tools for cluster computing and parallelize tasks on existing dataframe?
Python multiprocessing tool vs Py(Spark)
0.197375
0
0
2,667
44,555,995
2017-06-14T22:59:00.000
2
1
0
1
python,linux,unix
44,556,361
2
true
0
0
Your file has DOS line endings (CR+LF). It works if you run python sample.py but doesn't work if you run ./sample.py. Recode the file so it has Unix line endings (pure LF at the end of every line).
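One way to recode the file, sketched in Python itself (dos2unix or a decent editor would do the same job):

```python
# Rewrite sample.py with Unix (LF) line endings
with open("sample.py", "rb") as f:
    data = f.read()

with open("sample.py", "wb") as f:
    f.write(data.replace(b"\r\n", b"\n"))
```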
1
1
0
I have a self-installed python in my user directory in a corporate UNIX SUSE computer (no sudo privilege): which python <user>/bin/python/Python-3.6.1/python I have an executable (chmod 777) sample.py file with this line at the top of the file: #!<user>/bin/python/Python-3.6.1/python I can execute the file like this: python sample.py But when I run it by itself I get an error: /full/path/sample.py /full/path/sample.py: Command not found I have no idea why it's not working. I'm discombobulated as what might be going wrong since the file is executable, the python path is correct, and the file executes if I put a python command in the front. What am I missing? EDIT: I tried putting this on top of the file: #!/usr/bin/env python Now, I get this error: : No such file or directory I tried this to make sure my env is correct which env /usr/bin/env EDIT2: Yes, I can run the script fine using the shebang command like this: <user>/bin/python/Python-3.6.1/python /full/path/sample.py
Executable .py file with shebang path to which python gives error, command not found
1.2
0
0
1,040
44,558,269
2017-06-15T03:56:00.000
0
1
0
0
php,python,ajax,import,directory
44,577,194
1
false
1
0
I finally succeeded. In fact, tweepy uses the library called "six", which was not in my current folder. So I imported all the Python libraries into my folder, and I get no more errors. But I still don't understand why Python does not look for the library in its normal folder instead of the current folder.
1
0
0
I'm making an AJAX request that calls a PHP file, which in turn calls a Python file. My main problem is with the imports in the Python scripts. I currently work locally, on Linux. When I run "$ php myScript.php" (which calls the Python script inside), it works, but when the call comes from AJAX, the imports in the Python files do not work. So I moved some libraries into the current folder of the PHP and Python scripts. First, the import only works if the library is in a folder; it's impossible to call a function from my other Python script. Then I can't do "import tweepy" even though the library is in the current folder. But for pymongo it worked, because I do "from pymongo import MongoClient". All my scripts work when called from PHP or when executed with Python from the command line. Those libraries are also in my normal Python folder on Linux, but through the AJAX call Python never looks there. I specify this at the beginning of each Python file: "#!/usr/bin/env python2.7". Here is the layout of my files folder: script.php, script.py, pymongo [folder], tweepy [folder]. PS: Sorry, English is not my main language.
Executing Python through AJAX: import does not work
0
1
0
104
44,558,311
2017-06-15T04:00:00.000
2
0
1
0
dronekit-python,dronekit,dronekit-android
48,434,270
1
true
0
0
Yes, you can get the flying state of the drone for some of the cases you list. Of course this will all depend on how you've programmed your flight behavior with DroneKit. Here is what I would do:
Hovering: self.vehicle.mode.name == "LOITER"
Flying: self.vehicle.mode.name in ("GUIDED", "AUTO")
Landing: self.vehicle.mode.name == "LAND"
Landed: self.vehicle.armed == False (the quadcopter props should automatically disarm once the drone has completed the landing procedure)
Taking off: no straightforward answer here, but you could infer it from the altitude of your drone. If you've sent a takeoff(target_alt) instruction and the drone has not reached target_alt, then you're probably still taking off.
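Bundling those checks into one helper, as a sketch; it assumes the ArduPilot-style mode names used in the answer:

```python
def flight_state(vehicle):
    # Map DroneKit vehicle attributes to a coarse flying state
    if not vehicle.armed:
        return "landed"
    mode = vehicle.mode.name
    if mode == "LAND":
        return "landing"
    if mode == "LOITER":
        return "hovering"
    if mode in ("GUIDED", "AUTO"):
        return "flying"
    return "unknown"
```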
1
0
0
Is there a way to get the flying state of the drone using DroneKit? By flying state I mean: landed, taking off, hovering, flying, landing.
Get drone flying state via DroneKit
1.2
0
0
210
44,559,717
2017-06-15T06:02:00.000
0
0
0
0
java,python,random,distribution,uniform-distribution
44,561,811
2
false
1
0
Just use Random rnd = new Random(); then rnd.nextInt(bound) gives an int in [0, bound), and rnd.nextDouble() gives a double in [0.0, 1.0), so low + (high - low) * rnd.nextDouble() is the equivalent of np.random.uniform(low, high). If you want a list of randoms, the best way is probably to write your own little method: just use an array and fill it with a for loop.
1
0
1
I'm trying port a python code into java and am stuck at one place. Is there any method in java that is equivalent to numpy.random.uniform() in python?
Equivalent method in Java for np.random.uniform()
0
0
0
674
44,560,315
2017-06-15T06:37:00.000
0
0
0
0
python,mysql,csv
44,560,766
1
true
0
0
Since this seems to be a range of years for a fiscal position, I would suggest using two integer fields to store the data. As years are 4-digit numbers, use the SMALLINT type; this way you use half the storage space of an INT field.
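A sketch of the matching schema and insert from Python; the table and column names, connection details, and CSV layout are all placeholders:

```python
import csv
import mysql.connector

conn = mysql.connector.connect(user="user", password="secret", database="mydb")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE budget (
        id INT AUTO_INCREMENT PRIMARY KEY,
        fiscal_year_start SMALLINT NOT NULL,
        fiscal_year_end SMALLINT NOT NULL
    )
""")

with open("data.csv") as f:
    for row in csv.DictReader(f):
        start, end = row["fiscal_year"].split("-")  # e.g. "2017 - 2019"
        cur.execute(
            "INSERT INTO budget (fiscal_year_start, fiscal_year_end)"
            " VALUES (%s, %s)",
            (int(start.strip()), int(end.strip())),
        )

conn.commit()
```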
1
0
0
I have data in CSV in which one column is a fiscal year, e.g. 2017 - 2019. Please specify how to form the CREATE TABLE query and the INSERT query with the fiscal year as a field.
How to store fiscal year (eg. 2017-2020) in mysql?
1.2
1
0
130
44,561,973
2017-06-15T08:04:00.000
2
0
1
0
python,anaconda,canopy
44,571,349
2
false
0
0
I have not used Canopy, but I use system-installed Python and Anaconda a lot, so I can explain some issues people run into. When you have 2 different Python installations, there will be a problem of which Python is used (type python at the command prompt: which one opens the interpreter?). Usually the Python executable's location is added to the PATH, so if both are in your PATH, the first one will be used. With this you will likely have a mess with environments: if you go to use Canopy's Python you will not have access to Anaconda's Python packages, and vice versa. Other weird issues can come up if one Python package picks up a .so or .dylib file that doesn't work or isn't the expected version. One installation may remove a version of these in favor of its own dependent version, and then another piece of code no longer works.
1
2
0
I need to install Canopy, but I already have Anaconda installed. If I install Canopy, will there be a conflict or not? And if there will be, what are the possible problems?
Can I install Anaconda alongside Canopy?
0.197375
0
0
2,546
44,569,033
2017-06-15T13:37:00.000
1
0
1
0
python-3.x,tensorflow,pycharm,importerror
45,875,329
1
false
0
0
Since you have in the log Library not loaded: @rpath/libcublas.8.0.dylib I would say you've installed TF with CUDA support but didn't install CUDA libraries properly. Try to install TF CPU only.
1
1
1
for some common reasons and solutions. Include the entire stack trace above this error message when asking for help. Process finished with exit code 1
Problems using Tensorflow in PyCharm-keep getting ImportError
0.197375
0
0
223
44,569,938
2017-06-15T14:18:00.000
0
0
0
0
python,memory,computer-vision,keras,convolution
44,624,518
2
false
0
0
While using fit_generator(), you should also set the max_q_size parameter. It defaults to 10, which means you're pre-loading 10 batches while using only 1 (fit_generator() was designed to stream data from outside sources that can be delayed, like a network, not to save memory). I'd recommend setting max_q_size=1 for your purposes.
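A sketch of that call; argument names follow Keras 2.0 (later releases renamed the parameter to max_queue_size), model is assumed to be an already-compiled Keras model, and train_batches a generator yielding one image per batch:

```python
model.fit_generator(
    train_batches,            # yields (x, y) with batch_size == 1
    steps_per_epoch=n_train,  # one step per training image
    epochs=10,
    max_q_size=1,             # keep only one pre-loaded batch in memory
)
```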
1
2
1
I am very new to ML using big data and I have played with Keras generic convolutional examples for dog/cat classification before. However, when applying a similar approach to my set of images, I run into memory issues. My dataset consists of very long images that are 10048 x 1687 pixels in size. To circumvent the memory issues, I am using a batch size of 1, feeding one image at a time to the model. The model has two convolutional layers, each followed by max-pooling, which together give the flattened layer roughly 290,000 inputs right before the fully-connected layer. Immediately after running, however, memory usage chokes at its limit (8 GB). So my questions are the following: 1) What is the best approach to process computations of such size in Python locally (no cloud utilization)? Are there additional Python libraries that I need to utilize?
Memory Issues Using Keras Convolutional Network
0
0
0
841
44,571,764
2017-06-15T15:44:00.000
9
0
0
0
python,flask
44,572,046
1
true
1
0
Yes, it is safe. The request object is unique per request. Typically g is used because it is an empty namespace specifically created for holding data during a request. request is an internal object with existing meaning, although it is common for libraries to use it for storage, leaving g for "application level" data.
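A sketch of the pattern; the token check is a stand-in for real authentication logic:

```python
from flask import Flask, request

app = Flask(__name__)

def lookup_user(token):
    # Stand-in for a real user lookup
    return {"name": "alice"} if token else {"name": "anonymous"}

@app.before_request
def attach_user():
    # Stored on the request object itself; unique to this request
    request.user = lookup_user(request.headers.get("Authorization"))

@app.route("/me")
def me():
    return request.user["name"]
```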
1
6
0
Can the flask.request object be used similarly to flask.g to store per-request data? I am writing a library that is compatible with both Flask and Django. I plan to store information on the request object in both cases. Can I safely store objects on the request, say request.user, with the assurance that it will not be shared between different requests?
Is it safe to store per-request data on flask.request?
1.2
0
0
1,251
44,571,879
2017-06-15T15:50:00.000
0
1
1
0
python,windows,python-2.7
44,571,993
1
true
0
0
Ultimately you're going to have to trust your users to follow what development process you establish. You can create tools to make that easier, but you'll always end up having some trust. Things that have been helpful to a number of people include:
- All frozen/shipped builds of an executable are built on a central machine by something like BuildBot or Jenkins, not by individual developers. That gives you a central point for making sure that builds are shipped from clean checkouts.
- Provide scripts that do the build and error out if there are uncommitted changes.
- Where possible it is valuable to make it possible to point PYTHONPATH at your distribution's source tree and have things work, even if there is a setup.py that can build the distribution. That makes tests easier. As always, make sure that your tools for building shipped versions check for this and fail if it happens.
I don't personally think that a distribution has a lot of value over a clean tagged subversion checkout for a library included in closed-source applications. You can take either approach, but I think you will find that the key is in having good automation for whichever approach you choose, not in the answer to distribution vs subversion checkout.
1
2
0
Basically we have a Python library with modules and functions that we use in many of our programs. Currently, we checkout the SVN repository directly into C:\Python27\Lib so that the library is in the Python path. When someone make modifications to the library, everyone will update to get those modifications. Some of our programs are frozen (using cx-Freeze) and delivered so we have to keep tracking of the library version used in the deliveries, but cx-Freeze automatically packages the modules imported in the code. I don't think it is a good idea to rely on people to verify that they have no uncommitted local changes in the library or that they are up to date before freezing any program importing it. The only version tracking we have is the commit number of the library repository, which is not linked anywhere to the program delivery version, and which should not be used as a delivery version of the library in my opinion. I was thinking about using a setup.py to build a distribution of a specific version of that library and then indicate that version in a requirements.txt file in the project folder of the program importing it, but then it becomes complicated if we want to make modifications to that library because we would have to build and install a distribution each time we want to test it. It is not that complicated but I think someone will freeze a program with a test version of that library and it comes back to the beginning... I kept looking for a best practice for that specific case but I found nothing, any ideas?
What is best practice for working on a Python library package?
1.2
0
0
65
44,571,938
2017-06-15T15:53:00.000
2
0
1
0
python,multithreading,python-3.x,multiprocessing,callstack
44,572,070
1
true
0
0
No, they don't: subprocesses get forked/spawned as separate entities, so each Process starts off effectively as a completely new Python instance. Python hides away some of the nastiness of that by transparently pickling/unpickling data to be transferred between processes, but they all get their own stack, their own GIL, and all that goes with it. Multithreading is a different story: threads do share the underlying stack, but Python presents each as having its own stack, so inspect.stack() results can be confusing/unpredictable.
1
1
0
I am writing code in Python which includes some multiprocessing and multithreading. My question is: do the threads or the processes share the same call stack? I am using the inspect module and I'm afraid that it will return the wrong value from inspect.stack().
Does the calling stack in python's shared between the threads or the processes in multithreading and multiprocessing respectively?
1.2
0
0
92
44,573,728
2017-06-15T17:41:00.000
0
1
0
0
python,python-3.x,raspberry-pi,raspberry-pi3
44,573,796
4
false
0
0
I'd first turn it off and on again. If that doesn't help: ps aux | grep -i python, then killall python. You'll probably need to tweak the killall command with the Python script's name instead of, or in addition to, "python".
3
10
0
I have made a dreadful error and am looking for your help! I have set up my raspberry pi to run a python script at start up by editing the rc.local file. This would be fine except I have written my script to reboot the raspberry pi when it exits. Now I am stuck in an infinite loop and I can't edit anything. Every time my script ends it reboots the pi and starts again! My program uses Pygame as a GUI and I have a Raspberry Pi 3 running the NOOBS OS that came with it. If you need anymore info please ask. Any help stopping my script so I can access the pi without losing any data will be greatly appreciated. Edit - What an amazing community. Thank you everyone for sharing your knowledge and time. I was in a bit of a panic and you all came to my assistance really quick. If you are reading this because you are in a similar predicament I found Ben's answer was the quickest and easiest solution, but if that doesn't work for you I think FrostedCookies' idea would be the next thing to try.
Python script runs on boot then reboots at end - How to regain control?
0
0
0
724
44,573,728
2017-06-15T17:41:00.000
5
1
0
0
python,python-3.x,raspberry-pi,raspberry-pi3
44,573,870
4
true
0
0
I'm not sure if this will work (I don't have a Pi right now), but if you can't access a terminal normally while the script is running, try the keyboard shortcut Ctrl+Alt+F1 to open one, then type sudo pkill python to kill the script (this will also kill any other python processes on your machine). Then use a terminal text editor (vi or nano perhaps) to edit your rc.local file so this doesn't happen again.
3
10
0
I have made a dreadful error and am looking for your help! I have set up my raspberry pi to run a python script at start up by editing the rc.local file. This would be fine except I have written my script to reboot the raspberry pi when it exits. Now I am stuck in an infinite loop and I can't edit anything. Every time my script ends it reboots the pi and starts again! My program uses Pygame as a GUI and I have a Raspberry Pi 3 running the NOOBS OS that came with it. If you need anymore info please ask. Any help stopping my script so I can access the pi without losing any data will be greatly appreciated. Edit - What an amazing community. Thank you everyone for sharing your knowledge and time. I was in a bit of a panic and you all came to my assistance really quick. If you are reading this because you are in a similar predicament I found Ben's answer was the quickest and easiest solution, but if that doesn't work for you I think FrostedCookies' idea would be the next thing to try.
Python script runs on boot then reboots at end - How to regain control?
1.2
0
0
724
44,573,728
2017-06-15T17:41:00.000
8
1
0
0
python,python-3.x,raspberry-pi,raspberry-pi3
44,573,812
4
false
0
0
Probably the easiest way is to take the SD card out of your Pi, mount the SD filesystem on another computer running Linux, and edit your rc.local script from there to remove the infinite boot loop. You can also back up your data that way in case something goes wrong.
3
10
0
I have made a dreadful error and am looking for your help! I have set up my raspberry pi to run a python script at start up by editing the rc.local file. This would be fine except I have written my script to reboot the raspberry pi when it exits. Now I am stuck in an infinite loop and I can't edit anything. Every time my script ends it reboots the pi and starts again! My program uses Pygame as a GUI and I have a Raspberry Pi 3 running the NOOBS OS that came with it. If you need anymore info please ask. Any help stopping my script so I can access the pi without losing any data will be greatly appreciated. Edit - What an amazing community. Thank you everyone for sharing your knowledge and time. I was in a bit of a panic and you all came to my assistance really quick. If you are reading this because you are in a similar predicament I found Ben's answer was the quickest and easiest solution, but if that doesn't work for you I think FrostedCookies' idea would be the next thing to try.
Python script runs on boot then reboots at end - How to regain control?
1
0
0
724
44,574,674
2017-06-15T18:40:00.000
0
1
1
0
python,pip,py2exe
44,575,124
1
false
0
0
Yes. The paramiko folder (or any non-standard imported package) is located in the directory C:\path_to_your_script's_folder\script_folder\build\bdist.win32\winexe\collect-2.7\paramiko. This folder holds all of the .pyc files associated with that imported package (in this case paramiko). Thanks to @Artyer and @Ofer Sadan for their help!
1
0
0
I'm about to convert my python script into an executable with py2exe but I'm concerned that a few modules that I installed via pip (paramiko & xlrd) won't be included in that executable. Does anyone know if those modules that are not from the standard library are included in the script when you move it over to .exe format?
pip installed modules when packaging python scripts
0
0
0
58
44,577,583
2017-06-15T21:49:00.000
0
0
1
1
python,windows,py2exe
44,632,191
1
true
0
0
I solved the problem myself and am keeping the answer here in case someone ever runs into the same mistake. I just had to download a 32-bit version of Canopy (with Python 2.7) and py2exe in order for them to work on Windows 7.
1
5
0
I created a .exe file using Py2exe on Windows 10 but when I try to run it on a Windows 7 computer it says that the os version is wrong. Can anyone tell me how to fix this? (like using another Python or Py2exe version or setting a specific configuration inside setup.py)
Py2exe - Can't run a .exe created on Windows 10 with a Windows 7 computer
1.2
0
0
850
44,582,210
2017-06-16T06:25:00.000
0
0
0
0
python,csv,matplotlib,output
44,589,585
2
false
0
0
If you plotted the data using a numpy array, you can use numpy.savetxt.
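A sketch, assuming t and the wave arrays stand in for the series you plotted:

```python
import numpy as np

t = np.linspace(0, 1, 500)
wave1 = np.sin(2 * np.pi * 5 * t)
wave2 = np.cos(2 * np.pi * 3 * t)

# One row per time step, one column per series
np.savetxt("waves.csv", np.column_stack([t, wave1, wave2]),
           delimiter=",", header="t,wave1,wave2", comments="")
```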
1
0
1
Using matplotlib.pyplot, I plotted multiple wave functions w.r.t time series, showing the waves in multiple vertical axes, and output the graph in jpg using savefig. I want to know the easiest way in which I can output all wave functions into a single output data file maybe in CSV or DAT in rows and columns.
How to Save Plotted Graph Data into Output Data File in Python
0
0
0
2,703
44,583,740
2017-06-16T07:49:00.000
6
0
0
1
python-2.7,azure,ansible
45,960,185
2
false
0
0
The above error occurs because your environment doesn't have the packaging module. Solve this by installing it: pip install packaging. That command installs the packaging module (version 16.8 at the time of writing).
2
4
0
I am using a CentOS 7.2 and trying to provision a VM in azure through Ansible using the module "azure_rm_virtualmachine" and getting the error as "No module named packaging.version" Below is my error Traceback (most recent call last): File "/tmp/ansible_7aeFMQ/ansible_module_azure_rm_virtualmachine.py", line 445, in from ansible.module_utils.azure_rm_common import * File "/tmp/ansible_7aeFMQ/ansible_modlib.zip/ansible/module_utils/azure_rm_common.py", line 29, in ImportError: No module named packaging.version fatal: [localhost]: FAILED! => { "changed": false, "failed": true, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_7aeFMQ/ansible_module_azure_rm_virtualmachine.py\", line 445, in \n from ansible.module_utils.azure_rm_common import *\n File \"/tmp/ansible_7aeFMQ/ansible_modlib.zip/ansible/module_utils/azure_rm_common.py\", line 29, in \nImportError: No module named packaging.version\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 0 } Below is my playbook and I am using a ansible version 2.3.0.0 and python version of 2.7.5 and pip 9.0.1 name: Provision new VM in azure hosts: localhost connection: local tasks: name: Create VM azure_rm_virtualmachine: resource_group: xyz name: ScriptVM vm_size: Standard_D1 admin_username: xxxx admin_password: xxxx image: offer: CentOS publisher: Rogue Wave Software sku: '7.2' version: latest I am running the playbook from the ansible host and I tried to create a resource group through ansible but I get the same error as "No module named packaging.version" .
No module named packaging.version for Ansible VM provisioning in Azure
1
0
0
4,088
44,583,740
2017-06-16T07:49:00.000
0
0
0
1
python-2.7,azure,ansible
45,527,155
2
false
0
0
You may try this; it solved it for me: sudo pip install -U pip setuptools. FYI, my environment is: Ubuntu 16.04.2 LTS on Windows Subsystem for Linux (Windows 10 bash), Python 2.7.12, pip 9.0.1, ansible 2.3.1.0, azure-cli 2.0.12.
2
4
0
I am using a CentOS 7.2 and trying to provision a VM in azure through Ansible using the module "azure_rm_virtualmachine" and getting the error as "No module named packaging.version" Below is my error Traceback (most recent call last): File "/tmp/ansible_7aeFMQ/ansible_module_azure_rm_virtualmachine.py", line 445, in from ansible.module_utils.azure_rm_common import * File "/tmp/ansible_7aeFMQ/ansible_modlib.zip/ansible/module_utils/azure_rm_common.py", line 29, in ImportError: No module named packaging.version fatal: [localhost]: FAILED! => { "changed": false, "failed": true, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_7aeFMQ/ansible_module_azure_rm_virtualmachine.py\", line 445, in \n from ansible.module_utils.azure_rm_common import *\n File \"/tmp/ansible_7aeFMQ/ansible_modlib.zip/ansible/module_utils/azure_rm_common.py\", line 29, in \nImportError: No module named packaging.version\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 0 } Below is my playbook and I am using a ansible version 2.3.0.0 and python version of 2.7.5 and pip 9.0.1 name: Provision new VM in azure hosts: localhost connection: local tasks: name: Create VM azure_rm_virtualmachine: resource_group: xyz name: ScriptVM vm_size: Standard_D1 admin_username: xxxx admin_password: xxxx image: offer: CentOS publisher: Rogue Wave Software sku: '7.2' version: latest I am running the playbook from the ansible host and I tried to create a resource group through ansible but I get the same error as "No module named packaging.version" .
No module named packaging.version for Ansible VM provisioning in Azure
0
0
0
4,088
44,586,049
2017-06-16T09:43:00.000
0
0
1
1
python,anaconda,default
44,586,285
1
true
0
0
Set the environment PATH variable to your default Python interpreter in System Properties. Or, if that doesn't work, run this in cmd: C:\Python27\python.exe yourfilename.py. In that command, the first part is your interpreter's location and the second is your file name.
1
2
0
Windows PowerShell or cmd uses Anaconda Python instead of the default Windows installation. How do I make them use the default Python installation? My OS is Windows 8.1, with Python 3.6 and Anaconda Python 3.6.
How to change cmd python from anaconda to default python?
1.2
0
0
1,228
44,587,813
2017-06-16T11:10:00.000
0
0
0
0
python,tensorflow,deep-learning,keras,keras-layer
44,651,045
1
true
0
0
At the time of writing, it seems to be impossible to actually access the data within a symbolic tensor. It also seems unlikely that such functionality will be added in the future, since the TensorFlow documentation says: "A Tensor object is a symbolic handle to the result of an operation, but does not actually hold the values of the operation's output." Keras allows for the creation of custom layers; however, these are limited to the available backend operations. As such, it is simply not possible to access the batch data.
1
0
1
I am trying to replicate a neural network for depth estimation. The original authors have taken a pre-trained network and added between the fully connected layer and the convolutional layer a 'Superpixel Pooling Layer'. In this layer, the convolutional feature maps are upsampled and the features per superpixel are averaged. My problem is that in order to successfully achieve this, I need to calculate the superpixels per image. How can I access the data being used by keras/tensorflow during batch processing to perform SLIC oversegmentation? I considered splitting the tasks and working by pieces i.e. feed the images into the convolutional network. Process the outputs separately and then feed them into a fully connected layer. However, this makes further training of the network impossible.
Accessing Input Layer data in Tensorflow/Keras
1.2
0
0
486
44,588,770
2017-06-16T12:00:00.000
1
0
0
0
xml,python-2.7,google-bigquery
44,590,333
1
false
0
0
The easiest option is probably to convert your XML file either to CSV or to JSON and then load it. Without knowing the size and shape of your data it's hard to make a recommendation, but you can find a variety of converters if you search online for them.
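As a sketch of the JSON route: convert the XML into newline-delimited JSON, which BigQuery's JSON loader expects; the element names here are placeholders for whatever your file actually uses:

```python
import json
import xml.etree.ElementTree as ET

root = ET.parse("data.xml").getroot()

with open("data.json", "w") as out:
    for record in root.findall("record"):  # placeholder element name
        row = {child.tag: child.text for child in record}
        out.write(json.dumps(row) + "\n")  # one JSON object per line
```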
1
0
0
I'm trying to load an XML file into Google BigQuery; can anyone please help me solve this? I know we can load JSON, CSV and AVRO files into BigQuery. I need a suggestion: is there any way I can load an XML file into BigQuery?
How to load an XML file into BigQuery
0.197375
1
1
834
44,590,807
2017-06-16T13:42:00.000
0
0
1
0
python,datetime,import,module
44,590,890
1
false
0
0
The short answer is speed. Why load any information if you don't have to? Especially that much information. Dictating which information (library) you'd like to access is a very intuitive design.
1
0
0
I get that you need to import modules into Python for additional functionality. But if you've already downloaded all of Python's modules when you first installed Python, why do you need to import specific modules in order to use them? Or does Python import modules from the Internet? Where do the imported modules come from, exactly? Example: if you type datetime.datetime.now(), why doesn't Python know that datetime is a module that will need to be accessed, without you having to "import" it?
Why do you need to import modules into Python?
0
0
0
110
44,591,775
2017-06-16T14:27:00.000
3
0
0
1
python,django,git,docker
44,591,974
1
true
1
0
For development, docker users will typically mount a folder from their build directory into the container at the same location the Dockerfile would otherwise COPY it. This allows for rapid development where at most you need to bounce the container rather than rebuild the image. For production, you want to include everything in the image and not change it, only persistent data goes in the volumes, your code is in the image. When you make a change to the code, you build a new image and replace the running container in production. Logging into the container and manually updating things is something I only do to test while developing the Dockerfile, not to manage a developing application.
1
1
0
I've been on Docker for the past few weeks and I can say I love it, and I get the idea. But what I can't figure out is how to "transfer" my current setup to a Docker solution. I guess I'm not the only one, so here is what I mean. I'm a Python guy, more specifically Django. So I usually have this: Debian installation; my app on the server (from a git repo); a virtualenv with all the app dependencies; Supervisor handling Gunicorn, which runs my Django app. The thing is, when I want to upgrade and/or restart the app (I use fabric for these tasks) I connect to the server, navigate to the app folder, run git pull, and restart the supervisor task that handles Gunicorn, which reloads my app. Boom, done. But what is the right (better, more Docker-ish) approach to this setup when I use Docker? Should I somehow connect to the Docker image's bash every time I want to upgrade the app and run the upgrade there, or (from what I saw) should I expose the app into a folder outside the Docker image and run the standard upgrade process? Hope you get the confusion of an old-school dude. I bet the Docker guys were thinking about that. Cheers!
Docker vs old approach (supervisor, git, your project)
1.2
0
0
284
44,592,972
2017-06-16T15:29:00.000
4
0
0
1
python,django,celery,django-celery
44,593,395
1
true
0
0
These are completely separate and different things. subprocess.Popen() simply spawns (by calling fork and exec) a new OS process for the specific command you passed to it. So it's perfect for cases when you need something executed in a separate process and (optionally) want the result of the execution (in a somewhat awkward way, via a pipe). Queues (like Celery or ActiveJob) give you two main things:
- Storage (more precisely, an interface to some existing storage, like PostgreSQL or MongoDB) for your tasks (or messages), which are serialized automatically before going to that storage.
- Workers that poll this storage and actually perform those tasks (deserializing them before performing, also automatically).
So it's possible to have a lot of workers, even in a distributed environment. This gives you not only vertical scalability but horizontal scalability as well (by keeping your workers on separate machines). On the other hand, queues are better suited for asynchronous processing (i.e. for jobs that need to be executed later, when you don't need the results right now) and are more heavyweight than simple process spawning. So, if you have simple one-off jobs just to be executed somewhere outside your main process, use processes. If you have a bunch of different jobs to be executed asynchronously and you want the ability to scale that processing, you should use queues; they'll make life easier.
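A sketch of the contrast, with a placeholder command and broker URL:

```python
import subprocess
from celery import Celery

# One-off job: spawn an OS process directly and wait for it
subprocess.run(["gzip", "-k", "report.csv"], check=True)

# Queued job: serialized to the broker and executed later by any worker
app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def compress(path):
    subprocess.run(["gzip", "-k", path], check=True)

compress.delay("report.csv")  # returns immediately; a worker picks it up
```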
1
1
0
I read that a message queue is preferred over subprocess.Popen(). It is said that a message queue is the scalable solution. I want to understand how that is so. I just want to list the benefits of a message queue over subprocess.Popen() so that I can convince my superiors to use a message queue instead of subprocess.
python subprocess.Popen() vs message queue (celery)
1.2
0
0
1,304
44,594,309
2017-06-16T16:50:00.000
1
1
0
0
python,discord,discord.py
44,830,944
3
false
0
0
In Discord, you're never going to be 100% sure who invited a user. Using Invite, you know who created the invite. Using on_member_join, you know who joined. So, yes, you would have to check the invites and see which invite's use count went up (or which invite got revoked after running out of uses). However, you will never know for sure who did the inviting, since anyone can paste the same invite link anywhere.
2
3
0
I am currently trying to figure out a way to know who invited a user. From the official docs, I would think that the member class would have an attribute showing who invited them, but it doesn't. I have a very faint idea of a possible method to get the user who invited and that would be to get all invites in the server then get the number of uses, when someone joins the server, it checks to see the invite that has gone up a use. But I don't know if this is the most efficient method or at least the used method.
Discord.py show who invited a user
0.066568
0
1
13,355
44,594,309
2017-06-16T16:50:00.000
1
1
0
0
python,discord,discord.py
45,571,128
3
false
0
0
Watching the number of uses an invite has had, or for when they run out of uses and are revoked, is the only way to see how a user was invited to the server.
2
3
0
I am currently trying to figure out a way to know who invited a user. From the official docs, I would think that the member class would have an attribute showing who invited them, but it doesn't. I have a very faint idea of a possible method to get the user who invited and that would be to get all invites in the server then get the number of uses, when someone joins the server, it checks to see the invite that has gone up a use. But I don't know if this is the most efficient method or at least the used method.
Discord.py show who invited a user
0.066568
0
1
13,355
44,595,736
2017-06-16T18:24:00.000
1
1
0
1
python,unix,operating-system
44,595,853
3
true
0
0
Python 3.6 has pathlib, and its Path objects have the methods:
is_dir()
is_file()
is_symlink()
is_socket()
is_fifo()
is_block_device()
is_char_device()
pathlib takes a bit to get used to (at least for me, having come to Python from C/C++ on Unix), but it is a nice library.
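Bundling those predicates into a helper, as a sketch; is_symlink() is checked first because the other methods follow symlinks:

```python
from pathlib import Path

def file_type(path):
    p = Path(path)
    if p.is_symlink():
        return "symlink"
    if p.is_dir():
        return "directory"
    if p.is_block_device():
        return "block device"
    if p.is_char_device():
        return "character device"
    if p.is_fifo():
        return "named pipe"
    if p.is_socket():
        return "socket"
    if p.is_file():
        return "regular file"
    return "unknown"

print(file_type("/dev/sda"))  # e.g. "block device"
```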
1
6
0
I would like to get the Unix file type of a file specified by path (find out whether it is a regular file, a named pipe, a block device, ...). I found os.stat(path).st_type in the docs, but in Python 3.6 this seems not to work. Another approach is to use os.DirEntry objects (e.g. from os.scandir(path)), but those only have the methods is_dir(), is_file() and is_symlink(). Any ideas how to do it?
Get unix file type with Python os module
1.2
0
0
1,711
44,596,612
2017-06-16T19:25:00.000
1
0
1
0
python,pip
44,596,648
2
true
0
0
Try using pip3 --version. Normally pip3 is the symlink to the pip associated with Python3. If that doesn't work, please provide more details as to how you installed the python versions.
1
0
0
I tried using pip --version, but that gives me only the version of the pip associated with Python 2, not Python 3.
I have python 2.7 and 3.6 installed on my mac. How to find the version of pip associated with Python3?
1.2
0
0
74
44,597,555
2017-06-16T20:33:00.000
2
0
0
0
python,deep-learning
44,609,082
2
false
0
0
They are useful for on-the-fly augmentation, which the previous poster mentioned. That is not necessarily restricted to generators, though, because you can fit for one epoch, then augment your data and fit again. What fit cannot handle is data that is too large to hold in memory for an epoch. This means that if you have a dataset of 1 TB and only 8 GB of RAM, you can use the generator to load the data on the fly and hold only a couple of batches in memory. This helps tremendously in scaling to huge datasets.
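A sketch of the streaming pattern; the file pairing is made up, model is assumed to be a compiled Keras model, and argument names follow Keras 2:

```python
import numpy as np

def batch_generator(pairs, batch_size):
    # pairs: list of (feature_path, label_path); loops forever as Keras expects
    while True:
        for i in range(0, len(pairs), batch_size):
            chunk = pairs[i:i + batch_size]
            x = np.stack([np.load(fx) for fx, _ in chunk])
            y = np.stack([np.load(fy) for _, fy in chunk])
            yield x, y

model.fit_generator(
    batch_generator(train_pairs, batch_size=32),
    steps_per_epoch=len(train_pairs) // 32,
    epochs=5,
)
```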
1
4
1
When and how should I use fit_generator? What is the difference between fit and fit_generator?
How to use model.fit_generator in keras
0.197375
0
0
3,009
44,599,119
2017-06-16T23:07:00.000
0
0
0
0
python,c++,qt,user-interface,tcl
44,599,173
1
false
0
1
Check that your Tcl/Python program actually flushes the data to the file, using f.flush(), f.close(), or a 'with' statement. P.S. Sometimes Python waits until the end of execution to actually write the data to the files. If this is what's happening here, the files won't be changed until the Tcl/Python program ends its execution, and thus the signal won't be emitted until then either.
1
1
0
I am trying to detect changes in files modified by another tcl/python application. I used QFileSystemWatcher addPath for the files. It does not emit fileChanged(QString) signal for changes in files. However, when I manually edit these files, fileChanged signal gets emitted and slot is executed.
QFileSystemWatcher not emitting fileChanged signal for changes done by another application
0
0
0
352
44,600,170
2017-06-17T02:15:00.000
4
0
0
0
python,normalization,word,tf-idf
44,600,391
1
true
0
0
Generally you want to do whatever gives you the best cross-validated results on your data. If all you are doing to compare them is taking cosine similarity, then you have to normalize the vectors as part of the calculation, but varying document lengths won't affect the score. Many general document-retrieval systems consider shorter documents to be more valuable, but this is typically handled as a score multiplier after the similarities have been calculated. Often ln(TF) is used instead of raw TF scores as a normalization feature, because the difference between seeing a term 1 and 2 times is far more important than the difference between seeing a term 100 and 200 times; it also keeps excessive use of a term from dominating the vector and is typically much more robust.
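A sketch with scikit-learn, whose TfidfVectorizer L2-normalizes each document vector by default and supports the log-TF weighting mentioned above; the documents are placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

log_text = "one hundred words of log text ..."  # placeholder documents
doc_a = "twenty words ..."
doc_b = "thirty words ..."

vec = TfidfVectorizer(sublinear_tf=True)  # 1 + log(tf) instead of raw tf
tfidf = vec.fit_transform([log_text, doc_a, doc_b])

print(cosine_similarity(tfidf[0], tfidf[1]))  # Log vs A
print(cosine_similarity(tfidf[0], tfidf[2]))  # Log vs B
```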
1
2
1
When using TF-IDF to compare documents A and B, I know that document length is not important. But when comparing A-B and A-C, I think documents B and C should be the same length. For example: Log: 100 words; Document A: 20 words; Document B: 30 words. Log-A's TF-IDF score: 0.xx; Log-B's TF-IDF score: 0.xx. Should I normalize documents A and B? (If the comparison targets differ in length, the result seems like it could be wrong.)
tf-idf : should I do normalization of documents length
1.2
0
0
2,290