Column schema (name: dtype, observed min to max):

Q_Id: int64 (337 to 49.3M)
CreationDate: string (length 23 to 23)
Users Score: int64 (-42 to 1.15k)
Other: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
Tags: string (length 6 to 105)
A_Id: int64 (518 to 72.5M)
AnswerCount: int64 (1 to 64)
is_accepted: bool (2 classes)
Web Development: int64 (0 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Answer: string (length 6 to 11.6k)
Available Count: int64 (1 to 31)
Q_Score: int64 (0 to 6.79k)
Data Science and Machine Learning: int64 (0 to 1)
Question: string (length 15 to 29k)
Title: string (length 11 to 150)
Score: float64 (-1 to 1.2)
Database and SQL: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
ViewCount: int64 (8 to 6.81M)
42,658,330
2017-03-07T21:05:00.000
1
0
0
0
python,pandas
42,658,405
1
true
0
0
By definition, these are functions that are computationally intensive on huge datasets, so there is very little hope of speeding this up. Something you can try is to save the corresponding series as a .csv, do the smoothing in Pandas on that alone, and then merge the result back into your huge dataframe. Sometimes that helps, since carrying a large dataframe back and forth in memory is costly.
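The extract-smooth-merge pattern the answer describes might be sketched like this (a sketch only; the dataframe and column names are made up, and writing the intermediate series to .csv, as the answer suggests, would replace the in-memory extraction shown here):

```python
import pandas as pd

def smooth_column(df, col, span=10):
    # Compute the EWMA on just the one series rather than on the
    # full dataframe, then attach the result as a new column.
    smoothed = df[col].ewm(span=span, adjust=False).mean()
    return df.assign(**{col + "_ewm": smoothed})

df = pd.DataFrame({"value": [1.0, 2.0, 3.0, 4.0]})
out = smooth_column(df, "value", span=2)
```

With span=2 and adjust=False the smoothing factor is alpha = 2/(span+1) = 2/3, so the first smoothed value equals the first input value.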
1
0
1
I found the pandas ewm function quite slow when applied to huge data. Is there any way to speed this up, or are there alternative functions for exponentially weighted moving averages?
Speeding up exponential moving average in python
1.2
0
0
416
42,660,299
2017-03-07T23:29:00.000
0
0
0
0
javascript,jquery,python,ajax,http
42,662,740
1
false
1
0
Many (if not all) server-side technologies can solve your problem: CGI, Java Servlets, NodeJS, Python, PHP etc. The steps are: in the browser, upload the file via an AJAX request; on the server, receive the file sent from the browser and save it somewhere on the server's disk; after the file is saved, invoke your Python script to handle it. As your current script is written in Python, I guess Python is the best choice for the server-side technology.
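The save-then-invoke steps on the server might be sketched framework-agnostically like this (a sketch under assumptions: "parse_csv.py" is a hypothetical name for the existing back-end script, and the actual receiving of the AJAX upload depends on whichever framework you pick):

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def handle_upload(file_bytes, upload_dir):
    # Step 2: save the file received from the browser to server disk.
    dest = Path(upload_dir) / "upload.csv"
    dest.write_bytes(file_bytes)
    # Step 3: invoke the existing command-line parser on the saved file.
    # ("parse_csv.py" is hypothetical; uncomment once it exists.)
    # subprocess.run([sys.executable, "parse_csv.py", str(dest)], check=True)
    return dest

with tempfile.TemporaryDirectory() as d:
    saved = handle_upload(b"a,b\n1,2\n", d)
    content = saved.read_text()
```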
1
0
0
I'm trying to start a python script that will parse a csv file uploaded from the UI by the user. On the client side, how do I make a call to start the python script (I've read AJAX http requests work)? And then secondly, how do I take the user input (just a simple user upload with the HTML tag) which will be read by the python script? The back end python script works perfectly through the command line, I just need to create a front end for easier use.
Start python script from client side
0
0
1
899
42,661,846
2017-03-08T02:23:00.000
0
0
1
1
python,apache-spark,pip,apache-toree
46,944,308
3
false
0
0
My situation was similar to yours: your jupyter client version is higher than the one Toree can work with. Try uninstalling the newer jupyter client first.
1
3
0
I wanted to pip install Toree package, but I ended up with the following error msg: Could not find a version that satisfies the requirement toree (from versions: ) No matching distribution found for toree I couldn't find any documentation on requirements for toree. Also, pip doesn't seem to be the issue here either since it successfully installed other packages I tested. Here are my systems: 1. Mac 10.11.16 2. Pip 9.0.1 3. Python 3.5
Toree Installation Issue
0
0
0
1,230
42,663,496
2017-03-08T05:14:00.000
0
0
0
1
python,batch-file,cmd
42,663,541
2
false
0
0
To run the batch file again after it has been edited or modified, you can write a script. That script can be executed by a daemon service, or by launchd on macOS.
1
0
0
I have edited the contents of my batch file in a Python program; however, when I try to execute the .bat in Python it doesn't follow the instructions. It opens the console and then closes, but nothing happens. Instead I am looking at an alternative route: automatically running after the code has been saved or changed. The reason I need it to run is that it updates an mp3, so if it's not running properly the mp3 doesn't change. I think one of the reasons may be down to not being able to run as administrator in Python. I did create a shortcut and set it to run as admin every time, but Python wouldn't accept the .lnk file for subprocess.Popen() and os.system().
How to make a batch file automatically run itself after being updated
0
0
0
451
42,664,493
2017-03-08T06:32:00.000
0
0
0
0
python,python-2.7,machine-learning,tensorflow
42,665,677
1
true
0
0
Yes and no. Tensorflow is a graph computation library mostly suited for neural networks. You can create a neural network that determines if a face is in the image or not... You can even search for existing implementations that use Tensorflow... There is no default Haar feature based cascade classifier in Tensorflow...
1
1
1
For example, using OpenCV and haarcascade_frontal_face.xml, we can predict if a face exists in an image. I would like to know if such a thing (detecting an object of interest) is possible with Tensorflow and if yes, how?
Can Tensorflow be used to detect if a particular feature exists in an image?
1.2
0
0
540
42,665,753
2017-03-08T07:50:00.000
0
0
0
0
python,python-3.x,modbus
42,667,114
1
false
0
0
As you say you can use the manufacturer's Modbus program correctly, I'd suggest using a sniffer to capture the packets of that communication. For Modbus TCP you could use Wireshark; for RTU you could use HHD Software's serial port monitor. If you think the controller is a master, you can also try a slave simulator that shows you whether a master is connected and what the requests are, for example ModRSsim on SourceForge.
1
0
0
I'm trying to communicate with a Modbus controller that has internal and external buses, so there are MASTER and SLAVE modes. I can't read anything from it with MinimalModbus or Modbus-tk, but it reads and works with the manufacturer's own Modbus tool. I have been using MinimalModbus and Modbus-tk successfully with other devices that are for sure in SLAVE mode, but I just can't get anything from that controller. I wonder what I could try here?
Python: MODBUS Communication failing
0
0
0
285
42,666,255
2017-03-08T08:20:00.000
4
0
0
0
python,scikit-learn,k-means
42,684,721
2
true
0
0
You have access to the n_iter_ field of the KMeans class; it gets set after you call fit (or other routines that internally call fit). Not your fault for overlooking that: it's not part of the documentation, I just found it by checking the source code ;)
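Checking the n_iter_ attribute the answer points at might look like this (a sketch on a tiny, clearly separable toy dataset; finishing in strictly fewer than max_iter iterations means the tolerance-based stop was reached):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two obvious clusters, so convergence should be fast.
X = np.array([[0.0], [0.1], [10.0], [10.1]])
km = KMeans(n_clusters=2, max_iter=300, n_init=10, random_state=0).fit(X)

# n_iter_ is populated by fit(); strictly below max_iter => converged.
converged = km.n_iter_ < km.max_iter
```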
1
2
1
I am trying to construct clusters out of a set of data using the KMeans algorithm from scikit-learn. I want to know how one can determine whether the algorithm actually converged to a solution for one's data. We feed in the tol parameter to define the tolerance for convergence, but there is also a max_iter parameter that defines the number of iterations the algorithm will do for each run. I get that the algorithm may not always converge within max_iter iterations. So is there any attribute or function that I can access to know if the algorithm converged before the max_iter iterations?
Sklearn K means Clustering convergence
1.2
0
0
2,975
42,667,584
2017-03-08T09:32:00.000
0
1
0
0
python,nose,allure
42,684,514
2
false
0
0
How about adding the decorator to the test classes instead? Not sure if it will work, but it sometimes works nicely for @patch.
1
0
0
I use nosetests and the Allure framework for reporting purposes. In order to make the report look like I want, I have to add the @nose.allure.feature('some feature') decorator to each test. The problem is that I have over 1000 tests. Is there any way to modify tests before execution? I was thinking about a custom nose plugin, but I'm not sure how it can be implemented.
Nosetest: add decorator for tests before execution
0
0
0
279
42,669,274
2017-03-08T10:50:00.000
1
0
0
0
python,mqtt,mosquitto
42,685,991
1
true
0
0
If you send every message to the broker with the retain flag set to True, then you can: connect to the server and subscribe to '#'; check all the retained messages and their topics (then you have all the topics); unsubscribe from '#'; subscribe to the topic you want. This solution subscribes twice, so it may not fit your original requirement (subscribe only once), but it can do what you want.
1
1
0
If a message is sent to a topic on an MQTT broker, I want to know the topic in Python. In order to use client.subscribe(), I have to manually enter a topic, so I need to know the topic before client.subscribe() dynamically. Is there a way to know what the broker's topics are?
How to know mqtt topics without client.subscribe() in python
1.2
0
0
294
42,671,780
2017-03-08T12:46:00.000
0
0
1
1
python,bash,macos
42,671,889
1
false
0
0
Just change the path in your source command to match the location of the script, which should be where pip installed it: in /usr/local/bin if you used sudo pip install to install it system-wide, or wherever the bin directory associated with your Python environment is located. That would be /path/to/virtualenv/local/bin if you are using a virtualenv, or /path/to/anaconda/bin if you are using Anaconda's Python distribution.
1
0
0
So I'm trying to install virtualenvwrapper and then as a requirement for the task I'm trying to implement I'm supposed to update my .bash_profile file to contain the lines source /usr/local/bin/virtualenvwrapper.sh But after activating the changes to the file I get -bash: /usr/local/bin/virtualenvwrapper.sh: No such file or directory So that's because using pip install virtualenv the package gets installed in ./Library/Python/2.7/lib/python/site-packages . My question is, is it okay to manually relocate the packages? What would be the way to do so?
Is it okay to relocate packages from ./Library/Python/2.7/lib to /usr/local/lib?
0
0
0
158
42,674,340
2017-03-08T14:44:00.000
10
1
0
0
python,telegram,telegram-bot,python-telegram-bot
42,696,153
3
true
0
0
Till today, only the Channel Creator can add a bot (as Administrator or Member) to a channel, whether public or private. Even the other channel Administrators cannot add a normal member, let alone a bot; they can only post into the channel. As for joining the bot via the invite link, there is as yet no method in the Bot API to do so. All claims of adding a bot to a channel by a non-Creator are false.
1
16
0
My question is: how can I join my Telegram bot to a public Telegram channel that I am not an administrator of, without asking the channel's admin to add my bot to the channel? Maybe via the chat ID of the channel, or through the channel's link? Thank you in advance :) Edit: I have heard that some people claim to join their bot to channels this way and scrape data. So if Telegram does not allow it, how can they do it? Can you think of any workaround? Appreciate your time.
How to join my Telegram Bot to PUBLIC channel
1.2
0
1
24,623
42,675,835
2017-03-08T15:48:00.000
0
0
0
0
python,memory-management
42,676,591
1
false
0
0
This question reminds me of the early '80s. Memory used to be expensive and we invented swapping: the (high-level part of the) OS sees more memory than is actually present, and pages are copied to disk. When one is needed, another is swapped out and the page is copied back into memory. Performance is awful, but at least it works. Your question is rather broad, but a rule of thumb says that if you can process your data in batches, explicitly loading batches of data is much more efficient; if the algorithm is too complex or requires actions on any data at any moment, just let swap take care of it. So add a swap file significantly larger than the memory you think you need (with the given sizes, I would try 100 or 200 GB), start the processing before leaving the office, and you could have results the next morning.
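If the calculation is per-image, the explicit batching the answer recommends can be as simple as a generator that keeps only one batch in RAM at a time (a sketch; the "images" here are stand-in integers rather than real feature maps):

```python
def iter_batches(items, batch_size):
    """Yield lists of at most batch_size items, so only one batch
    needs to be held in memory at a time."""
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # final, possibly short, batch
        yield batch

batches = list(iter_batches(range(10), 4))
```

In the real workload, each yielded batch would be loaded from disk, processed, and its ~2 MB result written out before the next batch is touched.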
1
0
1
I have to process a large volume of data (feature maps of individual layers for around 4000 images), which grows to more than 50 GB at some point. The processing involves some calculation, after which a file of around 2 MB is written to the HDD. Since the free RAM is around 40 GB, my process crashes at some point. Can anyone suggest a better approach to either divide or process this 50 GB of data so that the computation fits within the available RAM, e.g. some in-memory compression approach? I am just looking for hints at possible approaches to this problem. Thanks
How to handle huge volume of data in limited RAM
0
0
0
313
42,678,184
2017-03-08T17:38:00.000
0
0
0
0
python,autocomplete,geany
42,867,379
2
true
0
1
From what I have been able to find, this is beyond what Geany can do. I asked how to get Geany to do this and I am not looking for any alternatives to Geany, nor am I interested in using anything else. Therefore, this is the accepted answer, unless someone posts a way to make it work in Geany, at that point I will change the accepted answer.
1
8
0
How can I get Geany to autocomplete an object's constraints? For example, I type: self.window.set_position(gtk.WIN_ And I want the list of possible constraints to show up such as WIN_POS_NONE and WIN_POS_CENTER etc. NOTE: CTRL+SPACE or CTRL+SHIFT+SPACE does not show constraints. Autocompletion works fine for functions and symbols, just not constraints, unless I've used it once already before. This saves me the time of looking at documentation. Sometimes I can partially remember the constraint, and it would be nice to be able to browse the options. I would basically like it to work like it does in Sublime Text, which is a near-perfect editor for me, but I'm looking for something free/opensource to use. EDIT: I've also tried Ninja-IDE which can also display constraints, but it locks up sometimes and is not as lightweight as Geany... EDIT 2: I'm not looking for an alternative to Geany, I'm looking to make this functionality work via a mod or plug-in.
Geany autocomplete Python constraints
1.2
0
0
2,193
42,678,217
2017-03-08T17:40:00.000
1
0
0
0
python,sqlite,full-text-search,fts3,fts4
42,679,503
1
true
0
0
If your Python has the Porter tokenizer compiled in, you do not need to register it. To register a user-defined tokenizer, you have to call the fts3_tokenizer() SQL function with a pointer to a C structure containing C function pointers; this cannot be easily done from Python.
1
0
0
How is the default tokenizer 'porter' in the fts3 module registered? One way to register user-defined tokenizers is fts.register_tokenizer(), but what are its arguments? Since porter is a built-in tokenizer, does it even need to be registered?
How to register porter tokenizer in python
1.2
0
0
205
42,678,845
2017-03-08T18:13:00.000
-1
0
1
0
python-2.7,python-3.x,ipython,anaconda,jupyter-notebook
42,682,359
2
false
0
0
Try changing the file directory from the Python 3.5 one to the Python 2.7 one, e.g. C:\Users\python36 to C:\Users\python27, in the IPython settings or preferences.
1
0
0
I am using Jupyter Notebook with the anaconda3 package, but I want to use Jupyter with anaconda2 and the packages I have already installed! How can I add anaconda2 to the Jupyter kernels?
how can I change ipython kernel to python 2.7
-0.099668
0
0
668
42,679,780
2017-03-08T19:03:00.000
0
0
1
1
python,macos,anaconda
46,385,845
1
false
0
0
I have been using Anaconda Python. I had a problem with the default Python installed on my Mac OS X 10.11 at one point because of the numpy package. It was a problem for me when I tried to run a script in Anaconda Python that relies on a numpy version higher than the Mac's default version, and I wasn't able to get it working using conda install, pip install, or by changing PATH/PYTHONPATH. I was able to install the package, but Anaconda Python would not recognize the new version; I ended up removing the entire numpy that came with the Mac. But I do not think this would be a problem the other way around (i.e., using mostly the Mac Python but occasionally installing other packages for Anaconda Python), because the default Python does not look at the Anaconda package directory.
1
0
0
I've just installed Anaconda on my macOS machine and it has changed my PATH so that Python 3.6 is now the default version (i.e. the Python 3.6 interpreter opens when I type python in the Terminal). I'm fine with this since this is the version I usually use, but I was wondering if there is a possibility of this messing with system functionality that relies on having 2.7 as the default. I suppose there will be no problems since 2.7 is still in /usr/bin, but I would like to be sure.
Is there any possible issue in having Anaconda Python 3.6 as the default Python version on macOS?
0
0
0
61
42,682,326
2017-03-08T21:34:00.000
0
0
1
1
python,pip,homebrew
42,702,937
2
false
0
0
I found the answer in the Homebrew documentation. For Homebrew Python, you must use "pip3 install ..." instead of "python -m pip install ...". There were two other issues that complicated this: 1. I had previously manually installed Python 3.5, and the bash profile was configured to point to this before /usr/local/bin. 2. The pip documentation mentions that the CLI command "pip" points to the last version of Python that used it, so using "pip" alone was causing pip to load the modules into the 2.7 version of Python. To fix this, I deleted the manually installed version and removed the garbage from the bash profile, and then everything seemed to work.
1
1
0
I have a Mac running OS X. Although it has Python 2.7 preinstalled, I used Homebrew to install Python 3.5, which works great. Now I'm looking to add modules using pip. Trouble is, when I use pip in the terminal, it looks like the module was installed; however, my Python 3.5 doesn't see it. After a bit of digging, I suspect the problem is that my pip is pointed at the Apple 2.7 Python version, and I realize the answer is that I need to change the config on pip to point at the 3.5 version of Python, but I can't make any sense of the brew file structure in order to know where to point it. And, as I dig through the Cellar, I see multiple versions of pip, so I'm not even sure I'm using the right one, nor how to call the right one from the terminal. I'm sure this is very straightforward for experienced users, but I'm lost.
Understanding pip and home-brew file structure
0
0
0
152
42,683,208
2017-03-08T22:34:00.000
0
0
0
0
python,image,canvas,tkinter,resize
42,683,329
1
false
0
1
subsample and zoom are the only ways to resize images with tkinter PhotoImage objects. When you call them, they will always shrink or grow the image. It is up to you to determine which one to call and what arguments will give you the closest approximation to the desired size.
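Choosing the argument to pass can be reduced to a little arithmetic (a sketch, assuming the 600x150 canvas from the question; PhotoImage.subsample(k) divides each dimension by the integer k):

```python
import math

def subsample_factor(width, height, max_w=600, max_h=150):
    # Smallest integer k such that subsample(k) makes the image fit
    # the canvas; k == 1 means the image already fits and can be
    # used as-is, avoiding the over-shrinking described above.
    return max(math.ceil(width / max_w), math.ceil(height / max_h), 1)

factor = subsample_factor(1500, 300)   # image too wide and too tall
```

Here a 1500x300 gif needs subsample(3), giving 500x100, while a 400x100 gif returns factor 1 and is left untouched.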
1
0
0
I am making a program where pictures are displayed to the user. Some of these pictures are, however, too long and run off the canvas. The canvas is a set size(width=600, height=150). I am wondering if there is anything to make sure the longer pictures do not exceed this width and are shrunk down when they do so they fit. The images are in .gif format. I have tried using subsample() and zoom() but these seem to shrink the images even when they already fit on the canvas making them too small.
Is there a way, without PIL, to have images put onto a Canvas always be resized to fit?
0
0
0
218
42,686,125
2017-03-09T03:29:00.000
1
0
0
1
python,celery
42,686,288
1
true
0
0
What you need is a virtual environment. A virtual environment encapsulates a Python install, along with all the pip packages and executable files such as celery. Check out the virtualenv and virtualenvwrapper Python packages.
1
0
0
I used celery + requests first in Python 2.7, and it worked fine, but I heard celery + aiohttp is faster, so I tested it in Python 3, and it really is fast. But then I found I can't use celery to start my program written in Python 2.7, because there are changes between them; when I use the command line to start celery I only get errors. I guess I should just uninstall the celery of Python 3? Is there a better way to do this? In fact, since there are many packages that work for both Python 2 and Python 3 and are started from the command line, there must be a good solution.
how to use command line to start celery when I install it both in python2,python3
1.2
0
0
60
42,687,327
2017-03-09T05:19:00.000
0
0
0
0
python,tensorflow,deep-learning
45,650,018
1
true
0
0
Each node in the graph_def doesn't contain the shape of its output tensor; after importing the graph_def into memory (with tf.import_graph_def), the shape of each tensor in the graph is automatically determined.
1
1
1
I downloaded the Inception v3 model (a frozen graph) from the website and imported it into a session, and found that the shapes of the inputs and outputs of all nodes in this graph_def are already fully known. But when I freeze my own graph containing tf.Example queues as inputs, the batch_size info seems to be lost and replaced with a ?. My question is how I can fix or change the unknown shape when I try to freeze a graph. Edit: the node.attr of some nodes in the graph_def contains the shape info, but why not all nodes?
How does graph_def store the shape info of input and output of a node
1.2
0
0
438
42,692,077
2017-03-09T09:55:00.000
-2
0
1
0
python,pycharm
58,465,792
1
false
0
0
In order to debug (and stop at a breakpoint) you need to use Run > Debug (Alt+Shift+F9), not Run > Run (Alt+Shift+F10).
1
2
0
I am trying to debug Python code in PyCharm. I assign a breakpoint on a line using Ctrl+F8 and then debug using the debug icon in the top right corner. However, the execution does not stop at the breakpoint and the whole code is executed. I am trying to stop the execution at the breakpoint and then execute the code line by line while checking the variable values. What am I possibly doing wrong? It does not work on other code files which I created for checking, either.
Pycharm-Execution does not stop at breakpoints during debugging
-0.379949
0
0
1,903
42,696,960
2017-03-09T13:42:00.000
0
0
1
1
python,macos,python-2.7
42,697,960
1
false
0
0
python is an alias for the current Python binary: a symlink to some version of the Python binary called Python, something like /Library/Frameworks/Python.framework/Version/2.7/Python or /Library/Frameworks/Python.framework/Version/3.5/Python. Code for 2.7+ and 3.0+ may conflict (e.g. print(x) instead of print x, or range returning a generator where 2.7 uses xrange, etc.). So if your scripts are not ported to the newer version, you will probably get a lot of errors when executing python my_cool_script.py, because you wrote code for 2.7 and after the installation you are trying to execute it with the 3.5 version. You can change the symlink back to Version/2.7/Python, execute the same command, and it will work as you coded it; the version conflict will be solved.
1
0
0
I have macOS Sierra 10.12.3 and I have installed Python 2.7.13 by downloading it from the official Python site. When I type which python I get /Library/Frameworks/Python.framework/Version/2.7/bin/python. The python file referenced in this result is a shortcut for the python2.7 file located in the same directory. I'm wondering what the difference is between the Python (with a capital "P") file located in /Library/Frameworks/Python.framework/Version/2.7 and the one mentioned above. Thanks.
difference between "Python" file and "python2.7" file on macOS
0
0
0
33
42,700,342
2017-03-09T16:19:00.000
-1
0
1
0
python,pycharm,virtualenv
42,700,472
2
false
0
0
I use PyCharm, in my opinion the best IDE for Python. The virtualenv should be created manually by you, and PyCharm can detect it automatically if the virtualenv is created under the directory of your project. Or you can choose the proper virtualenv as the Python interpreter in PyCharm's Preferences. I get your point, but creating a virtualenv can also take other parameters, including the Python version etc.; usually we just use the default command. PyCharm cannot even create a default one for you.
1
3
0
I want my working areas to be self-contained and independent. For this reason, I'd like to be able to set up a virtualenv inside a project directory and have the project use it as I'm creating the project. Is this possible? To answer my own question: yes, sort of. Here's the procedure I came up with: Select "Create New Project". Select "Create VirtualEnv" from the interpreter menu. A file browser opens to select the virtualenv location. Browse to the directory that will contain the new project. Create a new directory; this will be the project directory. Create a new directory for the virtualenv. Click OK. Select the parent of the virtualenv as the project directory. Click Create. PyCharm complains that the directory isn't empty and offers to use existing sources; select "No". At that point, there's a new project containing the virtualenv. I guess this isn't too bad, but I wonder if there's a better way.
Is there a way to get PyCharm to create a virtualenv inside a project directory when creating a project?
-0.099668
0
0
6,536
42,701,912
2017-03-09T17:35:00.000
1
1
1
0
python,encoding,character-encoding,language-agnostic,base64
42,821,384
4
false
0
0
You are confusing what things are being compared. There are two statements, each comparing different things: 1. "base64 encoding makes data about 33% bigger (133% of the original size)". 2. "An 11-character base64 string can encode a huge number". In case 1, they are normally referring to a string encoded maybe with ASCII using 8 bits per character, and comparing that with the same string encoded in base64. That is bigger because in base64 you can't use all 256 possible values in every byte. In case 2, they are comparing a numeric identifier encoded either as base64 or as base10; in this case, base64 is a lot shorter than base10. You can also think of case 1 as comparing base256 against base64, and case 2 as comparing base10 against base64.
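Both statements can be demonstrated directly (a sketch; that YouTube uses exactly this URL-safe base64 alphabet is an assumption):

```python
import base64
import struct

n = 2**63 - 1                # a "huge number" that fits in 64 bits
decimal_text = str(n)        # base-10 text: 19 characters

# Statement 2: encode the number's 8 raw bytes -> 11 base64 characters
# (12 with the '=' padding stripped off here).
raw = struct.pack(">Q", n)
as_base64 = base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Statement 1: base64-encoding the *text* makes it longer, not shorter.
text_base64 = base64.b64encode(decimal_text.encode()).decode()
```

So the same number is 19 characters in decimal, 11 characters when its raw bytes are base64-encoded, and longer than 19 when its decimal text is base64-encoded.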
2
1
0
I've been reading about base64 conversion, and what I understand is that the encoded version of the original data will be 133% of the original size. Then, I'm reading about how YouTube is able to have unique identifiers for their videos like FJZQSHn7fc, and the reason given was: an 11-character base64 string can map to a huge number. Wait, say a huge number contains 20 characters; then wouldn't a base64-encoded string be 133% of that size, not shorter? I'm very confused. Are there different types of base64 conversion (string to base64 vs. decimal to base64), one resulting in a bigger, and the other in a smaller, resulting string?
Base64 conversion decimals
0.049958
0
0
2,669
42,701,912
2017-03-09T17:35:00.000
1
1
1
0
python,encoding,character-encoding,language-agnostic,base64
42,816,749
4
false
0
0
Think of it like this: you have a 64-bit number (called long in Java, for example). Now, you can print that number in different ways: as a binary number (base 2), printing 64 '0's or '1's; as a decimal number (base 10), printing up to 20 decimal digits; as a hexadecimal number (base 16), printing 16 hexadecimal digits; as a number in base 64, printing 11 "digits" in that base (you can use any graphical symbols as digits) ... you understand by now that there are many more possibilities. It seems they use the same base-64 digits as the ones used in base64 encoding, that is, uppercase and lowercase letters, ordinary digits and 2 extra chars. Each character represents a 6-bit value, so 11 characters give you 66 bits, and depending on the algorithm used, either the leading or trailing 2 bits are cut off to get a long value back.
2
1
0
I've been reading about base64 conversion, and what I understand is that the encoded version of the original data will be 133% of the original size. Then, I'm reading about how YouTube is able to have unique identifiers for their videos like FJZQSHn7fc, and the reason given was: an 11-character base64 string can map to a huge number. Wait, say a huge number contains 20 characters; then wouldn't a base64-encoded string be 133% of that size, not shorter? I'm very confused. Are there different types of base64 conversion (string to base64 vs. decimal to base64), one resulting in a bigger, and the other in a smaller, resulting string?
Base64 conversion decimals
0.049958
0
0
2,669
42,703,506
2017-03-09T19:07:00.000
0
0
0
0
python-3.x,numpy
42,703,588
2
false
0
0
What is happening is that you are overflowing the floating-point type: exp(710) exceeds the maximum value that can be stored in a 64-bit float (about 1.8e308), so the result becomes infinity. The straightforward exp(.) cannot represent such values; you might need a custom function that avoids computing exp(z) directly for large z.
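One stable alternative worth noting (not part of the answer above): numpy provides logaddexp, and since log(1 + exp(z)) == log(exp(0) + exp(z)), it computes the same quantity without ever forming the huge intermediate value:

```python
import numpy as np

z = np.array([1.0, 705.0, 710.0, 1000.0])

# naive = np.log(1 + np.exp(z))   # overflows for z >= ~710
stable = np.logaddexp(0.0, z)     # log(exp(0) + exp(z)) == log(1 + exp(z))
```

For large z the result is essentially z itself, with no overflow warning.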
1
1
1
I'm using the function numpy.log(1 + numpy.exp(z)). For small values of z (1-705) it gives the expected result (approximately z), but for larger values of z, from 710 up, it gives infinity and throws "RuntimeWarning: overflow encountered in exp".
overflow in exp(x) in python
0
0
0
3,068
42,703,962
2017-03-09T19:36:00.000
3
0
0
0
python,django
42,706,616
4
false
1
0
Django's philosophy / best practices encourage "fat models, thin controllers" (controllers being views in Django). Having tried both ways, it definitely works better with the "fat models" approach. Keeping the logic as close as possible to the data models makes it more reusable, and many features in Django work better that way. One example would be returning a paginated list view. If you need to calculate something for every object in a queryset, you could loop over it in the view doing the calculation, or you could add a model method and then call it on every iteration in the template. Looping over the queryset in the view will do the calculation on the whole queryset, which is not good if you are only showing 10 objects from a list of 1000. Calling a model method from the template, the calculation will only be done on the 10 objects on that page. Obviously you can add some more code to the view to only do that calculation on the objects on that page, but then that's extra code that isn't needed if you go the other route. If you need the same calculation on another page, keeping the logic in a model method makes it reusable without any alteration, while with view logic you would need to cut and paste it or create a new method. While it's not a huge difference, lots of small things like this start to add up.
4
2
0
I am new to Django, but I recently created my first application. I am wondering if I put my logic in the wrong places. From the Django book, I got that logic should be put into the views and data in the models. But I have recently read that views should be as small as possible and let models handle the logic. My problem is my views handle all my logic while my models only handle data going to and from my database. Have I messed up when creating this app, and if so, how would I fix it?
Django logic and where to put it?
0.148885
0
0
1,476
42,703,962
2017-03-09T19:36:00.000
0
0
0
0
python,django
42,704,220
4
false
1
0
The way to do it is simple, and I would say that it is not actually messed up if your logic lies mostly in views. For the most repetitive functions and logic that you have used across multiple view functions, you can create a function defined on the model and then call that function in the views (you can do it step by step). For example, if it is a social network and you are setting the city of a user: as your app grows, there will possibly be many functions and many places through which the city is being set. If you are doing it all repetitively in views, then you would be stuck at some point, because to make the smallest of changes you might have to edit all the functions. The good way would be to define set_cities(self, cities) on the user model and call it whenever needed, like user.set_cities(cities_list). So if you have to trigger other functions every time a city is set (maybe sending notifications or updating other tables), you have to define that only once, in set_cities(). In any case, all logic cannot be only in models, so do not worry much about it. Just keep on trying to simplify the views and keep shifting any repetition of logic to the models.
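The set_cities idea can be sketched without Django (a plain-Python illustration only; in a real project this method would live on a django.db.models.Model subclass and probably save to the database):

```python
class User:
    """Stand-in for a Django model; holds the one place where
    'set city' logic (and any future side effects) lives."""

    def __init__(self, name):
        self.name = name
        self.cities = []

    def set_cities(self, cities):
        # Single point to later add notifications, audit rows, etc.,
        # instead of repeating the logic in every view.
        self.cities = list(cities)
        return self.cities

u = User("alice")
u.set_cities(["Paris", "Berlin"])
```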
4
2
0
I am new to Django, but I recently created my first application. I am wondering if I put my logic in the wrong places. From the Django book, I got that logic should be put into the views and data in the models. But I have recently read that views should be as small as possible and let models handle the logic. My problem is my views handle all my logic while my models only handle data going to and from my database. Have I messed up when creating this app, and if so, how would I fix it?
Django logic and where to put it?
0
0
0
1,476
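The "fat model" idea from the answer above can be sketched without Django at all; this is a plain-Python illustration, with the hypothetical set_cities() method standing in for a Django model method, so every "view" that changes cities goes through one place.

```python
# Plain-Python sketch (no Django here) of moving repeated view logic
# onto the model: side effects of changing cities live in one method.

class User:
    def __init__(self, name):
        self.name = name
        self.cities = []
        self.notifications = []

    def set_cities(self, cities):
        """Single entry point for changing cities; side effects live here."""
        self.cities = list(cities)
        # Anything that must happen on every change (notifications,
        # updating other tables, ...) is added once, in this method.
        self.notifications.append("cities updated")

# A "view" now only calls the model method:
user = User("alice")
user.set_cities(["Paris", "Oslo"])
print(user.cities)          # ['Paris', 'Oslo']
print(user.notifications)   # ['cities updated']
```

If the notification logic later changes, only set_cities() needs editing, not every view.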
42,703,962
2017-03-09T19:36:00.000
1
0
0
0
python,django
42,704,100
4
false
1
0
It is better to have "Fat Models, Skinny Views." Many Django experts will give you this tip. Google this phrase and you will find resources saying "Fat Models, Skinny Views" or "Fat Models, Skinny Controllers." By the way, the Django creators named Controllers as Views, and Views as Templates, which may cause some misunderstanding while reading articles about MVC, which is MTV in Django.
4
2
0
I am new to Django, but I recently created my first application. I am wondering if I put my logic in the wrong places. From the Django book, I got that logic should but put into the views and data in models. But I have recently read that views should be as small as possible and let models handle the logic. My problem is my views handle all my logic while my models only handle data going to and from my database. Have I messed up when creating this app, and if so, how would I fix it?
Django logic and where to put it?
0.049958
0
0
1,476
42,703,962
2017-03-09T19:36:00.000
0
0
0
0
python,django
42,704,005
4
false
1
0
No, you haven't messed it up. You will find your solution through experience (years of developing). So whether to use fat models and thin views or the opposite is up to you and your application's requirements. As you learn, you'll discover new techniques and methods that will help you extend your "logic" and your app's implementation. At the beginning you'll make mistakes, but that's all right: we need them to become better developers, coders etc. So my advice is: keep calm and learn (good practices)!
4
2
0
I am new to Django, but I recently created my first application. I am wondering if I put my logic in the wrong places. From the Django book, I got that logic should but put into the views and data in models. But I have recently read that views should be as small as possible and let models handle the logic. My problem is my views handle all my logic while my models only handle data going to and from my database. Have I messed up when creating this app, and if so, how would I fix it?
Django logic and where to put it?
0
0
0
1,476
42,706,316
2017-03-09T22:01:00.000
0
0
0
0
python,django,websocket
42,706,481
2
false
1
0
you might run into database performance issues if the traffic from both apps overloads the database but that's a solvable problem
1
1
0
Is it a good practice to just import the models of my Django app in the secondary app and query the database? Does it have any performance issues or something? Actually the second application is a simple lightweight websocket server.
How to share a Django database with another Python app?
0
0
0
192
42,708,129
2017-03-10T00:40:00.000
0
0
0
0
python,csv,apache-spark,pyspark
42,713,556
2
false
0
0
Thanks to @himanshulllTian for the great answer; I want to say some more. If you have several columns in your file, then you just want to remove records based on the key column. Also, I don't know whether your csv files have the same schema. Here is one way to deal with this situation. Let me borrow the example from himanshulllTian. First, let's find the records that share a key: val dupKey = df1.join(df2, "key").select("key"). Then we can find the part we want to remove in each dataframe: val rmDF1 = df1.join(dupKey, "key"). Finally, subtract it with the except action: val newDF1 = df1.except(rmDF1). This may be trivial, but it works. Hope this helps.
1
0
1
Newbie to apache spark. What I want to do is to remove both the duplicated keys from two csv files. I have tried dropDuplicates() and distinct() but all the do is remove one value. For example if key = 1010 appears in both the csv files, I want both of them gone. How do I do this?
How to remove both duplicated values from two csv files in apache spark?
0
0
0
368
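The join/except logic in that answer can be checked without a Spark cluster; here is a plain-Python sketch with sets (hypothetical key lists standing in for the two dataframes), mirroring the join on "key" followed by except:

```python
# Keys appearing in both "files" are dropped from each side,
# mirroring df1.join(df2, "key") followed by df1.except(rmDF1).
keys1 = {1010, 1020, 1030}
keys2 = {1010, 1040}

dup_keys = keys1 & keys2          # the join on "key"
new_keys1 = keys1 - dup_keys      # df1.except(rmDF1)
new_keys2 = keys2 - dup_keys      # df2.except(rmDF2)

print(sorted(new_keys1))  # [1020, 1030]
print(sorted(new_keys2))  # [1040]
```

Key 1010 appears in both inputs, so it is removed from both results, as the question asked.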
42,708,388
2017-03-10T01:10:00.000
0
0
0
0
python,csv,pandas
42,708,813
1
false
0
0
Add this argument to read_csv: lineterminator=':'
1
0
1
I am using pandas read_csv to open a csv file 1327x11. The first 265 rows are only 4 columns wide. Here is row 1 to 5 DWS_LENS1.converter,"-300.0,5.5; -0.1,5.5; 10.0,-5.5; 300.0,-5.5",(mass->volts),: DWS_LENS1.mass_dependent,false,: DWS_LENS1.voltage.reading,-5.12642,V,: DWS_LENS1.voltage.target,-4.95000,V,: DWS_LENS2.converter,"-300.0,20.0; -10.0,20.0; 10.0,-20.0; 300.0,-20.0",(mass->volts),: and here are some other rows : 157955,SAMPLE,,,,1760.5388,,,,: ,: Summary,: ,: Analyte,H3O+ (ppb),NO+ (ppb),O2+ (ppb),O- (ppb),OH- (ppb),: toluene,1872.7367,,,,,: isobutane,,1945.7385,,,,: hexafluorobenzene,,,1951.0644,2121.6486,,: tetrafluorobenzene,,,,,1599.5802,: I receive Error tokenizing data. C error: Expected 4 fields in line 266, saw 11 I tried df=pd.read_csv(test,error_bad_lines=False) but it skips most rows and returns a 491x4 table. If I use pd.read_csv(test,delim_whitespace=True,error_bad_lines=False) I obtain a 1300x4 table but it fails splitting some data. How can I have the 11 columns back?
Pandas read_csv fails
0
0
0
1,382
42,710,496
2017-03-10T04:52:00.000
1
0
1
0
python,spyder
50,607,613
2
false
0
0
You can bring the console back with the following steps: go to View -> Panes; the Panes menu will show you a list of options; select 'IPython Console'. Try running a print statement and you will be able to see output in the console. The keyboard shortcut for the same is Ctrl+Shift+I.
2
3
0
I am using Spyder in Python, and my output is probably 4000 lines, and the code runs fine without any errors, and while the code is running i can even see the output being produced. It takes like 2 seconds to produce the output, but only the last 100 lines I guess are being shown in the output. The rest of the output just disappeared. It's seem there is a page limit or something associated with Spyder, so it only shows the last 100 lines or some number of lines. How do I see my entire output?
The output in the console window not being shown
0.099668
0
0
7,582
42,710,496
2017-03-10T04:52:00.000
2
0
1
0
python,spyder
42,710,649
2
true
0
0
I recommend printing the output to a document; you can refer to it later and you won't miss even a single line.
2
3
0
I am using Spyder in Python, and my output is probably 4000 lines, and the code runs fine without any errors, and while the code is running i can even see the output being produced. It takes like 2 seconds to produce the output, but only the last 100 lines I guess are being shown in the output. The rest of the output just disappeared. It's seem there is a page limit or something associated with Spyder, so it only shows the last 100 lines or some number of lines. How do I see my entire output?
The output in the console window not being shown
1.2
0
0
7,582
42,710,628
2017-03-10T05:05:00.000
0
0
0
0
python,flask,flask-login,flask-security
42,752,122
1
true
1
0
I was using the same type of browser to try and log into different accounts. Such as two firefox browsers and I tried two firefox incognito browsers. Which in both cases I think they shared the same cookies. After trying with one Chrome and one Firefox it worked correctly.
1
3
0
I currently have a flask application that uses Flask-Security to handle user login and registration. I'm trying to test a chatroom I made so I want to login to two different accounts in different windows to check if it works. However I can't do that because when I login to account2 it simply logs out account1 in my other browser. I'm certain this has something to do with Flask-Login and user sessions but I'm not sure how to fix this issue. If anyone could point me in the right direction that'd be awesome. I tried looking at the LoginManger docs on Flask-Login's site but can't figure out how to disable cookies.
Flask multiple login from same computer
1.2
0
0
1,395
42,711,002
2017-03-10T05:36:00.000
0
0
0
0
python,interpolation,curve-fitting,lmfit
42,730,875
2
false
0
0
There is not a built-in way to automatically interpolate with lmfit. With an lmfit Model, you provide the array of independent values at which the Model should be evaluated, and an array of data to compare to that model. You're free to interpolate or smooth the data or perform some other transformation (I sometimes Fourier-transform data and model to emphasize some frequencies), but you'll have to include that as part of the model.
1
2
1
I am trying to fit a curve with lmfit but the data set I'm working with does not contain a lot of points and this makes the resulting fit look jagged instead of curved. I'm simply using the line: out = mod.fit(SV, pars, x=VR) were VR and SV are the coordinates of the points I'm trying to fit. I've tried using scipy.interpolate.UnivariateSpline and the fitted the resulting data but I want to know if there is a built-in or faster way to do this. Thank you
Interpolate with lmfit?
0
0
0
424
42,711,310
2017-03-10T06:00:00.000
2
0
0
0
python,numpy,matrix
42,711,798
2
true
0
0
Just take the reciprocals of the nonzero elements. You can check with a smaller diagonal matrix that this is what pinv does.
1
1
1
If I have a diagonal matrix with diagonal 100Kx1 and how can I to get its pseudo inverse? I won't be able to diagonalise the matrix and then get the inverse like I would do for small matrix so this won't work np.linalg.pinv(np.diag(D))
How to get the pseudo inverse of a huge diagonal matrix in python?
1.2
0
0
945
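The reciprocal rule from the answer in a minimal sketch; plain Python on the diagonal vector, so it also works when the 100Kx1 diagonal is too large to materialise as a full matrix:

```python
# Pseudo-inverse of diag(d): take reciprocals of the nonzero entries,
# keep zeros as zeros. Only the diagonal vector is ever stored.
d = [2.0, 0.0, 4.0, 0.5]
d_pinv = [1.0 / x if x != 0 else 0.0 for x in d]
print(d_pinv)  # [0.5, 0.0, 0.25, 2.0]
```

With NumPy the same idea would be a vectorized reciprocal over the nonzero entries of the diagonal array; the point is that np.diag and np.linalg.pinv on the full matrix are never needed.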
42,714,356
2017-03-10T09:07:00.000
1
0
1
1
python,windows,deployment
42,714,418
1
true
0
0
I was in a similar position and I combined PyInstaller with Fabric: I build a "compiled" version of the project and deploy it with Fabric as the client wants. Fabric supports role definitions and several configurations for several clients.
1
0
0
I develop a distributed application which is based on RabbitMQ and multiple python applications. System is pretty complex so it is very likely that we will need to update deployed solution multiple times. Customer wants that we use his servers which are running windows. So the question is how to deploy and update python part of this system. And as sub-question is it better to deploy sources or use pyinstaller to get executables and then deploy them? On my test server I just use git pull when I have some changes which is probably not the case for production system.
How to deploy python applications to remote machines running Windows
1.2
0
0
496
42,715,005
2017-03-10T09:39:00.000
0
0
0
0
python,flask
42,715,400
1
true
1
0
This is best done by some asynchronous mechanism. The best solution will depend on your exact use-case. Webhook - If user 1 and user 2 are other applications using your api, the simplest way would be a webhook mechanism, where user 1 and user 2 subscribe to the api by depositing a url that your application calls with the results once both inputs are sent. Polling - You provide an endpoint that both users need to poll to check if the api is ready to send back the results. Email - You simply email both users the results once you receive both inputs. Or SMS, or IM message ... Persistent connections - With a mechanism like websockets or http2 push. You can achieve this with a python application, but this is the most complex solution and in most cases not needed.
1
1
0
I am attempting to write an endpoint in Python Flask that requires inputs from 2 users to run the function. I would like to have it so that user 1 would send a request with inputs to the backend and then wait for user 2 to send inputs as well. The endpoint would then calculate a result and output it to both users. What is the most efficient way to do this?
Multiple Users Connecting to an API Endpoint
1.2
0
1
124
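A framework-free sketch of the polling idea from that answer: the endpoint stores each user's input under a shared session id and only computes the result once both have arrived. The names submit/poll and the sum calculation are hypothetical stand-ins for the real endpoint and result.

```python
# Each session collects two inputs; poll() returns None until both exist.
sessions = {}

def submit(session_id, user, value):
    sessions.setdefault(session_id, {})[user] = value

def poll(session_id):
    inputs = sessions.get(session_id, {})
    if len(inputs) < 2:
        return None                  # not ready yet; the client polls again
    return sum(inputs.values())      # stand-in for the real calculation

submit("s1", "user1", 10)
print(poll("s1"))  # None - user2 has not submitted yet
submit("s1", "user2", 32)
print(poll("s1"))  # 42
```

In Flask, submit and poll would each become a route, with the sessions dict replaced by real server-side storage.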
42,716,807
2017-03-10T11:02:00.000
-1
0
0
0
python,scrapy,web-crawler,e-commerce,screen-scraping
42,729,874
1
false
1
0
Categories and subcategories are usually in the breadcrumbs. In general the css selector for those will be .breadcrumb a and that will probably work for 80% of modern e-commerce websites.
1
0
0
I need to develop an application that takes as input an url of an e-commerce website and scrap the products titles, prices with the categories and sub-categories. Scrapy seems like a good solution for scraping data, so my question is how can I tell scrapy where the titles, prices, cat and sub categories are to extract them knowing that websites have different structures and don't really use the same tags? EDIT: I gotta change my question to this, can't we write a generic spider that takes the start url, allowed domains, and xpath or css selectors as arguments?
Scraping products data with categories from e-commerce
-0.197375
0
1
1,132
42,724,795
2017-03-10T17:53:00.000
1
0
0
0
python,numpy,numpy-ufunc
42,725,593
2
false
0
0
tl;dr You don't want that. Details First let's note that you're actually building a triangular matrix: for the first element, compare it to the rest of the elements, then repeat recursively for the rest. You don't use the triangularity, though; you just cut off the diagonal (each element is always equal to itself) and merge the rows into one list in your example. If you sort your source list, you won't need to compare each element to all the others, only to the next element. You'd have to keep each element's position in a tuple, to keep track of it after sorting. You would sort the list of pairs in O(n log n) time, then scan it and find all the matches in O(n) time. Both sorting and finding the matches are simple and quick in your case. After that, you'd have to create your 'bit vector', which is O(n^2) long. It would contain len(your vector) ** 2 elements, or 57,600 million elements for a 240k-element vector. Even if you represented each element as one bit, it would take 53.6 Gibit, or about 6.7 GBytes, of memory. Likely you don't want that. I suggest that you find the list of matching pairs in O(n log n) time, sort it by both first and second position in O(n log n) time too, and recreate any portion of your desired bitmap by looking at that list of pairs; binary search would really help. Provided that you have far fewer matches than pairs of elements, the result may even fit in RAM.
1
2
1
I have a numpy array of strings, some duplicated, and I'd like to compare every element with every other element to produce a new vector of 1's and 0's indicating whether each pair (i,j) is the same or different. e.g. ["a","b","a","c"] -> 12-element (4*3) vector [1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1] Is there a way to do this quickly in numpy without a double loop through all pairs of elements? My array has ~240,000 elements, so it's taking a terribly long time to do it the naive way. I'm aware of numpy.equal.outer, but apparently numpy.equal is not implemented on strings, so it seems like I'll need some more clever way to compare them.
Fastest way to compare every element with every other in np array of strings
0.099668
0
0
2,678
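The sort-then-scan idea from that answer, sketched in plain Python: keep (value, position) tuples, sort them in O(n log n), and read equal neighbours off in O(n) instead of comparing all pairs.

```python
# Find positions that share a value without the O(n^2) all-pairs compare.
items = ["a", "b", "a", "c"]
pairs = sorted((v, i) for i, v in enumerate(items))

matches = []                      # (i, j) pairs with items[i] == items[j]
for (v1, i1), (v2, i2) in zip(pairs, pairs[1:]):
    if v1 == v2:                  # equal values are now adjacent
        matches.append((min(i1, i2), max(i1, i2)))

print(matches)  # [(0, 2)] - only "a" repeats
```

From this short list of matching pairs, any slice of the full 0/1 vector can be reconstructed on demand.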
42,724,953
2017-03-10T18:03:00.000
0
0
0
0
python,vizard
42,817,712
1
true
0
0
So I figured it out. It turns out the .py script that you're running with Vizard needs to be in the same folder as the files you want to load into, with the exception being the models that come with Vizard and are somewhere in the installation folder.
1
0
0
I'm trying to load a .osgb file in vizard with viz.addChild('filename.osgb') but I always get the error ERROR: Failed to load model I've tried to use the relative path, the absolute path, but to no avail. This is on a Windows machine as well.
Vizard fails to load .osgb file
1.2
0
0
172
42,726,719
2017-03-10T19:57:00.000
7
0
0
0
python,django,secret-key
42,772,208
2
false
1
0
So, to answer my own question, changing the assigned key is done the same way you'd change any other variable. Just create a 50 character (ideally random) string and set SECRET_KEY equal to it. SECRET_KEY = "#$%&N(ASFGAD^*(%326n26835625BEWSRTSER&^@T#%$Bwertb" Then restart the web application. My problem was completely unrelated. It occurred because I set the path python uses to locate packages to a weird location. Sorry about that guys.
1
8
0
So, I'm trying to deploy a Django Web App to production, but I want to change the secret key before doing so. I've attempted to generate a new key using a randomizing function and insert that new key in place of the old one. When I do so, I get an error that says the following: AttributeError 'module' object has no attribute 'JSONEncoder' ... Exception Location .../django/contrib/messages/storage/cookie.py in , line 9 I've deleted the browser cache and restarted the server, but the error persists. I've also attempted to change the key back, after deleting the browser cache and restarting, the error still persists. Any idea how to resolve this issue? Edit: Python version is 2.6.6 and Django version is 1.3.1
How can i properly change the assigned secret key in a Django Web Application
1
0
0
4,502
42,729,062
2017-03-10T22:52:00.000
1
1
0
0
python,raspberry-pi
42,729,722
1
false
0
0
You can use a messaging protocol like MQTT, or a broker such as RabbitMQ, for easy communication between the Raspberry Pis. Another simple way is to develop HTTP REST endpoints, especially if you don't have a strong background in messaging protocols (MQTT). An easy way is to develop HTTP REST endpoints using Python Flask. Suppose you have a Flask method turnOnLED() bound to the URL /on on Raspberry Pi X. Now you can call this REST endpoint from another Raspberry Pi Y using the IP of Raspberry Pi X. You can similarly write a Python method that interacts with the **GPIO** and make it available through a URL (IP/endpoint) to the other Raspberry Pi; from the other Pi you call that method by requesting the URL of the first one. Do some research on RESTful APIs using Python, GPIO, pigpiod, WiringPi, Python Flask or any other framework to write REST endpoints rapidly. You need knowledge of all of these, buddy.
1
0
0
I am trying to program a raspberry pi 3 to run a traffic light on a breadboard.I also have a sensor that detects color of the traffic light, which is connected to the same raspberry pi. Can anyone help me with this? How would I do that, and also HOW can I send that detected information to another raspberry pi? Thank you!
Running Traffic Light with Pi and Sending that info to another Pi
0.197375
0
0
89
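The endpoint idea in that answer can be sketched with only the standard library (no Flask): one Pi exposes /on over HTTP and another Pi requests it. Here turn_on_led() is a placeholder for the real GPIO code, and 127.0.0.1 stands in for the first Pi's IP address.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def turn_on_led():
    # Placeholder: real code would drive a GPIO pin here (e.g. via RPi.GPIO).
    return "LED on"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/on":
            body = turn_on_led().encode()
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)   # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# On the other Pi this URL would use Pi X's real IP instead of 127.0.0.1.
url = "http://127.0.0.1:%d/on" % server.server_port
reply = urllib.request.urlopen(url).read().decode()
print(reply)  # LED on
server.shutdown()
```

Flask would make the routing nicer, but the wire-level idea is the same: one Pi serves an HTTP endpoint, the other calls it by URL.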
42,730,059
2017-03-11T00:50:00.000
1
1
0
1
python,django
42,730,116
1
false
1
0
Have you thought about asking the admin to start a virtualenv for you and give you permissions to work in that environment?
1
0
0
I've studied Python and Django, building a homepage. And I've been using a virtual memory on Ubuntu server(apache2 2.4.18/ php-7.0/ MariaDB 10.0.28 with phpMyAdmin/ FTP) offered for developers. The server hadn't allowed users to use python, but I asked the server administrator to give me a permission and I got it. The problem was, however, that I was not allowed to use not only any sudo command line but also basic commands like apt-get and python. The only administrator can do so, therefore it seems that I cannot install any neccessary things-virtualenv, django, and so on- by myself. Just to check whether .py file works or not, I added <?php include_once"test.py" ?> on the header of index.php about the test.py where only print "python test"(meaning only python 2 is installed on this server) is written. It works. That is, I guess, all I can do is uploading .py file with Filezilla. In this case, can I make a homepage with Python on this server efficiently? I was thinking about using Bottle Framework, but also not sure. I am confused with wondering whether I should use PHP on this server and using Python on PythonAnywhere in the end. I am a beginner. Any advice will be appreciated :)
Is it possible to build a homepage with python on virtual server when the sudo command is disabled?
0.197375
0
0
38
42,730,894
2017-03-11T03:09:00.000
0
0
1
0
python,dictionary
42,730,914
4
false
0
0
You can put a dictionary inside a dictionary. Try it: print({a:{'pop':5}})
1
0
0
Lets say we have {'Pop': 5} and we have a variable a = 'store' how could I get the output: {'store': {'pop': 5}} Is there a easy way?
Putting a dictionary inside a dictionary?
0
0
0
553
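Spelled out with the question's own names (a sketch; the inner dict is the question's example data):

```python
a = "store"
inner = {"pop": 5}

result = {a: inner}            # the variable's value becomes the outer key
print(result)                  # {'store': {'pop': 5}}
print(result["store"]["pop"])  # 5
```

Nothing special is needed: a dict value can itself be a dict, and lookups simply chain.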
42,735,670
2017-03-11T13:18:00.000
0
0
0
0
python,google-sheets,google-sheets-api,quota
59,957,826
3
false
1
0
I had this issue with a long-running script...I am putting batches of data in spreadsheets, and every 100k rows I start on a new spreadsheet. The data is rolled up on a separate spreadsheet using IMPORTRANGE(). the first 3 were fine but the 4th was bombing with the "Resource has been exhausted" error. I noticed that when I saw this error, the IMPORTRANGE() was also failing in the browser. The error must be indicating something wrong with the server where the spreadsheet is stored/served, and is not API-related. Switching to a new spreadsheet fixed the error for me.
1
3
0
I have been updating about 1000 sheets using Python. Each takes about 2-3 minutes to update. The job ran most of the day yesterday (~8hrs). And when I look at my quotas for Google Sheets API in console.developers.google.com, I have used about 3k in the read group and 4k in the write group. Not nearly close to the 40k quota that is given. Now all of the 1000 sheets interact with one sheet because all of the keys are on that one sheet. In fact, I have tried using 2 different project sign ins, one through my company domain and one through my gmail, that both have access to these files. When I run it with the company credentials. It also gives me a HttpError 429, and 0 requests have been made with that credential. Is there some hidden quota I don't know about? Like calls to one spreadsheet? That's what it seems like. Google, are you cutting me off to the spreadsheet because I accessed it for 8hrs yesterday? It is bombing on spreadsheets().values().update and spreadsheets().batchUpdate
Google sheets API v4: HttpError 429. "Resource has been exhausted (e.g. check quota)."
0
0
1
10,665
42,737,716
2017-03-11T16:33:00.000
1
1
0
0
python,antlr,antlr4
42,750,891
3
false
0
0
The problem was that antlr4 was only installed for Python3 and not Python2. I simply copied the antlr4 files from /usr/lib/python3.6/site-packages/ to /usr/lib/python2.7/site-packages/ and this solved the problem!
1
5
0
I would like to use ANTLR4 with Python 2.7 and for this I did the following: I installed the package antlr4-4.6-1 on Arch Linux with sudo pacman -S antlr4. I wrote a MyGrammar.g4 file and successfully generated Lexer and Parser Code with antlr4 -Dlanguage=Python2 MyGrammar.g4 Now executing for example the generated Lexer code with python2 MyGrammarLexer.py results in the error ImportError: No module named antlr4. What could to be the problem? FYI: I have both Python2 and Python3 installed - I don't know if that might cause any trouble.
No module named antlr4
0.066568
0
1
13,812
42,738,434
2017-03-11T17:38:00.000
0
0
1
0
python,multithreading,python-2.7,loops,sleep
42,745,207
2
true
0
0
Silly me, threading works but printing doesn't work across threads. Thanks for all the help!
1
1
0
Many people here tell you to use threading but how do you have the rest of the program running while that thread sleeps, and reruns, and sleeps again.. etc. I have tried normal threading with things like a while loop but that didn't work for me edit: so the question is: how do you sleep a thread without pausing the whole program in python, if possible could you give me a example of how to do it? edit 2: and if possible without tkinter edit 3: fixed it, it already worked but i didn't see it because printing doesn't work across threads... Silly me.
python how to loop a function with wait without pausing the whole program
1.2
0
0
1,227
42,739,844
2017-03-11T19:49:00.000
0
0
1
0
python,multithreading,jvm,jython,gil
43,002,199
1
true
0
0
Yes, Jython uses Java threads (even if you're using Python's threading module) and so it has no GIL. But this isn't the whole answer (otherwise it would have to be 42, because the question is unclear :^) ). The better question is what criteria you have and whether CPython or Jython would suit them better. If you want real multithreading, that's your thing. If you want to use Java and Python together, use it. If you want fast execution times... then other languages may be better (you can try to measure the time of a threaded task in Python and the same code in Jython, but I guess even with the GIL, CPython would be faster). Greets, Zonk
1
3
0
CPython uses GIL to prevent problems such as mutual exclusion. However, the consequence is that the interpreter is not able to take advantage of a multi-core CPU. I also learnt that Jython does not require a GIL because its implementation is already thread-safe. Does it mean that Jython is a superior implementation when it comes to concurrent programming and utilizing a multi-core CPU?
Global Interpreter lock: Jython vs CPython
1.2
0
0
467
42,740,284
2017-03-11T20:31:00.000
0
1
1
0
python,python-3.x,nul
58,441,607
2
false
0
0
Another equivalent way to get the value of \x00 in Python is chr(0); I like that a little better than the literal versions.
1
1
0
I have question that I am having a hard time understanding what the code might look like so I will explain the best I can. I am trying to view and search a NUL byte and replace it with with another NUL type byte, but the computer needs to be able to tell the difference between the different NUL bytes. an Example would be Hex code 00 would equal NUL and hex code 01 equals SOH. lets say I wanted to create code to replace those with each other. code example TextFile1 = Line.Replace('NUL','SOH') TextFile2.write(TextFile1) Yes I have read a LOT of different posts just trying to understand to put it into working code. first problem is I can't just copy and paste the output of hex 00 into the python module it just won't paste. reading on that shows 0x00 type formats are used to represent that but I'm having issues finding the correct representation for python 3.x Print (\x00) output = nothing shows #I'm trying to get output of 'NUL' or as hex would show '.' either works fine --Edited so how to get the module to understand that I'm trying to represent HEX 00 or 'NUL' and represent as '.' and do the same for SOH, Not just limited to those types of NUL characters but just using those as exmple because I want to use all 256 HEX characters. but beable to tell the difference when pasting into another program just like a hex editor would do. maybe I need to get the two programs on the same encoding type not really sure. I just need a very simple example text as how I would search and replace none representable Hexadecimal characters and find and replace them in notepad or notepad++, from what I have read, only notepad++ has the ability to do so.
Nul byte representation in Python
0
0
0
4,065
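The equivalence from that answer is easy to check directly:

```python
# Three spellings of the same one-character NUL string.
assert chr(0) == "\x00" == "\0"
assert ord("\x00") == 0
# And for SOH (hex 01):
assert chr(1) == "\x01"
print("all NUL/SOH spellings agree")
```

So a replace such as text.replace("\x00", "\x01") swaps NUL for SOH regardless of which spelling was used to write the bytes.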
42,742,115
2017-03-11T23:58:00.000
0
0
1
0
python,python-3.x,sys,pyperclip
42,742,138
3
false
0
0
Type "which python" and "pip --version" and check that their folders match. For example, my "which python3" == "/usr/local/bin/python3", and my "pip3 --version" == "/usr/local/lib/python3.6/site-packages".
1
0
0
I'm getting a ModuleNotFoundError for my module pyperclip. When I run pip install on it, I get a message saying I've already installed it so I'm not exactly sure why Python isn't finding it. When I run my program on PyCharm, it runs fine. How do I change the path or solve it in some other way so it can find it? My environment variables are all pointing to the Anaconda directories in a Windows machine. Thanks in advance.
Solving a ModuleNotFoundError
0
0
0
1,089
42,742,673
2017-03-12T01:26:00.000
2
0
1
0
python
42,742,692
2
false
0
0
It can be [[x * y for y in b] for x in a], or [x * y for x in a for y in b] if you want a flattened result.
1
0
0
I'm new to python and my problem is most likely very easily solved, but I couldn't figure it out and I couldn't find any topics that matched my specific issue. I have 2 lists of numbers in python: Eg. a=[0.01,0.02,0.03,0.04] b=[0.02,0.03,0.04,0.05] I would like to multiply every element in list "a" with all the elements from list "b" and produce ,in this case, 4 new lists: a0=a[0]*b a1=a[1]*b a2=a[2]*b a3=a[3]*b What is the best way to do that?
consecutively multiplying the elements of a list with another list python
0.197375
0
0
53
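Both comprehension variants from that answer, run on the question's data (a minimal sketch):

```python
a = [0.01, 0.02, 0.03, 0.04]
b = [0.02, 0.03, 0.04, 0.05]

# One list per element of a (the a0, a1, ... lists in the question):
per_element = [[x * y for y in b] for x in a]
print(per_element[0])   # a0: each element of b scaled by a[0]

# Or flattened into a single list of all 4 * 4 products:
flat = [x * y for x in a for y in b]
print(len(flat))  # 16
```

Note that the products are floats, so compare them with a tolerance rather than exact equality.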
42,743,279
2017-03-12T03:08:00.000
1
0
1
0
python-2.7,spyder
42,743,487
1
false
0
0
I finally found an answer. I had recently named a file csv.py; I deleted it and everything is fine now.
1
0
0
I opened my anaconda prompt and I got this message everytime without typing anything I am using python 2.7 for some time but suddenly this error appeared today without updating anything I had tried number of solutions but those did not work for me when I tried set SPYDER_DEBUG=2 spyder --show-console I get 'import sitecustomize' failed; use -v for traceback 'import sitecustomize' failed; use -v for traceback Traceback (most recent call last): File "C:\Users\aditya royal\Anaconda2\Scripts\spyder-script.py", line 5, in sys.exit(spyder.app.start.main()) File "C:\Users\aditya royal\Anaconda2\lib\site-packages\spyder\app\start.py", line 104, in main mainwindow.main() File "C:\Users\aditya royal\Anaconda2\lib\site-packages\spyder\app\mainwindow.py", line 2955, in main or options.optimize) File "C:\Users\aditya royal\Anaconda2\lib\site-packages\spyder\utils\windows.py", line 33, in set_attached_console_visible return bool(ShowWindow(console_window_handle, flag[state])) KeyError: 3 I had tried solving my error by adding the path in my environment variable but I still see the error and I also tried stopping my firewall for sometime, I also tried spyder --reset, spyder --default and the problem still persists
'runfile' is not defined in python 2.7 spyder 2.7
0.197375
0
0
227
42,746,732
2017-03-12T11:18:00.000
2
0
1
0
python,pycharm,conda
65,169,873
9
false
0
0
To use a Conda environment as the PyCharm interpreter: activate the Conda environment from the Anaconda Navigator, open PyCharm from the Navigator's tool list, and in the Add Interpreter section choose the existing Conda environment; PyCharm automatically recognises the path of that environment's python.exe file.
4
89
0
Conda env is activated using source activate env_name. How can I activate the environment in pycharm ?
Use Conda environment in pycharm
0.044415
0
0
101,055
42,746,732
2017-03-12T11:18:00.000
4
0
1
0
python,pycharm,conda
57,959,444
9
false
0
0
I had the same problem. I am on Windows 10 Professional 64-bit; my solution was to start PyCharm as administrator, and it worked.
4
89
0
Conda env is activated using source activate env_name. How can I activate the environment in pycharm ?
Use Conda environment in pycharm
0.088656
0
0
101,055
42,746,732
2017-03-12T11:18:00.000
3
0
1
0
python,pycharm,conda
50,944,805
9
false
0
0
It seems important to me to know that setting the project interpreter as described in wasabi's comment does not actually activate the conda environment. I had an issue with running xgboost (which I installed with conda) inside PyCharm, and it turned out that it also needs some folders added to PATH. In the end I had to make do with an ugly workaround: 1. Find out what the additional folders in PATH are for the given environment (with echo %PATH% in cmd). 2. In the file you wish to run, put at the top, before anything else: import os os.environ["PATH"] += os.pathsep + os.pathsep.join(my_extra_folders_list) I know this is not a proper solution at all, but I was unable to find any other besides what Mark Turner mentioned in his comment.
4
89
0
Conda env is activated using source activate env_name. How can I activate the environment in pycharm ?
Use Conda environment in pycharm
0.066568
0
0
101,055
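The workaround from step 2 of that answer, as runnable code; the folder list is of course hypothetical and would come from comparing echo %PATH% inside and outside the activated environment:

```python
import os

# Hypothetical extra folders the activated conda env would have added.
my_extra_folders_list = [r"C:\Miniconda3\envs\myenv\Library\bin",
                         r"C:\Miniconda3\envs\myenv\Scripts"]

# Put this at the very top of the script, before importing xgboost etc.,
# so DLL lookups see the environment's folders.
os.environ["PATH"] += os.pathsep + os.pathsep.join(my_extra_folders_list)

print(my_extra_folders_list[0] in os.environ["PATH"])  # True
```

os.pathsep picks the right separator (";" on Windows, ":" elsewhere), so the snippet is portable even though the example paths are Windows-style.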
42,746,732
2017-03-12T11:18:00.000
12
0
1
0
python,pycharm,conda
50,785,196
9
false
0
0
As mentioned in one of the comments above, activating an environment can run scripts that perform other actions such as setting environment variables. I have worked in one environment that did this. What worked in this scenario was to: open a conda prompt activate the environment run pycharm from the conda prompt Pycharm then had access to the environment variables that were set by activating the environment.
4
89
0
Conda env is activated using source activate env_name. How can I activate the environment in pycharm ?
Use Conda environment in pycharm
1
0
0
101,055
42,746,745
2017-03-12T11:19:00.000
1
0
1
1
python,python-3.x
42,748,323
2
false
0
0
The "script" you use is only the human friendly representation you see. Python opens that script, reads lines, tokenizes them, creates a parse and ast tree for it and then emits bytecode which you can see using the dis module. The "script" isn't loaded, it's code object (the object that contains the instructions generated for it) is. There's no direct way to affect that process. I have never heard of a script being so big that you need to read it in chunks, I'd be surprised if you accomplished it.
2
2
0
I'm running a python script using python3 myscript.py on Ubuntu 16.04. Is the script loaded into memory or read and interpreted line by line from the hdd? If it's not loaded all at once, is there any way of knowing or controlling how big the chunks are, that are loaded into Memory?
Does executing a python script load it into memory?
0.099668
0
0
1,125
42,746,745
2017-03-12T11:19:00.000
5
0
1
1
python,python-3.x
42,746,771
2
true
0
0
It is loaded into memory in its entirety. This must be the case, because a syntax error near the end will abort the program straight away. Try it and see. There does not need to be any way to control or configure this. It is surely an implementation detail best left alone. If you have a problem related to this (e.g. your script is larger than your RAM), it can be solved some other way.
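A small demonstration: Python refuses to run anything from a source whose last line is broken, which shows the whole text is parsed up front:

```python
# A "script" whose last line is a syntax error: if Python executed it line
# by line, the first print would run; instead compilation fails up front.
bad_source = "print('this never runs')\ndef broken(:\n"

try:
    compile(bad_source, "<fake_script>", "exec")
    compiled = True
except SyntaxError:
    compiled = False

print(compiled)  # False: the whole file was parsed before anything ran
```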
2
2
0
I'm running a python script using python3 myscript.py on Ubuntu 16.04. Is the script loaded into memory or read and interpreted line by line from the hdd? If it's not loaded all at once, is there any way of knowing or controlling how big the chunks are, that are loaded into Memory?
Does executing a python script load it into memory?
1.2
0
0
1,125
42,746,949
2017-03-12T11:41:00.000
0
0
0
0
python,caching,flask
42,746,999
1
true
1
0
No, that's not really what Varnish is for; it's more for caching complete pages. A better fit here would be memcached, which is perfect for storing arbitrary data against a key. Redis could be another alternative.
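To illustrate the pattern (not real memcached/Redis code; a plain dict stands in for the cache, and all names here are made up):

```python
# Sketch of the fetch-once, page-many pattern. In production the dict
# would be a memcached or Redis client keyed by the query string.
PAGE_SIZE = 10
cache = {}

def fetch_items(query):
    # Stand-in for the expensive upstream call returning 100 items.
    return ["%s-item-%d" % (query, i) for i in range(100)]

def get_page(query, page):
    if query not in cache:            # first request: fetch once, cache all
        cache[query] = fetch_items(query)
    items = cache[query]
    start = page * PAGE_SIZE
    return items[start:start + PAGE_SIZE]

first = get_page("shoes", 0)   # items 0-9, fills the cache
second = get_page("shoes", 1)  # items 10-19, served from the cache
```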
1
0
0
I am doing API for my app using flask. It is like, suppose you query the API, it will give a list of 100 items as a response. I need to show first 10 to the users and save rest 90 in cache, so when user swiped those first 10 items, I will display next 11-20 items and so on.. The problem I am facing is to where store those 90 items and retrieve them using API. I am thinking of doing this using varnish as cache to store responses. I want to know is varnish a good fit here? If yes then How? or Is there any better way to achieve the desired result?
Flask-python caching
1.2
0
0
119
42,748,080
2017-03-12T13:36:00.000
0
0
1
0
python,conditional-statements,python-sphinx
42,748,121
1
false
0
0
You can add a description of them to the docstring at the appropriate level; this will be carried over to the Sphinx documentation.
1
0
0
Is it possible to document conditional statements and other statements in a function using Sphinx? If not, is there any other python library which has such functionality?
Documentation of conditionals
0
0
0
54
42,748,382
2017-03-12T14:08:00.000
0
1
1
0
python,module,package
42,748,427
1
true
0
0
You can't know what a Python package does unless it is stated in its docs (on PyPI or in the repository) or without reading the code. A Python package can be anything that has a setup.py and either a single module or multiple files under a folder with an __init__.py file in it. The fact that the __init__.py is empty doesn't mean anything other than that a Python package is involved. For any specific package you want to know about, you should look up its documentation or read the code to get a sense of its purpose.
1
1
0
EDIT I was being stupid. Just type help('package_name.pyb_name'), which worked. I would like to find out what is actually in a Python package I have locally downloaded and installed with pip. Typing help(package_name) just lists NAME, FILE (where the __init__.py is) and PACKAGE CONTENTS, which is just one .pyd file. I can't open the .pyd file to check what's inside (tbh not all that familiar with .pyds). These two with a 159-byte __init__.pyc are the only files in the package. I need to use this (not widely available) package for some university work. Thanks.
How to find out what a python package does
1.2
0
0
143
42,750,479
2017-03-12T17:11:00.000
0
0
0
0
python,pandas,dataframe,apply
42,750,534
1
true
0
0
The None occurs because the print() function doesn't return any value and apply() expects the function to return something. If you want to print the data frame, just use print(df), or if you need some other format, tell us what you are trying to get as the printed output.
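You can see the same thing with plain Python, no pandas needed:

```python
# print() always returns None, so collecting its "results" yields a
# sequence of None values -- the same thing DataFrame.apply reports
# when the applied function is print.
results = [print(x) for x in range(3)]
print(results)  # [None, None, None]
```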
1
1
1
I'm trying to use the pandas.DataFrame.apply function. My actual code performs similarly to the example below. At the end of the output it outputs "None" for each row in the dataframe. This behavior causes an error in the function I'm passing through apply. df = pd.DataFrame({"one": range(0,5), "two": range(0,5)}) df.apply(print, axis=1) Why does it behave this way? What is the None coming from? How can I alter/control this behavior?
Understanding the None output when using pandas.DataFrame.apply
1.2
0
0
1,727
42,753,062
2017-03-12T20:55:00.000
1
0
1
0
python,pdb
42,753,163
1
true
0
0
Citing Pdb documentation: ! statement Execute the (one-line) statement in the context of the current stack frame. The exclamation point can be omitted unless the first word of the statement resembles a debugger command. This should do what you want, if I'm not mistaken
1
1
0
Is there a way to disable the default pdb library command aliases? I currently am using variables that have the same name as the pdb set_trace() shortcut aliases. For example, I have a variable named s, but s is a shortcut for step while using set_trace(). This is one default alias among many like a and n which represent args and next respectively. So when I am trying to inspect my s variable by typing in the s command, it runs step instead which is not what I want. Thanks!
Is there a way to disable Python debugger's pdb library set_trace default aliases?
1.2
0
0
205
42,757,209
2017-03-13T05:36:00.000
2
0
1
1
python,python-3.x,terminal,parallel-processing,gil
42,757,357
1
true
0
0
Each terminal window will start a new python interpreter, each of which has its own GIL. The difference is probably due to contention for some resource at the OS level (disk i/o, memory, cpu cycles).
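You can see the separate-interpreter behaviour from Python itself by launching child interpreters with subprocess (a rough sketch):

```python
import os
import subprocess
import sys

# Launch two separate Python interpreters, just as two terminal windows
# would; each child is its own process with its own interpreter and GIL.
cmd = [sys.executable, "-c", "import os; print(os.getpid())"]
pid_a = int(subprocess.run(cmd, capture_output=True, text=True).stdout)
pid_b = int(subprocess.run(cmd, capture_output=True, text=True).stdout)

print(os.getpid(), pid_a, pid_b)  # parent and children are separate processes
```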
1
1
0
I am trying to understand Python's GIL. I recently had an assignment where I had to compare the execution times of a certain task performed using different algorithms of different time complexities on multiple input files. I ran a python script to do the same, but I used separate terminal windows on macOS to run the same python script for different input files. I also ran it all in one terminal window, one after the other, for each input file. The CPU time for this was lower for each execution as compared to the previous approach with multiple windows where each program took twice as long but ran all at once. (Note : there were 4 terminal windows in the previous approach and the python script only ran an a.out executable compiled with clang on macOS and stored the output in different files). Can anyone explain why running them in parallel lead to each program being slower? Did they run on separate cores or did the GIL lead to each program being slower than it would if I run it one at a time in one terminal window?
Does GIL affect parallel processing of a python script in separate terminal windows?
1.2
0
0
160
42,757,866
2017-03-13T06:46:00.000
0
0
0
0
javascript,python,html,python-2.7,dom
42,757,939
3
true
1
0
INSPECT ELEMENT and VIEW PAGE SOURCE are not the same. View source shows you the original HTML source of the page. When you view source from the browser, you get the HTML as it was delivered by the server, not after javascript does its thing. The inspector shows you the DOM as it was interpreted by the browser. This includes for example changes made by javascript which cannot be seen in the HTML source.
2
0
0
I want to get the INSPECT ELEMENT data of a website. Let's say Truecaller. So that i can get the Name of the person who's mobile number I searched. But whenever i make a python script it gives me the PAGE SOURCE that does not contain the required information. Kindly help me. I am a beginner so kindly excuse me of any mistake in the question.
How do I get the data of a website as shown in INSPECT ELEMENT and not in VIEW PAGE SOURCE?
1.2
0
1
3,156
42,757,866
2017-03-13T06:46:00.000
0
0
0
0
javascript,python,html,python-2.7,dom
42,758,772
3
false
1
0
What you see in the element inspector is not the source code anymore; you see a JavaScript-manipulated version. Instead of trying to execute all the scripts on your own, which may lead to multiple problems like cross-origin security and so on, search the network tab for the actual search request and its parameters, then request the data from there; that is the trick. Also it seems like you need to be logged in to search on the URL you provided, so you will eventually need to adapt cookies/session/headers and so on, just like a request from your browser would. So what I want to say is: if the data you are looking for is not in the source, it is better to analyse where it is coming from.
2
0
0
I want to get the INSPECT ELEMENT data of a website. Let's say Truecaller. So that i can get the Name of the person who's mobile number I searched. But whenever i make a python script it gives me the PAGE SOURCE that does not contain the required information. Kindly help me. I am a beginner so kindly excuse me of any mistake in the question.
How do I get the data of a website as shown in INSPECT ELEMENT and not in VIEW PAGE SOURCE?
0
0
1
3,156
42,759,735
2017-03-13T09:00:00.000
1
0
0
0
python,soap,wsdl,zeep
42,760,121
1
true
1
0
No, this is a bug in the WSDL file. It defines an element with type "tns:string"; I assume they meant "xsd:string". See the following line in the WSDL:
1
0
0
I'm trying to use ZEEP v1.2.0 to connect to some service and ran into this issue. I just execute: python -mzeep http://fulfill.sfcservice.com/default/svc/wsdl Result: zeep.exceptions.LookupError: No type 'string' in namespace http://www.chinafulfill.com/CffSvc/. Available types are: [...] Am I missing anything here to test this?
ZEEP WSDL LookupError: No type 'string' in namespace
1.2
0
1
996
42,761,359
2017-03-13T10:28:00.000
0
0
0
0
python,tensorflow
42,768,183
1
false
0
0
The graph is available by calling tf.get_default_graph(). You can get it in GraphDef format by doing graph.as_graph_def().
1
0
1
When using tensorflow, the graph is logged in the summary file, which I "abuse" to keep track of the architecture modifications. But that means every time I need to use tensorboard to visualise and view the graph. Is there a way to write out such a graph prototxt in code or export this prototxt from summary file from tensorboard? Thanks for your answer!
export graph prototxt from tensorflow summary
0
0
0
421
42,761,881
2017-03-13T10:56:00.000
0
0
1
0
python,asterisk,voip
42,813,213
2
false
0
0
There seemed to be an issue with permissions on the log file. The log file was set to owner root, not asterisk, and the script was running as asterisk, so it was not able to write the data to the .log file. I got an extra pair of eyes to help me troubleshoot and we figured out that was the deal. So it wasn't astcanary at all! The whole situation was very odd, so I wanted to see if anyone else had ever seen that. So you were close, arheops! At least as far as user access was concerned. I was also running asterisk as root when doing the above commands, so I did sudo -u asterisk and then ran asterisk -r. I feel like this was a stupid question, but it was very confusing and the answer was not very obvious! Thank you for your response! :)
1
0
0
I am setting up a new voip system. The system is an Asterisk backend. I have a python script that verifies customer data when they call into technical support. When I trigger the script in the call IVR menu, I get return 0 and the script does not actually execute. I did: asterisk -rx "core stop now" asterisk -vvvvgc Once I did that, the script ran with no problems. I can go through the whole menu, verify the customer information and transfer the call to tech support extension. But Icinga shows that astcanary is no longer running and is showing as critical on the monitoring. If I restart asterisk/telephony services, astcanary is showing as ok but my script no longer runs. The script once again returns 0 and does not do what it is supposed to. Does anyone have any ideas what this conflict seems to be related about? I have monitored my server for CPU usage but the python script is not idling high usage and barely hits 0.5% when actually running (I'm using htop to watch the processes when doing this) Any thoughts or ideas will be welcomed and looked at! Thanks.
astcanary vs python script in asterisk
0
0
0
212
42,764,539
2017-03-13T13:11:00.000
2
0
0
1
python,tensorflow,sublimetext2,sublimetext3,sublimetext
42,765,548
2
true
0
0
OK, I got it: the problem was that the LD_LIBRARY_PATH variable was missing. I had only exported it in .bashrc. When I add export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} to ~/.profile it works (don't forget to restart). It also works if I start Sublime from a terminal with subl, which passes all the variables.
1
1
1
I wanted to create a new "build tool" for sublime text, so that I can run my python scripts with an anaconda env with tensorflow. On my other machines this works without a problem, but on my ubuntu machine with GPU support I get an error. I think this is due to the missing paths. The path provided in the error message doesn't contain the cuda paths, although I've included them in .bashrc. Update I changed ~/.profile to export the paths. But tensorflow still won't start from sublime. Running my script directly from terminal is no problem. I get ImportError: libcudart.so.8.0: cannot open shared object file: No such file or directory So somehow the GPU stuff (cuda?) can not be found Thanks
How to export PATH for sublime build tool?
1.2
0
0
668
42,766,823
2017-03-13T15:00:00.000
1
0
1
1
python,command-line,subprocess
42,767,271
1
true
0
0
Often, tools you are calling have a -y flag to automatically answer such questions with yes.
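If the tool has no such flag, you can usually feed the answer on stdin instead; here a tiny Python child process stands in for the real tool, so the prompt text is invented:

```python
import subprocess
import sys

# Stand-in for a tool that asks "Overwrite file? [y/n]" -- the real
# command line would go in its place. The key part is input="y\n",
# which answers the prompt as if a user typed y and pressed Enter.
child = [sys.executable, "-c",
         "ans = input('Overwrite file? [y/n] '); "
         "print('overwritten' if ans == 'y' else 'skipped')"]

result = subprocess.run(child, input="y\n", capture_output=True, text=True)
print(result.stdout)
```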
1
0
0
I have a script where I used a few command-line tools are utilised. However I've hit an issue where I am trying to convert two videos into one video (which I can do) however this is meant to be an idle process and when I run this command with subprocess.call() it prompted me with a 'A file with this name already exists, would you like to overwrite it [y/n]?' and now I am stuck on how to emulate a users input of 'y' + Enter. It could be a case of running it as admin (somehow) or using pipes or this Stdout stuff I read about but didn't really understand. How would you guys approach this? What do you think the best technique? Cheers guys, any help is immensely appreciated!
Subprocess emulate user input after command
1.2
0
0
87
42,771,938
2017-03-13T19:31:00.000
0
0
0
0
python,bokeh,holoviews
42,772,141
2
false
0
0
There are some changes in bokeh 0.12.4 which are incompatible with HoloViews 1.6.2. We will be releasing HoloViews 1.7.0 later this month; until then you have the option of downgrading to bokeh 0.12.3 or upgrading to the latest HoloViews dev release with: conda install -c ioam/label/dev holoviews or pip install https://github.com/ioam/holoviews/archive/v1.7dev7.zip
1
0
1
I have tried to run the Holoviews examples from the Holoviews website. I have: bokeh 0.12.4. holoviews 1.6.2 py27_0 conda-forge However, following any of the tutorials I get an error such as the following and am unable to debug: AttributeError: 'Image' object has no attribute 'set'. Is anyone able to guide me as to how to fix this? Cheers Ed
Holoviews: AttributeError: 'Image' object has no attribute 'set'
0
0
0
997
42,774,789
2017-03-13T22:44:00.000
0
0
0
0
python,python-2.7,selenium,request,http-post
42,774,970
1
false
0
0
I don't know whether some library will do this for you, but I think you can simply set up a thread to run something like tcpdump to capture all HTTP packets and store them somewhere while the test is running in the main process. You can start the thread before clicking the buttons and do some analysis on the captured packets after your test to get the ones containing the request you want.
1
0
0
How can you with Python 2.7 and Selenium read the POST or GET request that the driver send when the driver clicks on a button that send some request? And is it also possible to read the response?
Python Selenium, read POST/GET Request when clicking a button
0
0
1
754
42,776,941
2017-03-14T02:53:00.000
0
0
0
0
python,oracle,sqlalchemy
70,789,442
3
false
0
0
Encrypting the password isn't necessarily very useful, since your code will have to contain the means to decrypt it. Usually what you want to do is store the credentials separately from the codebase and have the application read them at runtime. For example*: read them from a file; read them from command line arguments or environment variables (note there are operating system commands that can retrieve these values from a running process, or they may be logged); use a password-less connection mechanism, for example Unix domain sockets, if available; fetch them from a dedicated secrets management system. You may also wish to consider encrypting the connections to the database, so that the password isn't exposed in transit across the network. * I'm not a security engineer: these examples are not exhaustive and may have other vulnerabilities in addition to those mentioned.
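A minimal sketch of the environment-variable option (the variable name and connection details are made up; the fallback default is only there so the demo runs):

```python
import os

# Read the password from the environment instead of hard-coding it.
# "DB_PASSWORD" is a name chosen for this sketch; use whatever your
# deployment actually sets. The fallback is for demonstration only --
# in real code you would raise an error if the variable is missing.
password = os.environ.get("DB_PASSWORD", "example-secret")

# The connection URL is assembled at runtime, so the secret never
# appears in the codebase or version control.
url = "oracle+cx_oracle://app_user:%s@dbhost:1521/?service_name=orcl" % password
```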
1
2
0
I'm working with sqlalchemy and oracle, but I don't want to store the database password directly in the connection string, how to store a encrypted password instead?
How to use encrypted password in connection string of sqlalchemy?
0
1
0
3,865
42,777,197
2017-03-14T03:19:00.000
4
0
0
0
python,computer-science,unix-timestamp
42,777,456
2
true
0
0
You need to account for the years that are leap years. A year is a leap year if: it is evenly divisible by 4; unless it is evenly divisible by 100; or if it's evenly divisible by 400. As a rough estimate for the year, calculating 1970 + 0x7fffffffffffffff // 86400 // (365 + 1/4 - 1/100 + 1/400) gives an answer of 292277026596. I'll leave the derivation of the exact day as an exercise for the reader. I used Python 3 for this calculation which gives real results for integer divides. Adjust accordingly for Python 2.
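The rough estimate above can be written out in Python 3 (no leap-second handling, just the Gregorian average year):

```python
SECONDS_PER_DAY = 86400
# Gregorian average year length: 365 + 1/4 - 1/100 + 1/400 = 365.2425 days
DAYS_PER_YEAR = 365 + 1/4 - 1/100 + 1/400

max_timestamp = 0x7FFFFFFFFFFFFFFF          # 2**63 - 1 seconds
days = max_timestamp // SECONDS_PER_DAY     # whole days since the epoch
year = 1970 + days / DAYS_PER_YEAR

print(int(year))  # 292277026596
```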
1
1
0
I've tried to calculate the date myself using the maximum 64-bit signed integer but always end up with another date that's a few million years different. I've tried using sidereal years and leap years but I always get results that are at least a few million years off. Here is what I've tried thus far: dateA = 1970 + (9223372036854775807/31556926.08) dateB = 1970 + (9223372036854775807/31536000) + (((9223372036854775807/31536000)/4)/365) Both return the wrong answer. Can anyone guide me in the right direction?
How was the date of December 4, 292,277,026,596 calculated for the overflow of the 64 bit signed unix timestamp?
1.2
0
0
913
42,777,430
2017-03-14T03:46:00.000
5
0
1
0
python,autocomplete,spyder
50,535,897
3
false
0
0
Autocomplete was not working for me at all, so I tried Tools -> Reset Spyder to factory defaults and it worked.
2
10
1
Running on Mac Sierra, the autocompletion in Spyder (from Anaconda distribution), seems quite erratic. When used from the Ipython console, works as expected. However, when used from the editor (which is my main way of writing), is erratic. The autocompletion works (i.e. when pressing TAB a little box appears showing options) for some modules, such as pandas or matplotlib. So writing 'pd.' and hitting TAB, gets the box with options as expected. However, this does not happen with many other objects: for example, after defining a dataframe named 'df', typing 'df.' TAB shows nothing. In the Ipython console, 'df.' TAB would show the available procedures for that dataframe, such as groupby, and also its columns, etc.. So the question is threefold. First, is there any particular configuration that should be enabled to get this to work? I don't think so, given some time spent googling, but just wanna make sure. Second, could someone state what is the official word on what works and what doesn't in terms of autocompletion (e.g. what particular modules do work from the editor, and which ones doesn't?). Finally, what are the technical aspects of the differences between the editor and the Ipython console in the performance of the autocompletion with Spyder? I read something about Jedi vs. PsychoPy modules, so got curious (however, please keep in mind that although I have scientific experience, I am relatively new to computation, so please keep it reasonably simple for an educated but not expert person). UPDATE: As a side question, it would be great to know why is the autocompletion better in Rodeo (another IDE). It is more new, has way fewer overall options than Spyder, but the autocompletion works perfectly in the editor.
Why autocompletion options in Spyder 3.1 are not fully working in the Editor?
0.321513
0
0
13,872
42,777,430
2017-03-14T03:46:00.000
5
0
1
0
python,autocomplete,spyder
46,160,256
3
false
0
0
Autocompletion works correctly if there are NO white spaces in the project working directory path.
2
10
1
Running on Mac Sierra, the autocompletion in Spyder (from Anaconda distribution), seems quite erratic. When used from the Ipython console, works as expected. However, when used from the editor (which is my main way of writing), is erratic. The autocompletion works (i.e. when pressing TAB a little box appears showing options) for some modules, such as pandas or matplotlib. So writing 'pd.' and hitting TAB, gets the box with options as expected. However, this does not happen with many other objects: for example, after defining a dataframe named 'df', typing 'df.' TAB shows nothing. In the Ipython console, 'df.' TAB would show the available procedures for that dataframe, such as groupby, and also its columns, etc.. So the question is threefold. First, is there any particular configuration that should be enabled to get this to work? I don't think so, given some time spent googling, but just wanna make sure. Second, could someone state what is the official word on what works and what doesn't in terms of autocompletion (e.g. what particular modules do work from the editor, and which ones doesn't?). Finally, what are the technical aspects of the differences between the editor and the Ipython console in the performance of the autocompletion with Spyder? I read something about Jedi vs. PsychoPy modules, so got curious (however, please keep in mind that although I have scientific experience, I am relatively new to computation, so please keep it reasonably simple for an educated but not expert person). UPDATE: As a side question, it would be great to know why is the autocompletion better in Rodeo (another IDE). It is more new, has way fewer overall options than Spyder, but the autocompletion works perfectly in the editor.
Why autocompletion options in Spyder 3.1 are not fully working in the Editor?
0.321513
0
0
13,872
42,781,136
2017-03-14T08:34:00.000
1
0
0
0
python,mysql,django
42,781,404
2
false
1
0
No, you do not need to declare the Django model with all the fields from the database.
1
2
0
If a database table contains 100 fields, and a django application utilises only a few fields say 1 or 2, does the corresponding django model needs to be declared with 100 fields?
Django model with number of fields less than corresponding database table
0.099668
0
0
252
42,783,876
2017-03-14T10:47:00.000
0
0
1
0
python,position,kivy
42,783,947
1
false
0
1
I don't think you can do that. You can embed the TextInput in another layout, position that layout and make the TextInput as big as the parent.
1
0
0
I'm trying to find a way to change Textinput text box position and vertical size, but I can't find an example... anyone knows? I need to change the position of the text box from the automatic position to my own. Is there a position key like the text, size, height? And if so, how the cordinates are written?
python kivy textinput text box position
0
0
0
471
42,785,026
2017-03-14T11:41:00.000
30
0
0
0
python,tensorflow
42,932,979
6
true
0
0
For convolution, they are the same. More precisely, tf.layers.conv2d (actually _Conv) uses tf.nn.convolution as the backend. You can follow the calling chain of: tf.layers.conv2d > Conv2D > Conv2D.apply() > _Conv > _Conv.apply() > _Layer.apply() > _Layer.__call__() > _Conv.call() > nn.convolution() ...
2
66
1
Is there any advantage in using tf.nn.* over tf.layers.*? Most of the examples in the doc use tf.nn.conv2d, for instance, but it is not clear why they do so.
tf.nn.conv2d vs tf.layers.conv2d
1.2
0
0
35,607
42,785,026
2017-03-14T11:41:00.000
7
0
0
0
python,tensorflow
53,683,545
6
false
0
0
All of the other replies talk about how the parameters are different, but actually the main difference between tf.nn.conv2d and tf.layers.conv2d is that for tf.nn you need to create your own filter tensor and pass it in. This filter needs to have the shape: [kernel_height, kernel_width, in_channels, num_filters]. Essentially, tf.nn is lower level than tf.layers. Unfortunately, this answer is not applicable anymore, as tf.layers is obsolete.
2
66
1
Is there any advantage in using tf.nn.* over tf.layers.*? Most of the examples in the doc use tf.nn.conv2d, for instance, but it is not clear why they do so.
tf.nn.conv2d vs tf.layers.conv2d
1
0
0
35,607
42,787,327
2017-03-14T13:27:00.000
1
0
0
0
json,python-2.7,boto3,aws-cli,prettytable
42,952,740
1
false
0
0
Python Boto3 does not return the data in a tabular format. You will need to parse the data and use another Python library to output it in tabular format. PrettyTable works well for me; read the PrettyTable library docs and debug your code.
1
0
0
in aws cli we can set output format as json or table. Now I can get json output from json.dumps is there anyway could achieve output in table format? I tried pretty table but no success
Is it possible to get Boto3 | python output in tabular format
0.197375
1
1
980
42,787,560
2017-03-14T13:38:00.000
0
1
0
1
python,version-control,raspberry-pi
42,787,653
3
false
0
0
Following a couple of bad experiences where I lost code which was only on my Pi's SD card, I now run WinSCP on my laptop and edit files from the Pi there; they open in Notepad++ and WinSCP automatically saves edits back to the Pi. I can also use WinSCP's folder sync feature to copy the contents of the SD card folder to my laptop. Not perfect, but better than what I was doing before.
2
0
0
I am writing a web python application with tornado framework on a raspberry pi. What i actually do is to connect to my raspberry with ssh. I am writing my source code with vi, on the raspberry. What i want to do is to write source code on my development computer but i do not know how to synchronize (transfer) this source code to raspberry. It is possible to do that with ftp for example but i will have to do something manual. I am looking for a system where i can press F5 on my IDE and this IDE will transfer modified source files. Do you know how can i do that ? Thanks
Synchronize python files between my development computer and my raspberry
0
0
0
334
42,787,560
2017-03-14T13:38:00.000
0
1
0
1
python,version-control,raspberry-pi
54,502,688
3
false
0
0
I have done this before using bitbucket as a standard repository and it is not too bad. If you set up cron scripts to git pull it's almost like continuous integration.
2
0
0
I am writing a web python application with tornado framework on a raspberry pi. What i actually do is to connect to my raspberry with ssh. I am writing my source code with vi, on the raspberry. What i want to do is to write source code on my development computer but i do not know how to synchronize (transfer) this source code to raspberry. It is possible to do that with ftp for example but i will have to do something manual. I am looking for a system where i can press F5 on my IDE and this IDE will transfer modified source files. Do you know how can i do that ? Thanks
Synchronize python files between my development computer and my raspberry
0
0
0
334
42,787,682
2017-03-14T13:43:00.000
1
0
1
0
python,python-2.7
42,787,765
4
false
0
0
Find and replace "str" with "sensibleNameForYourVariable", then use str(i) to convert integers to strings.
1
0
0
I have used str as a variable. Now, I would like to convert an int into a string. For this, normally I would use str(10). What should I do in this case?
Reinititalize 'str' as a type
0.049958
0
0
47
42,788,383
2017-03-14T14:14:00.000
2
0
1
0
python,anaconda,conda
68,699,677
3
false
0
0
Just go to Preferences in Spyder, then go to Python interpreter -> "Use the following Python interpreter". Here, from the browse files option, give the path to your python2.exe file and then apply. Your python2 doesn't have the spyder-kernels module required to open a console in Spyder, so install it by running the command python2 -m pip install spyder-kernels in cmd. Here python2 -m is used because I have two versions of Python installed.
3
9
0
From what I have learnt in the documentation it states that you can easily switch between 2 python environments by just creating a new variable using command prompt "conda create -n python34 python=3.4 anaconda" if i already have python 2.7 installed. An environment named python 34 is created and we can activate it using "activate python 34" But all this happens like executing the above commands happens in my windows command prompt. I want to switch between python versions in spyder IDE, How to do this?
Switch between spyder for python 2 and 3
0.132549
0
0
15,206
42,788,383
2017-03-14T14:14:00.000
9
0
1
0
python,anaconda,conda
42,788,793
3
true
0
0
Spyder is launched from the environment that you're using. So if you want to use python 3 in Spyder then you activate python34 (or whatever you named the environment with Python 3) then run spyder. If you want to use python 2 in Spyder then you deactivate the python3 environment (or activate an environment in which you installed Python 2) then run spyder. I do not believe that you can change environments once Spyder is launched. N.B. you may need to install Spyder in each environment, depending on your set up, by first activating the environment then using conda install spyder.
3
9
0
From what I have learnt in the documentation it states that you can easily switch between 2 python environments by just creating a new variable using command prompt "conda create -n python34 python=3.4 anaconda" if i already have python 2.7 installed. An environment named python 34 is created and we can activate it using "activate python 34" But all this happens like executing the above commands happens in my windows command prompt. I want to switch between python versions in spyder IDE, How to do this?
Switch between spyder for python 2 and 3
1.2
0
0
15,206
42,788,383
2017-03-14T14:14:00.000
1
0
1
0
python,anaconda,conda
51,184,432
3
false
0
0
Just go to the directory where you have installed Spyder(use cd in command prompt), for me, it looks like "C:\Users\Rohan\Anaconda2" and type spyder in cmd. it will run your Spyder IDE.
3
9
0
From what I have learnt in the documentation it states that you can easily switch between 2 python environments by just creating a new variable using command prompt "conda create -n python34 python=3.4 anaconda" if i already have python 2.7 installed. An environment named python 34 is created and we can activate it using "activate python 34" But all this happens like executing the above commands happens in my windows command prompt. I want to switch between python versions in spyder IDE, How to do this?
Switch between spyder for python 2 and 3
0.066568
0
0
15,206
42,788,839
2017-03-14T14:34:00.000
0
0
0
0
python,libreoffice,dde
42,806,325
2
false
0
0
The API does not provide a method to suppress the prompt upon opening the file. I've tried running StarBasic code to update DDE links on the "document open" event, but the question keeps popping up. So I guess you're out of luck: you have to answer "Yes" if you want the actual values. [posted my comment on the OP's question here again as an answer, as suggested by @Jim K]
1
0
0
I have an .ods file that contains many links that must be updated automatically. As I understand there is no easy way to do this with macros or libreoffice command arguments, so I am trying to make all links update upon opening the file and then will save the file and exit. All links are DDE links which should be able to update automatically (and are set to do so in Edit > Links), and I have also enabled this in Tools > Options > Calc > General > Always Update Links When Opening, as well as Tools > Options > Calc > Formulas > Always Recalculate. However, I am still being prompted with a popup to manually update links upon opening, and links will not be up to date if I do not select Update. I need these DDE links to update automatically, why isn't this working? If there is no solution there, I am also willing to try to update links via Python. Will Uno work with libreoffice to do this without ruining any preexisting graphs in the file like openpyxl does?
Libreoffice - update links automatically upon opening?
0
1
0
986
42,791,422
2017-03-14T16:27:00.000
0
1
0
1
python,linux,raspberry-pi3
42,820,278
1
false
0
0
These days the Tornado website has some problems, so I downloaded the tar.gz file from another website and installed it from there. Also, instead of running the script with the "python" command, use "python3".
1
0
0
I'm using a Raspberry Pi 3 to communicate with an Android app through a WebSocket. I installed Tornado on my Raspberry Pi and the installation was successful. It works fine under Python 2.7, but I need to use it with Python 3, and there, just writing "import tornado" gives me an ImportError: No module named "tornado". It is as if it is installed for Python 2 but not for Python 3. Both Python 2 and 3 come preinstalled on the Raspberry Pi. Can somebody help me? Thanks in advance. Sorry for my bad English.
Error import tornado in python 3
0
0
0
299
42,795,425
2017-03-14T19:58:00.000
0
0
1
0
python,spyder
42,795,506
1
false
0
0
The IDLE (or Spyder console) is a Python interpreter, and it is giving you a Python error: SyntaxError: invalid syntax. You are already running Python, so entering python file.py img1.bmp 4 there would not make any sense; you would need to run that in a command prompt instead. If you want to work inside the Python interpreter, you could import your file and call its functions as needed.
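As a sketch of the distinction: a script meant to be run as python file.py img1.bmp 4 typically reads its inputs from sys.argv, while inside an interpreter you import it and call the function directly. The function name and what it does are hypothetical stand-ins for whatever file.py actually contains:

```python
import sys

def process(image_path, factor):
    # Placeholder for whatever file.py really does with its two inputs.
    return "processing {} with factor {}".format(image_path, factor)

# Command-line use: python file.py img1.bmp 4
# The length check keeps an import (or a bare run) from crashing on missing args.
if __name__ == "__main__" and len(sys.argv) >= 3:
    print(process(sys.argv[1], int(sys.argv[2])))
```

From the Spyder console you would instead do: from file import process, then process("img1.bmp", 4).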
1
0
0
I've been trying to run some code in Spyder using its console, but I keep getting an error such as SyntaxError: invalid syntax. The file requires two inputs, for example an image and an integer (i.e. python file.py img1.bmp 4). How do I run it from Spyder?
Spyder 3 for Python running code
0
0
0
143
42,798,315
2017-03-14T23:18:00.000
0
0
0
0
python,django,datetime,django-models
42,798,413
2
false
1
0
I think a CharField is better if you just want to represent the (possibly partial) date as text.
1
0
0
I am looking to have a model that can accept a partially completed date as one of its field. The field is supposed to represent the date that a historical event happened. Some of these events are only known to the year or month and not to the day. Is there any graceful way to handle this?
Python Django Datetime Field that doesn't need all the information
0
0
0
19
42,801,569
2017-03-15T05:13:00.000
1
0
0
0
javascript,python,ssl,x509certificate
42,801,616
1
false
1
0
Django is a good option for creating applications with Python. You can start an application, embed your code in a template, and write a view to handle requests and responses.
1
0
0
What I have done: 1- Created a web form using HTML and javascript to create a SSL certificate that can create dynamic certificates. 2- Successfully parsed through an existing certificate and passed the required values to the web form. 3- I am using the HTML+javascript inside the python script itself and appending the parsed certificate values to the javascript before displaying it. What I need to do: 1-Take values from the web form, assign those to particular variables and pass those variables to a python script, that can create a CSR using those and sign it using a dummy key. So, basically, I want to call a python script on a click of a button that can take web form values and create a certificate. P.S. PHP isn't an option for me, as the server I am working on doesn't support it. Can someone guide me in the right direction as for how to proceed? Any examples or study material? Or should I start working with Flask?
Using javascript to pass web forms values to python script
0.197375
0
0
134
42,802,024
2017-03-15T05:53:00.000
1
1
0
0
python,sftp,paramiko
42,807,503
2
true
0
0
SFTPClient.open(mode='a') opens a file in append mode. So first call SFTPClient.stat() to get the current size of the file on the remote side, then open it with mode='a' and append the new data from that offset.
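A minimal sketch of that idea, assuming an already-connected Paramiko SFTPClient and a server that honours append mode (the helper name resume_upload and the chunk size are my own choices, not Paramiko API):

```python
import os

def resume_upload(sftp, local_path, remote_path):
    """Append only the bytes the remote copy is still missing."""
    try:
        offset = sftp.stat(remote_path).st_size  # bytes already uploaded
    except IOError:
        offset = 0  # remote file does not exist yet, start from scratch
    with open(local_path, "rb") as src:
        src.seek(offset)  # skip what was transferred before the interruption
        with sftp.open(remote_path, "a") as dst:
            while True:
                chunk = src.read(32768)
                if not chunk:
                    break
                dst.write(chunk)
    return os.path.getsize(local_path) - offset  # bytes sent this run
```

If the connection drops, just call the function again: stat() reports the new remote size and the upload continues from there.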
1
0
0
I need to implement an upload function that can resume from the point of the last interruption via SFTP. I'm trying Paramiko, but I cannot find any example of this. Can anybody give me some advice? Best regards
Can paramiko resume an upload from the point of the last interruption
1.2
0
1
41
42,804,006
2017-03-15T08:00:00.000
0
0
0
0
javascript,python,node.js
42,963,477
2
false
0
0
I tried to encode the image and send it, but it did not work. So I used socket programming instead, and it worked wonderfully.
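For reference, the stdin approach the question asks about can also work, as long as both sides treat the data as raw bytes: on the Python side that means reading from sys.stdin.buffer (the binary layer), not sys.stdin. A minimal sketch, with the stream parameter added only so the function can be exercised without a real pipe:

```python
import io
import sys

def read_image_bytes(stream=None):
    """Read the entire binary payload piped in on standard input."""
    if stream is None:
        stream = sys.stdin.buffer  # binary stdin, not the text wrapper
    return stream.read()

# Quick self-check with an in-memory stream standing in for stdin:
fake_stdin = io.BytesIO(b"\x89fake-image-bytes")
assert read_image_bytes(fake_stdin) == b"\x89fake-image-bytes"
```

On the Node.js side the matching pair would be child_process.spawn plus child.stdin.write(buffer) and child.stdin.end(), writing the Buffer without any string conversion.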
1
1
0
I am trying to send an image from node js script to python script using python-shell. From what I know, I should use binary format. I know that in python side I can use this 2 functions: import sys sys.stdout.write() and sys.stdin.read() But I am not sure how the node js side gonna be? (Which functions can I use and how can I use them?)
Sending Images from nodejs to Python script via standard input/output
0
0
1
631
42,805,374
2017-03-15T09:12:00.000
4
0
1
0
python
42,805,461
2
false
0
0
The None result from ex_dict.get('test', 0) is correct, because the "test" key exists and its value is None; the default is only used when the key is missing. For instance, if you try the same with ex_dict.get("non_existing_key", 0), it returns 0.
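The behaviour is easy to verify, and if you also want a stored None coerced to 0 you need one extra step (note the or trick below would also replace other falsy stored values such as 0 or ""):

```python
ex_dict = {"test": None}

# The key exists, so get() returns its stored value, which is None.
assert ex_dict.get("test", 0) is None

# The key is absent, so the default is used.
assert ex_dict.get("missing", 0) == 0

# To treat a stored None as missing too:
value = ex_dict.get("test") or 0
assert value == 0
```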
1
6
0
In a Python dict, if a key has the value None, calling get() with a default still returns None: ex_dict = {"test" : None}; ex_dict.get('test', 0). In the above example I expected it to return 0, but it doesn't. Can anyone explain why it behaves like that?
Dict: get() not returning 0 if dict value contains None
0.379949
0
0
3,906
42,806,319
2017-03-15T09:56:00.000
0
0
0
0
python,django
42,808,397
4
false
1
0
Look at the django_migrations table in the database. Find the last applied migration recorded there and compare it with the next migration file. If everything is okay in your database, you can also insert the missing migration row there manually.
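Django records applied migrations in the django_migrations table (columns id, app, name, applied). A quick way to inspect it from Python rather than a DB shell, demonstrated here against a throwaway in-memory SQLite database with the same layout (the app and migration names are made up):

```python
import sqlite3

def last_applied(conn, app):
    """Return the most recently applied migration name for an app."""
    row = conn.execute(
        "SELECT name FROM django_migrations "
        "WHERE app = ? ORDER BY id DESC LIMIT 1",
        (app,),
    ).fetchone()
    return row[0] if row else None

# Demo: recreate the table's shape and two applied migrations.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE django_migrations "
             "(id INTEGER PRIMARY KEY, app TEXT, name TEXT, applied TEXT)")
conn.executemany(
    "INSERT INTO django_migrations (app, name, applied) VALUES (?, ?, ?)",
    [("entrances", "0001_initial", "2017-03-01"),
     ("entrances", "0002_add_fk", "2017-03-15")],
)
print(last_applied(conn, "entrances"))  # -> 0002_add_fk
```

Against a real project you would connect to the project's database instead; comparing this name with the files in the app's migrations/ directory shows where the mismatch starts.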
2
0
0
I've been trying to add a foreign key to my models in Django 1.9 with the option on_delete='DO_NOTHING' per instructions on Django Docs, but for version 1.10. I ran python manage.py makemigrations without any problems but when I tried to run python manage.py migrate of course I got the error: django.core.exceptions.FieldDoesNotExist: Entrance has no field named u'DO_NOTHING' Realizing my mistake, I changed the option to on_delete=models.DO_NOTHING and ran makemigrations and migrate again but I'm still getting the same error: django.core.exceptions.FieldDoesNotExist: Entrance has no field named u'DO_NOTHING' Looks like something is wrong in migration files. Not too familiar with internal workings of Django so I don't know where to look to fix this. Any ideas?
Django migration error: has no field named u'DO_NOTHING'
0
0
0
1,411