Dataset columns (dtype and value range; string columns show min/max lengths):

Column                               Dtype     Min        Max
Q_Id                                 int64     337        49.3M
CreationDate                         string    len 23     len 23
Users Score                          int64     -42        1.15k
Other                                int64     0          1
Python Basics and Environment        int64     0          1
System Administration and DevOps     int64     0          1
Tags                                 string    len 6      len 105
A_Id                                 int64     518        72.5M
AnswerCount                          int64     1          64
is_accepted                          bool      2 classes
Web Development                      int64     0          1
GUI and Desktop Applications         int64     0          1
Answer                               string    len 6      len 11.6k
Available Count                      int64     1          31
Q_Score                              int64     0          6.79k
Data Science and Machine Learning    int64     0          1
Question                             string    len 15     len 29k
Title                                string    len 11     len 150
Score                                float64   -1         1.2
Database and SQL                     int64     0          1
Networking and APIs                  int64     0          1
ViewCount                            int64     8          6.81M
28,232,551
2015-01-30T09:13:00.000
5
0
0
0
python-3.x,scikit-learn,random-forest
28,232,764
1
false
0
0
The RandomForestClassifier introduces randomness externally (relative to the individual tree fitting) via bagging, just as BaggingClassifier does. However, it also injects randomness deep inside the tree construction procedure by sub-sampling the list of features that are candidates for splitting: a new random set of features is considered at each split. This randomness is controlled via the max_features parameter of RandomForestClassifier, which has no equivalent in BaggingClassifier(base_estimator=DecisionTreeClassifier()).
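A minimal sketch of the distinction (hedged: the base_estimator spelling matches the scikit-learn of this question's era; newer releases renamed it to estimator):

from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Bagging over full trees: randomness comes from bootstrap samples only.
bagging = BaggingClassifier(base_estimator=DecisionTreeClassifier(),
                            n_estimators=100)

# Random forest: bootstrap samples PLUS a fresh random feature subset per split.
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt")

# Giving the base tree its own max_features re-creates the per-split feature
# sub-sampling, which is what brings plain bagging close to a random forest.
bagged_random_trees = BaggingClassifier(
    base_estimator=DecisionTreeClassifier(max_features="sqrt"),
    n_estimators=100)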
1
3
1
How does using a BaggingClassifier with base_estimator=RandomForestClassifier differ from a RandomForestClassifier in sklearn?
RandomForestClassifier differ from BaggingClassifier
0.761594
0
0
668
28,233,768
2015-01-30T10:24:00.000
0
0
0
0
wpf,xaml,textbox,ironpython
53,803,083
3
false
0
1
The following snippet might help you.

XAML:

<Label Content="" HorizontalAlignment="Left" Margin="62,163,0,0" VerticalAlignment="Top" Height="472" Width="880" Foreground="Black" FontSize="18" Background="White" x:Name="label1"/>

IronPython:

def Button_Click1(self, sender, e):
    self.label1 = self.FindName('label1')
    self.label1.Content = "Test Succeeded"
1
0
0
I have an application in IronPython where I load my XAML with WPF: "wpf.LoadComponent(....xaml)". I have a Button and a TextBox in my app, and when I push the button the app starts a 2-minute job. I need to update the TextBox during this work, but I can't do it: my TextBox only updates when the 2-minute job finishes. Any help? Thank you
How to update TextBox using wpf, IronPython
0
0
0
2,205
28,234,634
2015-01-30T11:11:00.000
0
0
0
0
python,html
42,434,749
1
false
1
0
While reading, assign the text to a variable and then decode it: if your text is stored in the variable Var, use Var.decode("utf-8").
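A hedged sketch of the substitution the asker describes (the mapping and variable names are invented; this is Python 2, where the scraped bytes must be decoded to unicode first, as the answer says):

# -*- coding: utf-8 -*-
raw = "Gro\xc3\x9f und B\xc3\xa4r"   # UTF-8 bytes as scraped (Python 2 str)
text = raw.decode("utf-8")            # now a unicode string

replacements = {
    u"\xdf": u"ss",  # ß
    u"\xe4": u"ae",  # ä
    u"\xf6": u"oe",  # ö
    u"\xfc": u"ue",  # ü
}
for char, ascii_equiv in replacements.items():
    text = text.replace(char, ascii_equiv)

print(text)  # Gross und Baer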
1
0
0
Dear friendly Python experts, I am using BeautifulSoup to scrape some HTML text from a site. This site contains German words, such as "Groß" or "Bär". When I print the HTML text these characters get translated quite nastily, making it too hard to search the HTML text for the words. How can I replace ß with ss, ä with ae, ü with ue, and ö with oe in the HTML text? I have looked everywhere for a solution to this, but it got me nowhere except confusion land. As this is a project, help is very much appreciated!
Python - BeautifulSoup - German characters in html
0
0
0
580
28,238,144
2015-01-30T14:35:00.000
4
0
0
0
python,django,postgresql,heroku,django-queryset
28,395,905
1
true
1
0
I realized that I was using the Django development server in my Procfile: I had accidentally commented out gunicorn and committed that to Heroku. Once I switched back to gunicorn on the same Heroku plan, the issue was resolved. Using a production-level application server really makes a big difference. Also, don't code at crazy hours of the day when you're prone to errors.
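For reference, the Procfile line for such a setup typically looks like this (the project name is hypothetical):

web: gunicorn myproject.wsgi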
1
2
0
Recently I've been receiving this error regarding what appears to be an insufficiency in connection slots along with many of these Heroku errors: H18 - Request Interrupted H19 - Backend connection timeout H13 - Connection closed without response H12 - Request timeout Error django.db.utils.OperationalError in / FATAL: remaining connection slots are reserved for non-replication superuser connections Current Application setup: Django 1.7.4 Postgres Heroku (2x 2 dynos, Standard-2) 5ms response time, 13rpm Throughput Are there general good practices for where one should or should not perform querysets in a Django application, or when to close a database connection? I've never experienced this error before. I have increased my dynos on heroku and allocated significantly more RAM and I am still experiencing the same issue. I've found similar questions on Stack Overflow but I haven't been able to figure out what might be causing the issue exactly. I have querysets in Model methods, views, decorator views, context processors. My first inclination would be that there is an inefficient queryset being performed somewhere causing connections to remain open that eventually crashes the application with enough people accessing the website. Any help is appreciated. Thanks.
Django/Postgres: FATAL: remaining connection slots are reserved for non-replication superuser connections
1.2
1
0
3,434
28,238,830
2015-01-30T15:11:00.000
0
0
0
0
python,excel,csv
28,238,935
1
false
0
0
Try importing it as a CSV file instead of opening it directly in Excel.
1
1
1
I've written a python/webdriver script that scrapes a table online, dumps it into a list and then exports it to a CSV. It does this daily. When I open the CSV in Excel, it is unformatted, and there are fifteen (comma-delimited) columns of data in each row of column A. Of course, I then run 'Text to Columns' and get everything in order. It looks and works great. But tomorrow, when I run the script and open the CSV, I've got to reformat it. Here is my question: "How can I open this CSV file with the data already spread across the columns in Excel?"
Retain Excel Settings When Adding New CSV
0
1
0
24
28,246,938
2015-01-31T00:48:00.000
3
0
1
0
python,json,function,python-2.7,dictionary
28,246,986
2
true
0
0
Yes: since Python evaluates left to right (like English), the functions will be called left to right. It's slightly shorter; not super Pythonic, but not awful either.
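A quick sketch of that left-to-right guarantee in the asker's own snippet (Python 2, matching the question's raw_input):

import getpass
import json

# Dict displays evaluate their key/value expressions left to right, so the
# username prompt is guaranteed to appear before the password prompt.
payload = json.dumps({
    'username': raw_input('Username: '),
    'password': getpass.getpass(),
})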
1
0
0
I'm creating a python application that requires a user to log into a web service. The credentials are passed to the server in a POST request as a json dictionary. In order to avoid unnecessarily saving the password in a variable I figured I would do something like the following: json.dumps({'username': raw_input('Username: '), 'password': getpass.getpass()}) Is there any guarantee that raw_input will get called before getpass? Is there actually any benefit in this method over saving the output of the functions first and then creating the dictionary from the variables? Is this a 'pythonic' way of doing things? My guess is no since there is a lot of stuff crammed into a single line and it's arguably not very readable.
Using the return value of a function as the value in a dictionary in python
1.2
0
0
50
28,246,973
2015-01-31T00:52:00.000
6
1
0
0
python,zeromq,pyzmq
28,255,012
1
false
0
0
What is causing the problem? A default setup of the ZMQ IO-thread, which is responsible for the mode of operations. I would hesitate to call it a problem, the more so if you invest your time and dive deeper into the excellent ZMQ concept and architecture.

Since early versions of the ZMQ library, there have been some important parameters that help the central masterpiece (the IO-thread) keep the grounds both stable and scalable, thus giving you this powerful framework. Zero SHARING / Zero COPY / (almost) Zero LATENCY are maxims that do not come at zero cost. The ZMQ.Context instance has quite a rich internal parametrisation that can be modified via API methods.

Let me quote from a marvelous and precious source -- Pieter HINTJENS' book, Code Connected, Volume 1. (It is definitely worth spending the time to step through the PDF copy. The C-language code snippets do not hurt anyone's pythonic state of mind, as the key messages are in the text and stories that Pieter has crafted into his 300+ thrilling pages.)

High-Water Marks: When you can send messages rapidly from process to process, you soon discover that memory is a precious resource, and one that can be trivially filled up. A few seconds of delay somewhere in a process can turn into a backlog that blows up a server unless you understand the problem and take precautions. ... ØMQ uses the concept of HWM (high-water mark) to define the capacity of its internal pipes. Each connection out of a socket or into a socket has its own pipe, and an HWM for sending and/or receiving, depending on the socket type. Some sockets (PUB, PUSH) only have send buffers. Some (SUB, PULL, REQ, REP) only have receive buffers. Some (DEALER, ROUTER, PAIR) have both send and receive buffers.

In ØMQ v2.x, the HWM was infinite by default. This was easy but also typically fatal for high-volume publishers. In ØMQ v3.x, it's set to 1,000 by default, which is more sensible. If you're still using ØMQ v2.x, you should always set an HWM on your sockets, be it 1,000 to match ØMQ v3.x or another figure that takes into account your message sizes and expected subscriber performance.

When your socket reaches its HWM, it will either block or drop data depending on the socket type. PUB and ROUTER sockets will drop data if they reach their HWM, while other socket types will block. Over the inproc transport, the sender and receiver share the same buffers, so the real HWM is the sum of the HWM set by both sides.

Lastly, the HWMs are not exact; while you may get up to 1,000 messages by default, the real buffer size may be much lower (as little as half), due to the way libzmq implements its queues.
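In pyzmq terms, raising the high-water marks looks roughly like the following sketch (SNDHWM/RCVHWM are the ZMQ v3.x option names; the endpoint is illustrative):

import zmq

ctx = zmq.Context()
dealer = ctx.socket(zmq.DEALER)
# Allow more messages to queue before the HWM behaviour (block or drop,
# depending on socket type) kicks in; set this before connect().
dealer.setsockopt(zmq.SNDHWM, 20000)
dealer.connect("tcp://127.0.0.1:5555")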
1
4
0
I am sending 20,000 messages from a DEALER to a ROUTER using pyzmq. When I pause 0.0001 seconds between messages they all arrive, but if I send them 10x faster, pausing 0.00001 seconds per message, only around half of the messages arrive. What is causing the problem?
ZMQ DEALER ROUTER loses message at high frequency?
1
0
1
1,622
28,253,855
2015-01-31T16:29:00.000
0
0
0
0
python,flask
28,257,714
1
false
1
0
You need to configure the firewall on your server/workstation to allow connections on port 5000. Setting the host to 0.0.0.0 allows outside connections to your machine, but only if you have the port open. Also, you will need to connect via the IP of your machine and not localhost, since localhost will only work from the machine where the server is running.
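A minimal sketch of the Flask side (the firewall still has to allow port 5000):

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return "hello world"

if __name__ == '__main__':
    # 0.0.0.0 listens on all interfaces, so other machines can connect
    # via this machine's IP address.
    app.run(host='0.0.0.0', port=5000)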
1
0
0
I have an Apache server set up on a Pi, and I'm trying to learn Flask. I set it up so that the 'view' from the index '/' returns "hello world". Then I ran my main program. Nothing happens from the browser on the PC I'm SSH'ing from; I just get an error saying , but when I used the Pi directly and went to http://localhost:5000/ I got a response. I read about setting host to '0.0.0.0' but that didn't help. How can I get Flask to accept all connections? Does it make a difference that I have an 'index.html' in '/'?
Flask isn't recognising connections from other clients
0
0
0
48
28,256,891
2015-01-31T21:47:00.000
6
0
1
1
python,ubuntu-13.10
47,263,684
7
false
0
0
It's also possible that you may not have run sudo apt-get update. It worked for me.
1
9
0
When I try the given command below: "sudo apt-get install virtualenv", the error given in the Ubuntu shell is: E: Unable to locate package virtualenv
Unable to locate package virtualenv in ubuntu-13 on a virtual-machine
1
0
0
24,270
28,258,468
2015-02-01T01:23:00.000
1
0
0
0
python,image-processing,hash
30,613,008
1
false
0
0
I found a couple of ways to do this. I ended up using a Mean Squared Error function that I wrote myself:

import numpy as np

def mse(reference, query):
    return ((reference.astype("double") - query.astype("double")) ** 2).mean()

Upon later tinkering, I found a function that seemed to do something similar (compare image similarity, bit by bit), but a good amount faster:

def linalg_norm(reference, query):
    return np.linalg.norm(reference - query)

I have no theoretical knowledge of what the second function does; practically, however, it doesn't matter. I am not averse to learning how it works.
1
2
1
I've been trying to write on a fast (ish) image matching program which doesn't match rotated or scale deformed image, in Python. The goal is to be able to find small sections of an image that are similar to other images in color features, but dissimilar if rotated or warped. I found out about perceptual image hashing, and I've had a look at the ImageHash module for Python and SSIM, however most of the things I've looked at do not have in color as a major factor, ie they average the color and only work in one channel, and phash in particular doesn't care if images are rotated. I would like to be able to have an algorithm which would match images which at a distance would appear the same (but which would not necessarily need to be the same image). Can anyone suggest how I would structure and write such an algorithm in python? or suggest a function which would be able to compare images in this manner?
Color Perceptual Image Hashing
0.197375
0
0
852
28,259,109
2015-02-01T03:23:00.000
0
0
1
0
python,python-2.7,pip,six
37,877,339
3
false
0
0
I ran into the same thing; the cause was a leftover six.pyo / six.pyc in my PYTHONPATH directory, which was imported instead of the installed version.
2
0
0
Got this error when importing matplotlib.pyplot. But I have checked the version of six installed using pip list, and it returns version 1.9.0. And when I checked six.__version__, it returns 1.2.0. Could any one help me?
Python2.7: ImportError: six 1.3 or later is required; you have 1.2.0
0
0
0
4,187
28,259,109
2015-02-01T03:23:00.000
0
0
1
0
python,python-2.7,pip,six
32,384,602
3
true
0
0
There is probably a bug somewhere, but a quick and dirty workaround was the following: pip install six==1.8.0
2
0
0
Got this error when importing matplotlib.pyplot. But I have checked the version of six installed using pip list, and it returns version 1.9.0. And when I checked six.__version__, it returns 1.2.0. Could any one help me?
Python2.7: ImportError: six 1.3 or later is required; you have 1.2.0
1.2
0
0
4,187
28,260,051
2015-02-01T06:14:00.000
1
0
0
0
python,graph,networkx
28,291,575
1
false
0
0
No. Sorry. In principle it could be possible to create a GUI which interfaces with networkx (and maybe some people have), but it's not built directly into networkx.
1
0
0
Suppose I have to create a graph with 15 nodes and certain links. Instead of feeding in the nodes via code, can I draw the nodes and links with the mouse on a figure? Is there any way to do this interactively?
Python : Is there a way to interactively draw a graph in NetworkX?
0.197375
0
1
105
28,261,224
2015-02-01T09:18:00.000
1
0
1
0
python,python-imaging-library,homebrew,macports,pillow
56,184,178
3
false
0
0
1. Pillow and PIL cannot co-exist in the same environment. Before installing Pillow, please uninstall PIL.
2. Pillow >= 1.0 no longer supports "import Image". Please use "from PIL import Image" instead, so be careful with it.
3. Pillow >= 2.1.0 no longer supports "import _imaging". Please use "from PIL.Image import core as _imaging" instead.
1
5
0
I tried to use Python's Image module on my Mac (new to Mac) and I had trouble setting it up. I have Yosemite, and some of the posts that I found online didn't work. I ended up installing Homebrew and I finally got PIL on my computer. However, instead of using import Image (which I saw people doing online), I have to use from PIL import Image. Is there a key difference between import Image and from PIL import Image? It's the first time I've actually used the Image module. One more question: do I actually need to install third-party tools like Homebrew and MacPorts to set up my environment?
Difference between from PIL import Image and import Image
0.066568
0
0
4,509
28,263,798
2015-02-01T14:30:00.000
0
0
1
0
python,passwords
28,263,882
2
false
0
0
If you really can't do imports, then do this: go into your Python installation directory, find the getpass module, open its source code, find the getpass function/class, and copy-paste it into your code. Done.
1
1
0
I need help with a password issue. I'm trying to create a simple password program, but the problem I am having is that I cannot replace the input with '*'s without importing anything. Does anyone have any solutions?
Python. How to replace input with asterisks
0
0
0
4,262
28,266,847
2015-02-01T19:28:00.000
1
0
0
0
dealloc,python-c-api
28,297,671
1
true
0
1
I believe the problem here was one of reference counting. PyType_Ready() fills various tp_* fields depending on the bases of your type. One of these is tp_alloc, which I had set to 0; its doc says the refcount is set to 1 and the memory block is zeroed. For every instance Python creates of this type, a new PyObject gets added to the appropriate Python dictionary. If it is a module-level variable, this is the module's dictionary. When the dictionary is destroyed, it DECREFs the contained objects; once the refcount reaches 0, tp_dealloc gets run. It appears that in my code I was performing an extra INCREF somewhere, so the object was never getting garbage collected. It seems that (unless you compile with a specific flag) Python has no linked list that would allow it to track all of its objects, so we can't assume that Py_Finalize() will clean up. It won't! Instead, every object is held in the dictionary for its containing scope, and so on back to the module dictionary. When this module dictionary is destroyed, the destruction creeps outwards through all the branches.
1
1
0
I am embedding Python in C++. I have a working C++ Python extension object. The only thing wrong is that if I set tp_dealloc to a custom function it never gets called. I would have thought Py_Finalize() would trigger this, or maybe terminating the program. But no. Could anyone suggest why tp_dealloc isn't getting hit?
tp_dealloc not getting hit upon exit
1.2
0
0
290
28,270,435
2015-02-02T02:42:00.000
1
0
1
0
python,matplotlib
28,270,527
1
false
0
0
Aha, one needs to use the "extent" argument, as in: plt.imshow(H, cmap=plt.gray(), extent=[-5, 3, 6, 9])
1
1
1
I'm displaying an image and want to specify the x and y axis numbering rather than having row and column numbers show up there. Any ideas?
pyplot - Is there a way to explicitly specify the x and y axis numbering?
0.197375
0
0
31
28,270,967
2015-02-02T04:06:00.000
6
0
0
0
python,scikit-learn
28,287,768
2
true
0
0
Yes, you need to reorder them. Imagine a simpler case, Linear Regression. The algorithm will calculate the weights for each of the features, so for example if feature 1 is unimportant, it will get assigned a close to 0 weight. If at prediction time the order is different, an important feature will be multiplied by this almost null weight, and the prediction will be totally off.
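A hedged sketch of the fix with pandas (data invented; the point is selecting the test columns by name, in the training order):

import pandas as pd

train = pd.DataFrame({'A': [1, 0, 1], 'B': [2, 5, 3], 'C': [3, 1, 4]})
test = pd.DataFrame({'B': [2, 4], 'A': [1, 0], 'C': [3, 2]})  # shuffled order

feature_cols = ['B', 'C']                     # A is the target
X_train, y_train = train[feature_cols], train['A']
X_test = test[feature_cols]                   # same columns, same order

# forest.fit(X_train, y_train); forest.predict(X_test)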
1
4
1
Just getting started with this library and having some issues (I've read the docs but didn't get clarity) with RandomForestClassifier. My question is pretty simple: say I have a training data set like

A B C
1 2 3

where A is the target variable (y) and B-C are the feature variables (x). Let's say the test set looks the same, except the order is

B A C
1 2 3

When I call forest.fit(train_data[0:,1:], train_data[0:,0]), do I then need to reorder the test set to match this order before running? (Ignoring the fact that I need to remove the already-predicted y value (A), so let's just say B and C are out of order...)
Scikitlearn - order of fit and predict inputs, does it matter?
1.2
0
0
2,722
28,271,711
2015-02-02T05:39:00.000
0
0
0
0
python,sql,orm,flask,sqlalchemy
28,280,443
1
false
1
0
No, an ORM is not required, just incredibly convenient. SQLAlchemy will manage connections, pooling, sessions/transactions, and a wide variety of other things for you. It abstracts away the differences between database engines. It tracks relationships between tables in convenient collections. It generally makes working with complex data much easier. If you're concerned about performance, SQLAlchemy has two layers, the orm and the core. Dropping down to the core sacrifices some convenience for better performance. It won't be as fast as using the database driver directly, but it will be fast enough for most use cases. But no, you don't have to use it.
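To make the two layers concrete, a small sketch (table and values invented): Core hands SQL almost directly to the driver, while the ORM layers object mapping on top.

from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///:memory:")

# Core layer: explicit SQL, no ORM session/identity-map bookkeeping.
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"))
    conn.execute(text("INSERT INTO users (name) VALUES (:name)"), {"name": "alice"})
    print(conn.execute(text("SELECT id, name FROM users")).fetchall())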
1
2
0
Most of the Flask tutorials and examples I see use an ORM such as SQLAlchemy to handle interfacing with the user database. If you have a general working knowledge of SQL, is this extra level of abstraction, heavy with features, necessary? I am tempted to write a lightweight interface/ORM of my own so I better understand exactly what's going on and have full control over the queries, inserts, etc. But are there pitfalls to this approach that I am not considering that may crop up as the project gets more complex, making me wish I used a heavier ORM like SQLAlchemy?
Handling user database in flask web app without ORM like SQLAlchemy
0
1
0
970
28,272,226
2015-02-02T06:30:00.000
0
0
0
1
python,google-app-engine,google-search-api,google-app-engine-python
29,008,682
1
false
1
0
There can be two reasons for this: 1 - miles instead of km; 2 - number conversion (for example, is 35,322.2 meant as 35322.2? In km? In miles?). I suggest checking what exactly the numbers are when the distance function executes; you can programmatically output this data to logs. Hope it helps
1
0
0
I have implemented GAE's Python Search Api and am trying to query based on distance from given geopoint. My query string is: "distance(location, geopoint(XXX, YYY)) < ZZZ". However, for some reason on the production server, this query string is returning items where the distance is greater than the ZZZ parameter. Below are actual numbers (production) demonstrating the inaccuracies: Actual Distance: 343.9m Query Distance that still gets the result: 325m Actual Distance: 18,950.3 Query Distance that still gets the result: 13,499m Actual Distance: 55,979.0 Query Distance that still gets the result: 44,615m Actual Distance: 559,443.6 Query Distance that still gets the result: 451,167m Actual Distance: 53.4 Query Distance that still gets the result: 46m Actual Distance: 35,322.2 Query Distance that still gets the result: 30,808m Actual Distance: 190.2 Query Distance that still gets the result: 143m On my development server, these inaccuracies do not exist. I am able to query down to the exact meter and get the expected results. What could cause this and how to fix it so that I get accurate query results in production? Is anyone else getting the same issue?
Google App Engine Python Search Api's Location-based queries (Geosearch) Issues
0
0
0
248
28,274,323
2015-02-02T09:15:00.000
1
0
1
0
python,pycharm
28,274,438
2
true
0
0
The feature is called "Quick Documentation" and the default shortcut for this is CTRL+Q or ALT+MOUSE BUTTON 2. As far as I know, there is no way to enable it on-hover (personally, I found this very annoying in Eclipse).
1
0
0
How can I configure PyCharm so that the documentation pops up every time I hover over a method or a class? For example, in Eclipse, if I hover the cursor over a C++ function or class, I see its documentation in a small pop-up window. Is there some plugin or setting in PyCharm where I can enable the same? PS: I know PyCharm already has the F1 button that can do this; I was just looking for a hover alternative, as I am used to the Eclipse way of doing it.
Documentation Pop Ups
1.2
0
0
117
28,280,308
2015-02-02T14:48:00.000
0
0
1
0
python,debugging,spyder
36,927,378
7
false
0
0
One minor extra regarding point 3: the debug console also frequently seemed to freeze for me when doing prints, evaluating, etc., but pressing the stop (Exit debug) button usually got it back to the bottom of the call stack, and then I could go back up ('u') to the frame I was debugging in. Worth a try. This might be for a later version of Spyder (2.3.5.2).
4
82
0
I like Python and I like Spyder but I find debugging with Spyder terrible! Every time I put a break point, I need to press two buttons: first the debug and then the continue button (it pauses at first line automatically) which is annoying. Moreover, rather than having the standard iPython console with auto completion etc I have a lousy ipdb>> console which is just garbage. The worst thing is that this console freezes very frequently even if I write prints or simple evaluation to try to figure out what is the bug. This is much worse than MATLAB. Last but not least, if I call a function from within the ipdb>> console, and put a breakpoint in it, it will not stop there. It seems like I have to put the breakpoint there before I start the debugging (Ctrl+F5). Do you have a solution or maybe can you tell me how you debug Python scripts and functions? I am using fresh install of Anaconda on a Windows 8.1 64bit.
How do I debug efficiently with Spyder in Python?
0
0
0
113,747
28,280,308
2015-02-02T14:48:00.000
0
0
1
0
python,debugging,spyder
49,370,618
7
false
0
0
You can use the debug shortcut keys, e.g. Step Over (F10) and Step Into (F11), configurable in Tools > Preferences > Keyboard shortcuts.
4
82
0
I like Python and I like Spyder but I find debugging with Spyder terrible! Every time I put a break point, I need to press two buttons: first the debug and then the continue button (it pauses at first line automatically) which is annoying. Moreover, rather than having the standard iPython console with auto completion etc I have a lousy ipdb>> console which is just garbage. The worst thing is that this console freezes very frequently even if I write prints or simple evaluation to try to figure out what is the bug. This is much worse than MATLAB. Last but not least, if I call a function from within the ipdb>> console, and put a breakpoint in it, it will not stop there. It seems like I have to put the breakpoint there before I start the debugging (Ctrl+F5). Do you have a solution or maybe can you tell me how you debug Python scripts and functions? I am using fresh install of Anaconda on a Windows 8.1 64bit.
How do I debug efficiently with Spyder in Python?
0
0
0
113,747
28,280,308
2015-02-02T14:48:00.000
61
0
1
0
python,debugging,spyder
28,285,708
7
true
0
0
(Spyder maintainer here) After our 4.2.0 version, released in November 2020, the debugging experience in Spyder is quite good. What we provide now is what people coming from Matlab would expect from a debugger, i.e. something that works like IPython and lets you inspect and plot variables at the current breakpoint or frame. Now about your points: If there is a breakpoint present in the file you're trying to debug, then Spyder enters in debug mode and continues until the first breakpoint is met. If it's present in another file, then you still need to press first Debug and then Continue. IPdb is the IPython debugger console. In Spyder 4.2.0 or above it comes with code completion, syntax highlighting, history browsing of commands with the up/down arrows (separate from the IPython history), multi-line evaluation of code, and inline and interactive plots with Matplotlib. This is fixed now. Also, to avoid clashes between Python code and Pdb commands, if you have (for instance) a variable called n and write n in the prompt to see its value, we will show it instead of running the n Pdb command. To run that command instead, you have to prefix it with an exclamation mark, like this: !n This is fixed too. You can set breakpoints in IPdb and they will be taken into account in your current session.
4
82
0
I like Python and I like Spyder but I find debugging with Spyder terrible! Every time I put a break point, I need to press two buttons: first the debug and then the continue button (it pauses at first line automatically) which is annoying. Moreover, rather than having the standard iPython console with auto completion etc I have a lousy ipdb>> console which is just garbage. The worst thing is that this console freezes very frequently even if I write prints or simple evaluation to try to figure out what is the bug. This is much worse than MATLAB. Last but not least, if I call a function from within the ipdb>> console, and put a breakpoint in it, it will not stop there. It seems like I have to put the breakpoint there before I start the debugging (Ctrl+F5). Do you have a solution or maybe can you tell me how you debug Python scripts and functions? I am using fresh install of Anaconda on a Windows 8.1 64bit.
How do I debug efficiently with Spyder in Python?
1.2
0
0
113,747
28,280,308
2015-02-02T14:48:00.000
1
0
1
0
python,debugging,spyder
39,023,817
7
false
0
0
Here is how I debug in Spyder in order to avoid freezing the IDE. I do this if I alter the script while in debugging mode.

1. Close out the current IPython (debugging) console [x].
2. Open a new one [Menu bar -> Consoles -> Open an IPython Console].
3. Enter debug mode again [blue play/pause button].

Still a bit annoying, but it has the added benefit of clearing (resetting) the variable list.
4
82
0
I like Python and I like Spyder but I find debugging with Spyder terrible! Every time I put a break point, I need to press two buttons: first the debug and then the continue button (it pauses at first line automatically) which is annoying. Moreover, rather than having the standard iPython console with auto completion etc I have a lousy ipdb>> console which is just garbage. The worst thing is that this console freezes very frequently even if I write prints or simple evaluation to try to figure out what is the bug. This is much worse than MATLAB. Last but not least, if I call a function from within the ipdb>> console, and put a breakpoint in it, it will not stop there. It seems like I have to put the breakpoint there before I start the debugging (Ctrl+F5). Do you have a solution or maybe can you tell me how you debug Python scripts and functions? I am using fresh install of Anaconda on a Windows 8.1 64bit.
How do I debug efficiently with Spyder in Python?
0.028564
0
0
113,747
28,282,005
2015-02-02T16:16:00.000
0
0
0
0
python,tkinter
28,283,243
3
false
0
1
I believe you'll want to bind the key to your handler: frame.bind('<Return>', some_handler).
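A minimal runnable sketch of that binding (widget names and the check function are illustrative, not from the answer):

import tkinter as tk  # on Python 2 the module is named Tkinter

def check_credentials(event=None):
    # Stand-in for the asker's username/password check.
    print("checking:", username.get(), password.get())

root = tk.Tk()
username = tk.Entry(root)
password = tk.Entry(root, show="*")
username.pack()
password.pack()
tk.Button(root, text="Login", command=check_credentials).pack()
root.bind('<Return>', check_credentials)  # Enter triggers the same handler
root.mainloop()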
2
1
0
I have created a program in python(Tkinter) which takes username and password as input and a button which, when pressed, goes through a function which checks the username and password and further runs the program. I want that, rather than clicking the button, user presses the 'Enter' key and that does the function of the button. please help.
Getting values by pressing 'Enter' key rather than clicking a button
0
0
0
8,809
28,282,005
2015-02-02T16:16:00.000
0
0
0
0
python,tkinter
37,026,723
3
false
0
1
For me, the normal binding to a function did not work, probably because I am using it inside a class. I used a lambda and it worked. Here's the code: inp.bind('<Return>', lambda _: show())
2
1
0
I have created a program in python(Tkinter) which takes username and password as input and a button which, when pressed, goes through a function which checks the username and password and further runs the program. I want that, rather than clicking the button, user presses the 'Enter' key and that does the function of the button. please help.
Getting values by pressing 'Enter' key rather than clicking a button
0
0
0
8,809
28,282,706
2015-02-02T16:54:00.000
0
0
0
0
python,scikit-learn,random-forest
28,315,175
2
false
0
0
classifier.predict_proba() returns the class probabilities. The size of the second dimension of the array varies depending on how many classes there are in the subset you train on.
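For the ordinary single-output case the shape is easy to verify (a sketch; the list-of-arrays output the asker sees is what scikit-learn returns when y has multiple output columns, which is an assumption worth checking against their ground truth):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=344, n_classes=3, n_informative=5)
clf = RandomForestClassifier().fit(X, y)

proba = clf.predict_proba(X)
print(proba.shape)  # (344, 3): one row per sample, one column per class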
1
3
1
I have a dataset that I split in two for training and testing a random forest classifier with scikit learn. I have 87 classes and 344 samples. The output of predict_proba is, most of the times, a 3-dimensional array (87, 344, 2) (it's actually a list of 87 numpy.ndarrays of (344, 2) elements). Sometimes, when I pick a different subset of samples for training and testing, I only get a 2-dimensional array (87, 344) (though I can't work out in which cases). My two questions are: what do these dimensions represent? I worked out that to get a ROC AUC score, I have to take one half of the output (that is (87, 344, 2)[:,:,1], transpose it, and then compare it with my ground truth (roc_auc_score(ground_truth, output_of_predict_proba[:,:,1].T) essentially) . But I don't understand what it really means. why does the output change with different subsets of the data? I can't understand in which cases it returns a 3D array and in which cases a 2D one.
Scikit-learn RandomForestClassifier output of predict_proba
0
0
0
2,825
28,292,538
2015-02-03T06:15:00.000
0
0
0
0
python,django
28,293,569
3
false
1
0
You don't need any magic to make a singleton-like object in Python. Just write a module, for example shared.py, inside your Django project. Put your dictionary initialization there and import it from anywhere.
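A minimal sketch of that module-level pattern (names invented):

# shared.py -- the module body runs once per process, on first import
def _load_data():
    # Stand-in for the expensive one-time database load.
    return {"answer": 42}

DATA = _load_data()

# elsewhere, e.g. views.py:
#   from shared import DATA
#   DATA["answer"]  # same dict object for every request this process serves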
2
3
0
I have an application and a database. The application was initially written in Python without Django. The problem seems to be that it makes too many connections to the database, and that slows it down. What I want to do is load whatever data is going to be used into a Python dictionary and then share that object with everybody (something like a singleton object). What Django seems to do is create a new instance of the application each time a new request is made. How can I make it share the same loaded data?
Share object between django users
0
0
0
1,011
28,292,538
2015-02-03T06:15:00.000
1
0
0
0
python,django
28,294,419
3
true
1
0
Contrary to your assertion, Django does not reinitialize on every request. Actually, Django processes last for many requests, and anything defined at module level will be shared across all requests handled by that process. This can often be a cause of thread safety bugs, but in your case is exactly what you want. Of course a web site normally runs several processes concurrently, and it is impossible to share objects between them. Also, it is up to your server to determine how many processes to spawn and when to recycle them. But one object instantiation per process is better than one per request for your use case.
2
3
0
I have an application and a database. The application was initially written in Python without Django. The problem seems to be that it makes too many connections to the database, and that slows it down. What I want to do is load whatever data is going to be used into a Python dictionary and then share that object with everybody (something like a singleton object). What Django seems to do is create a new instance of the application each time a new request is made. How can I make it share the same loaded data?
Share object between django users
1.2
0
0
1,011
28,292,632
2015-02-03T06:23:00.000
0
1
1
0
python,metadata,id3v2,hachoir-parser
28,557,527
1
true
0
0
The problem with hachoir-metadata is that it searches only the "text" field of every ID3 chunk, but images are written to the "img_data" field, not the "text" field. So hachoir-metadata cannot extract images from metadata because of this problem in the source code.
1
0
0
I've tried to use hachoir-metadata to work with multimedia files but I can't find how to parse covers in ID3v2 metadata. I see in source code that it know about a lot of covers tags but dose not return any in parser. And I've even tried to use libextractor and python-extractor binding and also didn't find how to fetch cover from multimedia.
Does hachoir metadata or libextractor extract covers from ID3v2 and all another formats?
1.2
0
0
141
28,293,096
2015-02-03T06:59:00.000
6
0
0
0
python,mongodb,flask,mongoengine
28,293,187
1
true
1
0
It's simple, just use: Request.objects.first()
1
3
0
I have a class named Request and I want to get the first object of it in mongoengine. I think I can do this: first get all objects, like visitors = Request.objects.all(), and then ss = visitors[0].ip to call an attribute of the object.
get first object in mongoengine
1.2
0
0
2,821
28,295,059
2015-02-03T09:08:00.000
3
0
0
0
python,django,forms,post
28,295,685
1
false
1
0
Should I use exactly the multipart/form-data content type? Django supports only multipart/form-data for file uploads, so you must use that content type.

Where can I specify enctype (headers, parameters, etc.)? In normal HTML, just put enctype="multipart/form-data" as one of the attributes of your form element. In HttpRequester it's more complicated, because I think it lacks support for multipart/form-data by default. See http://www.w3.org/TR/html4/interact/forms.html#h-17.13.4.2 for more details about multipart/form-data; it should be possible to build such a request in HttpRequester by hand.

Why is the file's data contained in request.body while request.FILES is empty? You've already answered that yourself: "Note that FILES will only contain data if the request method was POST and the <form> that posted the request had enctype="multipart/form-data". Otherwise, FILES will be a blank dictionary-like object."
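As an aside, a quick way to produce a well-formed multipart/form-data request (boundary included) is Python's requests library; a sketch with the URL and field name assumed:

import requests

# 'files' makes requests build the multipart/form-data body and boundary.
with open('backup.db', 'rb') as f:
    resp = requests.post('http://localhost:8000/upload/', files={'datafile': f})
print(resp.status_code)

On the Django side the upload then shows up as request.FILES['datafile'].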
1
2
0
I'm trying to send a file via POST request to a server on localhost. I'm using HttpRequester in Firefox (I also tried Postman in Chrome and Tasker on Android) to submit the request. The problem is that request.FILES is always empty. But when I try to print request.body it shows some non-human-readable data which notably includes the data from the file I want to upload (it's a database), so it makes sense to me that the file somehow arrives at the server. From the Django docs: "Note that FILES will only contain data if the request method was POST and the <form> that posted the request had enctype="multipart/form-data". Otherwise, FILES will be a blank dictionary-like object." There was an error 'Invalid boundary in multipart: None' when I tried to set the Content-Type of the request to 'multipart/form-data'. The error disappeared when I added ';boundary=frontier' to the Content-Type. Another approach was to set enctype="multipart/form-data". Therefore I have several questions: Should I use exactly the multipart/form-data content type? Where can I specify enctype (headers, parameters, etc.)? Why is the file's data contained in request.body while request.FILES is empty? Thanks
Empty request.FILES in Django
0.53705
0
1
3,002
28,308,285
2015-02-03T20:38:00.000
0
0
1
0
python,multithreading
28,308,422
2
false
0
0
It depends. If your code is spending most of its time waiting for network operations (likely, in a web scraping application), threading is appropriate. The best way to implement a thread pool is to use concurrent.futures (in the standard library since Python 3.2). Failing that, you can create a threading.Queue object and write each thread as an infinite loop that consumes work objects from the queue and processes them. If your code is spending most of its time processing data after you've downloaded it, threading is useless due to the GIL. concurrent.futures also provides support for process concurrency; for older Pythons, use multiprocessing, which provides a Pool type that simplifies creating a process pool. You should profile your code (using cProfile) to determine which of those two scenarios you are experiencing.
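A sketch of the pool approach with concurrent.futures (scrape and the URL list stand in for the question's own):

from concurrent.futures import ThreadPoolExecutor

websites = ['http://example.com/a', 'http://example.com/b']  # illustrative

def scrape(website):
    ...  # the question's per-site scraping work

# At most 5 threads run at once; as soon as one finishes its site it picks
# up the next unprocessed site -- exactly the behaviour the asker wants.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(scrape, websites))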
1
0
0
Since my scraper is running so slowly (one page at a time), I'm trying to use threads to make it work faster. I have a function scrape(website) that takes in a website to scrape, so I can easily create each thread and call start() on each of them. Now I want to implement a num_threads variable, the number of threads that I want to run at the same time. What is the best way to handle those multiple threads? For example: suppose num_threads = 5; my goal is to start 5 threads, grab the first 5 websites in the list and scrape them, and then, if thread #3 finishes, have it grab the 6th website from the list to scrape immediately, not wait until the other threads end. Any recommendation for how to handle it? Thank you
Python what is the best way to handle multiple threads
0
0
1
1,243
28,310,338
2015-02-03T22:54:00.000
1
0
1
0
python,list,dictionary,set
28,310,389
3
false
0
0
As far as efficiency goes, you can't be any more efficient than looping through the list. I would also argue that looping through the list is already a simple process.
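For completeness, the set-based spelling the asker hints at (names invented); it still touches every element, just inside C-level set machinery:

d = {'a': 1, 'b': 2}
items = ['x', 'b', 'y']

has_common = bool(set(items) & set(d))        # intersect list with dict keys
has_common_alt = any(k in d for k in items)   # short-circuits on the first hit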
1
2
0
For my program, I wish to cleanly check whether any elements in a list is a key in a dictionary. So far, I can only think to loop through the list and checking. However, is there any way to simplify this process? Is there any way to use sets? Through sets, one can check whether two lists have common elements.
Using set on Dictionary keys
0.066568
0
0
9,198
28,310,782
2015-02-03T23:30:00.000
0
0
1
0
python,pygame,pip
28,639,628
1
false
0
1
You must not have set the Path variable in Windows; it's the same mistake that I made. Set the environment variable in Windows with the following steps:

1. Right-click 'My Computer' and go to Properties.
2. Advanced System Settings.
3. Environment Variables, and set Path there.

If this doesn't help, then try another method:

1. Open your Python installation directory.
2. Go to Tools > Scripts.
3. Run "win_add2path.py".

Now try to run the pip install command again. If it still doesn't work, restart your console or restart the PC.
1
0
0
I've installed Python 2.7.6 (32-bit) on Windows 7 (64-bit). This works fine in Windows Power Shell. I set the PATH to C:\Users\name\Python27\Scripts, and that didn't seem to be a problem either. I then downloaded a pygame install file, pygame-1.9.2a0-cp27-none-win32.whl, from: http://www.lfd.uci.edu/~gohlke/pythonlibs/#pygame I put the .whl file into the Python27\Scripts directory, and tried using pip install pygame-1.9.2a0-cp27-none-win32.whl in Windows Power Shell (command line, not Python interpreter), and the error message is this: The term 'pip' is not recognized as the name of cmdlet, function, script file, or operable program. I tried using Python 2.7.8 (the later version of Python 2.7, if I'm not wrong) with an .msi install of pygame, but I experienced other errors with this that I thought could be fixed with a fresh install. I am very new to programming and using the command line; any help is appreciated.
can't install pygame using pip, Python 2.7.6 on Win7
0
0
0
559
28,311,063
2015-02-03T23:55:00.000
6
0
1
0
python,ipython,nose,pdb
28,313,669
1
true
0
0
Most likely nose captures stdout. If you can run it with -s option it should work as expected. You can also use from nose.tools import set_trace; set_trace() to use pdb debugger, it will pass stdout/stdin properly.
1
0
0
I have a fairly basic question. I'm running the nosetests command for my python application's test suite. I want to drop into an interactive debugger. As the tests are run, it hits my IPython.embed() line and freezes, no prompt. Ctrl+C kills it and resumes the tests. How can I drop into an interactive prompt of some sort while running nosetests? Thanks for your help.
Using iPython with nose?
1.2
0
0
1,032
28,313,984
2015-02-04T05:15:00.000
0
0
1
0
python,instance,pickle
28,314,281
2
false
0
0
Pickling simply doesn't pickle your classes; pickle only works on data. If you try to pickle a class with built-in methods it simply will not work; it will come out glitchy and broken. Source: Learning Python by Mark Lutz.
1
0
0
I have a python class which I can instantiate and then pickle. But then I have a second class, inheriting from the first, whose instances I cannot pickle. Pickle gives me the error "can't pickle instancemethod". Both instances have plenty of methods. So, does anyone have a guess as to why the first class would pickle OK, but not the second? I'm sure that you will want to see the code, but it's pretty lengthy and I really have no idea what the "offending" parts of the second class might be. So I can't show the whole thing and I don't really know what the relevant parts might be.
When does pickle fail to pickle an instance?
0
0
0
81
28,314,822
2015-02-04T06:27:00.000
0
0
1
0
python,multithreading
28,314,945
2
false
0
0
A background task should never try to kill a foreground task - it might be doing something important. The background task should pass a status of "Restart Needed" and the foreground task should exit when convenient or after prompting the user. This would be an extension of your current status checking. In your background thread you can have a status that is periodically fetched from the foreground and the foreground can take action based on the returned status. You could also supply a callback to the background thread that it calls on there being a problem.
1
0
0
I have a client and server model. The server periodically sends its healthy condition to client. The client has a background thread to take care of it(main thread is doing something else). Once the client notices the server is in a bad status, it will do some clean up work and then kill itself(kill the client process). The problem is this: At the very beginning of the process, it does atexit.register(specific_cleanup_func). Once the client recognizes the server is in a bad status, the background thread will do general_cleanup_func() and os.kill(os.getpid(), signal.SIGTERM). I hope the os.kill() called by background thread will trigger the registered specific_cleanup_func but it is not the case. I also tried to call sys.exit() from the background thread but the process does not exit. I wonder how to trigger the registered function from the background thread while killing the process or how to let the background thread ask main thread to do all those cleanup stuff and sys.exit().
python background thread cannot trigger function registered at atexit.register()
0
0
0
451
28,316,119
2015-02-04T07:49:00.000
1
1
0
0
python,regex,unit-testing,python-unittest
28,316,773
1
true
0
0
Try to test Rule and ValueSetter each in their own Test. Test that the Rule really does what you think in the 5 cases you describe in your question. Then when you test your ValueSetter just assume that Rule does what you think and set for example message_title, message_content and message_number directly. So you inject the information in a way that Rule should have done. This is what you usually do in a unittest. In order to test if everything is working in conjunction you usually would do a functional test that tests the application from a higher/user level. If you cannot construct a ValueSetter without using a Rule then just create a new class for the test that inherits from ValueSetter and overwrite the __init__ method. In this way you will be able to get a 'blank' object and set the member variables as you expect them to be directly.
1
0
0
What will be the best way to test following: We have a large complex class that we'll call ValueSetter which accepts string, gets some data from it and sets this data to several variables like message_title, message_content, message_number To perform this it uses another one class called Rule where are rules for particular case described with regular expressions. What is needed: Because in each Rule there are about 5 cases to match, we ant to test each of them separately. So far we need only to assert that particular Rule returns correct string in each case. What is the best way in this situation ?
Test part of complex structure with unittest and mocks
1.2
0
0
56
28,318,105
2015-02-04T09:42:00.000
0
0
0
0
django,python-2.7,apache2,mod-wsgi,pyinotify
28,320,868
2
false
1
0
You shouldn't prevent spawning multiple processes, because it's a good thing, especially in a production environment. You should consider using some external tool, separate from Django, or add a check for whether the folder listener is already running (for example, monitor the persistence of a PID file and its content).
2
1
0
I'm running apache with django and mod_wsgi enabled in 2 different processes. I read that the second process is a on-change listener for reloading code on change, but for some reason the ready() function of my AppConfig class is being executed twice. This function should only run once. I understood that running django runserver with the --noreload flag will resolve the problem on development mode, but I cannot find a solution for this in production mode on my apache webserver. I have two questions: How can I run with only one process in production or at least make only one process run the ready() function ? Is there a way to make the ready() function run not in a lazy mode? By this, I mean execute only on on server startup, not on first request. For further explanation, I am experiencing a scenario as follows: The ready() function creates a folder listener such as pyinotify. That listener will listen on a folder on my server and enqueue a task on any changes. I am seeing this listener executed twice on any changes to a single file in the monitored directory. This leads me to believe that both processes are running my listener.
how to run Apache with mod_wsgi and django in one process only?
0
0
0
444
28,318,105
2015-02-04T09:42:00.000
2
0
0
0
django,python-2.7,apache2,mod-wsgi,pyinotify
28,321,203
2
false
1
0
No, the second process is not an onchange listener - I don't know where you read that. That happens with the dev server, not with mod_wsgi. You should not try to prevent Apache from serving multiple processes. If you do, the speed of your site will be massively reduced: it will only be able to serve a single request at a time, with others queued until the first finishes. That's no good for anything other than a toy site. Instead, you should fix your AppConfig. Rather than blindly spawning a listener, you should check to see if it has already been created before starting a new one.
2
1
0
I'm running apache with django and mod_wsgi enabled in 2 different processes. I read that the second process is a on-change listener for reloading code on change, but for some reason the ready() function of my AppConfig class is being executed twice. This function should only run once. I understood that running django runserver with the --noreload flag will resolve the problem on development mode, but I cannot find a solution for this in production mode on my apache webserver. I have two questions: How can I run with only one process in production or at least make only one process run the ready() function ? Is there a way to make the ready() function run not in a lazy mode? By this, I mean execute only on on server startup, not on first request. For further explanation, I am experiencing a scenario as follows: The ready() function creates a folder listener such as pyinotify. That listener will listen on a folder on my server and enqueue a task on any changes. I am seeing this listener executed twice on any changes to a single file in the monitored directory. This leads me to believe that both processes are running my listener.
how to run Apache with mod_wsgi and django in one process only?
0.197375
0
0
444
28,319,283
2015-02-04T10:40:00.000
0
0
1
0
python,import,canopy
28,319,550
1
true
0
0
I think I have had this problem before. You should be able to solve it by resetting the Python kernel in between edits (there should be an option somewhere in your GUI), but I don't know if this is the most efficient way to deal with this annoyance. After googling, I think the option you want is Run menu > Restart kernel. You will need to perform this every time you change track.py and want the changes to be reflected in main.py.
1
0
0
I have two Python scripts named main.py and track.py. I import track.py in main.py like this: import track ... and I call a function from track.py in main.py: a = track.localization(). But when I change the code of track.py, the change has no effect, and I think Canopy has imported another track.py that I cannot find. If anyone knows what's wrong, please give me an answer. By the way, these two .py files are in the same folder, which is my working folder. Thank you
why canopy always import a wrong python file
1.2
0
0
119
28,319,579
2015-02-04T10:55:00.000
2
0
0
0
python,selenium-webdriver,phantomjs,headless-browser,ghostdriver
28,319,699
2
true
1
0
If you want to bypass ghostdriver, then you can directly write your PhantomJS scripts in JavaScript or CoffeeScript. As far as I know there is no way of doing this with the selenium webdriver except with different threads in the language of your choice (python). If you are not happy with it, there is CasperJS which has more freedom in writing scripts than with selenium, but you will only be able to use PhantomJS or SlimerJS.
1
1
0
I am using PhantomJS via Python through Selenium+Ghostdriver. I am looking to load several pages simultaneously and to do so, I am looking for an async method to load pages. From my research, PhantomJS already lives in a separate thread and supports multiple tabs, so I believe the only missing piece of the puzzle is a method to load pages in a non-blocking way. Any solution would be welcome, be it a simple Ghostdriver method I overlooked, bypassing Ghostdriver and interfacing directly with PhantomJS or a different headless browser. Thanks for the help and suggestions. Yuval
Opening pages asynchronously in headless browser (PhantomJS)
1.2
0
1
691
28,321,474
2015-02-04T12:31:00.000
0
1
0
0
python-3.x,fuzzy-search,fuzzy-comparison
37,747,215
1
false
0
0
Jellyfish supports Python 3.3 and 3.4.
1
1
0
fuzzy.c:1635:5: error: too few arguments to function 'PyCode_New' I am upgrading from python 2.7 to 3.2. I am getting an error in the c-compile of the fuzzy library (that apparently isn't Python 3 compatible). Any suggestions? Is there an alternative to the NYSIIS encoding? Thanks
Fuzzy - NYSIIS python 3
0
0
0
299
28,322,139
2015-02-04T13:04:00.000
1
0
1
1
python,linux,debian,pip,dpkg
28,322,460
2
false
0
0
You can certainly dump all the packages installed by dpkg, but that's probably not what you want to do - you'll end up getting thousands of packages unrelated to your software, and possibly breaking the system if it's a different Debian version. My advice is to get your software onto a fresh Debian machine and try to pip install everything from your requirements.txt. As Python package installations fail (because of missing Debian packages), build a text file with a newline-separated list of the needed Debian packages. Then, just cat my-deb-dependencies | xargs apt-get install on every new system. This takes some manual work - I don't think there's a reliable way of automating it.
1
1
0
I'm shipping a Python app for deployment on Debian servers. Providing a requirements.txt file with a list of all needed Python packages is very convenient when installing the app, especially when accompanied with a Makefile to automatically install from it using pip. But as you know some Python packages depend on Linux system packages (Debian in this case), and it would be great if I could provide a similar file with my project to install them all in one step, and define the Makefile rule to automate it. Do dpkg or apt provide such functionality?
What is the Debian equivalent of "requirements.txt" in Python for managing packages?
0.099668
0
0
1,881
28,324,393
2015-02-04T14:56:00.000
1
0
0
0
python,qt,pyqt,twisted,twisted.web
28,327,369
1
false
1
0
So I figured out that Qt wasn't sending the TWISTED_SESSION cookie back with subsequent requests. All I did was send the cookie along with subsequent requests and it worked fine. I had to switch to Python's requests library to ease things.
1
1
0
I am currently authenticating via a RESTful HTTP API that generates a token which is then used for subsequent requests. The API server is written with Python Twisted and works great; the auth token generation works fine in browsers. When requesting from software written in PyQt, the first request hands over a token to the PyQt app, while subsequent requests fail because the remote Twisted server believes it is another browser entirely. JavaScript AJAX does this too, but there it is solvable by sending xhrFields: {withCredentials: true} along with the request. How do I resolve this in PyQt?
IQtNetwork.QHttp request credential issue
0.197375
0
1
49
28,326,362
2015-02-04T16:25:00.000
41
1
1
0
python,pycharm
57,497,936
3
false
0
0
In PyCharm Community 2019.2/2019.3 (and probably other versions), you can simply:

1. Right-click any folder in your project.
2. Select "Mark Directory As".
3. Select "Sources Root".

Modules within that folder will now be available for import. Any number of folders can be so marked.
1
102
0
I am new to PyCharm. I have a directory that I use for my PYTHONPATH: c:\test\my\scripts\. In this directory I have some modules I import. It works fine in my Python shell. How do I add this directory path to PyCharm so I can import what is in that directory?
PyCharm and PYTHONPATH
1
0
0
100,884
28,327,329
2015-02-04T17:10:00.000
1
0
1
0
python,inheritance,data-structures,types,multiple-inheritance
28,327,351
1
true
0
0
So when we arbitrarily create our own data type class and defining the specific types of children / data types it will be able to handle, are we just redefining what python already has implemented? This just has one short answer: yes.
1
1
0
Apologize if this is a duplicate, tried searching. I understand that everything in python is a data type, but this is what I'm a bit confused about. So everything is an object, we have the collection class, integer, float, and other classes as children of the parent object class thinking of it as a tree of objects with say lists, tuples, and dictionaries as a child to the collection class. So when we arbitrarily create our own data type class and defining the specific types of children / data types it will be able to handle, are we just redefining what python already has implemented? For example, say we create our own data class to handle list using the super() method and then specify that only integers could be placed into this list or strings, or whatever we may wish to be contained within this data type. Sorry if this question is a bit confusing tried to word it as precisely as I could.
Defining your own python objects and data types
1.2
0
0
183
28,327,779
2015-02-04T17:36:00.000
0
0
1
0
python,python-3.x,lighttable
28,327,929
4
false
0
0
Hit Ctrl + Space to bring up the control pane. Then start typing Set Syntax and select Set Syntax to Python. Start typing your Python then press Ctrl + Shift + Enter to build and run the program.
3
6
0
I am trying Light Table and learning how to use it. Overall, I like it, but I noticed that the only means of making the watches and inline evaluation work in Python programs uses Python 2.7.8, making it incompatible with some of my code. Is there a way to make it use Python 3 instead? I looked on Google and GitHub and I couldn't find anything promising. I am using a Mac with OS X 10.10.2. I have an installation of Python 3.4.0 that runs fine from the Terminal.
Running Python 3 from Light Table
0
0
0
4,744
28,327,779
2015-02-04T17:36:00.000
0
0
1
0
python,python-3.x,lighttable
28,331,730
4
false
0
0
I had the same problem. It worked for me after saving the file with a .py extension and then pressing Cmd+Enter.
3
6
0
I am trying Light Table and learning how to use it. Overall, I like it, but I noticed that the only means of making the watches and inline evaluation work in Python programs uses Python 2.7.8, making it incompatible with some of my code. Is there a way to make it use Python 3 instead? I looked on Google and GitHub and I couldn't find anything promising. I am using a Mac with OS X 10.10.2. I have an installation of Python 3.4.0 that runs fine from the Terminal.
Running Python 3 from Light Table
0
0
0
4,744
28,327,779
2015-02-04T17:36:00.000
3
0
1
0
python,python-3.x,lighttable
29,534,922
4
true
0
0
I had the same problem with syntax that is only valid on Python 3.3+. Go to Settings: User Behaviors and add the line (use the real path of your Python binary): [:app :lt.plugins.python/python-exe "/usr/bin/python3.4"]. Save and test in your Light Table. It worked for me :) Hope it helps
3
6
0
I am trying Light Table and learning how to use it. Overall, I like it, but I noticed that the only means of making the watches and inline evaluation work in Python programs uses Python 2.7.8, making it incompatible with some of my code. Is there a way to make it use Python 3 instead? I looked on Google and GitHub and I couldn't find anything promising. I am using a Mac with OS X 10.10.2. I have an installation of Python 3.4.0 that runs fine from the Terminal.
Running Python 3 from Light Table
1.2
0
0
4,744
28,329,352
2015-02-04T19:04:00.000
10
0
0
0
python,ubuntu-14.04,rethinkdb,rethinkdb-python
28,330,153
1
true
0
0
You're almost certainly not losing data, you're just starting RethinkDB without pointing it to the data. Try the following: Start RethinkDB from the directory that contains the rethinkdb_data directory. Alternatively, pass the -d flag to RethinkDB to point it to the directory that contains rethinkdb_data. For example, rethinkdb -d /path/to/data/directory/rethinkdb_data
1
7
0
I save my data on RethinkDB Database. As long as I dont restart the server, all is well. But when I restart, it gives me an error saying database doesnt exist, although the folder and data does exist in folder rethinkdb_data. What is the problem ?
RethinkDB losing data after restarting server
1.2
1
0
897
28,329,562
2015-02-04T19:17:00.000
0
0
0
1
macos,python-3.x
54,770,045
2
false
0
0
For Anaconda users who have updated to Mojave, you may be running an outdated/unsigned version of Python. Simply run conda upgrade conda to update Python to the latest version, which should also be signed, and the problem should go away. If it doesn't, then you may need to contact Anaconda's support to get them to build a signed package. Of course, you will also have to conda upgrade python for each conda environment you have.
1
9
0
I am developing a simple python3 server app. I invoke it like this: python3 bbserver.py Each time after doing this I get the OSX popup: Do you want the application “Python.app” to accept incoming network connections? I've tried making an exception for python3 executable (there is no python3.app) in the firewall and have tried code signing with a codesign certificate thus: codesign -f -s mycodecert /Library/Frameworks/Python.framework/Versions/3.4/bin/python3 --deep No luck.
How to prevent OSX popup for incoming connections for Python app?
0
0
0
2,455
28,329,977
2015-02-04T19:40:00.000
1
1
0
0
python,html,xml,parsing,raspberry-pi
28,361,971
2
false
0
0
I think there are 3 steps you need to make it work. Extracting only the data you want from the given XML file. Using a simple template engine to insert the extracted data into an HTML file. Using a web server to serve the file created above. Step 1) You are already using lxml, which is a good library for doing this, so I don't think you need help there. Step 2) There are many Python templating engines out there, but for a simple purpose you just need an HTML file created in advance with some special markup such as {{0}}, {{1}} or whatever works for you. This would be your template. Take the data from step 1, do a find-and-replace in the template, and save the output to a new HTML file. Step 3) To make that file accessible from a browser on a different device or PC, you need to serve it using a simple HTTP web server. Python provides the http.server library, or you can use a third-party web server; just make sure it can access the file created in step 2.
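A rough sketch of those three steps under stated assumptions: the XML element name, the {{0}} placeholder, and all file names are made up for illustration:

```python
# Step 1: parse the XML; step 2: fill a {{0}} placeholder in an HTML template;
# step 3: serve the result with Python's built-in HTTP server (Python 3).
from lxml import etree
import http.server
import socketserver

value = etree.parse("data.xml").findtext(".//temperature") or "n/a"

with open("template.html") as f:
    html = f.read().replace("{{0}}", value)
with open("index.html", "w") as f:
    f.write(html)

with socketserver.TCPServer(("", 8000),
                            http.server.SimpleHTTPRequestHandler) as httpd:
    httpd.serve_forever()  # browse to http://<pi-address>:8000/index.html
```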
1
2
0
I am doing a digital signage project using Raspberry-Pi. The R-Pi will be connected to HDMI display and to internet. There will be one XML file and one self-designed HTML webpage in R-Pi.The XML file will be frequently updated from remote terminal. My idea is to parse the XML file using Python (lxml) and pass this parsed data to my local HTML webpage so that it can display this data in R-Pi's web-browser.The webpage will be frequently reloading to reflect the changed data. I was able to parse the XML file using Python (lxml). But what tools should I use to display this parsed contents (mostly strings) in a local HTML webpage ? This question might sound trivial but I am very new to this field and could not find any clear answer anywhere. Also there are methods that use PHP for parsing XML and then pass it to HTML page but as per my other needs I am bound to use Python.
Parse XML file in python and display it in HTML page
0.099668
0
1
4,898
28,333,391
2015-02-04T23:13:00.000
0
0
1
0
python-3.x,windows-7,path,conda
28,354,826
1
false
0
0
In the Windows cmd shell, use the activate and deactivate commands to change the PATH automatically. For instance, if your environment is called python3, run activate python3 to "activate" it (i.e., add it to the PATH). Use deactivate to remove it.
1
2
0
I've got anaconda installed and was able to create a Python 3.3 environment. I can switch to it and conda info -e shows that I've switched. However, I'm confused about what to set my PATH variable to. If I hard code it to the exact env then it works, but I thought that the purpose of conda was to be able to switch easily, as well as update and maintain various environments separately. Perhaps I misunderstood and there's no way around setting my PATH everytime...
Windows 7: What PATH to set to when using Conda
0
0
0
539
28,336,318
2015-02-05T04:37:00.000
5
1
1
0
python,c++
28,336,386
1
false
0
0
#include in C and C++ is a textual include. import in Python is very different -- no textual inclusion at all! Rather, Python's import lets you access names exported by a self-contained, separately implemented module. Some #includes in C or C++ may serve similar roles -- provide access to publicly accessible names from elsewhere -- but they could also be doing so many other very different things, you can't easily tell. For example it's normal for a .cc source file to #include the corresponding .h header file to make sure it's implementing precisely what that header file makes available elsewhere -- there's no equivalent of that in Python (or Java or AFAIK most other modern languages). #include could also be about making macros available... and Python very deliberately chooses to have no macros, so, no equivalence!-) All in all, I think the analogy is likely to be more confusing than helpful.
1
2
0
is "import" in python equivalent to "include" in c++? Can I consider namespaces from c++ the same way I do with python module names?
include in C++ vs import in python
0.761594
0
0
3,107
28,339,778
2015-02-05T08:57:00.000
1
0
0
1
google-app-engine,python-2.7,google-app-engine-python
28,340,241
1
false
1
0
Your full project path contains two space characters and needs to be quoted; also, a trailing slash might be required, i.e.: C:\Python27\python.exe appcfg.py update "C:\Users\alastair\Desktop\School Files\Proxy Files\mirrorrr-master\mirrorrr-master\" assuming that's where you have your app.yaml file. In your case it thinks you are pointing to a "C:\Users\alastair\Desktop\School" file, which does not exist, and thus shows the error.
1
0
0
Ive been working on get a proxy working for when im school, to access sites that i use alot for work but my school dont like.. This is the error it comes up with when i try to upload the files to googles app engine.. C:\Program Files (x86)\Google\google_appengine>"C:\Python27\python.exe" appcfg.p y update C:\Users\alastair\Desktop\School Files\Proxy Files\mirrorrr-master\mirrorrr-master 09:44 PM Host: appengine.google.com Usage: appcfg.py [options] update | [file, ...] appcfg.py: error: Directory does not contain an School.yaml configuration file So im very confused on why it is asking for a "School.yaml" But i made one anyway, And even though its been made, it still displays this error, So if anyone can help, Please!
GoogleAppEngine error directory not found
0.197375
0
0
95
28,340,224
2015-02-05T09:21:00.000
0
0
0
1
python,weka
35,519,213
3
false
0
0
Before installing the weka wrapper for Python you are supposed to install Weka itself, either using sudo apt-get install weka or by building it from source, and then add the path to the environment variable using export wekahome="your weka path". This makes sure you have the required Weka jar file in the directory.
1
0
0
The official website shows how weka-wrapper can install on ubuntu 64 bit. I want toknowhow it can be install on ubuntu 32 bit?
How to install python weka wrapper on ubuntu 32 bit?
0
0
0
359
28,340,723
2015-02-05T09:46:00.000
8
0
1
0
python,testcase,unit-testing
28,341,477
2
true
0
0
Asserting something in your tearDown means that you need to be careful that all the cleaning is done before the actual asserting, otherwise the cleaning code may not be called if the assert statement fails and raises. If the assert is just one line it may be OK to have it in every test method; if it is more than that, a specific method would be a possibility. That method should not be a test of its own, i.e. not recognized as a test by your test framework. Using a method decorator or class decorator may also be an alternative. Overall the idea is that tearDown shouldn't do any testing and that explicit is better than implicit.
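One possible shape for that "specific method", sketched with illustrative names:

```python
# A shared check method called explicitly at the end of each test, instead of
# asserting inside tearDown. "open_handles" is an illustrative invariant.
import unittest

class MyTests(unittest.TestCase):
    def setUp(self):
        self.open_handles = 0

    def check_invariants(self):
        # no "test_" prefix, so the framework never collects it as a test
        self.assertEqual(self.open_handles, 0)

    def test_feature(self):
        # ... actual test body ...
        self.check_invariants()

if __name__ == "__main__":
    unittest.main()
```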
1
9
0
I have a TestCase with multiple tests and need to assert a few conditions (the same for every test) at the end of each test. Is it OK to add these assertions to the tearDown() method, or is it a bad habit since they're not "cleaning" anything? What would be the right way of doing this?
Is it OK to assert in unittest tearDown method?
1.2
0
0
2,278
28,343,666
2015-02-05T12:12:00.000
2
0
0
0
python,python-2.7,pip,pymssql
28,349,658
1
true
0
0
Looking at the full traceback we see that include_dirs includes /usr/local/include but the header files are in /usr/include which I imagine has to do with the fact python 2.7 is not the system python. You can change the setup.py script to include /usr/include or copy the files into /usr/local/include
1
1
0
I'm trying to pip install pymssql in my Centos 6.6, but kept on experiencing this error: _mssql.c:314:22: error: sqlfront.h: No such file or directory cpp_helpers.h:34:19: error: sybdb.h: No such file or directory I already installed freetds, freetds-devel, and cython. Any ideas? Thanks in advance!
Installing pymssql in Centos 6.6 64-bit
1.2
1
0
2,708
28,345,878
2015-02-05T13:59:00.000
1
0
0
1
python-2.7,twisted
28,348,066
1
false
0
0
There's no need and no way to flush either the read buffer or the write buffer.
1
1
0
I created a simple tcp connection server with Twisted framework. I use the self.transport.write function to write data to the client, but I need to flush the data. Is there a way to do this? In addition, is there a way to flush incoming data?
I used self.transport.write function to write to tcp stream, How to use flush with it?
0.197375
0
0
74
28,352,419
2015-02-05T19:29:00.000
1
1
1
0
python,python-3.x,com,win32com
28,363,049
1
true
0
0
PumpWaitingMessages will process messages and return as soon as there are no more messages to process. You can call it in a loop, but you should call MsgWaitForMultipleObjects, or MsgWaitForMultipleObjectsEx, before the next loop iteration. Avoid calling these functions before the initial loop iteration, as they'll block and you won't have a chance to check whether some condition is met, whether there are messages to process or not. Alternatively, provide a reasonable timeout to these functions.
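A hedged sketch of that loop (Windows-only, requires pywin32); the stop event is just one illustrative way to signal shutdown:

```python
# MsgWaitForMultipleObjects blocks until a message arrives, the event fires,
# or the timeout elapses, so the loop no longer spins at 100% CPU.
import pythoncom
import win32event

stop_event = win32event.CreateEvent(None, 0, 0, None)

while True:
    rc = win32event.MsgWaitForMultipleObjects(
        [stop_event], 0, 1000, win32event.QS_ALLINPUT)
    if rc == win32event.WAIT_OBJECT_0:   # stop was requested
        break
    pythoncom.PumpWaitingMessages()      # returns once the queue is empty
```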
1
2
0
I use Python 3.4. I have a program that provide an integration with COM module in Windows, by win32com package. To process messages from this module I use the pythoncom.PumpWaitingMessages() method in the infinite while loop. But python infinite loop makes 100% CPU core load (as shown in Windows Task Manager). The questions: Is it real "work" or peculiarity of Windows Task Manager? How one can avoid that. Maybe by using asyncio module or another way? Is it possible to process messages in another thread or asynchronously with pythoncom?
python win32com "PumpWaitingMessages()" processing
1.2
0
0
3,020
28,353,801
2015-02-05T20:54:00.000
1
0
1
0
macos,python-3.x,textmate,anaconda
29,264,565
1
true
0
0
I use Anaconda Python 2, and found a couple of ways to do this. First note that the tilde shortcut ( ~ ) doesn't work everywhere, and as you found, not in TextMate variables; usually you have to use the full path, e.g. /Users/youruserid/Anaconda/bin/. Your options: (1) add the path above to your PATH variable in TextMate; (2) set a TM_PYTHON variable to use in TextMate, again using the full path to the binary you want; (3) use a shebang at the top of your script pointing to Anaconda Python: #!/Users/youruserid/Anaconda/bin/python; (4) make a symbolic link in /usr/local/bin to Anaconda: ln -s /Users/youruserid/Anaconda/bin/python /usr/local/bin/python. The last option will affect more than TextMate and requires /usr/local/bin to be in your path. Also, some use the link /usr/local/bin/python3 for Python 3.x to distinguish between Python 3 and 2.
1
2
0
I have installed Anaconda with Python 3.4. How can I use that version on TextMate? I tried adding ~/anaconda/bin at the beginning of the PATH variable, but it doesn't work. When I try to run a program I get Program exited with code #1 after 0.00 seconds and no output.
Can I use Anaconda on TextMate
1.2
0
0
622
28,355,169
2015-02-05T22:20:00.000
1
0
0
1
python,debugging,gdb,segmentation-fault,strace
60,442,126
1
false
0
0
For those who would have the same problem as I had and find this page: My CherryPy server.py ran fine on my Win10 system on python3.8 but failed with segmentation fault on my Linux system which had python3.6.1. Switching to python3.8 on Linux solved my problem.
1
2
0
I am running a custom Python 2.7.3 application on CherryPy in Linux. When I used a service script in /etc/init.d/ to start or stop the service, I encountered a Segmentation Fault (SIGSEGV). Strangely, I did not receive a SIGSEGV if I ran the start or stop command manually from the shell, using "python /path/to/file.py --stop". The service script executes the same command. After some debugging, by chance, I discovered that my /tmp was mounted with a "noexec" option. I removed the "noexec" option and the application was able to start and stop via the service scripts without any segmentation faults. When I first encountered the issue, I ran strace and generated a core dump. Nothing from either tool gave me any indication that /tmp was the culprit. My question is this: How could I have used strace or gdb to help me identify that "noexec" on /tmp was causing the segmentation faults? Here is some output from gdb when analyzing the core dump: (gdb) bt full #0 PyObject_Malloc (nbytes=4) at Objects/obmalloc.c:788 bp = 0x7f6b0fd1c6e800 \Address 0x7f6b0fd1c6e800 out of bounds\ pool = 0x7f6b0fd1c000 next = \value optimized out\ size = 0 #1 0x00007f6b0f7fd8e6 in _PyUnicode_New (length=1) at Objects/unicodeobject.c:345 new_size = 4 unicode = 0x3873480 #2 0x00007f6b0f7fdd4e in PyUnicodeUCS2_FromUnicode (u=0x38367cc, size=) at Objects/unicodeobject.c:461 unicode = \value optimized out\ (There is a lot more output, this is just the first few lines) Here is some output from strace on the failure: 3046 open("/usr/local/python2.7/lib/python2.7/site-packages/oauthlib/common.py", O_RDONLY) = 9 3046 fstat(9, {st_mode=S_IFREG|0644, st_size=13310, ...}) = 0 3046 open("/usr/local/python2.7/lib/python2.7/site-packages/oauthlib/common.pyc", O_RDONLY) = 10 3046 fstat(10, {st_mode=S_IFREG|0644, st_size=16043, ...}) = 0 3046 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fbc9ff9d000 3046 read(10, "\3\363\r\n}\321\322Tc\0\0\0\0\0\0\0\0\5\0\0\0@@\2\0sd\2\0\0d\0"..., 4096) = 4096 3046 fstat(10, {st_mode=S_IFREG|0644, st_size=16043, ...}) = 0 3046 read(10, "\0\0\0C@\2\0s\330\0\0\0t\0\0|\0\0t\1\0\203\2\0s\36\0t\0\0|\0"..., 8192) = 8192 3046 read(10, "thon2.7/site-packages/oauthlib/c"..., 4096) = 3755 3046 read(10, "", 4096) = 0 3046 close(10) = 0 3046 munmap(0x7fbc9ff9d000, 4096) = 0 3046 --- SIGSEGV (Segmentation fault) @ 0 (0) --- After fixing the problem, here's a snippet from strace, from the same point where it tries to load oauthlib/common.pyc - notice that the only difference appears to be a brk() before munmap(): 3416 open("/usr/local/python2.7/lib/python2.7/site-packages/oauthlib/common.pyc", O_RDONLY) = 10 3416 fstat(10, {st_mode=S_IFREG|0644, st_size=16043, ...}) = 0 3416 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f5791f2c000 3416 read(10, "\3\363\r\n}\321\322Tc\0\0\0\0\0\0\0\0\5\0\0\0@@\2\0sd\2\0\0d\0"..., 4096) = 4096 3416 fstat(10, {st_mode=S_IFREG|0644, st_size=16043, ...}) = 0 3416 read(10, "\0\0\0C@\2\0s\330\0\0\0t\0\0|\0\0t\1\0\203\2\0s\36\0t\0\0|\0"..., 8192) = 8192 3416 read(10, "thon2.7/site-packages/oauthlib/c"..., 4096) = 3755 3416 read(10, "", 4096) = 0 3416 brk(0x372f000) = 0x372f000 3416 close(10) = 0 3416 munmap(0x7f5791f2c000, 4096) = 0 3416 close(9) = 0 What information can help me point the blame at /tmp's mount options?
Why did I get a Segmentation Fault in Python when /tmp is mounted with noexec?
0.197375
0
0
636
28,358,379
2015-02-06T04:03:00.000
3
0
1
0
python,image,image-processing,python-imaging-library,mask
63,671,324
2
false
0
0
You can use the PIL library to mask the images. Add an alpha value to img2 first: you can't just paste it over img1 as-is, because otherwise you won't see what is underneath. img2.putalpha(128) # 0 would be completely transparent; 255 keeps the image fully opaque. Then you can combine both images: img1.paste(im=img2, box=(0, 0), mask=img2)
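A fuller sketch of the same approach; the file names are placeholders:

```python
# Paste img2 over img1 semi-transparently by giving img2 an alpha channel.
from PIL import Image

img1 = Image.open("road.png").convert("RGBA")
img2 = Image.open("mask.png").convert("RGBA")

img2.putalpha(128)                       # 0 = invisible, 255 = fully opaque
img1.paste(im=img2, box=(0, 0), mask=img2)
img1.save("combined.png")
```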
1
2
1
I have some traffic camera images, and I want to extract only the pixels on the road. I have used remote sensing software before where one could specify an operation like img1 * img2 = img3 where img1 is the original image and img2 is a straight black-and-white mask. Essentially, the white parts of the image would evaluate to img1 * 1 = img3 and the black parts would evaluate to img1 * 0 = img3 And so one could take a slice of the image and let all of the non-important areas go to black. Is there a way to do this using PIL? I can't find anything similar to image algebra like I'm used to seeing. I have experimented with the blend function but that just fades them together. I've read up a bit on numpy and it seems like it might be capable of it but I'd like to know for sure that there is no straightforward way of doing it in PIL before I go diving in. Thank you.
Is it possible to mask an image in Python Imaging Library (PIL)?
0.291313
0
0
10,471
28,359,414
2015-02-06T05:53:00.000
1
0
0
0
python,django,postgresql
28,382,483
1
true
1
0
If you are using Linux you just have to add that domain name in /etc/hosts and access it like it is a real domain name. Another solution is to make that domain name to point to 127.0.0.1 while you don't push the changes to production. I'd go with the first idea though.
1
0
0
I have successfully implemented django-tenant-schema in my project. It also creates separate schema for each user after they got registered.Suppose if a customer named 'customer1' is successfully logged in, then he will redirect to "customer1.domainname.com".So please suggest me a solution to test if this is working in my local system ahead of being put it in the production environment. Thanks in advance...
How to proceed after Implementing django tenant schemas
1.2
0
0
594
28,365,304
2015-02-06T11:58:00.000
11
0
1
0
python,declaration,pep8
28,365,397
1
true
0
0
Generally, there is no preferred order. Depending on the program, an order can be needed: You can decorate classes with functions. Then the decorator function must be defined before the class. OTOH, you can decorate functions with classes. Then the decorator class must be defined before the function. You can have classes be assigned class attributes which are determined by calling a function. Again, this function must be defined before the class.
1
10
0
Is there any preferred order when declaring multiple functions and classes in the same python file? Should functions or classes be declared first? What are the best practices? PEP8 does not seems to give any recommendation
Python declaration order: classes or functions first?
1.2
0
0
4,869
28,366,467
2015-02-06T13:04:00.000
0
0
1
0
python,python-2.7,pyscripter
39,312,729
2
false
0
0
Had this problem myself. Turns out for me it was an issue with 32/64 version mismatch. I had installed a 32 bit version of Python on Windows 8 (64 bit). I then tried installing 64 bit Pyscripter and got this error message. Uninstalled both python and pyscripter, and made sure I installed 64 bit version of python and then pyscripter, worked first time!
1
1
0
I have already installed Python. But when I open PyScripter. I am getting error. It says: "Python could not be properly initialized" I'm using Windows 7 & (desktop). I downloaded Python 2.7.3 Windows Installer from python.org. should I download another version of Python.
Python could not be properly initialized
0
0
0
5,293
28,368,533
2015-02-06T14:58:00.000
2
0
0
1
macos,python-2.7,icons,xattr
39,163,917
1
false
0
0
Say we have an icon.icns file: Read the com.apple.ResourceFork extended attribute from the icon file. Set the com.apple.FinderInfo extended attribute with the folder-icon flag. Create an Icon file (name: Icon\r) inside the target folder. Set the com.apple.FinderInfo & com.apple.ResourceFork extended attributes for the icon file (name: Icon\r). Hide the icon file (name: Icon\r). We can use the stat and xattr modules to do this.
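A very rough sketch of those steps (macOS only, needs the xattr package); the paths are placeholders and the FinderInfo byte layout is an assumption worth verifying:

```python
# Copy the icon's resource fork onto a hidden "Icon\r" file inside the folder,
# then flag the folder itself as having a custom icon via FinderInfo.
import os
import struct
import xattr

SOURCE = "file_that_already_carries_the_icon"  # has com.apple.ResourceFork
FOLDER = "target_folder"

rsrc = xattr.getxattr(SOURCE, "com.apple.ResourceFork")

icon_file = os.path.join(FOLDER, "Icon\r")     # the special Icon\r file
open(icon_file, "wb").close()
xattr.setxattr(icon_file, "com.apple.ResourceFork", rsrc)

# Assumption: FinderInfo is 32 bytes; bytes 8-9 hold Finder flags, and
# 0x0400 (kHasCustomIcon) marks the custom icon.
finder_info = struct.pack(">8sH22s", b"\x00" * 8, 0x0400, b"\x00" * 22)
xattr.setxattr(FOLDER, "com.apple.FinderInfo", finder_info)
```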
1
3
0
I am trying to write the Code in Python to Change the Icon of a Mac OS X folder using just the Python Script (Without XCODE or any other API). The procedure is that I have a icon.icns file , I need to change the folder icon to the icon.icns file using the python script.
How to change icon of a MAC OS folder using Python Script and Terminal commands?
0.379949
0
0
1,414
28,376,186
2015-02-06T22:50:00.000
5
0
1
0
python,python-3.x
28,376,254
2
false
0
0
Convert the strings to set objects. set(str1).issubset(set(str2)) You can also use this alternative syntax: set(str1) <= set(str2)
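For instance:

```python
# str1's letters must all appear in str2; order and repetition don't matter
str1, str2 = "cab", "abcde"
print(set(str1).issubset(set(str2)))  # True
print(set(str1) <= set(str2))         # True
print(set("xyz") <= set(str2))        # False
```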
1
0
0
This seems pretty straight-forward but I'm stuck. What I want is to see if a string (str1) contains all the letters that are in a second string (str2). If str1 contains all the letters (in any order, any number of times) then return True. If not, return false. [Note] Str2 does not necessarily have to have all the letters that str1 has.
Python: Compare 2 strings and see if they contain the same letters
0.462117
0
0
1,670
28,379,325
2015-02-07T06:30:00.000
1
1
0
1
python-3.x,python-import,python-module
29,157,556
1
true
0
0
You would be better off installing those modules locally. You can create packages of those modules (using pip or something similar) and then distribute them to your local box. As far as I know, there is nothing similar to Maven dependencies for Python.
1
0
0
How I can use some 'extra' python module which is not located localy but on a remote server? Somthing like using maven dependencies with Java
How to use remote python modules
1.2
0
1
32
28,379,373
2015-02-07T06:37:00.000
0
0
1
0
python,function,variables,arguments
28,380,390
2
false
0
0
As @Alex Martelli pointed out, you can... but should you? That's the more relevant question, IMO. I understand the appeal of what you're asking. It seems like it should just be good form because otherwise you'd have to pass the variables everywhere, which of course seems wasteful. I won't presume to know how far along you are in your Python-Fu (I can only guess), but back when I was more of a beginner there were times when I had the same question you do now. Using global felt ugly (everyone said so), yet I had this irresistible urge to do exactly what you're asking. In the end, I went with passing variables in to functions. Why? Because it's a code smell. You may not see it, but there are good reasons to avoid it in the long run. However, there is a name for what you're asking: Classes What you need are instance variables. Technically, it might just be a step away from your current situation. So, you might be wondering: Why such a fuss about it? After all, instance variables are widely accepted and global variables seem similar in nature. So what's the big problem with using global variables in regular functions? The problem is mostly conceptual. Classes are designed for grouping related behaviour (functions) and state (player position variables) and have a wide array of functionality that goes along with it (which wouldn't otherwise be possible with only functions). Classes are also semantically significant, in that they signal a clear intention in your mind and to future readers about the way the program is organized and the way it operates. I won't go into much further detail other than to say this: You should probably reconsider your strategy because the road it leads to inevitably is this: 1) Increased coupling in your codebase 2) Undecipherable flow of execution 3) Debugging bonanza 4) Spaghetti code. Right now, it may seem like an exaggeration. The point is you'd greatly benefit to learn how to organize your program in a way that makes sense, conceptually, instead of using what currently requires the least amount of code or what seems intuitive to you right now. You may be working on a small application for all I know and with one only 1 module, so it might resemble the use of a single class. The worst thing that can happen is you'll adopt bad coding practices. In the long run, this might hurt you.
1
1
0
In python is there any way at all to get a function to use a variable and return it without passing it in as an argument?
Use a Variable in a Function Without Passing as an Argument
0
0
0
6,166
28,384,481
2015-02-07T16:26:00.000
1
0
1
0
python,arrays,numpy
53,429,718
6
false
0
0
Just do the following. import numpy as np arr = np.zeros(10) arr[:3] = 5
1
1
1
I'm having trouble figuring out how to create a 10x1 numpy array with the number 5 in the first 3 elements and the other 7 elements with the number 0. Any thoughts on how to do this efficiently?
Create a numpy array (10x1) with zeros and fives
0.033321
0
0
5,701
28,388,896
2015-02-07T23:46:00.000
1
0
1
1
python
28,388,912
1
false
0
0
Okay, first point is that you can't share memory among machines unless you're on a very specialized architecture. (Massively parallel machines, Beowulf clusters, and so on.) If you mean to share code, then package your code into a real Python package and distribute it with a tool like Chef, Puppet or Docker. If you mean to share data, use a database of some sort that all your workers can access. I'm fond of MongoDB because it's easy to match to an application, but there are a million other databases. A lot of people use mysql or postgresql.
1
2
0
My python process run on different machines. It consists of a manager and many workers. The worker in each machine are multi threaded and needs to update some data such as its status to the manager residing on another machine. I didn't want to use mysql because many other processes are already executing many queries on it and it will reach its max_connection I have two methods in mind: For each worker thread, write the data to a local text file. A separate bash script will run a while-loop, check for file change and scp this file to all other machines. For each worker thread, write the data to share memory and have it replicated to all other machines. I am not sure how to do this. In python how can i write to shared memory? How can i replicated shared memories?
python - How to share files between many computers
0.197375
0
0
516
28,388,976
2015-02-07T23:59:00.000
2
0
1
0
python,six
28,440,063
1
true
0
0
On some computers where I don't have pip installed I usually do the following: Extract the downloaded file. In the command line, in the directory where you extracted it, run python setup.py install. The module should now be installed. Run Python, and in the interactive interpreter do import module_name. If you get no errors, the installation was a success.
1
2
0
I am running Python 2.7.9 on a Windows 8 machine. I've programmed for a long time (since the 60s), but I'm having trouble figuring out how to install the Six Module. I need a step-by-step set of instructions. Either help here or a suggested website would be helpful to this old man. Thanks!
Installing Python Module Six
1.2
0
0
8,519
28,389,501
2015-02-08T01:16:00.000
4
0
1
0
python,numpy,jupyter-notebook,ipython
56,665,018
3
false
0
0
I tested learning the same small neural net (1) under Jupyter and (2) running Python under the Anaconda prompt (either with exec(open('foo.py').read()) under python, or with python foo.py directly under the Anaconda prompt). It takes 107.4 sec or 108.2 sec under the Anaconda prompt, and 105.7 sec under Jupyter. So no, there is no significant difference, and the minor difference is in favor of Jupyter.
3
13
0
I am developing a program for simulation (kind of like numerical solver). I am developing it in an ipython notebook. I am wondering if the speed of the code running in the notebook is the same as the speed of the code running from terminal ? Would browser memory or overhead from notebook and stuff like that makes code run slower in notebook compared to native run from the terminal?
Does running IPython/Jupyter Notebook affect the speed of the program?
0.26052
0
0
22,901
28,389,501
2015-02-08T01:16:00.000
15
0
1
0
python,numpy,jupyter-notebook,ipython
28,389,651
3
true
0
0
One of the things that might slow things down a lot would be having a lot of print statements in your simulation. If you run the kernel server and browser on the same machine, and assuming your simulation would have used all the cores of your computer, then yes, using the notebook will slow things down. But no more than browsing Facebook or YouTube while the simulation is running. Most of the overhead of using IPython actually comes when you press Shift-Enter: in a pure Python prompt the REPL might react in 100 ms, and in IPython in 150 ms or so. But if you are concerned about performance, the overhead of IPython is not the first thing you should be concerned about.
3
13
0
I am developing a program for simulation (kind of like numerical solver). I am developing it in an ipython notebook. I am wondering if the speed of the code running in the notebook is the same as the speed of the code running from terminal ? Would browser memory or overhead from notebook and stuff like that makes code run slower in notebook compared to native run from the terminal?
Does running IPython/Jupyter Notebook affect the speed of the program?
1.2
0
0
22,901
28,389,501
2015-02-08T01:16:00.000
9
0
1
0
python,numpy,jupyter-notebook,ipython
48,817,440
3
false
0
0
I have found that Jupyter is significantly slower than Ipython, whether or not many print statements are used. Nearly all functions suffer decreased performance, but especially if you are analyzing large dataframes or performing complex calculations, I would stick with Ipython.
3
13
0
I am developing a program for simulation (kind of like numerical solver). I am developing it in an ipython notebook. I am wondering if the speed of the code running in the notebook is the same as the speed of the code running from terminal ? Would browser memory or overhead from notebook and stuff like that makes code run slower in notebook compared to native run from the terminal?
Does running IPython/Jupyter Notebook affect the speed of the program?
1
0
0
22,901
28,390,253
2015-02-08T03:29:00.000
1
0
0
1
python,google-app-engine,web
28,397,610
1
true
1
0
The best way is to ping the server while the user is online. Using other methods such as the Channel API with GAE proves to be unreliable since you are not constantly sending a ping message but rather just sending a disconnect message. If the browser crashes, no disconnect message is sent.
1
0
0
I want a way for users to open this webpage, and whenever they are on that page, it updates the server that they are on the page. It should only work when the user is actually looking at the webpage (not inactive, like from switchings tabs). One way to do this which I have implemented is to keep pinging the server saying that I am alive. This however causes a lot of load on the server and client side. I am using Google App Engine and webapp2, and was wondering if anyone knows a better way to do this.
Webpage ping when active
1.2
0
0
47
28,390,961
2015-02-08T05:38:00.000
98
0
1
0
python,virtualenv,pycharm,anaconda,conda
31,937,300
5
true
0
0
I know it's late, but I thought it would be nice to clarify things: PyCharm and Conda and pip work well together. The short answer Just manage Conda from the command line. PyCharm will automatically notice changes once they happen, just like it does with pip. The long answer Create a new Conda environment: conda create --name foo pandas bokeh This environment lives under conda_root/envs/foo. Your python interpreter is conda_root/envs/foo/bin/pythonX.X and all your site-packages are in conda_root/envs/foo/lib/pythonX.X/site-packages. This is the same directory structure as in a pip virtual environment. PyCharm sees no difference. Now to activate your new environment from PyCharm go to file > settings > project > interpreter, select Add local in the project interpreter field (the little gear wheel) and hunt down your python interpreter. Congratulations! You now have a Conda environment with pandas and bokeh! Now install more packages: conda install scikit-learn OK... go back to your interpreter in settings. Magically, PyCharm now sees scikit-learn! And the reverse is also true, i.e. when you pip install another package in PyCharm, Conda will automatically notice. Say you've installed requests. Now list the Conda packages in your current environment: conda list The list now includes requests and Conda has correctly detected (3rd column) that it was installed with pip. Conclusion This is definitely good news for people like myself who are trying to get away from the pip/virtualenv installation problems when packages are not pure python. NB: I run PyCharm pro edition 4.5.3 on Linux. For Windows users, replace in command line with in the GUI (and forward slashes with backslashes). There's no reason it shouldn't work for you too. EDIT: PyCharm5 is out with Conda support! In the community edition too.
1
78
0
I've got Pycharm 4 running on my Linux (Ubuntu 14.04) machine. In addition to the system python, I've also got Anaconda installed. Getting the two to play nicely together seems to be a bit of a problem... PyCharm provides some interesting integration for virtualenvs and pip, but the Anaconda Python distribution seems to prefer using its own conda tool for both activities. Is there a relatively simple/painless way to be able to use conda in conjunction with PyCharm? Not just as an alternative interpreter i.e. point PyCharm at the Anaconda Python binary for a project interpreter, but to be able to create, source/activate and deactivate virtual envs, add/remove packages in those virtual envs, etc. Or am I going to have to choose between using Anaconda (and having a more recent and up-to-date python than may come with the system), and being able to use PyCharm's features to their fullest extent?
Using (Ana)conda within PyCharm
1.2
0
0
132,021
28,399,120
2015-02-08T20:39:00.000
1
0
0
0
python,django
28,399,223
2
true
1
0
No, code speed is not affected by the size of your modules. Additional imports only affect the memory footprint (a little more memory is needed to hold the extra code objects) and startup speed (more files are loaded from disk when your Django server starts). However, this doesn't really affect code running speeds; Python does not have to do extra work to run your code.
2
0
0
In my Django web app, I have pretty much one large file that contains all my views. This has a ton of imported python libraries that are only used for certain views. Does this slow my code? Like in python does importing things like python natural language toolkit (nlkt) and threading libraries slow down the code when its not needed? I know its not great for a maintainability/style standpoint to have one big file like this, but I am asking purely from a performance standpoint.
In a Django web application, would large files or many unnecessary import statements slow down my server?
1.2
0
0
62
28,399,120
2015-02-08T20:39:00.000
0
0
0
0
python,django
28,399,156
2
false
1
0
Views are loaded only once, at the moment your code starts.
2
0
0
In my Django web app, I have pretty much one large file that contains all my views. This has a ton of imported python libraries that are only used for certain views. Does this slow my code? Like in python does importing things like python natural language toolkit (nlkt) and threading libraries slow down the code when its not needed? I know its not great for a maintainability/style standpoint to have one big file like this, but I am asking purely from a performance standpoint.
In a Django web application, would large files or many unnecessary import statements slow down my server?
0
0
0
62
28,400,064
2015-02-08T22:23:00.000
0
0
0
0
python,sqlite
28,400,155
1
true
0
0
In terms of logical correctness, you should commit every time a set of one or more queries that are supposed to execute atomically (i.e., all of them, or else none of them, execute) is finished. There is no connection between this logical correctness and any given amount of time between commits. In your vaguely-sketched use case, I guess I'd be committing every time I'm done with a whole web page -- what I want to avoid is likely the committing of a web page that's "partially done" but not completely so -- whether that means 100 msec, or 50, or 200 -- why should that duration matter?
1
0
0
I m building a web crawler and I wanted to save links in a database with informations like type, size, etc. and actually I don't know when I should commit the database (how often) in other terms: is it a problem if I commit the database every 0.1 second?
Python sqlite3 correct use of commit
1.2
1
0
67
28,400,972
2015-02-09T00:13:00.000
6
0
1
0
python,sympy
28,444,006
3
true
0
0
Note that by default in SymPy the base of the natural logarithm is E (capital E). That is, exp(x) is the same as E**x.
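A quick check of that point:

```python
# diff keeps the symbolic exponential; E**x and exp(x) are the same object
from sympy import E, diff, exp, symbols

x = symbols('x')
print(diff(exp(4*x), x))   # 4*exp(4*x)
print(exp(x) == E**x)      # True
```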
1
5
0
I want to print the derivative of e**4*x. I want Python to give me 4*e**4x. Instead, it's giving me 4 times THE VALUE OF E. HOw can I get sympy to display e as the letter constant. Thanks
How do I use a constant LETTER in sympy?
1.2
0
0
7,557
28,404,878
2015-02-09T07:35:00.000
7
0
1
1
python-2.7,windows-server-2008,google-compute-engine
30,692,067
4
false
0
0
Install Python EXCEPT "pip". Run the Python install MSI again and select "change". Select "pip" and install pip. That should work... I think it is an ordering problem inside the MSI package: the package seems to try to install pip before installing python.exe, so pip cannot be installed.
2
7
0
I fired up a new Windows google compute engine instance. It's running Windows 2008 R2, service pack 1. I download and try running the Python .msi installer for version 2.7.9, and it fails with this error: There is a problem with this Windows Installer package. A program required for this install to complete could not be run. Contact your support personnel or package vendor. I see this error for both the 64-bit and the 32-bit installer. Has anyone else seen it or know of a work-around?
Fail to install Python 2.7.9 on a Windows google compute engine instance
1
0
0
7,533
28,404,878
2015-02-09T07:35:00.000
0
0
1
1
python-2.7,windows-server-2008,google-compute-engine
31,103,137
4
false
0
0
It seems to be a dependency issue, please try to install "Microsoft Visual C++ 2008 SP1 Redistributable Package (x64)"
2
7
0
I fired up a new Windows google compute engine instance. It's running Windows 2008 R2, service pack 1. I download and try running the Python .msi installer for version 2.7.9, and it fails with this error: There is a problem with this Windows Installer package. A program required for this install to complete could not be run. Contact your support personnel or package vendor. I see this error for both the 64-bit and the 32-bit installer. Has anyone else seen it or know of a work-around?
Fail to install Python 2.7.9 on a Windows google compute engine instance
0
0
0
7,533
28,406,420
2015-02-09T09:27:00.000
2
1
1
0
python,python-2.7,python-3.x
28,406,587
3
false
0
0
Put C:\Netra_Step_2015\Tests\SVTestcases\Common\shared in your PYTHONPATH environment variable.
2
0
0
When I print sys.path in my code I get the following as output: ['C:\Netra_Step_2015\Tests\SVTestcases', 'C:\Netra_Step_2015\Tests\SVTestcases\TC-Regression', 'C:\Python27\python27.zip', 'C:\Python27\DLLs', 'C:\Python27\lib', etc.] Now, when I write "import testCaseBase as TCB" where testcaseBase.py is in this path: C:\Netra_Step_2015\Tests\SVTestcases\Common\shared I get an error: "ImportError: No module named testCaseBase" My code is in C:\Netra_Step_2015\Tests\SVTestcases\TC-Regression\regression.py. My code goes ahead with compilation, but testcaseBase.py which is residing in a parallel directory fails to compile. What might be the reason?
ImportError: Module not found but sys.path is showing the file resides under the path
0.132549
0
0
6,665
28,406,420
2015-02-09T09:27:00.000
0
1
1
0
python,python-2.7,python-3.x
52,959,701
3
false
0
0
Please don't use ~/ in the path; it does not work. Use the full path.
2
0
0
When I print sys.path in my code I get the following as output: ['C:\Netra_Step_2015\Tests\SVTestcases', 'C:\Netra_Step_2015\Tests\SVTestcases\TC-Regression', 'C:\Python27\python27.zip', 'C:\Python27\DLLs', 'C:\Python27\lib', etc.] Now, when I write "import testCaseBase as TCB" where testcaseBase.py is in this path: C:\Netra_Step_2015\Tests\SVTestcases\Common\shared I get an error: "ImportError: No module named testCaseBase" My code is in C:\Netra_Step_2015\Tests\SVTestcases\TC-Regression\regression.py. My code goes ahead with compilation, but testcaseBase.py which is residing in a parallel directory fails to compile. What might be the reason?
ImportError: Module not found but sys.path is showing the file resides under the path
0
0
0
6,665
28,406,798
2015-02-09T09:47:00.000
3
0
0
0
python,ssl,packet,scapy
28,407,181
2
false
0
0
You can neither assume that all traffic using port 443 is SSL and also that SSL can only be found on port 443. To detect SSL traffic you might try to look at the first bytes, i.e. a data stream starting with \x16\x03 followed by [\x00-\x03] might be a ClientHello for SSL 3.0 ... TLS 1.2. But of course it might also be some other protocol which just uses the same initial byte sequence.
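A hedged sketch of that first-bytes heuristic with scapy; it will misfire on any protocol that happens to share the prefix:

```python
# Flag TCP payloads whose first bytes look like a TLS record header:
# \x16 = handshake, \x03 \x00..\x03 = SSL 3.0 .. TLS 1.2.
from scapy.all import Raw, TCP, sniff

def looks_like_tls(pkt):
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        data = bytes(pkt[Raw].load)
        return (len(data) >= 3 and data[0] == 0x16
                and data[1] == 0x03 and data[2] <= 0x03)
    return False

sniff(filter="tcp", lfilter=looks_like_tls,
      prn=lambda p: p.summary(), count=10)
```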
1
1
0
How can I recognize SSL packets when I sniff in scapy? I know that SSL packets are going through port 443, can I assume that all the TCP packets that go through port 443 are SSL packets?
Scapy sniffing SSL
0.291313
0
1
6,880
28,411,082
2015-02-09T13:38:00.000
0
0
0
1
python,subprocess,terminate
28,411,277
2
false
0
0
So, since this is not a question involving code, you get a general answer, not involving code. The solution to your problem requires you to keep a protocol of progress. This protocol must survive the lifetime of your test processes. So, there must be some place where you can record the state of your tests, and this place must ensure persistence. What might that be? Right, the file system, for instance. The simplest place to keep track of your tests is the file system. What follows is one very simple example of how you might want to implement this: For each test case that succeeded, write a file to the current working directory, indicating success for this test. Then, before your test process invokes a certain test, make it check for the existence of such a file and skip the test if the file is found. Let's assume your input files have different basenames (e.g. A, B, ...). You can then use a file called B.success in the current working directory to indicate that B does not need to be tested anymore.
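A minimal sketch of that marker-file idea; the script name and input list are placeholders:

```python
# Skip inputs that already have a ".success" marker; write the marker only
# after subprocess.check_call returns without raising.
import os
import subprocess

for inp in ["A.txt", "B.txt", "C.txt"]:
    marker = os.path.splitext(inp)[0] + ".success"
    if os.path.exists(marker):
        continue                      # already passed on an earlier run
    subprocess.check_call(["python", "test_script.py", inp])
    open(marker, "w").close()         # only reached if the test succeeded
```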
1
0
0
I am trying to automatize some test cases using subprocess_check.call() by calling another python script with an input and output files. I have approx. 10 input files. When I started to test, for example, first and second files were tested successfully but in the third file I got an error and the script was terminated. What I want is, how can I run my script from where it was terminated ? I do not want to start from the beginning. I just wanted to continue my test from the case where I got an error. After correcting the input file, starting from this file I want to run the script until the end. Any ideas?
Python subprocess.check_call terminate and start from where it terminated
0
0
0
58
28,413,567
2015-02-09T15:44:00.000
1
1
1
0
python,robotframework
41,825,897
2
false
1
0
Try adding the following path to the environment variable as well: "C:\Python27\Lib\site-packages", since this path contains all the third-party modules installed on your PC. Also verify that the robotframework library is present in this folder.
2
1
0
I'm trying to install Robot Framework, but it keeps giving me an error message during setup that "No Python installation found in the registry." I've tried running the installer as administrator, I've made sure that Python is installed (I've tried both 2.7.2 and 2.7.9), and both C:\Python27 and C:\Python27\Scripts are added to the PATH (with and without slashes on the end, if that matters). I have no idea why it can't find Python. What do I need to do?
Robot Framework can't find Python
0.099668
0
0
1,818
28,413,567
2015-02-09T15:44:00.000
0
1
1
0
python,robotframework
28,443,946
2
false
1
0
I faced the same issue. Install a different bit version of Robot Framework. In my case, I first tried to install the 64-bit version but it said "No Python installation found in the registry." Then I tried to install the 32-bit version of Robot Framework and it worked. So there is nothing wrong with your Python version.
2
1
0
I'm trying to install Robot Framework, but it keeps giving me an error message during setup that "No Python installation found in the registry." I've tried running the installer as administrator, I've made sure that Python is installed (I've tried both 2.7.2 and 2.7.9), and both C:\Python27 and C:\Python27\Scripts are added to the PATH (with and without slashes on the end, if that matters). I have no idea why it can't find Python. What do I need to do?
Robot Framework can't find Python
0
0
0
1,818
28,415,460
2015-02-09T17:17:00.000
1
0
1
1
python,windows,ubuntu,exe
28,415,565
1
true
0
0
cx_freeze is another option for freezing cross-platform.
1
0
0
I have been trying to package my python scripts into .exe setup for Windows. Is that any way I can do the same while running Ubuntu?
Packaging python scripts to .exe on Ubuntu
1.2
0
0
45
28,416,182
2015-02-09T17:55:00.000
3
0
1
1
python,powershell,path
28,416,220
2
false
0
0
Python 2.6.1 is already in your path as demonstrated by item number 2 in your list. On number 3, you're adding Python 2.7 to your path after Python 2.6.1's entry. You need to remove Python 2.6.1 from your environment variable, or at a minimum, set it so that 2.7 is listed first.
1
5
0
I have two Python versions on my machine (Windows Vista), 2.6 (located in C/Program files) and 2.7 (located in C/). 1- I open PowerShell 2- I type python, and it calls python 2.6.1. 3- I want to change the path for Python 2.7, so I type: [Environment]::SetEnvironmentVariable("Path", "$env:Path;C:\Python27", "User") 4- and then when I run python again it still calls the version 2.6. and there is no way I can change it. I also tried to restart the computer after changing the path, with no success. Any suggestions?
Setting path for Python in PowerShell?
0.291313
0
0
15,527
28,417,806
2015-02-09T19:34:00.000
1
0
0
0
python,cassandra,datastax-enterprise
28,419,293
2
true
0
0
The details depend on your file format and C* data model but it might look something like this: Read the file from s3 into an RDD val rdd = sc.textFile("s3n://mybucket/path/filename.txt.gz") Manipulate the rdd Write the rdd to a cassandra table: rdd.saveToCassandra("test", "kv", SomeColumns("key", "value"))
1
0
1
i Launch cluster spark cassandra with datastax dse in aws cloud. So my dataset storage in S3. But i don't know how transfer data from S3 to my cluster cassandra. Please help me
How import dataset from S3 to cassandra?
1.2
1
0
1,657
28,418,823
2015-02-09T20:34:00.000
0
0
0
0
python,theano,softmax
34,094,065
1
true
0
0
Solved: I had to use T.nnet.categorical_crossentropy since my target variable is an integer vector.
1
1
1
I'm implementing a DNN with Theano. At the last layer of DNN, I'm using a softmax as a nonlinear function from theano.tensor.nnet.softmax As a lost function i'm using cross entropy from T.nnet.binary_crossentropy But I get a strange error: "The following error happened while compiling the node', GpuDnnSoftmaxGrad{tensor_format='bc01' ..." I'm a newbie with theano and can't figure out what's wrong with this model. Your help is appreciated PS: my guess is it is somehow related to the fact that softmax takes a 2D tensor and returns a 2D tensor. PS2:I'm using the bleeding edge Theano (just cloned) my CUDA version is old it is 4.2 BUT I'm almost sure that that's not the problem since I'm working without error with other DNN tools written based on Theano. I'm using pylearn2 to accelerate and that's not the problem either since I already used it successfully with the current Theano and CUDA in another DNN. The error happens at this line: train= theano.function([idx], train_loss, givens=givens, updates=updates) The full error message is: cmodule.py", line 293, in dlimport rval = __import__(module_name, {}, {}, [module_name]) RuntimeError: ('The following error happened while compiling the node', GpuDnnSoftmaxGrad{tensor_format='bc01', mode='channel', algo='accurate'}(GpuContiguous.0, GpuContiguous.0), '\n', 'could not create cuDNN handle: The handle was not initialized(Is your driver recent enought?).', "[GpuDnnSoftmaxGrad{tensor_format='bc01', mode='channel', algo='accurate'}(<CudaNdarrayType(float32, (False, False, True, True))>, <CudaNdarrayType(float32, (False, False, True, True))>)]") The Cross entropy funcion I'm using is defined as: error = T.mean(T.nnet.binary_crossentropy(input, target_y) where input is the output of the softmax layer and target_y is the labels.
getting error with softmax and cross entropy in theano
1.2
0
0
1,358
28,419,700
2015-02-09T21:26:00.000
2
0
1
0
python,import,module
28,419,726
1
true
0
0
Should I always repeat my imports at the head of each module file? Yes. Every module needs to import what it needs to use. As two great minds noted in the comments, the actual loading of the module only takes place once. Multiple imports will reuse the already-loaded module, so it won't have any significant performance impact.
1
0
0
There is a main program importing a module with classes or something usefull that another submodule shall use too. For example: main.py: import datetime datetime.now() import mod mod.py: datetime.today() When importing 'mod' module python gives an error that 'datetime' is not defined. datetime.today() cant be executed. What should I do if I need to create a modular app in python instead of one-file apllication program? Should I always repeat my imports at the head of each module file ? Or can I make imported modules accessible from further imported modules?
make imported modules accessible for further imported modules
1.2
0
0
68
28,419,877
2015-02-09T21:37:00.000
52
0
0
0
python,pandas
57,704,035
3
false
0
0
Meanwhile, since 0.19.0, there is pandas.Series.is_monotonic_increasing, pandas.Series.is_monotonic_decreasing, and pandas.Series.is_monotonic.
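A quick illustration:

```python
# these are properties, not methods, on the column's Series
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 2, 5]})
print(df["a"].is_monotonic_increasing)  # True
print(df["a"].is_monotonic_decreasing)  # False
```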
1
33
1
Is there a way to test whether a dataframe is sorted by a given column that's not an index (i.e. is there an equivalent to is_monotonic() for non-index columns) without calling a sort all over again, and without converting a column into an index?
Check whether non-index column sorted in Pandas
1
0
0
17,253
28,422,520
2015-02-10T01:19:00.000
0
0
0
1
python,django,macos,pip
35,575,253
2
false
1
0
Try adding sudo. sudo pip install Django
2
0
0
I recently installed Python 3.4 on my Mac and now want to install Django using pip. I tried running pip install Django==1.7.4 from the command line and received the following error: Exception: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/basecommand.py", line 232, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/commands/install.py", line 347, in run root=options.root_path, File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/req/req_set.py", line 549, in install **kwargs File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/req/req_install.py", line 754, in install self.move_wheel_files(self.source_dir, root=root) File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/req/req_install.py", line 963, in move_wheel_files isolated=self.isolated, File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/wheel.py", line 234, in move_wheel_files clobber(source, lib_dir, True) File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/wheel.py", line 205, in clobber os.makedirs(destdir) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py", line 157, in makedirs mkdir(name, mode) OSError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/django' Obviously my path is pointing to the old version of Python that came preinstalled on my computer, but I don't know how to run the pip on the new version of Python. I am also worried that if I change my file path, it will mess up other programs on my computer. Is there a way to point to version 3.4 without changing the file path? If not how do I update my file path to 3.4?
Mac OSX Trouble Running pip commands
0
0
0
797