Column schema (name | dtype | value range):

    Q_Id                                | int64         | 337 to 49.3M
    CreationDate                        | stringlengths | 23 to 23
    Users Score                         | int64         | -42 to 1.15k
    Other                               | int64         | 0 to 1
    Python Basics and Environment       | int64         | 0 to 1
    System Administration and DevOps    | int64         | 0 to 1
    Tags                                | stringlengths | 6 to 105
    A_Id                                | int64         | 518 to 72.5M
    AnswerCount                         | int64         | 1 to 64
    is_accepted                         | bool          | 2 classes
    Web Development                     | int64         | 0 to 1
    GUI and Desktop Applications        | int64         | 0 to 1
    Answer                              | stringlengths | 6 to 11.6k
    Available Count                     | int64         | 1 to 31
    Q_Score                             | int64         | 0 to 6.79k
    Data Science and Machine Learning   | int64         | 0 to 1
    Question                            | stringlengths | 15 to 29k
    Title                               | stringlengths | 11 to 150
    Score                               | float64       | -1 to 1.2
    Database and SQL                    | int64         | 0 to 1
    Networking and APIs                 | int64         | 0 to 1
    ViewCount                           | int64         | 8 to 6.81M
22,020,129
2014-02-25T16:24:00.000
4
0
1
1
ipython-notebook
22,020,213
2
true
0
0
One minute later and it occurs to me that the front page might not support it even though the server does. Sure enough, http://localhost:5000/localfile/PythonReference.ipynb?create=1 renders the local notebook.
1
5
0
I've got nbviewer installed and working. I see it has a --localfiles option that takes a folder name. It says: "Serving local notebooks in /home/gb/S14/inclass, this can be a security risk". But I can't figure out the URL format to get it to look for the file there. The code adds a handler for /localfile/(.*), but that doesn't seem to get triggered. Anyone know how to format the name so as to trigger loading a local file?
How to make nbviewer display local files?
1.2
0
0
4,226
22,022,938
2014-02-25T18:25:00.000
0
0
0
0
python,selenium
22,023,178
1
false
0
0
If it's an OS dialog, no, you can't manipulate it with Selenium; you'd need a library that provides hooks directly into the OS. To capture the request, you would either need to use a proxy to capture the traffic (and then another interface to interact with the proxy and inspect the request), or you might be able to inject some JS through Selenium that modifies the behavior of the button to return the link to you instead of navigating the browser to it.
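A minimal sketch of the JS-injection idea; the CSS selector here is hypothetical and depends on the actual page:

```python
# hypothetical selector; adjust to the real link on the page
link = driver.find_element_by_css_selector("a.download")
# neutralize the click handler and pull out the target URL instead of navigating
href = driver.execute_script(
    "arguments[0].onclick = null; return arguments[0].href;", link)
print(href)  # now fetch it directly, e.g. with urllib2, and save it to disk
```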
1
0
0
When you click on some of the links on this particular page the GET request gets initiated by javascript. In this case it's a file so when you click it webdriver.Firefox makes a dialog window appear that asks you whether you want to download the file or not. Is it possible to capture the GET request directly and save it to disk or otherwise automate the dialog window?
How to capture GET requests in Selenium initiated via JavaScript?
0
0
1
583
22,024,577
2014-02-25T19:45:00.000
0
0
1
0
python,numpy
22,024,672
1
false
0
0
The exit code of the python process should reveal the reason for the process exiting. In the event of an adverse condition, the exit code will be something other than 0. If you are running in a Bash shell or similar, you can run "echo $?" in your shell after running Python to see its exit status. If the exit status is indeed 0, try putting some print statements in your code to trace the execution of your program. In any case, you would do well to post your code for better feedback. Good luck!
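Not part of the original answer, but one concrete way to make a silent interpreter death visible is the faulthandler module (stdlib in Python 3.3+; a pip-installable backport exists for 2.x). A sketch with placeholder names:

```python
import faulthandler
faulthandler.enable()  # dumps a traceback if the interpreter dies hard (e.g. segfault in a C extension)

for path in image_paths:        # image_paths and process() are hypothetical
    print("processing", path)   # cheap trace of loop progress
    process(path)
```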
1
1
1
So I am trying to run various large images which get put into an array using numpy so that I can then do some calculations. The calculations get done per image, and the opening and closing of each image is done in a loop. I have reached a frustration point because I have no errors in the code (well, none to my knowledge, nor any that Python is complaining about), and as a matter of fact my code runs for one loop, and then it simply does not run for the second, third, or other loops. I get no errors! No memory error, no syntax error, no nothing. I have used Spyder and even IDLE, and it simply runs all the calculations sometimes only for one image, sometimes for two, then it just quits the loop (again WITH NO ERROR) as if it had completed running for all images (when it has only run for one or two images). I am assuming it's a memory error? - I mean it runs one loop, sometimes two, but never the rest. So I have attempted to clear the tracebacks using this: sys.exc_clear() sys.exc_traceback = sys.last_traceback = None. I have also even tried to delete each variable when I am done with it, i.e. del variable. However, nothing seems to fix it - any ideas of what could be wrong would be appreciated!
Large dataset - no error - but it won't run - python memory issue?
0
0
0
107
22,026,393
2014-02-25T21:17:00.000
5
0
1
0
python,python-2.7,file-io,dataframe,string-concatenation
22,026,711
3
false
0
0
Unless you are running into a performance issue, you can probably write to the file line by line. Python internally uses buffering and will likely give you a nice compromise between performance and memory efficiency. Python buffering is different from OS buffering and you can specify how you want things buffered by setting the buffering argument to open.
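A minimal sketch of the line-by-line approach, assuming rows is an iterable of mixed-type tuples; the buffer size is an arbitrary example:

```python
# third argument is the buffer size in bytes (Python 2's open() takes it positionally)
with open("out.tsv", "w", 1024 * 1024) as f:   # 1 MiB buffer; size is arbitrary
    for row in rows:  # rows is a hypothetical iterable of mixed-type tuples
        f.write("\t".join(str(field) for field in row) + "\n")
```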
1
5
1
I have a speed/efficiency related question about python: I need to write a large number of very large R dataframe-ish files, about 0.5-2 GB sizes. This is basically a large tab-separated table, where each line can contain floats, integers and strings. Normally, I would just put all my data in numpy dataframe and use np.savetxt to save it, but since there are different data types it can't really be put into one array. Therefore I have resorted to simply assembling the lines as strings manually, but this is a tad slow. So far I'm doing: 1) Assemble each line as a string 2) Concatenate all lines as single huge string 3) Write string to file I have several problems with this: 1) The large number of string-concatenations ends up taking a lot of time 2) I run of of RAM to keep strings in memory 3) ...which in turn leads to more separate file.write commands, which are very slow as well. So my question is: What is a good routine for this kind of problem? One that balances out speed vs memory-consumption for most efficient string-concatenation and writing to disk. ... or maybe this strategy is simply just bad and I should do something completely different? Thanks in advance!
Python: Fast and efficient way of writing large text file
0.321513
0
0
15,877
22,029,142
2014-02-26T00:21:00.000
2
0
0
0
python,scipy
22,044,916
1
true
0
0
Try scipy.odr. It lets you specify weights/errors in both the input and the response variable.
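A minimal sketch of a straight-line fit with scipy.odr; the data and error bars here are made up:

```python
import numpy as np
from scipy import odr

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])
x_err = np.full_like(x, 0.1)    # made-up error bars on x
y_err = np.full_like(y, 0.2)    # made-up error bars on y

def linear(beta, x):            # model: y = beta[0] * x + beta[1]
    return beta[0] * x + beta[1]

fit = odr.ODR(odr.RealData(x, y, sx=x_err, sy=y_err),
              odr.Model(linear), beta0=[1.0, 0.0])   # beta0: initial guess
out = fit.run()
print(out.beta, out.sd_beta)    # fitted parameters and their standard errors
```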
1
0
1
So I already know how to use scipy.optimize.curve_fit for normal fitting needs, but what do I do if both my x data and my y data both have error bars?
Scipy: Fitting Data with Two Dimensional Error
1.2
0
0
104
22,030,342
2014-02-26T02:14:00.000
-1
1
1
0
python,c++,python-2.x,integer-division
22,030,432
4
false
0
0
I am not sure about Python, but in C++ integer/integer = integer, so -1/2 is mathematically -0.5, which is truncated toward zero to an integer - that is why you get the 0 answer. Python instead applies the floor function to the result of integer division, which is why you get -1.
2
4
0
C++: cout << -1/2 evaluates to 0 Python: -1/2 evaluates to -1. Why is this the case?
Why is -1/2 evaluated to 0 in C++, but -1 in Python?
-0.049958
0
0
1,668
22,030,342
2014-02-26T02:14:00.000
1
1
1
0
python,c++,python-2.x,integer-division
22,030,413
4
false
0
0
From the Python docs (emphasis mine): The / (division) and // (floor division) operators yield the quotient of their arguments. The numeric arguments are first converted to a common type. Plain or long integer division yields an integer of the same type; the result is that of mathematical division with the ‘floor’ function applied to the result. The floor function rounds to the number closest to negative infinity, hence -1.
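A quick demonstration of the quoted rule (Python 2 semantics; under Python 3, / becomes true division while // keeps the floor behavior):

```python
print(-1 // 2)   # -1: floored toward negative infinity
print(1 // 2)    #  0
print(-1 / 2)    # -1 in Python 2; 0.5 in Python 3 (true division)
```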
2
4
0
C++: cout << -1/2 evaluates to 0 Python: -1/2 evaluates to -1. Why is this the case?
Why is -1/2 evaluated to 0 in C++, but -1 in Python?
0.049958
0
0
1,668
22,030,696
2014-02-26T02:47:00.000
0
0
0
0
python,django,content-management-system,django-cms
22,035,806
2
false
1
0
django CMS is a CMS on top of django. It supports multiple languages really well and plays nicely together with your own django apps. The basic idea is that you define placeholders in your template and are then able to fill those placeholders with content plugins. A content plugin can be anything from text, pictures, a twitter stream, a multi-column layout, etc.
1
1
0
I am new to Django, but heard it was promising when attempting to create a custom CMS. I am looking to get started, but there seems to be a lack of documentation, tutorials, etc. on how to actually get something like this going. I am curious if there are any books/tutorials/guides that can help me get started with building a CMS in Django. PS - I have heard of django-cms, but am unsure what exactly it is and how it is different from django.
Best way to build a custom Django CMS
0
0
0
2,589
22,033,465
2014-02-26T06:24:00.000
1
0
1
0
python,python-2.7,py2exe,sympy
28,066,995
1
true
0
0
Use Python 3 and import sys; see if that helps. py2exe has a problem with sympy - there is an issue on GitHub you can follow.
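The "import sys" hint usually means raising the recursion limit at the top of setup.py; a hedged sketch, with the script name and limit as placeholders:

```python
# setup.py
import sys
sys.setrecursionlimit(5000)  # default is 1000; py2exe recurses deeply into sympy's imports

from distutils.core import setup
import py2exe  # registers the py2exe command

setup(console=["my_program.py"])  # placeholder script name
```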
1
2
0
I'm trying to make an executable file of my python program with py2exe. I succeeded in making hello_word.exe, but when I want to make an exe of my own program it says RuntimeError: maximum recursion depth exceeded. After trying many times I recognized that it happens when I import the sympy module. Should I change setup.py when I want to import a module?
py2exe - Maximum recursion depth error
1.2
0
0
1,701
22,037,983
2014-02-26T10:06:00.000
5
1
0
0
php,python,wordpress
22,038,195
1
true
0
0
Your question doesn't really have a clear answer, because you're not comparing apples to apples here. Wordpress is a Content Management System (CMS), a piece of software built using the php language. Python is simply a language. Vulnerabilities have certainly been found in Wordpress before, it's true. Similarly, software developed in Python can have vulnerabilities. If your real question is "Would it be better securitywise for me to develop an entirely new CMS in Python, or use Wordpress?" then my answer is that you should almost certainly use Wordpress. If you're asking the question, you probably wouldn't be able to do better than the community of Wordpress developers at security - I know I couldn't.
1
0
0
I am planning to develop an e-commerce website. I was thinking of using the WordPress CMS so that there would be plugins available for implementing the e-commerce features, but questions were raised about the security of WordPress. I have got a few suggestions from friends about developing the site in Python. Can anyone please help me with the advantages of Python over WordPress? Is it a better idea to build the website in Python than with WordPress?
Is Python more secure than WordPress?
1.2
0
0
186
22,040,054
2014-02-26T11:32:00.000
0
1
1
0
python,visual-studio-2008,encoding
22,040,482
1
true
0
0
In Options - Text Editor - General, there's a setting "Auto-detect UTF-8 encoding without signature". Maybe that's all there is to it.
1
0
0
Somehow VS2008 knows about the character encoding of a source file. I need this information for a tool I wrote in python that does some processing of legacy code (some sophisticated include path remappings etc.), where each file might have a different character encoding. And for processing each file I need to know the character encoding of the file. Where does Visual Studio 2008 store the information about a file's character encoding? Or does it infer this information automatically from the content?
Where does Visual Studio 2008 store the information about a file's character encoding?
1.2
0
0
32
22,046,149
2014-02-26T15:38:00.000
1
0
0
0
python,django,tastypie
22,069,707
1
false
1
0
Although I'm not sure that the approach of using resource_name with slashes will always work for you, in order to resolve your issue you can simply change the order of the URL registration. When registering the URLs, register the resource with the name "library/books" last. The reason you have the issue is that "library/books/shelf" is caught as the book with the pk of "shelf". If the URL patterns of the resource "library/books/shelf" come first, they will be caught by Django before it tries to resolve library/books/pk.
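A sketch of that registration order; the resource classes here are hypothetical stand-ins for yours:

```python
from tastypie.api import Api

v1_api = Api(api_name="v1")
# register the more specific resource_names first so their URL patterns
# are matched before library/books/<pk> can swallow "shelf" or "circulation"
v1_api.register(ShelfResource())        # resource_name = "library/books/shelf"
v1_api.register(CirculationResource())  # resource_name = "library/books/circulation"
v1_api.register(BookResource())         # resource_name = "library/books" -- last
```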
1
0
0
Is there a way to create a hierarchy of resources in tastypie using resource_name that will behave like regular django urls? I'm aiming to have tastypie urls that look like this: <app_name>/<module_name>/<functionality>, but I'm having trouble. I've created resources with the following resource_name: library/books library/books/shelf library/books/circulation (Note that the parent resource library/book has no trailing slash) In this case, I can access the parent resource just fine. However, when trying to access one of the children resources (e.g. /api/v1/library/books/circulation) I receive the following error: Invalid resource lookup data provided (mismatched type). On the other hand, when I define the parent's resource_name as library/books/ (with a trailing slash), the children resources come back fine - but the parent resource itself returns a 404 error. All is well if I format the resource_names with underscores (library_books, library_books_circulation) - but then they're really ugly... I'm running Python 2.7.3, using Django 1.6 with Tastypie 0.10.0.
Achieving django url-like functionality using resource_name in tastypie
0.197375
0
0
158
22,051,158
2014-02-26T19:22:00.000
2
0
1
0
python,pip,easy-install
51,204,337
3
false
0
0
To uninstall pip in Windows: run the command prompt as administrator and give the command easy_install -m pip. This may not uninstall pip completely, so also give the command pip uninstall pip. If pip was already removed by the previous command, this one won't run; otherwise it will completely remove pip. Now check by giving the command pip --version - it should report that pip is not recognized as an internal or external command.
1
7
0
On Windows 7, I install pip with easy_install and want to install a lower version of pip. I want to remove the old version, but have no idea how to completely remove the pip installed by easy_install (or if there is a way to do it without going through easy_install, that is fine). How do I do this?
How to fully uninstall pip installed with easy_install?
0.132549
0
0
63,563
22,053,050
2014-02-26T20:55:00.000
2
0
0
0
python,numpy,matrix,multidimensional-array
65,142,140
8
false
0
0
The data structure of shape (n,) is called a rank 1 array. It doesn't behave consistently as a row vector or a column vector, which makes some of its operations and effects non-intuitive. If you take the transpose of this (n,) data structure, it'll look exactly the same, and the dot product of two of them will give you a number and not a matrix. Vectors of shape (n,1) or (1,n) - column or row vectors - are much more intuitive and consistent.
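A short demonstration of the difference:

```python
import numpy as np

a = np.ones(3)               # shape (3,): a rank-1 array
print(a.T.shape)             # (3,) -- transposing a rank-1 array is a no-op
print(np.dot(a, a))          # 3.0 -- a scalar, not a matrix

b = np.ones((3, 1))          # an explicit column vector
print(b.T.shape)             # (1, 3)
print(np.dot(b, b.T).shape)  # (3, 3) -- a proper outer product
```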
1
394
1
In numpy, some of the operations return in shape (R, 1) but some return (R,). This will make matrix multiplication more tedious since explicit reshape is required. For example, given a matrix M, if we want to do numpy.dot(M[:,0], numpy.ones((1, R))) where R is the number of rows (of course, the same issue also occurs column-wise). We will get matrices are not aligned error since M[:,0] is in shape (R,) but numpy.ones((1, R)) is in shape (1, R). So my questions are: What's the difference between shape (R, 1) and (R,). I know literally it's list of numbers and list of lists where all list contains only a number. Just wondering why not design numpy so that it favors shape (R, 1) instead of (R,) for easier matrix multiplication. Are there better ways for the above example? Without explicitly reshape like this: numpy.dot(M[:,0].reshape(R, 1), numpy.ones((1, R)))
Difference between numpy.array shape (R, 1) and (R,)
0.049958
0
0
193,239
22,053,852
2014-02-26T21:34:00.000
5
0
1
0
python,python-3.x,matplotlib
22,482,905
3
false
0
0
Assuming you've already installed a 3.x python env in anaconda, this one line should do the trick: conda install matplotlib -n name where name is the name you previously gave to your python 3 anaconda env. If you're not sure of the name you gave it, it will be the name of a subdir in the Anaconda\envs directory. Background: I recently went through the same trouble with matplotlib not getting installed by default by anaconda when I added a full python 3 env, even though it's meant to. The above line solved it for me; it gave me the following warnings so it seems likely that the two different available versions caused it to initially install neither. However it allowed me to choose the one I wanted, and then everything worked great. Warning: 2 possible package resolutions: [u'dateutil-2.1-py33_2.tar.bz2', u'matplotlib-1.3.1-np18py33_1.tar.bz2', u'numpy-1.8.0-py33_0.tar.bz2', u'pyparsing-2.0.1-py33_0.tar.bz2', u'pyside-1.2.1-py33_0.tar.bz2', u'python-3.3.5-0.tar.bz2', u'pytz-2013b-py33_0.tar.bz2', u'six-1.6.1-py33_0.tar.bz2'] [u'dateutil-2.1-py33_2.tar.bz2', u'matplotlib-1.3.1-np17py33_1.tar.bz2', u'numpy-1.7.1-py33_3.tar.bz2', u'pyparsing-1.5.6-py33_0.tar.bz2', u'pyside-1.2.1-py33_0.tar.bz2', u'python-3.3.5-0.tar.bz2', u'pytz-2013b-py33_0.tar.bz2', u'six-1.6.1-py33_0.tar.bz2' ]
1
3
0
I am configuring Anaconda 1.9.1 together with Python 3.3.4 and I am unable to setup Matplotlib for anaconda environment when I try to add package using Pycharm. I also tried to install from Matplotlib.exe file which I downloaded from its website. I can not change the installation directory in that case. I would like to know that is there a way to tackle this issue.
How to install Matplotlib for anaconda 1.9.1 and Python 3.3.4?
0.321513
0
0
45,160
22,055,320
2014-02-26T22:55:00.000
0
0
1
0
python,dictionary
22,055,376
3
false
0
0
No, you cannot rely on that behaviour. It is an implementation detail which can change from one version of Python to the next or even from one system to the next. That being said, it is unlikely to change soon.
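A quick illustration of why it can't be relied on (CPython 2.x; the exact output is itself an implementation detail):

```python
x = {}
for k in (8, 1, 6):
    x[k] = None
print(list(x))  # may print [8, 1, 6] -- hash-table slot order, not sorted order
```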
2
1
0
We know that regular Python dictionaries are unordered, but what about the case when the keys are the set of natural numbers? The hash function seems to be an identity function when the domain is the set of natural numbers, and iterating over something like x = {0:'a', 1:'b', 2:'c'} seems to yield the natural order of the keys, aka 0, 1, 2.. So can this behavior be relied on? (Yes I know about OrderedDict) EDIT: Here's my specific usage, or at least this captures the spirit of what I'm looking at. x = dict((a, chr(a)) for a in range(10)) for i in x: print i This seems to preserve order.
Dictionary order when keys are numbers
0
0
0
86
22,055,320
2014-02-26T22:55:00.000
0
0
1
0
python,dictionary
22,055,690
3
false
0
0
No. The order also depends on the order the keys are inserted.
2
1
0
We know that regular Python dictionaries are unordered, but what about the case when the keys are the set of natural numbers? The hash function seems to be an identity function when the domain is the set of natural numbers, and iterating over something like x = {0:'a', 1:'b', 2:'c'} seems to yield the natural order of the keys, aka 0, 1, 2.. So can this behavior be relied on? (Yes I know about OrderedDict) EDIT: Here's my specific usage, or at least this captures the spirit of what I'm looking at. x = dict((a, chr(a)) for a in range(10)) for i in x: print i This seems to preserve order.
Dictionary order when keys are numbers
0
0
0
86
22,055,432
2014-02-26T23:03:00.000
2
0
0
0
python,django,django-templates
22,055,844
1
false
1
0
If you are making a web app, I'd say you need templates. Any other solution would be a mess. However, django templates have been known not to scale well, because rendering them is relatively slow compared to other solutions like jinja2. There are several apps that integrate jinja2 into django. There's also been a lot of discussion about integrating jinja2 into django core itself someday in the future. So if you are scaling up big time, you may want to investigate performance and optimize template rendering. There are some big sites using django, like Pinterest, Instagram, and Bitbucket, so they must have figured out a way. But for the most part, django template performance is just fine.
1
0
0
Forgive my knowledge on django, although I was briefly talking with a developer from Google whom I had met and he stated something confusing to me. He mentioned something that I hadn't really gotten a chance to ask him more about. He told me to be careful with django templates because in terms of scale, they can cause problems and almost always need to be re-written. Rather he mentioned something like using a 'full stack' with django. I think back, and I don't exactly follow what he means by that. Is their a way to use Django without templates? Is it better ? Why or why not?
Using Django without templates?
0.379949
0
0
1,483
22,056,351
2014-02-27T00:08:00.000
3
1
1
0
python,metaprogramming,python-import,monkeypatching
22,056,768
2
false
0
0
If you really want to change the semantics of the import statement, you will have to patch the interpreter. import checks whether the named module already is loaded and if so it does nothing more. You would have to change exactly that, and that is hard-wired in the interpreter. Maybe you can live with patching the Python sources to use myImport('modulename') instead of import modulename? That would make it possible within Python itself.
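A minimal sketch of the suggested myImport() helper; the module name is a placeholder:

```python
import sys
import importlib

def myImport(name):
    sys.modules.pop(name, None)            # forget any cached copy first
    return importlib.import_module(name)   # so this import starts fresh

x = myImport("x")  # "x" is a placeholder module name
```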
1
2
0
Is there a way to force import x to always reload x in Python (i.e., as if I had called reload(x), or imp.reload(x) for Python 3)? Or in general, is there some way to force some code to be run every time I run import x? I'm OK with monkey patching or hackery. I've tried moving the code into a separate module and deleting x from sys.modules in that separate file. I dabbled a bit with import hooks, but I didn't try too hard because according to the documentation, they are only called after the sys.modules cache is checked. I also tried monkeypatching sys.modules with a custom dict subclass, but whenever I do that, from module import submodule raises KeyError (I'm guessing sys.modules is not a real dictionary). Basically, I'm trying to write a debugging tool (which is why some hackery is OK here). My goal is simply that import x is shorter to type than import x;x.y.
Python: force every import to reload
0.291313
0
0
603
22,057,077
2014-02-27T01:15:00.000
-1
0
1
0
python,linux,tkinter,tk,python-2.6
22,057,396
3
false
0
1
Have you tried using pip-2.6 install package?
3
1
0
I installed Python 2.6 from source for software testing (2.7 was preinstalled on my Linux distro). However, I cannot import Tkinter within 2.6, I suppose because it doesn't know where to find Tk. How do I either help 2.6 find the existing Tkinter install or reinstall Tkinter for 2.6?
Installed Python from source and cannot import Tkinter - how to install?
-0.066568
0
0
417
22,057,077
2014-02-27T01:15:00.000
1
0
1
0
python,linux,tkinter,tk,python-2.6
22,058,494
3
true
0
1
I solved this by adding '/usr/lib/x86_64-linux-gnu' to lib_dirs in setup.py, then rebuilding python
3
1
0
I installed Python 2.6 from source for software testing (2.7 was preinstalled on my Linux distro). However, I cannot import Tkinter within 2.6, I suppose because it doesn't know where to find Tk. How do I either help 2.6 find the existing Tkinter install or reinstall Tkinter for 2.6?
Installed Python from source and cannot import Tkinter - how to install?
1.2
0
0
417
22,057,077
2014-02-27T01:15:00.000
1
0
1
0
python,linux,tkinter,tk,python-2.6
22,057,115
3
false
0
1
Install the TCL and Tk development files and rebuild Python.
3
1
0
I installed Python 2.6 from source for software testing (2.7 was preinstalled on my Linux distro). However, I cannot import Tkinter within 2.6, I suppose because it doesn't know where to find Tk. How do I either help 2.6 find the existing Tkinter install or reinstall Tkinter for 2.6?
Installed Python from source and cannot import Tkinter - how to install?
0.066568
0
0
417
22,058,478
2014-02-27T03:33:00.000
1
0
0
0
python,soap,twisted,data-transfer,asynchronous-messaging-protocol
22,081,242
1
true
0
0
I like AMP a lot. twisted.protocols.amp is moderately featureful and relatively easily testable (although documentation on how to test applications written with it is a little lacking). The command/response abstraction AMP provides is comfortable and familiar (after all, we live in a world where HTTP won). AMP avoids the trap of excessive complexity (seemingly for the sake of complexity) that SOAP fell squarely into. But it's not so simple you won't be able to do the job with it (like LineReceiver most likely is). There are intermediate steps - for example, twisted.protocols.basic.Int32Receiver gives you a more sophisticated framing mechanism (32 bit length prefixes instead of magic-bytes-terminated-lines) - but in my opinion AMP is a really good first choice for a protocol. You may find you want to switch to something else later (one size really does not fit all) but AMP is at the sweet spot between features and simplicity that seems like a good fit for a very broad range of applications. It's true that there are some built-in length limits in AMP. This is a long standing sore spot that is just waiting for someone with a real-world need to address it. :) There is a fairly well thought-out design for lifting this limit (without breaking protocol compatibility!). If AMP seems otherwise appealing to you then I encourage you to engage the Twisted development community to find out how you can help make this a reality. ;) There's also always the option of using AMP for messaging and to set up another channel (eg, HTTP) for transferring your larger chunks of data.
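For flavor, a minimal sketch of the command/response abstraction mentioned here, using twisted.protocols.amp's Command class (the status field is a placeholder):

```python
from twisted.protocols import amp

class GetStatus(amp.Command):
    arguments = []                          # no request parameters
    response = [("status", amp.Unicode())]  # a single unicode field back

class StatusProtocol(amp.AMP):
    @GetStatus.responder
    def get_status(self):
        return {"status": u"ok"}  # placeholder status value
```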
1
0
0
I am trying to implement a server using python-twisted with potential C# and ObjC clients. I started with LineReceiver and that works well for basic messaging, but I can't figure out the best approach for something more robust. Any ideas for a simple solution for the following requirements? Request and response ex. send message to get a status, receive status back Recieve binary data transfer (non-trivial, but not massive - less than a few megs) ex. bytes of a small png file AMP seems like a feasible solution for the first scenario, but may not be able to handle the size for the data transfer scenario. I've also looked at full blown SOAP but haven't found a decent enough example to get me going.
twisted python request/response message and substantial binary data transfer
1.2
0
0
305
22,060,338
2014-02-27T06:04:00.000
0
0
0
0
python,macos,oracle
39,339,545
2
false
0
0
I haven't seen this on OS X but the general Linux solution is to add your hostname to /etc/hosts for the IP 127.0.0.1.
2
2
0
On OS X 10.9 and 10.9.1, the cx_Oracle works OK. But after I updated my system to OS X 10.9.2 yesterday, it cannot work. When connecting to Oracle database, DatabaseError is raised. And the error message is: ORA-21561: OID generation failed Can anyone help me?
cx_Oracle can't connect to Oracle database after updating OS X to 10.9.2
0
1
0
1,020
22,060,338
2014-02-27T06:04:00.000
0
0
0
0
python,macos,oracle
41,649,509
2
false
0
0
This can be fixed with a simple edit to your hosts file. Find the name of your local-machine by running hostname in your local-terminal $hostname Edit your local hosts file $vi /etc/hosts assuming $hostname gives local_machine_name append it to your localhost , 127.0.0.1 localhost local_machine_name press esc and type wq! to save Cheers!
2
2
0
On OS X 10.9 and 10.9.1, the cx_Oracle works OK. But after I updated my system to OS X 10.9.2 yesterday, it cannot work. When connecting to Oracle database, DatabaseError is raised. And the error message is: ORA-21561: OID generation failed Can anyone help me?
cx_Oracle can't connect to Oracle database after updating OS X to 10.9.2
0
1
0
1,020
22,062,837
2014-02-27T08:23:00.000
-2
0
0
0
python,real-time,pytables
22,161,505
2
false
0
0
This is definitely possible. It is especially easy if you only have one process in 'w' and multiple processes in 'r' mode. Just make sure in your 'w' process to flush() the file and/or the datasets occasionally. If you do this, the 'r' process will be able to see the data.
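A hedged sketch of the writer side (PyTables 3.x API; the file name and data source are placeholders):

```python
import tables

h5 = tables.open_file("acquired.h5", mode="w")
arr = h5.create_earray("/", "samples", atom=tables.Float64Atom(), shape=(0,))
for chunk in acquire():   # acquire() is a hypothetical data source
    arr.append(chunk)
    h5.flush()            # make the new rows visible to processes reading in 'r' mode
h5.close()
```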
1
2
1
I am not sure if what I am thinking would be possible; I would need the help of someone experienced in working with HDF5/PyTables. The scenario would be like this: let's say that we have a process, or a machine, or a connection, etc., acquiring data and storing it in HDF5/PyTables format. I will call it the store software. Would it be possible to have another piece of software - I will call it the analysis software - running at the same time? If it helps, the store software and the analysis software would be totally independent, even written in different languages. My doubt is: if the store program is writing the PyTable in mode='w', then, at the same time, can the analysis program access it in mode='r' and read some data to perform some basic analysis, averages, etc.? The basic idea of this is to be able to analyze data stored in a PyTable in real time. Of course any other proposed solution would be appreciated.
write and read on real time pytables
-0.197375
0
0
996
22,065,764
2014-02-27T10:31:00.000
1
0
1
0
python,command-line
22,066,139
2
false
0
0
Easiest way of accomplishing this is to run a small TCP server in a thread and have it change the variable you want to change when it receives a command to do so. Then write a python script that sends the stop command to that TCP server.
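A minimal sketch of that idea; the port and the command string are arbitrary. The stop button's script then just connects to the port and sends b"stop":

```python
import socket
import threading

state = {"mustRun": True}

def control_server(port=5555):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        if conn.recv(64).strip() == b"stop":
            state["mustRun"] = False   # run_motor.py polls this flag
        conn.close()

t = threading.Thread(target=control_server)
t.daemon = True
t.start()
```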
1
2
0
How can I access a running Python script's variable? Or access a function, to set the variable. I want to access it from the command line or from another Python script, that doesn't matter. For example, I have one script running run_motor.py, with a variable called mustRun. When the user pushes the stop button it should access the variable mustRun to change it to false.
Set variable of a running Python script
0.099668
0
0
3,259
22,066,818
2014-02-27T11:15:00.000
0
1
0
0
python,performance,algorithm,file,hex
22,069,185
2
false
0
0
Split the file into hex words consisting purely of [0-9a-fA-F] characters; then int(word, 16) will convert a word to a normal Python integer. You can directly compare integers. Alternatively you can keep the hex words and then convert an integer to a hex string using '{0:x}'.format(someinteger) prior to comparing the hex strings.
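A minimal sketch of that, with a placeholder file name:

```python
import re

with open("dump.txt") as f:                      # placeholder file name
    words = re.findall(r"[0-9a-fA-F]+", f.read())

values = [int(w, 16) for w in words]             # hex text -> integers
print("{0:x}".format(values[0]))                 # and back to a hex string
```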
1
0
0
I have big hex data files from which I need to compare some hex values. When I read them through Python's read(), the data automatically comes back as ASCII text, so I have to decode it again. How can I directly read a file as hex? So far I have tried using the Intelhex python package, but it is throwing an error: intelhex.HexRecordError: Hex files contain invalid record. So is the issue with my files only? And how much performance difference is it going to make if I successfully read hex data without decoding?
how to compare hex files in python
0
0
0
1,698
22,067,766
2014-02-27T11:54:00.000
0
0
0
0
python,django,angularjs,heroku,architecture
22,103,027
2
false
1
0
What you guys think about architecture? This is a common Service Oriented Architecture with decoupled clients. You just have REST endpoints on your backend, and any client can consume those endpoints. You should also think about: Do you need a RESTful service (RESTful == stateless; will you store any state on the server?) How will you scale the service in the future? (This is a legitimate concern, as you are already aware of a huge traffic increase and assume 2 servers.) How can it be improved? Use scala instead of python :) Will performance of the portal go down after adding the above layers to the architecture? It depends. It will take some performance penalty (any additional abstract layer has its tax), but most probably you won't even notice it. But still, you should measure it using some stress tests. In the above architecture, should 2 servers be used to run this (like one for the client and the other for serving the APIs), or will one server be enough? Currently Heroku is used for deployment. Well, as usual, it depends. It depends on the usage profile you have right now and on the resources available. If you are interested in whether the new design will perform better than the old one - there are a number of parameters. Resume: This is a good overall approach for a system with different clients. It will allow you to: Totally decouple mobile app and frontend development from backend development (these could be different independent teams, outsourceable). Standardize your API layer (as all clients will consume the same endpoints). Make your service easier to scale (this includes a separate webserver for static assets and much more).
1
0
0
Currently I am working on a portal which is exposed to end users. This portal is developed using Python 2.7, Django 1.6 and MySQL. Now we want to expose this portal as a mobile app. But current design does not support that as templates, views and database are tightly coupled with each other. So we decided to re-architect the whole portal. After some research I found following: Client side: AngularJS for all client side operations like show data and get data using ajax. Server side: Rest API exposed to AngularJS. This Rest API can be developed using either Tastypie or Django Rest Framework (still not decided). Rest API will be exposed over Django. I have few questions: What you guys think about architecture? Is this is a good or bad design? How it can be improved? Will performance of portal will go down after adding above layers in architecture? In the above architecture whether 2 servers should be used to run this (like one for client and other for serving the API's) or one server will be enough. Currently Heroku is used for deployment. Currently portal is getting 10K hits in a day and it is expected to go to 100K a day in 6 months. Will be happy to provide more information if needed.
How to re-architect a portal for creating mobile app
0
0
0
183
22,071,291
2014-02-27T14:22:00.000
0
0
0
0
python,events,pyqt,pyside,qt-designer
22,102,393
2
false
0
1
Monkey patching did the job; I don't know why I didn't think of that.
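For anyone landing here, a minimal sketch of the instance-level monkey patch (the window here stands in for the one built from the generated code):

```python
import sys
from PySide import QtGui

app = QtGui.QApplication(sys.argv)
window = QtGui.QMainWindow()   # stands in for the window your generated setupUi() fills

def on_close(event):
    print("window closing")    # your cleanup logic here
    event.accept()

window.closeEvent = on_close   # patch the instance, not the generated class
window.show()
sys.exit(app.exec_())
```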
1
0
0
I am using PySide in a not-so-MVC fashion, meaning I try as much as possible not to edit the generated .ui-to-.py file. I put my application logic in packages (models) and I have one module (.pyw file), more like a controller, for them all, to initialize and perform management. Not quite best practice, but I'm doing fine; all I want is to avoid adding code to the generated ui .py file (more like my view). Now here is the problem: I noticed that the generated PySide class doesn't inherit from QDialog or QMainWindow, as you have to create those when you instantiate the class. As a result, events like closeEvent(self, event) don't work inside the class even when you put them there. I know how to write functions for QActions and widget connections, but I don't know how to add a class-based function to a generated PySide class from outside the class. If I had to edit the generated view class, I could perfectly tweak it to what I want, BUT I don't want to, because I can make amendments in QtDesigner and recompile at any time. So this is my question: since I don't want to edit it, how do I attach, say, a closeEvent to the object created from the class in my controller class without touching the generated view class? Thanks
Adding Event functions outside PyQt/PySide Generated Code
0
0
0
1,710
22,071,505
2014-02-27T14:30:00.000
0
0
1
0
python,python-3.x,colors,tkinter,label
22,078,545
1
false
0
1
No, it is not possible to change the color of one letter in a label. However, you can use a text widget instead of a label to color just a single character. You could also use a canvas widget.
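A minimal sketch with a Text widget styled to look like a label (Python 3):

```python
import tkinter as tk

root = tk.Tk()
text = tk.Text(root, height=1, width=10, borderwidth=0,
               background=root.cget("background"))  # blend in with the window
text.insert("1.0", "Hello")
text.tag_add("red", "1.1", "1.2")          # tag only the second letter
text.tag_config("red", foreground="red")
text.config(state="disabled")              # read-only, label-like
text.pack()
root.mainloop()
```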
1
0
0
I can change the color of all the text in a label, but I want to change the color of one letter. Is that possible? I use tkinter and Python 3.3.
Change color of one letter in label. Python
0
0
0
820
22,074,156
2014-02-27T16:15:00.000
0
0
0
1
python,proxy,dns,dnspython
22,074,588
1
false
0
0
If Google blocked that number of requests from a given IP address, one has to assume that sending such a number of requests is against their usage policy (and no doubt a form of 'unfair usage'). So hiding your source IP behind proxies is hardly ethical. You could adopt a more ethical approach by: Distributing your requests across a number of public DNS servers (search for 'public DNS servers'; there are 8 or 9 providers and at least 2 servers per provider), thus reducing the number of requests per server. Spreading your requests across a reasonable period of time to limit the effect your queries may have on the various providers' DNS servers - or simply limiting your query rate to something reasonable. If your requests cover a number of different domains, performing your own recursive resolution so that the bulk of your requests is targeted at the authoritative servers and not public recursive servers. This way, you would resolve the authoritative servers for a domain against the public servers (i.e. NS queries) but resolve CNAME queries against the authoritative servers themselves, thus further spreading the load. And there is no such thing as a DNS proxy (other than a DNS server which accepts recursive queries for which it is not authoritative).
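On the dnspython side, spreading queries across resolvers is just a matter of setting nameservers (dnspython 1.x API shown; server IPs and the domain are examples):

```python
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8", "208.67.222.222"]  # rotate these per batch
answer = resolver.query("www.example.org", "CNAME")   # .resolve() in dnspython 2.x
for rr in answer:
    print(rr.target)
```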
1
0
0
I'm writing a program which gathers basic CNAME information for given domains. I'm currently using Google's DNS server as the one I'm questioning, but afraid that if I'll send couple of millions DNS lookups this will get me blocked (Don't worry, it's by no means any type of DDOS or anything in that area). I'm wondering 2 things. 1. is it possible to use dnspython package to send requests through proxy servers? this way I can distribute my requests through several proxies. 2. I couldn't find a reference for a similar thing, but is it possible that I'll get blocked for so many DNS lookups? Thanks, Meny
Is it possible to use dnspython through proxy?
0
0
1
1,227
22,077,202
2014-02-27T18:27:00.000
3
0
1
0
python
22,077,225
1
false
0
0
They are separate arguments. iterable is the second argument; key is the optional third argument.
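So the square brackets in the signature just mean "optional"; both of these are valid calls:

```python
import heapq

nums = [5, 1, 9, 3]
print(heapq.nlargest(2, nums))                    # key omitted -> [9, 5]
print(heapq.nlargest(2, nums, key=lambda n: -n))  # key given   -> [1, 3]
```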
1
1
0
What is the structure of the second argument in heapq.nlargest(n, iterable[, key])? As a Python rookie, I am only used to functions in the form myfunc(x, y, z). The iterable[, key] is confusing me!
What does "iterable[, key]" mean in a method signature?
0.53705
0
0
102
22,077,846
2014-02-27T18:57:00.000
5
0
0
0
python,html,tkinter
22,094,453
1
true
0
1
There is no support for viewing rendered HTML in a tkinter widget. There was a project (tkhtml) to build a modern web browser using tcl/tk (which is what powers tkinter), but the project never got past a very early alpha release and the last check-in was in 2009.
1
3
0
I would like to read an HTML file and show it in a Tkinter window. I would like to know if this is possible with any module and, if it is, how I can do it, as I'm totally lost finding solutions. Thanks in advance.
Read a HTML file and show it on a Tkinter window
1.2
0
0
3,286
22,079,296
2014-02-27T20:11:00.000
3
0
1
0
python,list
22,079,431
1
true
0
0
Likely just because it's redundant when we already have, and are used to, len(). Maybe you could add a .len() method :) There is a_list.__len__().
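A quick demonstration that len() simply delegates to __len__():

```python
a_list = [1, 2, 3]
print(len(a_list))        # 3
print(a_list.__len__())   # 3 -- len() calls this under the hood
```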
1
2
0
Python's length of list can be calculated by len(a_list). My question is, why was a_list.len() not implemented? Is there a reason behind it?
Why does a Python list not have a len() method?
1.2
0
0
991
22,079,882
2014-02-27T20:42:00.000
6
0
1
0
python,matrix,transpose
22,080,238
1
true
0
0
[[row[i] for row in data] for i in range(len(data[0]))]
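Using len(data[0]) instead of len(data[1]) also covers the single-row case:

```python
data = [[1, 2, 3]]   # a matrix with only one row
transposed = [[row[i] for row in data] for i in range(len(data[0]))]
print(transposed)    # [[1], [2], [3]]
```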
1
1
1
How do you transpose a matrix without using numpy or zip or other imports? I thought this was the answer but it does not work if the matrix has only 1 row... [[row[i] for row in data] for i in range(len(data[1]))]
How to transpose a matrix without using numpy or zip (or other imports)
1.2
0
0
4,612
22,080,748
2014-02-27T21:24:00.000
0
0
1
1
python,python-3.x
22,080,908
2
false
0
0
Your REGISTRY_KEY.strip() call is not doing what you think it's doing. It doesn't remove the string HKEY_LOCAL_MACHINE\ from the beginning of the string. Instead, it removes the characters H, K, E, etc., in any order, from both ends of the string. This is why it works when you manually put in what you expect. As for your original question, a double backslash is an escape sequence that produces a single backslash in your string, so it is not necessary to convert keyPath to double slashes.
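A short demonstration of the character-set behavior, plus the prefix-slicing alternative (the Mango key is made up to show the pitfall):

```python
s = "HKEY_LOCAL_MACHINE\\Mango"
print(s.strip("HKEY_LOCAL_MACHINE\\"))  # 'ango' -- the leading M was eaten too

prefix = "HKEY_LOCAL_MACHINE\\"
key = "HKEY_LOCAL_MACHINE\\Software\\MYAPP\\6.3"
if key.startswith(prefix):
    key_path = key[len(prefix):]        # 'Software\\MYAPP\\6.3', backslashes intact
print(key_path)
```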
1
1
0
I am reading the path to the registry from a text file. The registry path is HKEY_LOCAL_MACHINE\Software\MYAPP\6.3. I store this registry path in a variable: REGISTRY_KEY. Then I strip the HKEY_LOCAL_MACHINE part from the string and try to read the value at the key. if REGISTRY_KEY.split('\\')[0] == "HKEY_LOCAL_MACHINE": keyPath = REGISTRY_KEY.strip("HKEY_LOCAL_MACHINE\\") try: key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, keyPath) value = winreg.QueryValueEx(key, "InstallPath")[0] except IOError as err: print(err) I get the following error: [WinError 2] The system cannot find the file specified. However, if I do it manually like key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r'Software\MYAPP\6.3') OR key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, "Software\\MYAPP\\6.3") it works. So is there any way I can make the keyPath variable either be a raw string or contain double '\'? PS: I am using Python 3.3
Explicitly make a string into raw string
0
0
0
123
22,082,005
2014-02-27T22:31:00.000
1
0
0
1
python,django,rackspace,mezzanine
22,083,369
2
false
1
0
The best way to have this working is to have a different web server serving all of your media (I used nginx). Then you set up a load balancer to detect failure and redirect all the requests to the CDN in case of a failure. One thing that you might have to figure out is the image path (use HAProxy to rewrite the request URL, if you need to).
1
3
0
I have a django/mezzanine/django-cumulus project that uses the rackspace cloudfiles CDN for media storage. I would like to automatically serve all static files from the local MEDIA_ROOT, if they exist, and only fallback to the CDN URL if they do not. One possible approach is to manage the fallback at the template level, using tags. I would prefer not to have to override all the admin templates (eg) just for this, however. Is there a way to modify the handling of all media to use one storage engine first, and switch to a second on error?
Multiple storage engines for django media: prefer local, fallback to CDN
0.099668
0
0
675
22,084,435
2014-02-28T01:33:00.000
1
0
0
0
python,networkx
22,084,546
3
false
0
0
It seems to me you should: decide how many nodes you will have; generate the number of links per node in your desired distribution (make sure the sum is even); start randomly connecting pairs of nodes until all link requirements are satisfied. There are a few more constraints - no pair of nodes should be connected more than once, no node should have more than (number of nodes - 1) links, and maybe you want to ensure the graph is fully connected - but basically that's it.
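The procedure described is essentially the configuration model, which networkx implements directly; a hedged sketch (node count and mean are placeholders):

```python
import networkx as nx
import numpy as np

n = 100
degrees = np.random.poisson(3, n)        # Poisson-distributed target degrees
if degrees.sum() % 2:                    # the degree sum must be even
    degrees[0] += 1
g = nx.configuration_model(list(degrees))
g = nx.Graph(g)                          # collapse parallel edges
g.remove_edges_from(g.selfloop_edges())  # networkx 1.x; in 2.x: nx.selfloop_edges(g)
```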
1
3
1
How can I create a graph with: a predefined number of connections for each node, say 3; a given distribution of connections (say a Poisson distribution with a given mean)? Thanks
Python: Create a graph with defined number of edges per node
0.066568
0
0
1,817
22,085,475
2014-02-28T03:14:00.000
3
0
1
0
python,performance
22,085,499
1
true
0
0
Apples and oranges. You should think about program flow and structure when choosing between storing data in a scope variable or an instance attribute. Comparing performance here is meaningless. If the data logically belongs to an entity in your program which will be passed around between different parts of the program (which seems to be the case here), create a class and put the data in an instance of this class. Eg. the current word might be an attribute of the current Round, or Game, depending on how fine grained you want your objects to be. If, on the other hand, the data is only used within the scope of the current method, use a scope variable.
1
0
0
I'm creating a hangman game in Python 2, and I can store the current word in either: An object, as an attribute (like wordlist.word = blah) A variable (like word = blah) I'm then passing this word into another function, where the main parts of the game reside, to be used in comparisons and such. I kind of prefer to keep the current word in an object, since it makes sense to me to group it under that class and seems cleaner by keeping all that together, but I'm afraid that since an object has more code in it it takes more effort for the program to pass it around. Is it more efficient to use an object or a variable? What are the pros and cons of either? For example, is one more efficient/quicker than the other?
Is it more efficient to pass an object or a variable?
1.2
0
0
61
22,087,875
2014-02-28T06:29:00.000
7
0
1
0
python,python-sphinx,sphinx-apidoc
22,088,341
1
true
0
0
First you run sphinx-quickstart and accept the defaults to set up your basic structure (this is only done once), and you edit the table of contents section of index.rst to point to your tutorials, give an introduction, etc. - then you at least outline your tutorials in separate .rst files. You also edit conf.py to add autodoc - from the website: When documenting Python code, it is common to put a lot of documentation in the source files, in documentation strings. Sphinx supports the inclusion of docstrings from your modules with an extension (an extension is a Python module that provides additional features for Sphinx projects) called "autodoc". In order to use autodoc, you need to activate it in conf.py by putting the string 'sphinx.ext.autodoc' into the list assigned to the extensions config value. Then, you have a few additional directives at your disposal. For example, to document the function io.open(), reading its signature and docstring from the source file, you'd write this: .. autofunction:: io.open You can also document whole classes or even modules automatically, using member options for the auto directives, like .. automodule:: io :members: autodoc needs to import your modules in order to extract the docstrings. Therefore, you must add the appropriate path to sys.path in your conf.py. Add your code modules to the list as above and then call make html to build your documentation and use a web browser to look at it. Make some changes and then run make html again. If you have to use sphinx-apidoc then I would suggest: putting your tutorials in a separate directory as .rst files and referencing the documentation produced from the API doc from within them, plus referencing the tutorials from within your code comments at the points that they are intended to illustrate. This should allow you to build your tutorials and API documentation separately, depending on where you have made changes, and still have linkage between them. I would strongly recommend the following: Use a version control system such as mercurial or git so that you can commit your changes before running sphinx. Put your tutorial .rst files under the VCS for your project but not the generated documentation files. Put all of the tutorial files under a separate directory with a clear name, e.g. tutorials. If you are delivering documentation then use a separate repository for your generated documents that is used to store the deliveries. Always generate documents to a location outside of your code tree. Put your invocation of sphinx-apidoc into a batch or make file so that you are consistent with the settings that you use.
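The conf.py edits mentioned above amount to a couple of lines (the relative path is an assumption about your layout):

```python
# conf.py
import os
import sys
sys.path.insert(0, os.path.abspath(".."))  # let autodoc import your modules

extensions = ["sphinx.ext.autodoc"]
```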
1
6
0
I want to use Sphinx to document a large project that is currently not well-documented. In particular I want to use sphinx-apidoc to produce the documentation from the code as I document it. However I also want to have some pages of tutorial on how to use the project etc., but it seems when I call sphinx-apidoc it generates the whole document at once. So my question is: What is the correct workflow here so I could write the tutorial pages that are to be written manually and at the same time update the documentation in the code? Note that if I update the manually written tutorial pages (e.g. included in the index.txt) and run sphinx-apidoc I will lose them as the whole document is generated at once. In general are there any guidelines as how to proceed in building the documentation? Note: The source of inconvenience is that the basic procedure assumes you have all the code documentation already in place and will not make any updates as you proceed in producing the documentation. At least this is what I need to resolve.
What is the correct workflow in using Sphinx for Python project documentation?
1.2
0
0
2,050
22,091,306
2014-02-28T09:37:00.000
-1
0
0
0
python,pandas,stock
60,347,465
2
false
0
0
Forget about Python. There is absolutely no way to convert an ISIN to a Ticker Symbol. You have completely misunderstood the wikipedia page.
1
3
1
I'm trying to compute some portfolio statistics using Python Pandas, and I am looking for a way to query stock data with DataReader using the ISIN (International Securities Identification Number). However, as far as I can see, DataReader is not compatible with such ids, although both YahooFinance and GoogleFinance can handle such queries. How can I use DataReader with stock ISINs?
Pandas: DataReader in combination with ISIN identification
-0.099668
0
0
3,894
22,095,035
2014-02-28T12:18:00.000
33
0
1
0
ipython,ipython-notebook
23,282,206
1
false
0
0
I often start a qtconsole attached to the kernel. You can do that as follows: Create a new cell. In the new cell, type %qtconsole and execute that cell. Delete the new cell. Once you have a qtconsole that is attached to the notebook kernel. You can print the value of variables there.
1
23
0
I often have this problem when I'm slicing or subsetting data: I want to view/print the data [df.head()] and look into it before writing the next line of my code. In this case, every time, I have to run the whole block (cell) in ipython; even if I have some logic written, I have to comment out that block and execute my print line alone. Is there a feature where you can select a single line and execute it?
Execute/Run a single line in IPython rather than the entire Cell
1
0
0
9,263
22,096,525
2014-02-28T13:28:00.000
0
0
0
0
python,algorithm,graph
22,098,160
2
true
0
0
It depends a lot on your particular needs. There are a few options: two built-in, and one that requires a bit more work but might be faster. If what you really want is to find two non-intersecting paths, then you can use a filtered graph - after finding one path, induce a subgraph with the intermediate nodes removed, and find the shortest path in that graph. If you can't guarantee that the paths will be non-intersecting, then you are back to brute force. Since paths don't include cycles, and they are simple lists, finding the number of intersecting nodes is as simple as generating sets from the two paths and finding the length of their intersection, which is pretty fast. Check all pairs and find the one with the fewest intersections. Which of the above two is faster depends on your particular graph - is it sparse or dense? How many nodes are there? etc. Since all_simple_paths is a generator, you can actually focus that algorithm somewhat. I.e. if you grab the first two paths, and they are completely non-intersecting, then you already have your minimal case and don't need to look at any more. There may be a lot of paths, but you can bound with an upper limit of how many to look at, or a threshold that is allowable (i.e. instead of absolutely 0, if these only have 1 in common, it's good enough, return it), or some other combination that uses both how many paths you've looked at and the current maximum to bound the calculation time. If calculation time is really critical to your algorithm, also consider switching to igraph... networkx is MUCH easier to deal with, and usually performance is 'good enough', but for large brute force algorithms like this, igraph will probably be at least an order of magnitude faster. One last possibility is to avoid using all_simple_paths at all, and use the bfs tree instead. I am not sure if all_simple_paths is BFS - it probably is - but that might give you a better selection of initial paths to look at with the second algorithm, not sure. E.g. if you know that your source node has multiple successors, you may get decent results by just forcing your starting two paths to start with two different successors instead of just from the initial node. Note that this can also bite you too - this greedy algorithm can lead you astray as well, unless your graph is already a good fit for it (which you may already know, or may not).
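A sketch of the set-based intersection count used in the brute-force comparison (paths is a hypothetical list collected from all_simple_paths):

```python
import itertools

def shared_nodes(p, q):
    return len(set(p) & set(q))   # paths are simple, so sets lose nothing

# brute force over pairs: the pair with the fewest common nodes
best_pair = min(itertools.combinations(paths, 2),
                key=lambda pq: shared_nodes(*pq))
```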
2
0
0
I am using the NetworkX library for Python in my application that does some graph processing. One task is to call the all_simple_paths() function of NetworkX to give me all non-looping paths in the graph (up to a certain max. length of paths). This is working well. Using this list of all simple paths in the graph the task is now to find a number of n paths out of this list, where each of these n paths should be as different from all other n paths as possible. Or in other words: any two paths from the resulting n paths should have as few common nodes as possible. Or in even other words: each path in the resulting n paths should be as unique as possible. Can you guys think of any (non brute force) algorithm to achieve this?
How to find n most different paths in a graph?
1.2
0
1
905
22,096,525
2014-02-28T13:28:00.000
0
0
0
0
python,algorithm,graph
22,098,044
2
false
0
0
You could create a similarity or distance between two paths based on the number of edges that they share. Then apply a clustering algorithm to find n clusters, and pick one representative from each cluster, perhaps in a greedy fashion to minimise (in the case of similarities) edge weights between representatives.
2
0
0
I am using the NetworkX library for Python in my application that does some graph processing. One task is to call the all_simple_paths() function of NetworkX to give me all non-looping paths in the graph (up to a certain max. length of paths). This is working well. Using this list of all simple paths in the graph the task is now to find a number of n paths out of this list, where each of these n paths should be as different from all other n paths as possible. Or in other words: any two paths from the resulting n paths should have as few common nodes as possible. Or in even other words: each path in the resulting n paths should be as unique as possible. Can you guys think of any (non brute force) algorithm to achieve this?
How to find n most different paths in a graph?
0
0
1
905
22,097,901
2014-02-28T14:29:00.000
2
0
1
0
python,execution
22,098,159
1
true
0
0
These frames are representations of the stack frames created by function calls. You should not need to access them in normal programming. A new frame is indeed created every time a function is called, and destroyed when it exits or raises an uncaught exception. Since function calls can go many levels deep your program ends up with a bunch of nested stack frames, but it's not good programming practice (unless you are writing a debugger or similar application) to mess about with the frames even though Python does make them available.
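That said, the current frame is reachable if you need it for debugging:

```python
import inspect

def f():
    frame = inspect.currentframe()      # or sys._getframe()
    print(frame.f_code.co_name)         # 'f' -- the running code block
    print(frame.f_back.f_code.co_name)  # the caller's frame ('<module>' here)

f()
```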
1
4
0
The python reference manual says that: A code block is executed in an execution frame. A frame contains some administrative information (used for debugging) and determines where and how execution continues after the code block's execution has completed. And: Frame objects represent execution frames. They may occur in traceback objects. But I don't understand how frames work. How can I get access to the current frame object? When is a frame object created? Is a frame object created every time the code of a new block starts to execute?
frame type in python
1.2
0
0
2,140
22,100,757
2014-02-28T16:37:00.000
3
0
1
0
python,mysql
44,177,264
14
false
0
0
Also something that can go wrong: don't name your own module mysql. import mysql.connector will fail because the import gives the module in the project precedence over site packages, and yours likely doesn't have a connector.py file.
1
35
0
I'm using Amazon Linux AMI release 2013.09. I've installed virtualenv, and after activation I run pip install mysql-connector-python, but when I run my app I get an error: ImportError: No module named mysql.connector. Has anyone else had trouble doing this? I can install it outside of virtualenv and my script runs without issues. Thanks in advance for any help!
Can not get mysql-connector-python to install in virtualenv
0.042831
1
0
73,109
22,101,857
2014-02-28T17:26:00.000
0
0
0
0
python,openerp
22,103,548
1
true
1
0
Working through XML-RPC is pretty much like working directly on the server, only slower. To get the product list you'll need to interact wit product.product, and to narrow the list (and the data) you'll need to specify a domain such as domain=[('color','=','red'),('style','=','sport')] and fields=['id','name','...']. Hopefully that's enough to get you going.
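A hedged sketch of that interaction from Python over XML-RPC (OpenERP 7 endpoints; the URL, credentials, and domain are placeholders):

```python
import xmlrpclib  # Python 2

url, db, user, pwd = "http://localhost:8069", "mydb", "admin", "admin"
uid = xmlrpclib.ServerProxy(url + "/xmlrpc/common").login(db, user, pwd)
models = xmlrpclib.ServerProxy(url + "/xmlrpc/object")

# narrow the product list with a domain, then read only the fields you need
ids = models.execute(db, uid, pwd, "product.product", "search",
                     [("color", "=", "red"), ("style", "=", "sport")])
products = models.execute(db, uid, pwd, "product.product", "read",
                          ids, ["id", "name"])
```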
1
0
0
I'm currently working on a mobile app that connects with an openerp 7 instance though XML-RPC. Although xmlrpc comm between iOS & Openerp 7 works perfectly, I'm puzzled at which objects I need to interact with at the openerp side in order to get the product list with only the items I want and to post a sale. Any one? Thanx, M
Retrieving product list form openerp 7
1.2
0
0
273
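A hedged sketch of reading product.product over XML-RPC from OpenERP 7; the URL, database name, credentials, domain and field list below are all placeholders, not values from the question.

import xmlrpclib

url = 'http://localhost:8069'
db, user, pwd = 'mydb', 'admin', 'admin'   # hypothetical

uid = xmlrpclib.ServerProxy(url + '/xmlrpc/common').login(db, user, pwd)
models = xmlrpclib.ServerProxy(url + '/xmlrpc/object')

# narrow the result with a domain and a field list, as the answer suggests
ids = models.execute(db, uid, pwd, 'product.product', 'search',
                     [('name', 'ilike', 'sport')])
products = models.execute(db, uid, pwd, 'product.product', 'read',
                          ids, ['id', 'name', 'list_price'])
print(products)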
22,102,352
2014-02-28T17:51:00.000
0
0
0
0
python,selenium,phantomjs
22,103,574
1
true
1
0
I just discovered that my problem was with an elem.send_keys(Keys.ENTER) line. PhantomJS seems to be very fast, so I had to put a time.sleep of 2 seconds before that line, and now the script works fine. What happened is that the Enter button for login wasn't being clicked properly. Of course time.sleep(2) isn't the best way to solve it; I will change the ENTER statement into a click with XPath.
1
2
0
I was trying to log into a website that is loaded fully dynamically using dojo.js scripts. On my tests I am using: Selenium 2.40, PhantomJS 1.9.7 (downloaded via npm), Ubuntu 12.04. When I try my script with driver = webdriver.Firefox(), everything works fine: Firefox logs in through the login page /login.do, gets through the authentication page and arrives at the landing page, and everything works perfectly. But I have to make this code work on an Ubuntu server, so I can't use a GUI. When I change to driver = webdriver.PhantomJS(), I arrive again at /login.do (print driver.current_url). I have tried to use WebDriverWait and nothing happens. Does PhantomJS for Python have an issue with dynamically loaded pages? If not, can I use another tool, or better yet, does someone know a book or tutorial for understanding XHR requests and doing this job with requests and urllib2?
Selenium phantomjs (python) not redirecting to welcome page after login, page is loaded dynamically using dojo
1.2
0
1
633
22,103,096
2014-02-28T18:32:00.000
0
1
0
1
python,linux,filesystemwatcher,inotify
22,106,444
1
true
0
0
Create a script (you wouldn't need Python for this task, just df and find). This is pretty lightweight, needs less code than a daemon (much less maintenance in the long run), and running scripts once a minute from cron is not unheard of. :-)
1
1
0
I need to monitor NAS file system disk space. Whenever disk usage goes above a threshold value, I delete the oldest files from the file system to bring usage back below the threshold. I read several articles which suggested two alternatives: creating a daemon process which will run in the background, or creating a script run through crontab. Which would be a better way to run a file system monitoring service? I need to run the monitoring script every 60 seconds. For both options I will use Python. It will run in a *nix (unix/linux) environment.
which is the better way to run a file system monitoring service?
1.2
0
0
154
22,106,380
2014-02-28T21:42:00.000
21
0
1
0
python,pypdf2
42,181,449
4
false
0
0
If you have pip, PyPDF2 is on the Python Package Index, so you can install it with the following in your terminal/command prompt: Python 2: pip install PyPDF2 Python 3: pip3 install PyPDF2
2
15
0
As a newbie... I am having difficulties installing the pyPDF2 module. I have downloaded it. Where and how do I run the install (setup.py) so I can use the module in the Python interpreter?
How do I install pyPDF2 module using windows?
1
0
0
94,939
22,106,380
2014-02-28T21:42:00.000
1
0
1
0
python,pypdf2
33,626,099
4
false
0
0
Here's how I did it: after you have downloaded and installed Python (it usually installs under C:\Python** (** being the Python version - usually 27)), copy the extracted PyPDF2 contents to the C:\Python** folder. After that, in a command prompt/terminal, enter "cd C:\Python27" and then "python.exe setup.py install". If you did everything right it should start installing PyPDF2.
2
15
0
As a newbie... I am having difficulties installing the pyPDF2 module. I have downloaded it. Where and how do I run the install (setup.py) so I can use the module in the Python interpreter?
How do I install pyPDF2 module using windows?
0.049958
0
0
94,939
22,108,095
2014-02-28T23:58:00.000
2
0
0
0
python,matplotlib
22,109,817
1
true
0
0
I don't think it's possible. I did a little bit of the backend's work in my main script, setting up a RendererPdf (defined in backend_pdf.py) containing a GraphicsContextPdf, which is a GraphicsContextBase which keeps a capstyle, initialized as butt. As confirmed by grep, this is the only place where butt is hardcoded as a capstyle. After some ipython debugging, I've found that a new GraphicsContextPdf or 'gc' is generated each time a patch is drawn (cf. patches.py:392, called by way of a necessary fig.draw() in the main script), and the settings for the new gc (again initialized as butt) are incorporated into the original RendererPdf's gc. So everything gets a butt capstyle. Line2D objects are not patches, so they can maintain a particular capstyle.
1
4
1
Figures rendered with the PDF backend have a 'butt' capstyle in my reader. (If I zoom at the corner of a figure in a pdf, I do not see a square corner, but the overlap of shortened lines.) I would like either a 'round' or 'projecting' (what matplotlib calls the 'square' capstyle) cap. Thus the Spine objects are in question, and a Spine is a Patch is an Artist, none of which seem to have anything like the set_solid_capstyle() of Line2D, so I'm not sure how or where to force a particular capstyle, or if it's even possible.
set capstyle of spines for pdf backend
1.2
0
0
176
22,109,120
2014-03-01T01:57:00.000
1
1
1
0
python,performance,cpu,cpu-usage
22,109,186
2
false
0
0
First recommendation is the simplest: lower the process priority to the absolute minimum. If it is still not responsive, you could sprinkle in sleep() calls from the time module to surrender the CPU. Or buy a new computer with 4 cores and just let it run. I do this all the time -- works great. ADDED: Adding time.sleep() calls will leave a single-CPU system running "bursty". Also, sleep(0) may be effective in an inner loop, as it will yield the CPU but get rescheduled quickly if nothing else wants to use it. OOPS, forgot to check: you are using Linux -- sleep(0) does nothing there. You can call the native sched_yield() API; I don't think it is built into Python anywhere. A minimal sketch follows this record.
1
3
0
I'm running a python program that's a fairly intensive test of many possible scenarios using a big-O of n algorithm. It's just brute-forcing it by testing over a billion different possibilities using at least five nested loops. Anyway, I'm not concerned with how much time the program takes. It's fine to run in the background for long periods of time, it's just that I can't have it clogging up the CPU. Is there any way in Python (3.3) to devote less CPU to a program in exchange for giving it more time? Thanks in advance.
Is it possible to force your computer to devote less CPU in exchange for more time when running a python program?
0.099668
0
0
245
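A minimal sketch of both suggestions on Linux: raise the niceness with os.nice and sprinkle in short real sleeps. The loop body and the tuning numbers are placeholders, not part of the answer.

import os
import time

os.nice(19)                      # raise niceness by 19: lowest scheduling priority
for i in range(10 ** 6):         # stand-in for the real nested loops
    # ... a chunk of the actual work would go here ...
    if i % 10000 == 0:
        time.sleep(0.001)        # brief real sleep; sleep(0) is a no-op on Linux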
22,110,562
2014-03-01T05:29:00.000
0
0
0
0
python,database,redis,redis-py
22,111,910
1
false
0
0
You cannot. The number of databases is not a dynamic parameter in Redis. You can change it by updating the Redis configuration file (databases parameter) and restarting the server. From a client (Python or other), you can retrieve this value using the "CONFIG GET databases" command, but a "CONFIG SET databases xxx" command will be rejected. A small redis-py example follows this record.
1
1
0
I know that Redis has 16 databases by default, but what if I need to add another database? How can I do that using redis-py?
Insert a new database in redis using redis.StrictRedis()
0
1
0
460
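Checking the setting from redis-py, as described; host and port below are just the defaults.

import redis

r = redis.StrictRedis(host='localhost', port=6379)
print(r.config_get('databases'))     # e.g. {'databases': '16'}
# r.config_set('databases', 32)      # rejected by the server, as noted above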
22,111,408
2014-03-01T07:18:00.000
1
1
0
0
python,usb,ethernet,libusb
22,111,428
1
true
0
0
No, it is not possible. There is no sane way to affect the PHY in PC software.
1
0
0
I am trying to set the ethernet port pins directly to send High / Low signals to light up four LEDs. Is there any way I can simply connect LEDs to the ethernet cable? What would be the best approach, if using ethernet is not a good option, to light up four LED lights and switch them on/off using a PC. I am planning to create a command-line tool in Python. So far I have gone through a lot of articles about PySerial, LibUSB etc. and have been suggested to use USB to UART converter modules, or USB to RS-232 converters. Actually I don't want any interfaces except the PC port and the LEDs as far as possible. Please suggest!
Toggle pins of ethernet port to high / low state?
1.2
0
1
250
22,112,779
2014-03-01T09:54:00.000
0
0
0
0
python,asp.net
22,112,975
2
false
0
0
Did you mean to ask about python or *nix filesystems in general?
2
1
0
Does a soft link still work if the file it links to is moved to a different location on the disk, and why is that possible given the action?
Soft links and Hard links
0
0
0
132
22,112,779
2014-03-01T09:54:00.000
1
0
0
0
python,asp.net
22,113,036
2
false
0
0
In terms of the file system, soft links will not work if the target they point to has been moved/renamed/deleted. The link continues to point to the old target, now a non-existing location or file. This is because a symbolic link contains a text string that is automatically interpreted and followed by the operating system as a path to another file or directory, called the target.
2
1
0
Does a soft link still work if the file it links to is moved to a different location on the disk, and why is that possible given the action?
Soft links and Hard links
0.099668
0
0
132
22,113,950
2014-03-01T11:45:00.000
2
0
1
1
python,eclipse,pydev
28,799,767
1
false
0
0
The PYTHONPATH in PyDev is computed in the following order: Source folders of the project have highest priority (since this is the code you're expecting to be editing). External source folders of the project being used for the launch. Computed PYTHONPATH of the dependent projects (again, first source then external). PYTHONPATH of the related interpreter selected. Note that the final sys.path is actually computed by Python itself (so, it may be a bit different depending on your Python version -- i.e.: it could add things from the current working dir, current module or eggs even if you remove it from what's configured in PyDev -- although for PyDev, modules not added won't be available for code completion and would be present as errors when searched for as they won't be indexed), PyDev only changes the PYTHONPATH environment variable to match the order presented above. If you somehow have a different outcome, please report this as a bug... (you can see what will be actually used before running in the launch run configuration > interpreter tab > see resulting command-line).
1
1
0
Using Eclipse with the PyDev plugin, if you choose myProject>Properties>PyDev-PYTHONPATH, you then see two tabs: Source Folders and External Libraries. You can also choose myProject>Properties>Project References and see a widget with a checkable list of other parallel subprojects in your Eclipse/PyDev IDE workspace. I understand that the values in these widgets configure the PYTHONPATH when you run your project. But the documentation does not seem to say the ordering of the values you specify. For example, are Project References values always after Source Folders and before External Libraries in the generated PYTHONPATH? (That is the ordering I wish for, so that I can install one of my subprojects with Python, and my main project will find the installed version if I have turned off Project References, but will find the same project from my workspace if I turn on a Project Reference to it while I am changing and debugging the subproject.) Similarly (recursively), are the External Libraries of a Referenced Project inserted in the PYTHONPATH after the Source Folder of that Referenced Project? It seems like my PYTHONPATH has the site-packages external library directory before the source folder of my subproject, so Python never finds the development version of my subproject, only the subproject version as installed in site-packages. I have tried several times to 'Force restore internal info' and to restart Eclipse. I suppose I could have made a mistake somewhere outside of Eclipse.
In what order are PyDev project references, source folders, and external libraries in the PYTHONPATH
0.379949
0
0
340
22,114,437
2014-03-01T12:34:00.000
0
0
1
0
python,transactions,nosql,redis
22,115,137
1
true
0
0
The result of pipeline.execute() is an array of 2 elements. The first is the result of pipeline.incr('trans:'), the second is the result of pipeline.incr('trans:', -1). The output is correct in your case; a small demonstration follows this record.
1
2
0
I am reading a book on Redis (Redis in Action) and on pages 59-60 there is an example use of a transaction, as below:

def trans(conn):
    pipeline = conn.pipeline()
    pipeline.incr('trans:')
    time.sleep(.1)
    pipeline.incr('trans:', -1)
    print pipeline.execute()[0]

def run_transaction(conn):
    if 1:
        for i in xrange(3):
            threading.Thread(target=trans, args=(conn,)).start()
        time.sleep(.5)

I am expecting this to produce: 0 0 0. But the output is: 1 1 1. Can someone explain why ('trans:' is never used anywhere else)? Thanks
Why are the results of my code unexpected?
1.2
0
0
48
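A small demonstration that execute() returns one result per queued command, in order (assuming the key starts out clean, which the delete below guarantees).

import redis

conn = redis.StrictRedis()
conn.delete('trans:')          # start from a clean key
pipe = conn.pipeline()
pipe.incr('trans:')            # queued, not yet run
pipe.incr('trans:', -1)        # queued, not yet run
print(pipe.execute())          # [1, 0]: both INCR results, in order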
22,114,984
2014-03-01T13:26:00.000
0
0
0
0
python,django,debugging
22,116,486
2
false
1
0
To make your life easier, try an IDE like PyCharm. I use pdb or ipdb to debug simple Python files, but they wouldn't be so useful in debugging complex Python scripts. Also, django-debug-toolbar is a good tool to debug and optimize a Django application.
1
0
0
I'm currently debugging a django application by inserting import pdb; pdb.set_trace() in the code and using the debugger commands to navigate through the running application. The debugger shows the current line, but most of the time it is helpful to have a bit more context, so I open the current file in an editor in another window. Now whenever the flow changes to another class I need to manually open the new file in the editor. This feels like there should be an easier way. Is there any kind of IDE integration able to debug a running django application? Is there some other way I am not yet aware of?
How to debug a django application in a comfortable way?
0
0
0
91
22,121,165
2014-03-01T22:21:00.000
0
0
0
0
python
22,121,194
2
false
0
0
When you use your browser, it sends a header known as a User-Agent that identifies it. You need to 'spoof' the user agent from your Python script to make the server think a human is browsing the website. Set the User-Agent header to that of a common browser; this makes it difficult for the server to detect that you are using a script. A hedged example follows this record.
1
0
0
I was trying to download some files from a server, but it came back with an error page saying only links from the internal server are allowed. I was able to download the file with any browser by clicking the link, and I have verified that the link I captured in Python was correct. Is there any way this can be done using Python? I tried urllib, urllib2 and requests, but none works. I could use selenium, but that solution is not elegant.
Download files that only allow access from internal server with Python
0
0
1
56
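A hedged requests example; the User-Agent string is just an example browser value, and depending on how the server checks for "internal" links, a Referer header (an assumption here, not stated in the answer) may matter as much as the User-Agent.

import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; rv:27.0) Gecko/20100101 Firefox/27.0',
    'Referer': 'http://example.com/downloads/',   # hypothetical originating page
}
r = requests.get('http://example.com/file.zip', headers=headers)
with open('file.zip', 'wb') as f:
    f.write(r.content)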
22,121,368
2014-03-01T22:42:00.000
0
0
0
0
python,linux,django,sockets
22,125,431
3
false
1
0
You need the following two programs running at all times: the producer, which will populate the queue - this is the program that will collect the various messages and then post them on the queue - and the consumer, which will process messages from the queue. The consumer's job is to read each message and do something with it, so that it is processed and removed from the queue. The function the consumer performs is entirely up to you, but what you want to do in this scenario is write information from the message to a database model; the same database that is part of your django app. As the producer pushes messages and the consumer removes them from the queue, your database will get updated. On the django side, the process is simply to filter this database and display records for a particular machine. As such, django does not need to be aware of how the records are being populated in the database - all django is doing is fetching, filtering, sending to the template and rendering the views. The question then becomes how best (well actually, most easily) to populate the database. You can do it the traditional way, by using Python's well-documented DB-API and writing your own SQL statements; but since celery is so well integrated with django, you can use django's ORM to do this work for you as well. I hope this gets you going in the right direction.
1
4
0
I've been trying to make a decision about my student project before going further. The main idea is to get disk usage data, active linux user data, and so on from multiple internal servers and publish them with Django. Before I came to RabbitMQ I was thinking about developing a client application for each linux server and getting this data through a socket, but I want to keep this student project simple. Also, I don't know how difficult it is to make a socket connection via Django. So I thought I could solve my problem with RabbitMQ without socket programming: basically, I send a message to the rabbit queue, then get whatever I want from the consumer server. On the Django side, the client will select one of the internal servers and click the "details" button; then I want to show this information on a web page. I have already read almost all the documentation about rabbitmq, celery and pika. Sending messages to all internal servers (clients) and calculating the information I want is OKAY, but I can't figure out how I can put this data on a webpage with Django. How would you approach this problem if you were me? Thank you.
Using RabbitMQ with Django to get information from internal servers
0
0
0
3,991
22,122,506
2014-03-02T00:52:00.000
4
0
0
0
python,r,machine-learning,scikit-learn,svm
22,123,913
2
true
0
0
I do not have experience with e1071; however, from googling it, it seems that it either uses or is based on LIBSVM (I don't know enough R to determine which from the CRAN entry). scikit-learn also uses LIBSVM. In both cases the model is going to be trained by LIBSVM. Speed, scalability and the variety of options available are going to be exactly the same, and in using SVMs with these libraries the main limitations you will face are the limitations of LIBSVM. I think that giving further advice is going to be difficult unless you clarify a couple of things in your question: what is your objective? Do you already know LIBSVM? Is this a learning project? Who is paying for your time? Do you feel more comfortable in Python or in R?
2
1
1
Recently I was contemplating the choice of using either R or Python to train support vector machines. Aside from the particular strengths and weaknesses intrinsic to both programming languages, I'm wondering if there are any heuristic guidelines for making a decision on which way to go, based on the packages themselves. I'm thinking in terms of speed of training a model, scalability, availability of different kernels, and other such performance-related aspects. Given some data sets of different sizes, how could one decide which path to take? I apologize in advance for such a possibly vague question.
What's the difference between using libSVM in scikit-learn, or e1071 in R, for training and using support vector machines?
1.2
0
0
381
22,122,506
2014-03-02T00:52:00.000
0
0
0
0
python,r,machine-learning,scikit-learn,svm
22,189,863
2
false
0
0
Some time back I had the same question. Yes, both e1071 and scikit-learn use LIBSVM. I have experience with e1071 only, but there are some areas where R is better. I have read in the past that Python does not handle categorical features properly (at least not right out of the box), which could be a big deal for some. I also prefer R's formula interface, and some of its nice data manipulation packages. Python is definitely better for general-purpose programming, and scikit-learn aids in using a single programming language for all tasks.
2
1
1
Recently I was contemplating the choice of using either R or Python to train support vector machines. Aside from the particular strengths and weaknesses intrinsic to both programming languages, I'm wondering if there are any heuristic guidelines for making a decision on which way to go, based on the packages themselves. I'm thinking in terms of speed of training a model, scalability, availability of different kernels, and other such performance-related aspects. Given some data sets of different sizes, how could one decide which path to take? I apologize in advance for such a possibly vague question.
What's the difference between using libSVM in scikit-learn, or e1071 in R, for training and using support vector machines?
0
0
0
381
22,128,419
2014-03-02T13:49:00.000
1
0
0
0
python,sqlalchemy,flask,flask-sqlalchemy
22,128,680
2
false
1
0
SQLAlchemy is generally not faster (especially as it uses those drivers to connect). However, SQLAlchemy will help you structure your data in a sensible way and help keep the data consistent. It will also make it easier for you to migrate to a different db if needed.
2
0
0
I'd rather just use raw MySQL, but I'm wondering if I'm missing something besides security concerns. Does SQLAlchemy or another ORM handle scaling any better than just using pymysql or MySQLdb?
Should I use an ORM like SQLAlchemy for a lightweight Flask web service?
0.099668
1
0
941
22,128,419
2014-03-02T13:49:00.000
1
0
0
0
python,sqlalchemy,flask,flask-sqlalchemy
22,134,840
2
true
1
0
Your question is too open for anyone to guarantee SQLAlchemy is not a good fit, but SQLAlchemy will probably never be your scalability bottleneck: you'll have to handle almost the same problems with or without it. Of course SQLAlchemy has some performance impact - it is a layer above the database driver - but it will also help you a lot. That said, if you want to use SQLAlchemy to help with your security (SQL escaping), you can use SQLAlchemy just to execute your raw SQL queries, but I recommend doing that only to fix specific bottlenecks, never to avoid the ORM.
2
0
0
I'd rather just use raw MySQL, but I'm wondering if I'm missing something besides security concerns. Does SQLAlchemy or another ORM handle scaling any better than just using pymysql or MySQLdb?
Should I use an ORM like SQLAlchemy for a lightweight Flask web service?
1.2
1
0
941
22,132,285
2014-03-02T19:27:00.000
1
0
1
0
python,loops,while-loop
22,132,302
5
false
0
0
That is not only Python; those operators exist in most programming languages. x=1; x+=1; x will be 2. x=1; x-=1; x will be 0. x=3; x*=2; x will be 6. x=6; x/=2; x will be 3. A short while-loop example follows this record.
1
0
0
What do +=, -=, *= and /= stand for in Python? And how do you use them in a while loop?
What do +=, -=, *= and /= stand for in Python?
0.039979
0
0
25,742
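A short example of an augmented assignment driving a while loop, since the question asks about that specifically.

count = 0
while count < 5:
    print(count)
    count += 1      # same as: count = count + 1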
22,134,173
2014-03-02T22:02:00.000
6
1
0
1
python,rabbitmq
22,134,400
1
true
0
0
In general, the network aspect of "batching messages" is handled at the level of the basic.qos(prefetch-size, prefetch-count) parameters. In this scheme, the broker will send some number of bytes/messages (respectively) beyond the unacknowledged messages for a consumer, but the client library doles out messages, in process, one at a time to the application. To maximize the benefit, the application can withhold basic.ack() for each message and periodically issue basic.ack(delivery-tag=n, multiple=True) to acknowledge all messages with a delivery tag <= n. A hedged pika sketch follows this record.
1
6
0
I have a stream of requests in my RabbitMQ cluster, and multiple consumers handling them. The thing is, each consumer must handle requests in batches for performance reasons: specifically, there is a network IO operation that I can amortize by batching requests. So each consumer would like to maximize the number of requests it can batch without adding too much latency. I could potentially start a timer when a consumer receives the first request and keep collecting requests until one of two things happens: the timer expires, or 500 requests have been received. Is there a better way to achieve this, without blocking each consumer?
Batch message from a rabbitMQ queue
1.2
0
0
4,628
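A hedged pika sketch of the prefetch-plus-batched-ack pattern the answer describes. The queue name, batch size and process_batch body are assumptions, the basic_consume signature shown is the pre-1.0 pika one, and the timer-expiry side of the question is left out for brevity.

import pika

BATCH = 500
pending = []

def process_batch(messages):
    pass    # stand-in for the amortized network IO

def on_message(ch, method, properties, body):
    pending.append(body)
    if len(pending) >= BATCH:
        process_batch(pending)
        # one ack covers every unacknowledged message up to this tag
        ch.basic_ack(delivery_tag=method.delivery_tag, multiple=True)
        del pending[:]

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()
ch.basic_qos(prefetch_count=BATCH)     # broker keeps up to BATCH messages in flight
ch.basic_consume(on_message, queue='requests')
ch.start_consuming()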
22,137,858
2014-03-03T02:51:00.000
1
0
1
0
python,shell,python-2.7,ipython
22,137,897
1
false
0
0
Just press q if you want to exit from the docs in IPython.
1
0
0
Whenever I use IPython and want to look up the docs on some object, I use the ? command after the object. However, once I'm viewing the docs I can't go back to the previous IPython shell. Are there some easy keystrokes to do this? It is getting really annoying having to exit out of the IPython shell every time I need to look up an object or function. I'm using IPython on Ubuntu, if that helps. Thanks
Ipython ? Shell
0.197375
0
0
98
22,141,637
2014-03-03T08:11:00.000
0
1
0
0
python,paramiko
22,160,534
3
false
0
0
If you are planning to use the exec_command() method provided within the paramiko API, you are limited to sending only a single command at a time; as soon as the command has been executed, the channel is closed. The excerpt below is from the paramiko API docs: exec_command(self, command) - Execute a command on the server. If the server allows it, the channel will then be directly connected to the stdin, stdout, and stderr of the command being executed. When the command finishes executing, the channel will be closed and can't be reused. You must open a new channel if you wish to execute another command. But since a transport is also a form of socket, you can send commands without using the exec_command() method, using barebones socket programming. In case you have a defined set of commands, both pexpect and exscript can be used, where you read a set of commands from a file and send them across the channel. A related paramiko sketch follows this record.
1
0
0
I have read other Stack Overflow threads on this. Those are older posts, and I would like to get the latest update. Is it possible to send multiple commands over a single channel in paramiko, or is it still not possible? If so, is there any other library which can do the same? Example scenario, automating Cisco router config: the user needs to first enter "config t" before entering the other commands. It's currently not possible in paramiko. Thanks.
Paramiko - python SSH - multiple command under a single channel
0
0
1
4,222
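The answer mentions working below exec_command(); a related option in paramiko itself is invoke_shell(), which keeps one interactive channel open. A hedged sketch with placeholder host, credentials and commands, and crude sleep-based pacing:

import time
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('192.0.2.1', username='admin', password='secret')

shell = client.invoke_shell()            # one channel, reused for every command
for cmd in ['config t', 'hostname demo', 'end']:
    shell.send(cmd + '\n')
    time.sleep(1)                        # pexpect-style prompt matching is more robust
print(shell.recv(65535))
client.close()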
22,142,369
2014-03-03T08:58:00.000
2
0
0
0
python,matlab,scipy
22,165,531
4
true
0
0
Use the functions scipy.ndimage.filters.correlate and scipy.ndimage.filters.convolve. An example with boundary modes follows this record.
1
11
1
I know the equivalent functions of conv2 and corr2 of MATLAB are scipy.signal.convolve and scipy.signal.correlate. But the MATLAB function imfilter can handle values outside the bounds of the array, with options like symmetric, replicate and circular. Can Python do those things?
The equivalent function of Matlab imfilter in Python
1.2
0
0
12,599
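The ndimage filters take a mode argument whose values map roughly onto imfilter's boundary options: 'reflect' ~ symmetric, 'nearest' ~ replicate, 'wrap' ~ circular. The image and kernel below are toy values.

import numpy as np
from scipy.ndimage.filters import correlate

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0                    # simple mean filter

out = correlate(image, kernel, mode='nearest')    # 'replicate'-style boundaries
print(out)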
22,143,644
2014-03-03T10:02:00.000
0
0
0
0
python,arrays,numpy
37,731,398
5
false
0
0
They are very applicable in scientific computing. Right now, for instance, I am running simulations which output data in a 4D array: specifically | Time | x-position | y-position | z-position |. Almost every modern spatial simulation will use multidimensional arrays, as will programming for computer games.
5
1
1
I was surprised when I started learning numpy that there are N-dimensional arrays. I'm a programmer, and I thought that nobody ever uses more than a 2D array. Actually, I can't even think beyond a 2D array: I don't know how to think about 3D, 4D, 5D arrays or more, and I don't know where to use them. Can you please give me examples of where 3D, 4D, 5D ... etc. arrays are used? And what would happen if one used numpy.sum(array, axis=5) on a 5D array?
Examples on N-D arrays usage
0
0
0
141
22,143,644
2014-03-03T10:02:00.000
0
0
0
0
python,arrays,numpy
22,146,242
5
false
0
0
There are so many examples... The way you are trying to represent it is probably wrong. Let's take a simple example: you have boxes, and a box stores N items in it. You can store up to 100 items in each box. You've organized the boxes on shelves. A shelf allows you to store M boxes. You can identify each box by an index. All the shelves are in a warehouse with 3 floors, so you can identify any shelf using 3 numbers: the row, the column and the floor. A box is then identified by: row, column, floor and the index on the shelf. An item is identified by: row, column, floor, index on shelf, index in box. Basically, one way (not the best one...) to model this problem would be to use a 5D array.
5
1
1
I was surprised when I started learning numpy that there are N-dimensional arrays. I'm a programmer, and I thought that nobody ever uses more than a 2D array. Actually, I can't even think beyond a 2D array: I don't know how to think about 3D, 4D, 5D arrays or more, and I don't know where to use them. Can you please give me examples of where 3D, 4D, 5D ... etc. arrays are used? And what would happen if one used numpy.sum(array, axis=5) on a 5D array?
Examples on N-D arrays usage
0
0
0
141
22,143,644
2014-03-03T10:02:00.000
1
0
0
0
python,arrays,numpy
22,144,505
5
false
0
0
A few simple examples are: an n x m 2D array of p-vectors represented as an n x m x p 3D matrix, as might result from computing the gradient of an image; and a 3D grid of values, such as a volumetric texture. These can even be combined: in the case of the gradient of a volume you get a 4D matrix. Staying with the graphics paradigm, adding time adds an extra dimension, so a time-variant 3D gradient texture would be 5D. numpy.sum(array, axis=5) is not valid for a 5D array (as axes are numbered starting at 0).
5
1
1
I was surprised when I started learning numpy that there are N-dimensional arrays. I'm a programmer, and I thought that nobody ever uses more than a 2D array. Actually, I can't even think beyond a 2D array: I don't know how to think about 3D, 4D, 5D arrays or more, and I don't know where to use them. Can you please give me examples of where 3D, 4D, 5D ... etc. arrays are used? And what would happen if one used numpy.sum(array, axis=5) on a 5D array?
Examples on N-D arrays usage
0.039979
0
0
141
22,143,644
2014-03-03T10:02:00.000
0
0
0
0
python,arrays,numpy
22,144,263
5
false
0
0
For example, a 3D array could be used to represent a movie, that is, a 2D image that changes with time. For a given time, the first two axes give the coordinates of a pixel in the image, and the corresponding value gives the color of that pixel, or a grey-scale level. The third axis then represents time: for each time slot, you have a complete image. In this example, numpy.sum(array, axis=2) would integrate the exposure in a given pixel. If you think about a film taken in low-light conditions, you could imagine doing something like that to be able to see anything. A short numpy illustration follows this record.
5
1
1
I was surprised when I started learning numpy that there are N-dimensional arrays. I'm a programmer, and I thought that nobody ever uses more than a 2D array. Actually, I can't even think beyond a 2D array: I don't know how to think about 3D, 4D, 5D arrays or more, and I don't know where to use them. Can you please give me examples of where 3D, 4D, 5D ... etc. arrays are used? And what would happen if one used numpy.sum(array, axis=5) on a 5D array?
Examples on N-D arrays usage
0
0
0
141
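A short numpy illustration of the movie example; the shapes and the random data are placeholders.

import numpy as np

movie = np.random.rand(480, 640, 100)    # hypothetical 100-frame grayscale clip
exposure = movie.sum(axis=2)             # integrate over time: shape (480, 640)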
22,143,644
2014-03-03T10:02:00.000
0
0
0
0
python,arrays,numpy
22,144,157
5
false
0
0
Practical applications are hard to come up with, but I can give you a simple example for 3D. Imagine taking a 3D world (a game or simulation, for example) and splitting it into equally sized cubes. Each cube could contain a specific value of some kind (a good example is temperature for climate modelling). The matrix can then be used for further operations (simple ones like calculating its transpose, its determinant, etc...). I recently had an assignment which involved modelling fluid dynamics in a 2D space. I could have easily extended it to work in 3D, and this would have required me to use a 3D matrix instead. You may also wish to further extend matrices to cater for time, which would make them 4D. In the end, it really boils down to the specific problem you are dealing with. As an end note, however, 2D matrices are still used for 3D graphics (you use a 4x4 augmented matrix).
5
1
1
I was surprised when I started learning numpy that there are N-dimensional arrays. I'm a programmer, and I thought that nobody ever uses more than a 2D array. Actually, I can't even think beyond a 2D array: I don't know how to think about 3D, 4D, 5D arrays or more, and I don't know where to use them. Can you please give me examples of where 3D, 4D, 5D ... etc. arrays are used? And what would happen if one used numpy.sum(array, axis=5) on a 5D array?
Examples on N-D arrays usage
0
0
0
141
22,144,504
2014-03-03T10:39:00.000
17
1
0
0
python,automated-tests,coverage.py,python-behave
23,836,778
1
false
0
0
You can run any module with coverage to see the code usage. In your case it should be close to: coverage run --source='.' -m behave. Tracking code coverage for acceptance/integration/behaviour tests will easily give a high coverage number, but can lead to the idea that the code is properly tested. Those tests are for seeing things working together, not for tracking how much code is well 'covered'. Tying together unit tests and coverage makes more sense to me.
1
9
0
We are using the Behave BDD tool for automating APIs. Is there any tool which gives code coverage for our Behave cases? We tried using the coverage module, but it didn't work with Behave.
Test coverage tool for Behave test framework
1
0
0
4,754
22,144,748
2014-03-03T10:51:00.000
5
1
1
0
python,numpy,module,matplotlib,ipython
22,144,980
1
true
0
0
Repeated imports aren't a problem. No matter how many times a module is imported in a program, Python will only run its code once and only make one copy of the module. All imports after the first will merely refer to the already-loaded module object. If you're coming from a C++ background, you can imagine the modules all having implicit include guards. A small demonstration follows this record.
1
4
0
I'm writing a .py file which will be regularly imported at the start of some of my IPython sessions in the first cells, but will also be imported from other non-interactive sessions, since it contains functions that can be run in batch in non-interactive mode. It is basically a module containing many classes and functions that are very common. Since I'm using IPython with the --pylab=inline option, numpy as well as matplotlib functions are already imported, but when the module is run in batch with a simple python mymodule.py, the numpy functions have to be imported explicitly. So I end up with double imports during the IPython session, a thing I don't like very much. What is the best practice in this case? Isn't importing modules twice a bad practice?
Best practices when importing in IPython
1.2
0
0
619
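A quick demonstration that a module's code only runs once per process, no matter how often it is imported.

import sys
import math

first = sys.modules['math']
import math                  # no re-execution, just another reference
print(math is first)         # True: the very same module object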
22,146,944
2014-03-03T12:34:00.000
1
0
0
1
python,rabbitmq,celery
56,632,082
3
false
0
0
Celery can use several brokers. If you are already using RabbitMQ, that makes it an attractive option. These are, however, different concerns. Use a generic RabbitMQ client library such as pika to implement a consumer for your messages; then, if needed, use Celery to schedule tasks. A hedged pika consumer sketch follows this record.
1
5
0
The celery docs suggest that Rabbit-MQ must act like a middleman, where it is used as a messaging platform. In my infrastructure, Rabbit-MQ is the primary server that serves me with some data every second. Now, whenever the data is served, I want Celery to do certain tasks. Now, this throws out the whole publisher-worker model, as we're not using Celery where the messages are being produced. So, how do I go about this?
How to use celery to get messages from a rabbit-mq server?
0.066568
0
0
1,369
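A hedged pika consumer sketch (pre-1.0 basic_consume signature); the queue name and the handler body are assumptions.

import pika

def handle(ch, method, properties, body):
    print('got: %r' % body)     # replace with the real task scheduling
    ch.basic_ack(delivery_tag=method.delivery_tag)

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()
ch.queue_declare(queue='tasks')
ch.basic_consume(handle, queue='tasks')
ch.start_consuming()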
22,147,757
2014-03-03T13:13:00.000
4
0
1
0
python,python-3.x,dictionary,iterable,memory-efficient
22,147,887
2
false
0
0
If you want to sort them, the iterable needs to be turned into a list (which sorted will handle for you)... but how often are you going to sort an enumerate object, compared to how often you're going to just iterate over it? What about sorting the items of a dict, compared to just iterating over them? If your API produces a lazy iterator or other lazy iterable, you can turn that into a list with roughly the same amount of effort it would have taken to skip the iterator and produce a list directly. On the other hand, if your API produces a list, there's no way to avoid holding all the items in memory at once. The iterator is more flexible.
1
3
0
In Python 3 a lot of functions (now classes) that returned lists now return iterables, the most popular example being range. In this case range was made an iterable in Python 3 to improve performance and memory efficiency (since you don't have to build a list anymore). Other "new" iterables are map, enumerate, zip and the output of the dictionary operations dict.keys(), dict.values() and dict.items(). (There are probably more, but I don't know them.) Some of them (enumerate and map) have probably become more memory efficient by being converted into iterables. In Python 2.7 the others simply created lists of objects which were already in memory, so they would have been memory efficient anyway. Why then turn them into iterables which you have to convert to lists every time you want to sort them, etc.?
Iterators in Python 3
0.379949
0
0
1,342
22,148,917
2014-03-03T14:10:00.000
0
0
0
0
python,pip,rackspace,iron.io,pyrax
22,153,710
2
false
0
0
It's difficult to know for sure what's happening without being able to see a traceback. Do you get anything like that which could be used to help figure out what's going on?
2
1
0
I created a Python script that uses Rackspace's API (pyrax) to handle some image processing. It works perfectly locally, but when I upload it to an Iron.io worker, it builds but does not import. I am using a Windows 8 PC, but my boss runs OS X, and when he uploads the exact same worker package, it works fine. So I'm thinking it's something with Windows 8, but I don't know how to check/fix it. I apologize in advance if I ramble or do not explain things clearly enough, but any help would be greatly appreciated. My worker file looks like this: runtime "python" exec "rackspace.py" pip "pyrax" full_remote_build true. Then I simply import pyrax in my Python file.
pip "pyrax" dependency with iron worker
0
0
1
116
22,148,917
2014-03-03T14:10:00.000
2
0
0
0
python,pip,rackspace,iron.io,pyrax
22,153,804
2
false
0
0
I figured out that it was a bad Ruby install. No idea why, but reinstalling it worked.
2
1
0
I created a Python script that uses Rackspace's API (pyrax) to handle some image processing. It works perfectly locally, but when I upload it to an Iron.io worker, it builds but does not import. I am using a Windows 8 PC, but my boss runs OS X, and when he uploads the exact same worker package, it works fine. So I'm thinking it's something with Windows 8, but I don't know how to check/fix it. I apologize in advance if I ramble or do not explain things clearly enough, but any help would be greatly appreciated. My worker file looks like this: runtime "python" exec "rackspace.py" pip "pyrax" full_remote_build true. Then I simply import pyrax in my Python file.
pip "pyrax" dependency with iron worker
0.197375
0
1
116
22,150,395
2014-03-03T15:15:00.000
0
0
1
0
python,cx-freeze,nonetype
22,177,207
1
true
0
0
After a long investigation I found that the problem was with the HOME environment variable (it has to be set). When I set the HOME environment variable, it started to work properly. I hope this will save time and help somebody. Honza
1
0
0
I have a problem with Python scripts built with cx_Freeze. When I run the source script with python, everything works. But when I run the compiled script, it ends with errors like: object of type 'NoneType' has no len(). Thank you for your help, Honza
Python - cx_freeze script not working
1.2
0
0
179
22,150,526
2014-03-03T15:21:00.000
0
0
0
1
python,windows-7,scheduled-tasks,pythonw
28,174,244
1
false
0
0
I encountered a similar problem. While my code worked as expected using python.exe, it failed to work with pythonw.exe. After much debugging, I identified the source as a call to sys.stdout.write(). With pythonw.exe, sys.stdout is None, so the program crashes, but silently. I wrapped that call in "if sys.stdout is not None" and the program started working with pythonw.exe as well. The guard is sketched after this record.
1
2
0
I've got a python script set to log in to a mail server and transfer the files to a remote directory mapped on my machine. Since I don't want it popping up every 10 minutes, I saved it as a pyw file. It worked fine at first, but then it stopped working, showing 0x1 as the result of the last run in the Windows Task Scheduler. When this happens I can execute the exact same code in a .py file and it works, but the .pyw file doesn't even if I run it manually. The pyw file only works again if I add an 'os.system("pause")' line to anywhere the code, which forces a command window to pop up for that line of code. If I take that line out again, it continues to work for the rest of the day, but then when I come in the next day it's stopped working again. I'm at wits' end on how to troubleshoot this. I'm not sure if it's an issue with pythonw, or something's going wrong with Windows Task Scheduler which is interfering with it, or something else.
pyw file not working intermittently / Task Scheduler
0
0
0
648
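The guard the answer describes, as a tiny helper; the message text is arbitrary.

import sys

def log(msg):
    if sys.stdout is not None:      # pythonw.exe leaves sys.stdout as None
        sys.stdout.write(msg + '\n')

log('transfer complete')            # safe under both python.exe and pythonw.exe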
22,156,258
2014-03-03T20:04:00.000
10
0
0
0
python,pandas
25,162,895
2
true
0
0
In my practice, the strongest, easiest-to-see difference is that a Panel needs to be homogeneous in every dimension. If you look at a Panel as a stack of DataFrames, you cannot create it by stacking DataFrames of different sizes or with different indexes/columns. You can indeed handle more non-homogeneous types of data with a MultiIndex. So the first choice has to be made based on how your data is to be organized.
1
14
1
Using Pandas, what are the reasons to use a Panel versus a MultiIndex DataFrame? I have personally found significant difference between the two in the ease of accessing different dimensions/levels, but that may just be my being more familiar with the interface for one versus the other. I assume there are more substantive differences, however.
Pandas MultiIndex versus Panel
1.2
0
0
2,444
22,158,395
2014-03-03T22:00:00.000
2
0
0
0
python,animation,matplotlib
22,512,378
1
false
0
0
I think you are right, although it is simple to go from a list to a function (just iterate over it) or back (store function values in an array). So it really doesn't matter too much, and you can pick the one that best suits your code, as you described. (Personally I find ArtistAnimation to be the most convenient.) If your result is very large, it might be good to use FuncAnimation so you don't need to store your data. MPL still stores its own copy for plotting, but this factor of two might make a difference.
1
2
1
So in the examples of matplotlib.animation there are two main functions that are used to make animations: ArtistAnimation and FuncAnimation. According to the documentation, the use of each of them is: ArtistAnimation: Before calling this function, all plotting should have taken place and the relevant artists saved. FuncAnimation: Makes an animation by repeatedly calling a function func, passing in (optional) arguments in fargs. So it appears to me that ArtistAnimation is useful when you already have the whole array, list or set of whatever objects you want to make an animation from. FuncAnimation, on the other hand, seems to be more useful whenever you have a function that is able to produce your next result. Is my intuition about this correct? My question in general is when it is more convenient to use one or the other. Thanks in advance
ArtistAnimation vs FuncAnimation matplotlib animation matplotlib.animation
0.379949
0
0
2,205
22,159,215
2014-03-03T22:48:00.000
5
0
1
0
python,matplotlib,ipython
22,161,688
3
true
0
0
We (IPython) have kind of gone back and forth on the best location for config on Linux. We used to always use ~/.ipython, but then we switched to ~/.config/ipython, which is the XDG-specified location (more correct, for a given value of correct), while still checking both. In IPython 2, we're switching back to ~/.ipython by default, to make it more consistent across the different platforms we support. However, I don't think it should have been using ~/.config on a Mac - it should always have been ~/.ipython there.
2
1
1
Over time, I have seen IPython (and equivalently matplotlib) using two locations for config files: ~/.ipython/profile_default/ ~/.config/ipython/profile_default which is the right one? Do these packages check both? In case it matters, I am using Anaconda on OS X and on Linux
IPython & matplotlib config profiles and files
1.2
0
0
555
22,159,215
2014-03-03T22:48:00.000
2
0
1
0
python,matplotlib,ipython
22,161,946
3
false
0
0
As far as matplotlib is concerned, on OS X the config file (matplotlibrc) will be looked for first in the current directory, then in ~/.matplotlib, and finally in INSTALL/matplotlib/mpl-data/matplotlibrc, where INSTALL is the Python site-packages directory. With a standard install of Python from python.org, this is /Library/Frameworks/Python.framework/Versions/X.Y/lib/pythonX.Y/site-packages, where X.Y is the version you're using, like 2.7 or 3.3. A one-liner to check which file was actually loaded follows this record.
2
1
1
Over time, I have seen IPython (and equivalently matplotlib) using two locations for config files: ~/.ipython/profile_default/ ~/.config/ipython/profile_default which is the right one? Do these packages check both? In case it matters, I am using Anaconda on OS X and on Linux
IPython & matplotlib config profiles and files
0.132549
0
0
555
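Asking matplotlib directly which matplotlibrc it loaded settles the search-order question for any given setup.

import matplotlib
print(matplotlib.matplotlib_fname())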
22,159,351
2014-03-03T22:58:00.000
2
0
1
0
python,text,machine-learning,classification,tf-idf
22,159,481
1
true
0
0
First, let's get some terminology clear. A term is a word-like unit in a corpus. A token is a term at a particular location in a particular document. There can be multiple tokens that use the same term. For example, in my answer, there are many tokens that use the term "the", but there is only one term for "the". I think you are a little bit confused. TF-IDF style weighting functions specify how to make a per-term score out of the term's token frequency in a document and the term's background document frequency in the corpus, for each term in a document. TF-IDF converts a document into a mapping of terms to weights. So more tokens sharing the same term in a document will increase the corresponding weight for the term, but there will only be one weight per term. There is no separate score for tokens sharing a term inside the doc. A tiny illustration follows this record.
1
0
1
So I'm making a Python class which calculates the tf-idf weight of each word in a document. In my dataset I have 50 documents, and many words appear in several of them, so I get multiple features for the same word but with different tf-idf weights. The question is: how do I sum up all the weights into one single weight?
(Text Classification) Handling same words but from different documents [TF-IDF]
1.2
0
0
722
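A tiny toy illustration of one-weight-per-term: repeated tokens raise a term's single tf-idf weight, but never create a second weight. The corpus and the unsmoothed formula are simplifications, not the asker's code.

import math

docs = [['cat', 'sat', 'mat'], ['the', 'the', 'dog']]   # toy corpus

def tfidf(doc, docs):
    weights = {}
    for term in set(doc):                        # exactly one entry per term
        tf = doc.count(term)                     # token count in this document
        df = sum(term in d for d in docs)        # documents containing the term
        weights[term] = tf * math.log(len(docs) / float(df))
    return weights

print(tfidf(docs[1], docs))   # 'the' gets one weight, boosted by its tf of 2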
22,159,824
2014-03-03T23:32:00.000
1
1
0
0
python,paypal
22,611,605
1
true
0
0
At this moment, paying another user is not possible via the REST APIs, so Mass Pay/Adaptive Payments would be the current existing solution. It is likely that this ability will be part of REST in a future release.
1
1
0
I'm looking into the new PayPal REST API. I want the ability to pay another PayPal account - to transfer money from my account to their account. All the documentation I have seen so far is about charging users. Is paying someone with the REST API possible? Something similar to the function of the Mass Pay API or Adaptive Payments API.
Paypal REST Api for Paying another paypal account
1.2
0
1
105
22,160,820
2014-03-04T00:59:00.000
0
0
0
0
python,file-upload,amazon-web-services,amazon-s3
22,162,436
2
false
1
0
This answer uses .NET as the language. We had such a requirement, for which we created an executable. The executable internally called a web method, which validated whether the app was authorized to upload files to AWS S3 or not. You can do this using a web browser too, but I would not suggest it if you are targeting big files.
1
0
0
I would like a user, without having to have an Amazon account, to be able to upload multi-gigabyte files to an S3 bucket of mine. How can I go about this? I want to enable a user to do this by giving them a key, or perhaps through an upload form, rather than making a bucket world-writable, obviously. I'd prefer to use Python on my server side, but the idea is that a user would need nothing more than their web browser, or perhaps opening up their terminal and using built-in executables. Any thoughts?
user upload to my S3 bucket
0
1
0
141
22,162,538
2014-03-04T03:52:00.000
0
0
1
0
python,gevent,greenlets
31,352,615
3
false
0
0
I would say that if it is a thread-safe object then it is not dangerous, but you should always think hard about it. If it isn't thread-safe, you need to worry about reentrancy of the methods and the consequences of the different object operations not being atomic. Some objects are stateful and need to complete certain operations before another thread comes in.
1
4
0
Is it safe to pass a multiprocessing object (queue, dictionary, etc...) to multiple gevent threads? Since they're not actually running concurrently, I don't think there's a problem. However, I know that gevent isn't supposed to be specifically compatible with multiprocessing.
Passing a multiprocessing queue/dictionary/etc.. to green threads
0
0
0
599
22,165,086
2014-03-04T07:01:00.000
0
0
0
0
python,web-scraping
22,165,257
1
true
1
0
The robots.txt file does set limits. It is better to inform the owner of the site if you are crawling too often, and to read the reserved rights at the bottom of the site. It is also a good idea to provide a link to the source of your content.
1
1
0
I am working on creating a web spider in Python. Do I have to worry about permissions from any sites for scanning their content? If so, how do I get those? Thanks in advance
Permission to get the source code using spider
1.2
0
1
43
22,165,792
2014-03-04T07:41:00.000
2
0
0
0
django,python-social-auth
22,521,700
1
false
1
0
Remove SOCIAL_AUTH_USER_MODEL, because you are using Django's default User model.
1
3
0
I work on a django project that migrated from django-social-auth to python-social-auth. Previously, a new social auth user's first name/last name would be saved automatically on first login. Now, after switching to python-social-auth, they are not. It seems I have to use the setting SOCIAL_AUTH_USER_MODEL, but SOCIAL_AUTH_USER_MODEL = 'django.contrib.auth.models.User' generates an error when invoking runserver: django.core.management.base.CommandError: One or more models did not validate: default.usersocialauth: 'user' has a relation with model web.models.User, which has either not been installed or is abstract. I wanted to try subclassing the User model in the project (from django.contrib.auth.models import User; class User(User):), but that is not feasible right now. Manually saving the name from the response data in a custom pipeline is prohibited as well. I really want to know if there is any other solution. Thanks.
Saving user's social auth name using python social auth in django
0.379949
0
0
1,254
22,166,964
2014-03-04T08:49:00.000
0
0
1
0
python
22,167,226
3
false
0
0
My idea would be to iterate through the list, but to keep three candidates in three variables. Then, as you progress through the list, you substitute them with new values in order to approach the required value. For example: c1 = 1; c2 = 2; c3 = 3; 1+2+3 != 8; the next element is 4; try to substitute the smallest candidate, in this case c1: 4+2+3 > 8; try the next one, c2: 1+4+3 == 8; end. The idea is to substitute a candidate based on how much the new sum would approach, but not exceed, the desired value. If you iterate through the whole list but find no suitable match, you can either run another iteration to be sure, or proclaim that the list does not contain any such numbers. This depends largely on whether the list is sorted. This algorithm probably needs refinement (it's just off the top of my head), but I think it conveys an idea which you can use. Hope it helps. A standard alternative is sketched after this record.
1
1
0
Let's say my list is l = [1,2,3,4,5], and x = 8. I'm thinking that I should iterate through the list using for i in l: and check to see if two numbers from the list add up to x-i, using recursion. But that doesn't seem like the most efficient way to approach the problem. Can someone show me a better way, preferably in Python? Thanks
Given a list of numbers, what's the most efficient way to find which 3 of them sum up to x?
0
0
0
190
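For contrast with the substitution heuristic above, this is the standard sort-plus-two-pointer method for three-sum, O(n^2) after sorting; it is not the answerer's algorithm.

def three_sum(lst, x):
    a = sorted(lst)
    for i in range(len(a) - 2):
        lo, hi = i + 1, len(a) - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == x:
                return a[i], a[lo], a[hi]
            elif s < x:
                lo += 1        # sum too small: move the low pointer up
            else:
                hi -= 1        # sum too large: move the high pointer down
    return None

print(three_sum([1, 2, 3, 4, 5], 8))   # (1, 2, 5)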
22,169,094
2014-03-04T10:19:00.000
1
0
0
0
python,proxy,network-programming
22,248,982
2
true
0
0
I guess it's a Microsoft ISA Server thing. You would have to talk to the network administrator to have any measure of success in what you're trying to achieve.
1
1
0
How do I bypass a very stubborn workplace proxy with Python? By bypass, I mean I would be able to fill in the username and password and it would allow me access from my Python scripts. I noticed that only some select apps, like web browsers and a downloader that I use, can pass through; my anti-virus can't get through. The problem is that I use Python on this machine and would love to continue working with things like urllib2.
Truly Bypass Stubborn Proxy
1.2
0
0
103
22,169,372
2014-03-04T10:30:00.000
3
0
1
0
python,django
22,169,632
2
true
1
0
Yes, that is setting the attribute on the class. But no, that would not necessarily make it available between requests, although it might. Your question shows a misunderstanding of how Django requests work. Django is not necessarily served using multiple threads: in fact, in most server configurations, it is hosted by multiple independent processes that are managed by the server. Again, depending on the configuration, each of those processes may or may not have multiple threads. But whether or not threads are involved, processes are started and killed by the server all the time. If you set an attribute on a class or module in Django during one request, any subsequent request served by that same process will see that attribute. But there's no way to guarantee which process your next request will be served by. And there's certainly no way to know if the same user will be accessing the next request from that same process. Setting things at class or module level can be the source of some very nasty thread-safety bugs. My advice is generally not to do it. If you need to keep things across requests, store them in the database, the cache, or (especially if it's specific to a particular user) in the session. A short demonstration of the lookup rules follows this record.
1
1
0
I am a bit confused over the difference between setting an object's attribute and setting an attribute on an object's __class__ attribute. Roughly, obj.attr vs obj.__class__.attr. What's the difference between them? Is the former setting an attribute on an instance and the latter setting an attribute on an instance's class (making that attribute available to all instances)? If this is the case, then how are these new class attributes available in Django requests, since the framework works with multiple threads? Does setting a class variable make it persist between requests?
Setting an attribute on object's __class__ attribute
1.2
0
0
83
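A short demonstration of the lookup rules within a single process; names are arbitrary.

class C(object):
    pass

a, b = C(), C()
a.attr = 'instance'             # set on the instance a only
b.__class__.attr = 'class'      # set on C itself, visible to all instances

print(a.attr)    # 'instance': the instance attribute shadows the class one
print(b.attr)    # 'class': found on C by attribute lookup
del a.attr
print(a.attr)    # 'class': a now sees the class attribute too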
22,175,330
2014-03-04T14:56:00.000
0
0
1
0
python,recursion,rename
22,175,762
2
false
0
0
You can use os.listdir to list the folders and files at some path. This returns a list that you can iterate through. For each list entry, use os.path.join to combine the file/folder name with the parent path, and then use os.path.isdir to check if it is a folder. If it is a folder, check the last character's validity and, if it is invalid, change the folder name using os.rename. Once the folder name has been corrected, you can repeat the whole process with that folder's full path as the base path. I would put the whole process into a recursive function; a sketch follows this record.
1
1
0
We just switched our storage server to a new file system. The old file system allowed users to name folders with a period or space at the end; the new system considers this an illegal character. How can I write a Python script to recursively loop through all directories and rename any folder that has a period or space at the end?
recursive script to rename folders ending with a space or period
0
0
0
457
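A sketch of the recursion the answer describes, stripping trailing spaces and periods from directory names; the root path is a placeholder, and name collisions are not handled.

import os

def fix_names(base):
    for name in os.listdir(base):
        path = os.path.join(base, name)
        if os.path.isdir(path):
            clean = name.rstrip(' .')
            if clean != name:
                new_path = os.path.join(base, clean)
                os.rename(path, new_path)
                path = new_path
            fix_names(path)         # recurse into the (possibly renamed) folder

fix_names('/mnt/storage')           # hypothetical root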
22,175,349
2014-03-04T14:56:00.000
0
1
0
1
python,bash,shell,emacs,ipython
22,208,324
2
false
0
0
After running your /etc/university/env.sh, start Emacs from that shell. The variables set beforehand will then be known.
1
1
0
I'm working for a university and they have their own libraries and paths for Python libraries. Every time I start IPython, I need to run a shell script first (e.g. /etc/university/env.sh). The problem is that Emacs doesn't recognize the env.sh file. When I do py-shell, Emacs always invokes Python WITHOUT any pre-set environment variables. Is there a way to make Emacs run /etc/corporate/env.sh before starting Python?
how to load shell environment variables when Emacs starts py-shell?
0
0
0
392