| Q_Id (int64, 337 to 49.3M) | CreationDate (string, len 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, len 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, len 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, len 15 to 29k) | Title (string, len 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
18,867,280 | 2013-09-18T08:25:00.000 | 2 | 1 | 0 | 0 | python,unit-testing,jenkins,pytest | 18,894,932 | 2 | false | 1 | 0 | I solved it using this line as the first line of the test: `pytest.skip("Bug 1234: This does not work")`. I'd rather have used one of the pytest decorators, but this'll do. | 1 | 1 | 0 | I have a minor issue using py.test for my unit tests. I use py.test to run my tests and output a junitxml report of the tests. This XML report is imported into Jenkins and generates nice statistics. When I use a test class which derives from unittest.TestCase, I skip expected failures using `@unittest.skip("Bug 1234 : This does not work")`, and this message also shows up in Jenkins when selecting the test. When I don't use a unittest.TestCase class, e.g. to use py.test's parametrize functionality, I skip expected failures using `@pytest.mark.xfail(reason="Bug 1234 : This does not work", run=False)`. But then this reason is not actually displayed in Jenkins; instead it will say "Skip Message: expected test failure". How can I fix this? | Py.test skip messages don't show in Jenkins | 0.197375 | 0 | 0 | 1,231 |
18,872,148 | 2013-09-18T12:22:00.000 | 1 | 0 | 0 | 1 | python,zodb | 18,883,943 | 1 | true | 0 | 0 | No, ZEO was never designed for such use. It is designed for scaling ZODB access across multiple processes, with authentication and authorisation left to the application on top of the data. I would not use ZEO for anything beyond a local network anyway. Use a different protocol to handle communication between game clients and the game server instead, keeping the ZODB server-side only. | 1 | 0 | 0 | I have been looking into using ZODB as a persistence layer for a multiplayer video game. I quite like how seamlessly it integrates with arbitrary object-oriented data structures. However, I am stumbling over one issue, where I can't figure out whether ZODB can resolve this for me. Apparently, one can use the ClientStorage from ZEO to access a remote data storage used for persistence. While this is great in a trusted local network, one can't do this without proper authorization and authentication in an open network. So I was wondering if there is any chance to realize the following concept with ZODB: On the server side I would like to have a ZEO server running, plus a simulation of the game world that might operate as a fully authorized client on the ZEO server (or use the same file storage as the ZEO server). On the client side I'd need very restricted read/write access to the ZEO server, so that a client can only view the information its user is supposed to know about (e.g. the surrounding area of their character) and can only modify information related to the actions that their character can perform. These restrictions would have to be imposed by the server using some sort of fine-grained authorisation scheme, so I would need to be able to tell the server whether user A has permission to read/write object B. Now, is there a way to do this in ZODB, or are there third-party solutions for this kind of problem? Or is there a way to extend ZEO in this way? | Fine-grained authorisation with ZODB | 1.2 | 0 | 0 | 86 |
18,872,300 | 2013-09-18T12:28:00.000 | 1 | 0 | 0 | 0 | c++,python,qt,qt4,pyqt | 18,872,394 | 1 | true | 0 | 1 | QTableWidget is a QWidget. Just use the hide() function. | 1 | 0 | 0 | I need to hide the entire QTableWidget and show it again later. I didn't find functions for both in its documentation. Do you have any ideas? | I need to hide QTableWidget by default | 1.2 | 0 | 0 | 50 |
18,879,888 | 2013-09-18T18:41:00.000 | 1 | 0 | 1 | 0 | python,multithreading,web-services,web,mechanize | 18,880,269 | 1 | true | 0 | 0 | Simply use two different instances of mechanize.Browser. As both use their own chain of handlers, they don't share cookies, logins, etc. It doesn't really matter whether you use them from different threads or not; they're completely isolated in any case. | 1 | 0 | 0 | Is the following possible in Python using the mechanize module? Having 2 threads in a program, both accessing the same web server, but one of them actually logged into the server with a user/pass, while the other thread just browses the same web server without logging in. I see that if I log into a web server (say X) using Mozilla, and then I open Chrome, I am not logged in automatically and I have to log in again in Chrome. I want to have the same behaviour in a multithreaded Python program, where one thread is logged in and the other is not. What would be a suitable way to do this? Thanks for any tips! | A multi-threaded python program with a strange requirement :) | 1.2 | 0 | 1 | 71 |
18,880,353 | 2013-09-18T19:08:00.000 | 10 | 0 | 0 | 0 | python,django,django-forms | 18,880,929 | 1 | true | 1 | 0 | Make sure you have `enctype="multipart/form-data"` set on your HTML `<form>` tag. And make sure you pass request.FILES into the form. For example: `form = MyAwesomeForm(request.POST, request.FILES)` | 1 | 2 | 0 | I have a ModelForm and an ImageField in the model. Everything seems OK in the form. I put a debugger in the save method of the form; self.data appears like this: `<QueryDict: {u'city': [u'19105'], u'surname': [u'VARGI'], u'name': [u'Tuna'], u'image': [u'996884_10151559653258613_1077262085_n.jpg'], u'user': [u'1'], u'interest': [u''], u'csrfmiddlewaretoken': [u'pHAfb5EJBc7N3Xa8YxTQKRrSDeLdBugh'], u'biography': [u'asdasdasdasdasda']}>`, self.is_valid() is True and self.errors is empty, but my self.cleaned_data is `{'city': <City: Ankara, Ankara, Turkey>, 'is_featured': False, 'biography': u'asdasdasdasdasda', 'surname': u'VARGI', 'name': u'Tuna', 'image': None, 'is_active': False, 'user': <Member: tuna>, 'interest': [], 'email': u'', 'categories': []}`; image comes through as None. Any ideas? By the way, I can add an image from django-admin. | Django form imagefield appear in self.data but none in self.cleaned_data | 1.2 | 0 | 0 | 926 |
18,880,490 | 2013-09-18T19:15:00.000 | 1 | 0 | 1 | 0 | python,wpf,ironpython | 18,882,572 | 1 | true | 1 | 1 | Absolutely, you just have to include the appropriate assemblies (IronPython.dll, IronPython.Modules.dll, Microsoft.Scripting.dll, and Microsoft.Dynamic.dll) and any standard library modules you may be using with your application. | 1 | 1 | 0 | I'm new to Python (IronPython); I come from a VS environment. I want to integrate Python with WPF, and I understand that IronPython is a good solution for that. I've installed IronPython on my computer, but I wonder if the project (Python+WPF) can run on a machine on which IronPython is not installed. My computer has: Python, IronPython, .NET Framework. The target machine has: Python, .NET Framework. | Can IronPython run on a machine that it is not installed on? | 1.2 | 0 | 0 | 223 |
18,882,510 | 2013-09-18T21:17:00.000 | 1 | 0 | 1 | 0 | ipython,ipython-notebook | 35,315,482 | 1 | false | 0 | 0 | When I have a long notebook, I create functions from my code and hide them in Python modules, which I then import in the notebook. That way I can have huge chunks of code hidden in the background, and my notebook stays smaller and handier to manipulate. | 1 | 6 | 1 | I have some very large IPython (1.0) notebooks, which I find very unhandy to work with. I want to split the large notebook into several smaller ones, each covering a specific part of my analysis. However, the notebooks need to share data and (unpickleable) objects. Now, I want these notebooks to connect to the same kernel. How do I do this? How can I change the kernel to which a notebook is connected? (And any ideas how to automate this step?) I don't want to use the parallel computing mechanism (which would be a trivial solution), because it would add much code overhead in my case. | How to share IPython notebook kernels? | 0.197375 | 0 | 0 | 1,689 |
18,884,017 | 2013-09-18T23:29:00.000 | 0 | 0 | 1 | 0 | python,datetime | 18,884,244 | 4 | false | 0 | 0 | I don't know how you are getting your start and end time, but if you are limiting it to a single day, then be sure that the start time comes before the end time. If you had, for example, start time = 1800 and end time = 1200, you won't find any time in between on that day. | 1 | 6 | 0 | I want to check in Python if the current time is between two endpoints (say, 8:30 a.m. and 3:00 p.m.), irrespective of the actual date. As in, I don't care what the full date is; just the hour. When I created datetime objects using strptime to specify a time, it threw in a dummy date (I think the year 1900?), which is not what I want. I could use a clumsy boolean expression like (hour == 8 and minute >= 30) or (9 <= hour < 15), but that doesn't seem very elegant. What's the easiest and most pythonic way to accomplish this? Extending a bit further, what I'd really like is something that will tell me if it's between that range of hours, and that it's a weekday. Of course I can just use 0 <= weekday() <= 4 to hack this, but there might be a better way. | How to check in python if I'm in certain range of times of the day? | 0 | 0 | 0 | 19,919 |
18,884,259 | 2013-09-18T23:57:00.000 | 3 | 0 | 0 | 0 | android,python,ios,mobile,browser | 18,892,784 | 1 | true | 1 | 1 | You could do a very simple Kivy app that would start a service (inside which you would do your "server" side, with a small engine like Flask or Bottle, though I guess CherryPy should work too), and in the main.py of the "kivy" app, don't import kivy; just import webbrowser and open a browser window to localhost on your port. This will use the Android browser. Edit: oh, services are Android-only for now; apparently iOS 7 supports them too, but kivy-ios hasn't been updated to make use of them. | 1 | 5 | 0 | My application has an html-based front-end and uses python logic on the back-end. This application needs to run offline, not connected to the internet, so by "back-end" here I don't mean a server running remotely, but rather python logic running side-by-side in the same app as the browser/html engine. For Windows or Mac desktop apps, I build a Chromium Embedded Framework application, and then launch a sub-process which runs a CherryPy python application built using py2exe (or py2app). The client and the server then communicate using normal http. I'd like to achieve the same thing on both iOS and Android. I've researched several alternatives, but nothing seems to do quite what I need. Kivy is close, but as far as I can tell it doesn't offer a browser/html front-end, but rather provides its own layout engine on top of OpenGL. It has an extension mechanism, but that seems to be more about extending the python side, not the front-end side. On the other hand, I could start with PhoneGap and then add a python library as an extension (possibly using Kivy's mobile library build of python). Or for that matter I could just write a regular C++ app that embeds a browser and uses a python library build. On the third hand, I've played with using various python-to-javascript converters to get the back-end logic into something that can work with PhoneGap directly, but that approach gets pretty difficult given all of the python logic I have -- some of it just doesn't convert so easily. Do you know of apps that are displaying html and running python logic in the same app? | combining html front-end and python "back-end" in mobile app | 1.2 | 0 | 0 | 2,655 |
18,884,331 | 2013-09-19T00:07:00.000 | 1 | 0 | 1 | 0 | python,module | 18,884,384 | 2 | false | 0 | 0 | For a single module, it usually doesn't make any difference. For complicated webs of modules, though, an installation program may do many things that wouldn't be immediately obvious. For example, it may also copy data files into locations the new modules can find them, put executables (binary libraries, or DLLs on Windws, for example) where the new modules can find them, do different things depending on which version of Python you have, and so on.
If deploying a web of modules were always easy, nobody would have written setup programs to begin with ;-) | 2 | 2 | 0 | New to Python, so excuse my lack of specific technical jargon. Pretty simple question really, but I can't seem to grasp or understand the concept.
It seems that a lot of modules require using pip or easy_install and running setup.py to "install" into your python installation or your virtualenv. What is the difference between installing a module and simply taking it and importing the into another script? It seems that you access the modules the same way.
Thanks! | Difference between installing and importing modules | 0.099668 | 0 | 0 | 4,599 |
18,884,331 | 2013-09-19T00:07:00.000 | 6 | 0 | 0 | 1 | python,module | 18,884,362 | 2 | true | 0 | 0 | It's like the difference between: (1) uploading a photo to the internet, and (2) linking the photo URL inside an HTML page. Installing puts the code somewhere Python expects those kinds of things to be, and the import statement says "go look there for something named X now, and make the data available to me for use". | 2 | 2 | 0 | New to Python, so excuse my lack of specific technical jargon. Pretty simple question really, but I can't seem to grasp or understand the concept. It seems that a lot of modules require using pip or easy_install and running setup.py to "install" into your Python installation or your virtualenv. What is the difference between installing a module and simply taking it and importing it into another script? It seems that you access the modules the same way. Thanks! | Difference between installing and importing modules | 1.2 | 0 | 0 | 4,599 |
18,884,852 | 2013-09-19T01:18:00.000 | 2 | 0 | 0 | 1 | python,html,google-app-engine | 18,884,880 | 1 | true | 1 | 0 | You can make a POST request to whatever HTTP resource you want. In your HTML form, change the action to point at your GAE Python script: `<form action="http://yourdomain/your/gae/endpoint" method="post">`. You can then follow the tutorial and access the posted data accordingly. Finally, it is up to you to return an appropriate response; this might include redirecting back to the original domain, depending on your application. | 1 | 0 | 0 | I have a website hosted already and it contains an HTML form. I want to be able to submit the form to a Python script on Google App Engine to handle the data. In the documentation, there is a tutorial to process forms with Python, but it was from a page that was served in the script itself. How do I link an existing domain/webpage to a script running on Google App Engine? Thanks for any help! ~Carpetfizz | Existing HTML website, but processing on Google App Engine? | 1.2 | 0 | 0 | 67 |
18,886,383 | 2013-09-19T04:49:00.000 | 0 | 0 | 0 | 0 | python | 18,897,337 | 1 | false | 0 | 0 | `cat dataset.csv dataset.csv dataset.csv dataset.csv > bigdata.csv` | 1 | 0 | 1 | In Wakari, how do I download a CSV file and create a new CSV file with each of the rows in the original file repeated N number of times in the new CSV file? | Repeat rows in files in wakari | 0 | 0 | 0 | 61 |
18,888,331 | 2013-09-19T07:10:00.000 | 0 | 0 | 0 | 1 | python-2.7,ubuntu-12.04,openerp | 18,888,839 | 1 | false | 1 | 0 | Uninstalling a module will surely bring up some surprises with OpenERP, so it is better to test the module's functionality and features in a test system, and install it in the LIVE system only when you are sure about things. For your problem, just check whether any Aeroo report is in the system; if yes, delete the report and its actions manually. | 1 | 0 | 0 | I am using OpenERP 7 with Ubuntu 12.04. I have been trying to install the Aeroo Reports module for creating OpenERP reports. I faced some "XML issues" during installation. Now when I try to remove the module it says: "Integrity Error: The operation cannot be completed, probably due to the following: deletion: you may be trying to delete a record while other records still reference it". Please help me to fix this issue; hoping for suggestions. | How to uninstall OpenERP-7 module in ubuntu 12.04 ? | 0 | 0 | 0 | 1,437 |
18,888,367 | 2013-09-19T07:12:00.000 | 0 | 1 | 0 | 0 | python,selenium,webdriver | 18,890,907 | 2 | true | 1 | 0 | Use the pySelenese module for Python; it parses the HTML test and lets you run it. | 1 | 0 | 0 | I use Selenium's Python WebDriver to run my test application. I also have some Selenium HTML tests that I would like to add to my application. These HTML tests change quite often, so I cannot just convert them to Python WebDriver and add them to my app. I think I need to somehow run those tests, without changes, from my Python WebDriver app. How can I do it? | Execute selenium html tests from webdriver tests | 1.2 | 0 | 1 | 141 |
18,892,291 | 2013-09-19T10:31:00.000 | 0 | 0 | 1 | 0 | python,pylint | 18,892,888 | 2 | false | 0 | 0 | Just listen to Pylint and get rid of the offending usage of that blacklisted function, using a cleaner alternative such as a list comprehension. | 1 | 1 | 0 | I can disable the following warning completely: "W0141: Used builtin function %r. Used when a black listed builtin function is used (see the bad-function option). Usual black listed functions are the ones like map, or filter, where Python offers now some cleaner alternative like list comprehension." But is there also a way to remove one function from the black list? | Change black listed functions in PyLint | 0 | 0 | 0 | 946 |
18,895,770 | 2013-09-19T13:18:00.000 | 8 | 0 | 1 | 0 | python | 18,895,854 | 2 | false | 0 | 0 | When a Python interpreter is invoked on a script, it parses and transforms it into byte-code; this leaves a .pyc file, which is what is actually executed. A script could write into itself, but that would not cause parsing to restart. | 1 | 4 | 0 | I am creating a simple AI program in Python 2.7 and I was going to make it able to learn. Is there any way to have the script edit itself, like adding an answer to a question into its own code, at a certain spot in the code? Thank you in advance, guys! | Can a python script write into itself | 1 | 0 | 0 | 3,852 |
18,899,515 | 2013-09-19T15:55:00.000 | 3 | 0 | 0 | 1 | python,websocket,twisted,autobahn | 18,913,092 | 2 | false | 1 | 0 | Why do you need a thread to launch protocolInstance.sendMessage()? This can be done in a normal reactor loop. The core of Twisted is the reactor, and things are much easier to reason about when you consider Twisted itself as reactive, meaning it does something as a reaction (response) to something else. Now I assume that the thread you are talking about also gets created and calls sendMessage because of certain events, activity, or status. I can hardly imagine a case where you would just need to send a message out of the blue without any reason to react. If, however, there is an event which should trigger sendMessage, there is no need to invoke it in a thread: just use Twisted's mechanisms for catching that event and then call sendMessage from that particular event's callback. Now on to your concrete example: can you specify what "whenever I need" means exactly in the context of this question? An input from another connection? An input from the user? Looping activity? | 1 | 8 | 0 | Maybe I'm missing something here in the asynchronous design of Twisted, but I can't seem to find a way to call the sendMessage() method "externally". By this I mean sending messages without being solely in the callback methods of Twisted/Autobahn WebSockets (like in onOpen, or when receiving data from the server in onMessage()). Of course I could launch a thread and call my_protocol_instance.sendMessage("hello"), but that would defeat every purpose of the asynchronous design, right? In a concrete example, I need to have a top wrapper class which opens the connection and manages it, and whenever I need I call my_class.send_my_toplevel_message(msg). How can I implement this? Hope I've been clear in my explanation. Thanks | Writing an "interactive" client with Twisted/Autobahn Websockets | 0.291313 | 0 | 0 | 2,363 |
18,901,185 | 2013-09-19T17:22:00.000 | 7 | 0 | 1 | 0 | python,ipython,ipython-notebook | 36,663,910 | 9 | false | 0 | 0 | Just cd to your working folder and then start the IPython notebook server. This way you can be mobile. | 1 | 65 | 0 | I just started IPython Notebook, and I tried to use "Save" to save my progress. However, instead of saving the *.ipynb in my current working directory, it is saved in my python/Scripts folder. Would there be a way to set this? Thanks! | IPython Notebook save location | 1 | 0 | 0 | 134,744 |
18,901,729 | 2013-09-19T17:54:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,serialization,yaml,pickle | 69,407,450 | 3 | false | 0 | 0 | If it is not important for you that the file be readable by a person, and you just need to save the file and then read it back, use pickle. It is much faster, and the binaries weigh less. YAML files are more readable, as mentioned above, but also slower and larger in size. I have tested this for my application: I measured the time to save and load an object to a file, as well as the file's size. PyYAML: average time 1.73 s, file size 1149.358 kB. pickle: average time 0.004 s, file size 690.658 kB. As you can see, yaml is 1.67 times heavier and 432.5 times slower. P.S. This is for my data; in your case it may be different, but that's enough for comparison. | 2 | 6 | 0 | I am new to Python, but what I came to know is that both are used for serialization and deserialization. So I just want to know: what are the basic differences between them? | What are the basic differences between pickle and yaml in Python? | 0.066568 | 0 | 0 | 4,291 |
18,901,729 | 2013-09-19T17:54:00.000 | 10 | 0 | 1 | 0 | python,python-2.7,serialization,yaml,pickle | 18,901,841 | 3 | false | 0 | 0 | YAML is a language-neutral format that can represent primitive types (int, string, etc.) well and is highly portable between languages. It is kind of analogous to JSON, XML or a plain-text file, just with some useful formatting conventions mixed in; in fact, YAML is a superset of JSON. The pickle format is specific to Python and can represent a wide variety of data structures and objects, e.g. Python lists, sets and dictionaries; instances of Python classes; and combinations of these, like lists of objects or objects containing dicts containing lists. So basically: YAML represents simple data types and structures in a language-portable manner, while pickle can represent complex structures, but in a non-language-portable manner. There's more to it than that, but you asked for the "basic" difference. | 1 | 6 | 0 | I am new to Python, but what I came to know is that both are used for serialization and deserialization. So I just want to know: what are the basic differences between them? | What are the basic differences between pickle and yaml in Python? | 1 | 0 | 0 | 4,291 |
18,903,516 | 2013-09-19T19:39:00.000 | 2 | 1 | 1 | 0 | python,distutils | 18,904,242 | 1 | true | 0 | 0 | As opposed to ignoring failures when importing, print out a trace message or a warning so that the user will still get the negative feedback. As for importing a specific subfile: if you are using Python 3.3+ (or Python 2.7) you can use imp.load_source, which accepts the pathname of a file you want to import. | 1 | 0 | 0 | What I want to achieve is as follows: I have a python package, let's call it foo, comprising a directory foo containing an __init__.py and, under normal use, a compiled extension library (either a .so or a .pyd file), which __init__.py imports into the top level namespace. Now, the problem is that I wish the top level namespace to contain a version string which is available to setup.py during building and packaging, when the extension library is not necessarily available (not yet built), and so would cause an ImportError when trying to import foo.version. Now, clearly, I could have an exception handler in __init__.py that just ignores failures in importing anything, but this is not ideal as there may be a real reason that the user cares about why the package can't be imported. Is there some way I can have the version string in a single place in the package, have it importable, yet not break the exceptions from attempts to import the extension? | Import from a package with known import failure | 1.2 | 0 | 0 | 60 |
18,905,550 | 2013-09-19T21:53:00.000 | 2 | 0 | 1 | 0 | python,multiple-inheritance,mixins,composition,facade | 26,149,413 | 1 | true | 0 | 0 | As noted above: Composition is when one class contains members that are instances of other classes. The term Facade usually means that you are encapsulating a lot of logic behind the public members of a class. So notice that you can have composition without a Facade, if the composition members are all public. I won't get into the merit of whether a Facade without composition is valid, because that's a semantics labyrinth. By the way, composition and interfaces are two different beasts, even if you consider only Java or only Python. | 1 | 2 | 0 | In the Python context, are compositions and facades the same thing? I know what the facade design pattern is used for, but I just wanted to make sure: can I call my class, which implements a facade, a composition? Can I say the same for a mixin class as well, or is it just a different beast altogether, having more proximity to multiple inheritance and solving some other problem than facade? Can we loosely call composition in Python equivalent to interfaces in Java, or is this statement totally out of place? PS: I want my application to provide a standard interface to clients, but the exact implementation, inside the gut, will keep changing over time as we get more development bandwidth. Since the client will use this class for further development I don't want their code to break in case we make any change on the implementation side. I want to fix my jargon confusions before I start the project. | Are Compositions and Facade the Same Thing in python? | 1.2 | 0 | 0 | 538 |
18,907,103 | 2013-09-20T00:28:00.000 | 1 | 0 | 1 | 1 | shell,ipython,ipython-notebook | 28,972,659 | 4 | false | 0 | 0 | Was just looking for this, and my wee face dropped when I saw it was a bit of an issue. Thought I would just post my solution in case it is useful to anyone else. Basically I was looking for a way to send sudo commands through the notebook (probably not very wise, but I needed it for what I was doing), and I couldn't get a prompt for the password. So I decided to send the command through to an x-terminal. You don't get any feedback, which may be due to not hooking into the IO on the way back. Here is what I used in the notebook: `In [1]: !xterm -e sudo mount -o loop system.img /system/`. I'm using Linux, but I would expect !cmd for Windows might do the trick too. | 1 | 5 | 0 | Can I execute a shell command that requires input in IPython and/or an IPython notebook? When I execute such a command, I see its prompt, but no apparent way to provide it with input from my keyboard. An example could be an rsync command to a remote server (thus requiring a password). There are no doubt dangers security-wise here; these are somewhat reduced in my case as I'm running on localhost. | ipython: Can I provide input to a shell command | 0.049958 | 0 | 0 | 4,131 |
18,907,641 | 2013-09-20T01:41:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,windows | 18,907,667 | 3 | false | 0 | 0 | It generally depends on what OS you are running and how you installed your Python. Under Linux or Mac OS X, you don't need to uninstall the previous version. I am not sure how things are handled for Windows. | 2 | 6 | 0 | I have Python 3.2 installed and I want to know if I have to uninstall earlier versions before installing newer ones. | Do I have to uninstall old python versions to update to a new one on Windows? | 0 | 0 | 0 | 27,473 |
18,907,641 | 2013-09-20T01:41:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,windows | 18,907,713 | 3 | false | 0 | 0 | You can install multiple versions of Python on Windows, but only the last version you installed will be used by default: when double-clicking a .py file in Windows Explorer, when typing just python at the command line, etc. "Edit in IDLE" on the context menu also uses the last version you installed. To use other versions you'll need to specify the full path of the version you want. Also, if you use the PYTHONPATH environment variable, there's only one of those, and the scripts in the directories specified in PYTHONPATH may or may not work with whatever version of Python you happen to be running. This can be worked around by writing a batch file that sets PYTHONPATH before launching Python. | 2 | 6 | 0 | I have python 3.2 installed and I want to know if I have to uninstall earlier versions before installing newer ones. | Do I have to uninstall old python versions to update to a new one on Windows? | 0.066568 | 0 | 0 | 27,473 |
18,907,937 | 2013-09-20T02:26:00.000 | 0 | 0 | 0 | 0 | python,berkeley-db,bsddb | 18,924,888 | 1 | false | 0 | 0 | You can't... at least not without installing something. You'd have to statically link BDB into the Python interpreter. But then you'd have a custom Python binary you'd need to install. | 1 | 1 | 0 | I'm using Berkeley DB in a Python project, and I am wondering if I can make the libraries available to Python without specifically installing Berkeley DB. How can you embed Berkeley DB in an application generally? Has anyone done this with Python and bsddb3? | How to use Berkeley DB in an application without installing | 0 | 0 | 0 | 251 |
18,907,998 | 2013-09-20T02:37:00.000 | 0 | 0 | 1 | 0 | python,arrays,list,numpy,multidimensional-array | 18,908,045 | 2 | false | 0 | 0 | NumPy is an extension, and it demands that all the objects in an array are of the same type, defined on creation. It also provides a set of linear algebra operations. It's more like a mathematical framework for Python to deal with numeric calculations (matrices and the like). | 1 | 1 | 1 | After only briefly looking at numpy arrays, I don't understand how they are different from normal Python lists. Can someone explain the difference, and why I would use a numpy array as opposed to a list? | Difference between a numpy array and a multidimensional list in Python? | 0 | 0 | 0 | 2,487 |
18,910,200 | 2013-09-20T06:26:00.000 | 0 | 0 | 0 | 0 | python,nlp | 18,910,584 | 1 | false | 0 | 0 | Try xlrd Python Module to read and process excel sheets.
I think an appropriate implementation using this module is an easy way to solve your problem. | 1 | 0 | 1 | to provide some context: Issues in an application are logged in an excel sheet and one of the columns in that sheet contains the email communication between the user (who had raised the issue) and the resolve team member. There are bunch of other columns containing other useful information. My job is to find useful insights from this data for Business.
Find out what type of issue was that? e.g. was that a training issue for the user or access issue etc. This would mean that I analyze the mail text and figure out by some means the type of issue.
How many email conversations have happened for one issue?
Is it a repeat issue?
There are other simple statistical problems e.g. How many issues per week etc...
I read that NLP with Python can be solution to my problems. I also looked at Rapidminer for the same.
Now my Question is
a. "Am I on the right track?, Is NLP(Natural Language Processing) the solution to these problems?"
b. If yes, then how to start.. I have started reading book on NLP with Python, but that is huge, any specific areas that I should concentrate on and can start my analysis?
c. How is Rapidminer tool? Can it answer all of these questions? The data volume is not too huge (may be 100000 rows)... looks like it is quite easy to build a process in rapidminer, hence started on it...
Appreciate any suggestions!!! | Analyze Text to find patterns and useful information | 0 | 0 | 0 | 637 |
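To make the first sub-question concrete without any external library, here is a purely illustrative, rule-based sketch of issue categorisation. The categories and keyword lists below are invented for the example; reading the spreadsheet itself would be done with a module such as xlrd, as the answer suggests.

```python
# Hypothetical keyword rules; a real system would learn or refine these from data.
RULES = {
    "training": ["how do i", "where is", "tutorial"],
    "access":   ["permission", "denied", "login", "password"],
}

def categorise(mail_text):
    """Return the first category whose keywords appear in the text."""
    text = mail_text.lower()
    for category, keywords in RULES.items():
        if any(kw in text for kw in keywords):
            return category
    return "other"

print(categorise("Access denied when opening the report"))  # access
print(categorise("How do I export to PDF?"))                # training
print(categorise("Server rebooted unexpectedly"))           # other
```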
18,911,070 | 2013-09-20T07:21:00.000 | 31 | 0 | 1 | 0 | python,virtualenv,python-3.3,virtualenvwrapper,python-venv | 19,312,987 | 1 | true | 0 | 0 | Sorry this answer is a bit delayed. pyvenv does not aim to supersede virtualenv, in fact virtualenv in Python 3 depends on the standard library venv module.
The pyvenv command creates an absolutely minimal virtual environment into which other packages can be installed.
The Python 3 version of virtualenv actually subclasses the standard library's implementation and provides hooks to automatically install setuptools and pip into the environment which pyvenv doesn't do on it's own.
As far as I know virtualenvwrapper depends on virtualenv only because the mkvirtualenv or mkproject commands allow you to specify packages to be installed into the new environment, this only works because virtualenv will have already installed setuptools and pip.
So to answer your question I believe you should be able to use virtualenvwrapper on environments created by pyvenv as long as you follow virtualenvwrapper's conventions for where to put things and you either manually install setuptools and pip into the environment or don't use any package management features of virtualenvwrapper. | 1 | 18 | 0 | Virtualenvwrapper is a user-friendly shell around Python's virtualenv.
Python 3.3 ships with pyvenv built into the standard library, which aims to supercede virtualenv.
But if I install Virtualenvwrapper on Python3.3, it still installs virtualenv, leading me to believe it doesn't use 'pyvenv' under the covers.
Presumably this doesn't really matter - if I wish to use virtualenvwrapper on Python3.3 I should happily let it use virtualenv instead of pyvenv, and will (for the moment) suffer no ill effects? | Does using virtualenvwrapper with Python3.3 mean I cannot (or should not) be using pyvenv? | 1.2 | 0 | 0 | 2,304 |
18,913,370 | 2013-09-20T09:33:00.000 | 0 | 0 | 0 | 0 | python,openpyxl,xlsxwriter | 18,917,174 | 2 | false | 0 | 0 | In answer to the last part of the question:
xlsxwriter : As of my understanding, we can not modify existing xlsx File. Do we any update into this module.
That is correct. XlsxWriter only writes new files. It cannot be used to modify existing files. Rewriting files is not a planned feature. | 1 | 2 | 0 | I have multiple xlsx files which contain two worksheets (data, graph). I have created a graph using xlsxwriter in the graph worksheet and written data in the data worksheet. So I need to combine all the graph worksheets into a single xlsx file. So my question is:
openpyxl : In the openpyxl module, we can load another workbook and modify values. Is there any way to append a new worksheet from another file? For example:
I have two xlsx data.xlsx(graph worksheet) and data_1.xlsx(graph worksheet)
So Final xlsx (graph worksheet and graph_1 worksheet)
xlsxwriter : As of my understanding, we can not modify existing xlsx File. Do we any update into this module. | Combine multiple xlsx File in single Xlsx File | 0 | 1 | 0 | 2,875 |
18,918,682 | 2013-09-20T14:01:00.000 | 1 | 0 | 0 | 1 | python,tornado | 18,950,083 | 1 | true | 0 | 0 | Tornado uses the standard library's logging module, which is blocking in most configurations. Python 3.2 includes a QueueHandler class which can be used to move the actual I/O to a separate thread; prior to that there was no standard solution to non-blocking logging (but there's probably a package on PyPI with a 2.x-compatible implementation). | 1 | 1 | 0 | I am using Tornado for a websockets server and I am trying to figure out how to log to a file without blocking the main thread. Is tornado.log non-blocking? If not, is there a general pythonic way to log to a file without blocking the main thread?
Thanks! | Writing to log files in a non-blocking manner in Tornado/Python | 1.2 | 0 | 0 | 501 |
18,919,553 | 2013-09-20T14:42:00.000 | 1 | 0 | 1 | 0 | python,loops,slice,uppercase | 18,924,802 | 2 | false | 0 | 0 | s = "Hello, world!"
print ''.join([x.upper() if x == 'o' else x for x in s])
HellO, wOrld! | 1 | 0 | 0 | Let
greeting = 'Hello, world!'
(1) Use slicing to change the letter o to capital O. Notice
there are two 'o's!
Save the new string into the variable new_greeting and print it
(2) Instead of using slicing, now use for loop and conditional
execution to do it.
I have been trying unsuccessfully to use greeting.upper() to no avail!! | Switch case in python using slicing and loop | 0.099668 | 0 | 0 | 159 |
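A sketch of both approaches asked for in the exercise, in Python 3 syntax (the answer above uses Python 2's print statement):

```python
greeting = "Hello, world!"

# (1) Slicing: rebuild the string around each 'o'.
# The 'o's sit at indices 4 and 8 in this particular string.
new_greeting = greeting
for i in (4, 8):
    new_greeting = new_greeting[:i] + "O" + new_greeting[i + 1:]
print(new_greeting)   # HellO, wOrld!

# (2) A for loop with conditional execution, no slicing.
new_greeting = ""
for ch in greeting:
    new_greeting += "O" if ch == "o" else ch
print(new_greeting)   # HellO, wOrld!
```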
18,921,546 | 2013-09-20T16:27:00.000 | 0 | 0 | 0 | 0 | python,selenium-webdriver,selenium-rc,selenium-ide | 18,952,607 | 1 | false | 0 | 0 | You just need to revert the format back for recording and playback to work:
Options>Formats>HTML
It should work. | 1 | 0 | 0 | Changed Selenium IDE source code format by following these steps:
Options>>Formats>>Python2/UT/RC>>Ok
Format>>Python2/UT/RC
Recorded code in IDE stop
Now playback button is not enabled. Try to export code to Python2/UT/RC to Eclipse(Python) enabled but there also it is not working, when trying to execute it is opening a box with "Ant" and close.
Please help. | Selenium IDE playback is not working | 0 | 0 | 1 | 179 |
18,921,548 | 2013-09-20T16:27:00.000 | 2 | 0 | 1 | 0 | python,arrays,performance,sqlite,data-structures | 18,924,301 | 3 | false | 0 | 0 | Well, in general, if you have too much data to keep in memory, you need to use some kind of external storage; and if all your data does fit in memory, you don't need to do anything fancy.
The biggest problem you're likely to have is if you have more data than your operating system will allow in a single process image; in that case again you will need external storage.
In both cases this comes down to: use a database, whether sql or no. If it's a sql database, you might like to use an ORM to make that easier.
However, until you hit this problem, just store everything in memory, and serialise to disk. I suggest using cPickle or an ORM+sqlite. | 1 | 3 | 0 | I'm looking for some help understanding the performance characteristics of large lists, dicts or arrays in Python. I have about 1M key value pairs that I need to store temporarily (this will grow to maybe 10M over the next year). The keys are database IDs ranging from 0 to about 1.1M (with some gaps) and the values are floats.
I'm calculating pagerank, so my process is to initialize each ID with a value of 1, then look it up in memory and update it about ten times before saving it back to the database.
I'm theorizing that lists or arrays will be fastest if I use the database ID as the index of the array/list. This will create a gappy data structure, but I don't understand how fast look ups or updates will be. I also don't yet understand if there's a big gain to get from using arrays instead of lists.
Using a dict for this is very natural, with key-value pairs, but I get the impression building the dict the first time would be very slow and memory intensive as it grows to accommodate all the entries.
I also read that SQLite might be a good solution for this using the :memory: flag, but I haven't dug into that too much yet.
Anyway, just looking for some guidance here. Any thoughts would be much appreciated as I'm digging in. | Performance of Large Data Structures in Python | 0.132549 | 0 | 0 | 3,889 |
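A minimal sketch of the list-as-gappy-array idea described in the question, assuming the IDs are dense enough for this to pay off (the maximum ID below comes from the question's figures):

```python
MAX_ID = 1_100_000  # highest database ID, per the question

# A list indexed by ID gives O(1) lookup and update;
# gaps simply keep the initial value and cost one float slot each.
scores = [1.0] * (MAX_ID + 1)

def update(db_id, delta):
    scores[db_id] += delta

update(42, 0.5)
print(scores[42])   # 1.5
print(scores[43])   # 1.0 (an untouched gap)
```

A dict keyed by ID would avoid paying for the gaps, at the cost of hashing on every access; with only ~10% of the index range missing, the flat list is usually the simpler bet.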
18,922,009 | 2013-09-20T16:55:00.000 | 4 | 0 | 1 | 0 | python,pyqt,python-sip | 20,581,575 | 3 | false | 0 | 1 | In Ubuntu, package python-qt4-dev was missing. I installed it and it fixes the problem. | 1 | 1 | 0 | I'm building a project that depends on pyqt (e.g. VTK with pyqt). I'm getting an error like QtCoremod.sip: No such file (or something similar). What's going wrong? | pyqt: unable to find QtCoremod.sip | 0.26052 | 0 | 0 | 2,960 |
18,923,546 | 2013-09-20T18:31:00.000 | 1 | 0 | 1 | 0 | python,visual-studio,dll | 18,924,514 | 2 | true | 0 | 0 | In general the two must match - you might "get away with it" some of the time. The solution is to import .py files. | 1 | 0 | 0 | Example: If I have an embedded Python 3.2 application compiled with MS VS2010, must all external imports also be compiled with VS2010, or is there some way to successfully import pyd components precompiled in MSVS 2008 ?
I can't seem to find the definitive answer online.
Thanks,
Rob. | Do Python modules necessarily have to be compiled with the same version as the main core? | 1.2 | 0 | 0 | 74 |
18,923,762 | 2013-09-20T18:45:00.000 | 1 | 0 | 0 | 0 | python,xml,wxpython,wxwidgets | 18,925,053 | 1 | false | 0 | 1 | Take a look at the wxPython demo of the StyledTextCtrl. It's in wx.stc. Anyway, the demo labeled StyledTextCtrl_2 shows how to create syntax highlighting for a Python file using self.SetLexer(stc.STC_LEX_PYTHON).
For XML, you would just need to change that line to self.SetLexer(stc.STC_LEX_XML). You should also look at PyShell / PyCrust or possibly Editra. The latter is a text editor created with wxPython that does this sort of thing. | 1 | 0 | 0 | I want to read/search in an XML document in a window of a wxPython application.
A lot of text editors will highlight the content and maybe have support for
block and unblock of XML elements.
Is there a component that provides this sort of functionality in wxPython? | How should I display an XML document in a wxPython UI? | 0.197375 | 0 | 0 | 678 |
18,930,996 | 2013-09-21T09:22:00.000 | 1 | 0 | 0 | 0 | python-2.7 | 18,931,016 | 1 | false | 0 | 0 | Assuming you have control over both client and server, send a message to the server with a time-stamp on it and have the server merely return the timestamp back to the client. When the client receives this back, it compares the current timestamp to the one in the payload and voila, the difference is the time it took for a round-trip. | 1 | 0 | 0 | How to calculate round trip time for the communication between client and server in tcp connections. | Tcp round trip time calculation using python | 0.197375 | 0 | 1 | 1,348 |
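A self-contained sketch of the timestamp-echo idea from the answer, run against localhost in Python 3 (in practice the client and server would be on separate hosts):

```python
import socket
import threading
import time

def echo_once(server_sock):
    conn, _ = server_sock.accept()
    conn.sendall(conn.recv(1024))   # return the payload unchanged
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname())
start = time.monotonic()
client.sendall(b"ping")
client.recv(1024)
rtt = time.monotonic() - start      # round trip = send + echo back
client.close()
print("round-trip time: %.6f s" % rtt)
```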
18,933,331 | 2013-09-21T13:47:00.000 | -1 | 0 | 0 | 1 | batch-file,wix,continuous-integration,teamcity,python-2.5 | 18,936,954 | 2 | false | 0 | 0 | First of all, there is no well-behaved way to suppress the PAUSE command; however, it is possible to do it in a very simple way. The method consists of modifying the cmd.exe file: you may use a binary/hex editor to load the cmd.exe file, look for the PAUSE command (which is encoded in two-byte UNICODE characters), replace it with a different command and save the cmd.exe file. After that, the PAUSE command will no longer work.
Yes, I know that there are multiple reasons NOT to do that; however, I am just answering the question. The OP is responsible for evaluating whether this method would be useful for their needs or not. | 2 | 3 | 0 | I have been tasked with consolidating our entire team's build process, including build servers, CI, etc.
The way our project is structured is that each sub team is responsible for the development of their own code bases.
Over time, each team has usually created their own python/sh/bat/Wix depending on the requirements of their deployment. I've been tasked with consolidating all the builds into one primary Team City system.
Problem is that I've found that many build scripts (bat files) contain commands such as UI prompting and PAUSEing.
Does anybody know of any way to perhaps suppress those commands prior/during the script execution.
I have considered preprocessing the batch files and removing/REM'ing the Pauses but that is not ideal. Since there are +- 350 individual projects spread across +- 35 HG repositories (which are, in themselves, spread across 4 cities).
Ideally we don't want to perform pre-building cleanup.
Does anyone know of any super-duper magic trick to do this, or does it require making changes to each build file? | Suppressing windows command line PAUSE command | -0.099668 | 0 | 0 | 2,666 |
18,933,331 | 2013-09-21T13:47:00.000 | 12 | 0 | 0 | 1 | batch-file,wix,continuous-integration,teamcity,python-2.5 | 18,934,862 | 2 | false | 0 | 0 | You can disable input by redirecting input to nul: <nul yourScript.bat. This will effectively disable any PAUSE commands, but it will also disable any SET /P or other command that prompts for input.
If you disable input for a master bat script that calls other scripts, the child scripts will inherit the disabled input. | 2 | 3 | 0 | I have been tasked with consolidating our entire team's build process, including build servers, CI, etc.
The way our project is structured is that each sub team is responsible for the development of their own code bases.
Over time, each team has usually created their own python/sh/bat/Wix depending on the requirements of their deployment. I've been tasked with consolidating all the builds into one primary Team City system.
Problem is that I've found that many build scripts (bat files) contain commands such as UI prompting and PAUSEing.
Does anybody know of any way to perhaps suppress those commands prior/during the script execution.
I have considered preprocessing the batch files and removing/REM'ing the Pauses but that is not ideal. Since there are +- 350 individual projects spread across +- 35 HG repositories (which are, in themselves, spread across 4 cities).
Ideally we don't want to perform pre-building cleanup.
Does anyone know of any super-duper magic trick to do this, or does it require making changes to each build file? | Suppressing windows command line PAUSE command | 1 | 0 | 0 | 2,666 |
18,934,509 | 2013-09-21T15:43:00.000 | 1 | 0 | 0 | 0 | python,django,date | 18,934,550 | 1 | true | 1 | 0 | I would do it in django. Have two DateFields in your model, both of which can be blank and null. The first time someone views your page (with both dates unset), create a view for the template request that sets one of the DateFields to today = datetime.date.today() and the other to today + datetime.timedelta(8) to be 8 days after the current date. Save your updated model and then display that model in the template. | 1 | 0 | 0 | I need to put an expiration date in a template; it should show the current date, and the expiration date will be 8 days from the current date.
Can someone tell me how I can do it? Is it possible to do it with Django, or do I have to do it with maybe jQuery or JavaScript?
And I need to send it to my database too, not just display it in the template. | Sum 7 days after current date in Django | 1.2 | 0 | 0 | 1,338 |
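The date arithmetic from the answer, in isolation (the fixed date here is only for the example; in the view it would be date.today()):

```python
import datetime

created = datetime.date(2013, 9, 21)           # stand-in for date.today()
expires = created + datetime.timedelta(days=8)  # 8 days later
print(expires)   # 2013-09-29
```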
18,936,253 | 2013-09-21T18:44:00.000 | 1 | 0 | 0 | 0 | javascript,python,google-app-engine | 18,937,126 | 2 | false | 1 | 0 | I assume you want to send a piece of html to a browser that won't disrupt the existing layout and functioning of the page, but still take place in DOM (hence be accessible from JS). In that case you may consider a hidden/invisible iframe. | 1 | 0 | 0 | I am rather new at this, so please cut me some slack.
I am working in a web application using google app engine. My server runs using Python, and when the user visits my "/" page, the server gets some HTML source code from a website and places it into a variable called html_code.
Now I would like to pass the html_code variable to the client so he can analyze it using JavaScript. Is this possible? How can I achieve this? Please provide examples if possible.
Thanks in advance, Pedro. | How to make python invoke a JavaScript function? | 0.099668 | 0 | 0 | 79 |
18,936,535 | 2013-09-21T19:10:00.000 | 1 | 0 | 0 | 0 | python,django | 18,936,552 | 3 | false | 1 | 0 | So that the apps can be taken and used with a different database if desired without needing to modify the code (much). | 3 | 1 | 0 | I was just wondering why each project does not just have one model.py file, considering it's just a file full of classes (acting as database tables), because the whole project runs on one database, why can there be more than one models.py file if all files work with the same database?
Thanks. | Why are there one model.py per app instead of just one model.py out through the whole project? | 0.066568 | 0 | 0 | 37 |
18,936,535 | 2013-09-21T19:10:00.000 | 1 | 0 | 0 | 0 | python,django | 18,936,585 | 3 | false | 1 | 0 | Django is set up to have projects that are collections of reusable, self-contained apps. Each has its own model.py because they're tied closely to the views and templates for that app but may not be needed for the rest of the project. | 3 | 1 | 0 | I was just wondering why each project does not just have one model.py file, considering it's just a file full of classes (acting as database tables), because the whole project runs on one database, why can there be more than one models.py file if all files work with the same database?
Thanks. | Why are there one model.py per app instead of just one model.py out through the whole project? | 0.066568 | 0 | 0 | 37 |
18,936,535 | 2013-09-21T19:10:00.000 | 0 | 0 | 0 | 0 | python,django | 18,944,280 | 3 | false | 1 | 0 | Usually you will start by writing one app. At some point, you will recognize that there are features which are not very tightly related (e.g. user management or different sub-parts). Additionally, your models.py will start to be lengthy and you will want a clearer structure.
This is the point in time where you start splitting your project into independent sub-parts - the apps. Still, they will work with the same database. And even better: some friendly people might have built apps which you can include in your project - each bringing their models and - of course - interacting with your database.
If everything in your project is closely related, there is no need for different apps. | 3 | 1 | 0 | I was just wondering why each project does not just have one model.py file, considering it's just a file full of classes (acting as database tables), because the whole project runs on one database, why can there be more than one models.py file if all files work with the same database?
Thanks. | Why are there one model.py per app instead of just one model.py out through the whole project? | 0 | 0 | 0 | 37 |
18,938,619 | 2013-09-21T23:09:00.000 | 1 | 0 | 1 | 0 | python,string,eof | 18,938,639 | 1 | true | 0 | 0 | AFAIK, CPython keeps track of the length and start of the string. As of CPython 3.3 it also keeps track of how many bytes per character in order to compress strings that can fit into subsets of the Unicode spectrum, such as Latin-1 strings. | 1 | 1 | 0 | I know python has inbuilt string support. But what I would like to know is how it handles the end of string construct. C has '\0' character to signify end of string. How does python do it? It would be great if someone could tell me how it works in the cpython source code. | How does python find the end of string? | 1.2 | 0 | 0 | 1,206 |
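Because CPython stores the string's length rather than scanning for a terminator, a NUL byte is an ordinary character inside a Python string, which makes the answer's point easy to observe:

```python
s = "ab\0cd"          # embedded NUL, perfectly legal in Python
print(len(s))         # 5, not 2: the length is stored, not discovered
print(s[2] == "\0")   # True
```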
18,940,685 | 2013-09-22T05:32:00.000 | 1 | 0 | 0 | 0 | wxpython | 20,538,024 | 1 | false | 0 | 1 | This is a tip really. Use
sizer.SetEmptyCellSize((0,0))
and your empty row will not take any space. | 1 | 1 | 0 | I'm trying to add and delete rows to a GridBagSizer dynamically. Each row of the sizer has a collection of widgets with the rightmost being a 'delete' button which removes the row it's in when pressed. Another button outside the sizer adds a new bottom row of widgets to the sizer when pressed.
I have a simple example app that works, but it's rather baroque and I'm hoping there's a simpler way.
The working example detaches and destroys all widgets in the row being deleted, but this doesn't remove the blank row where the widgets used to be, even after sizer.Layout(). What I have done to get around this is detach all widgets in rows below the removed row and move them one row up.
It works, but is there a better way?
Ross | A better way of deleting and Adding GridBagSizer rows dynamically? | 0.197375 | 0 | 0 | 356 |
18,943,717 | 2013-09-22T12:15:00.000 | 2 | 0 | 0 | 0 | python,django,sockets,web-applications | 18,943,786 | 1 | false | 1 | 0 | If the uploading doesn't occur too often, why not just create a Django POST/PUT view for the job that simply accepts the file over HTTP? With the information you've provided, I cannot see why this simple solution wouldn't be up to the task. | 1 | 0 | 0 | I have a Django web application which shows a website to display some data. So, this application consists of the html pages and views to display this data which i am storing in a SQLite DB.
At the end of the day a third party needs to connect to this web application and upload binary data over to the application. What is the best way to host this service, as an independent python web server or part of Django application or how else ?
Any suggestions would be appreciated ! | How to do socket programming in a django app | 0.379949 | 0 | 0 | 263 |
18,944,962 | 2013-09-22T14:38:00.000 | 2 | 0 | 1 | 0 | python | 18,945,255 | 3 | false | 0 | 0 | I think it is because the concept of named parameters is conflated with the concept of optional parameters. When you define your function with optional arguments like def drawLine(x1, y1, x2, y2, color=Color.BLACK, width=1) all the optional arguments have to come at the end of the list - otherwise, if you have more than one optional argument, it would be ambiguous which value was an optional argument versus a required argument. If you then invoke drawLine with drawLine(x1=1, x2=2, 3,4,1), which arguments go with which? So for sanity's sake, put the positional arguments first, followed by the named arguments. Declaring *args and **kwargs in the same order just makes sense from a conceptual consistency standpoint. | 1 | 2 | 0 | args always before kwargs when many python book introduce them. I just swap the position of them, but the interpreter told me this is a invalid syntax. Can anyone explain it? | why I can't define a function like fun(**kwargs, *args)? | 0.132549 | 0 | 0 | 171 |
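The required ordering in practice, in Python 3:

```python
def demo(*args, **kwargs):
    return args, kwargs

positional, named = demo(1, 2, color="red")
print(positional)   # (1, 2)
print(named)        # {'color': 'red'}

# def bad(**kwargs, *args): ...
# would be a SyntaxError: **kwargs must come after *args,
# so the parser can tell positional and keyword arguments apart.
```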
18,945,802 | 2013-09-22T16:01:00.000 | 0 | 0 | 0 | 0 | php,mysql,python-2.7,xml-parsing | 18,945,969 | 2 | false | 0 | 0 | I'd say, turn off execution time limit in PHP (e.g. use a CLI script) and be patient. If you say it starts to insert something into database from a 17 GB file, it's actually doing a good job already. No reason to hasten it for such one-time job. (Increase memory limit too, just in case. Default 128 Mb is not that much.) | 1 | 4 | 0 | I have a 17gb xml file. I want to store it in MySQL. I tried it using xmlparser in php but it says maximum execution time of 30 seconds exceeded and inserts only a few rows. I even tried in python using element tree but it is taking lot of memory gives memory error in a laptop of 2 GB ram. Please suggest some efficient way of doing this. | extremely large xml file to mysql | 0 | 1 | 0 | 215 |
18,947,522 | 2013-09-22T18:48:00.000 | 0 | 0 | 1 | 0 | python,operators,symbols | 18,947,750 | 2 | false | 0 | 0 | A symbol in a programming language is either a binding to some value (e.g. variable identifiers), a value itself (e.g. "foo", 123, True), a keyword (e.g. def, class, import, try, except, ...) or another language-specific construct ((), {}, [], ...).
So a symbol does not always have to be a string of characters.
In contrast, an operator defines a specific function among one or more values. (There are unary, binary, ternary, ... operators.)
e.g. + in 1+1 and < in a<b are operators
It's noteworthy that, from a compiler's standpoint, everything you write in your code is a symbol. Even +, -, *, / are mere symbols to a lexical analyzer. (I assume that this fact is out of the scope of your question.) Hence we will restrict our answer to the domain of language syntax.
However, this idea is universal for any programming language. | 1 | 0 | 0 | I'm reviewing for a test over some basic Python syntax stuff and I'm wanting to make sure I have a proper understanding of the difference between a symbol and an operator. A symbol can be a string of characters or an operator, and an operator can only be something that does something to characters or strings, right? | Symbol vs Operator in Python | 0 | 0 | 0 | 376 |
18,948,174 | 2013-09-22T19:59:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,urllib2 | 18,948,829 | 1 | true | 1 | 0 | We make no guarantees that urlfetch calls will all go out on the same IP address. | 1 | 1 | 0 | I have an App that worked well on both GAE and test server till a few days ago. It connects to a remote site, logs in and browse pages and input information automatically. The remote site is using dynamic URLs to follow the session, each page gives the link for next call.
The program is very basic : urllib2.urlopen then regexp to extract the next url key then new call to urllib2.urlopen and so on.
Now my app works still perfectly on test server but fails when deployed on GAE : I have a series of calls to urllib2.open and most of the time, the remote site says it has lost the session already on the second call but 1/10th I could go to the third call and once GAE has gone successfully to the fourth call.
This seems to point out that it is not a security issue with the remote site (which has not changed) nor a question of redirect and cookies I have read in other posts.
Users reported to me that it worked well till the 14th of Sept 13 and the failure was reported to me on the 20th. Was there a change in the handling of URLfetch in GAE recently?
I've just spent 2 days on the problem with no tangible clue.
It may be a question of IP address? The remote server could control the session with the IP address and the dynamic URL together, and I can imagine that GAE does not guarantee that in a single call to GAE, all calls to urllib are handled by the same machine? This could explain why sometimes it works for two or three successive URLs. I do not know enough about GAE's internal mechanisms to confirm.
Thank you in advance for your ideas. | Python App Engine's urllib2: works locally but not when deployed to GAE | 1.2 | 0 | 0 | 225 |
18,949,298 | 2013-09-22T22:13:00.000 | 3 | 1 | 1 | 0 | python,c,python-c-api | 18,949,369 | 1 | true | 0 | 0 | I'm not sure there is any easy way to get such "data". It really depends on what you are doing, and you have to take into account that transferring the data from the Python side to the C side and back again will be an extra load on the system, compared to simply perform the operations in Python directly.
If you are doing a lot of calculations, and those calculations are complicated but can't be done in an existing library (such as "numpy"), then it may be worth doing. And of course, calculation doesn't necessarily have to be "mathematics", it could be shuffling data in a large array, or making if (x > y) z++; type operations. But you really need to have a large amount of stuff to do before it makes sense to convert the data from "python" to "C" type of data, and back again.
It's a bit like asking "How much faster is this sporty car than that not-so-sporty car", and if you drive in a big city with lots of congestion, the difference may not be any at all - but if you take them to a race-track, where the sporty car gets to stretch its legs properly, the winner is quite obvious.
In the "car" theme, the "congestion" is lots of calls to a very small function in C, that doesn't do much work - the "convert from Python to C and back to Python data" will be the congestion/traffic lights. If you have large lumps of data, then you get a bit more "race-track" effect. | 1 | 2 | 0 | Is there any data that visualizes just how much performance can be gained by using the Python C API when writing functions directly in C to be used as python modules?
Besides the obvious fact that "C is faster", is there any data that compares the Python C API vs C? | Python C API performance gains? | 1.2 | 0 | 0 | 402 |
18,951,721 | 2013-09-23T04:21:00.000 | 8 | 0 | 1 | 0 | python,algorithm,dynamic-programming | 18,952,628 | 1 | true | 0 | 0 | You're on the right track. As you examine each new column, you will end up computing all possible best-scores up to that point.
Let's say you built your compatibility list (a 2D array) and called it Li[y] such that for each pattern i there are one or more compatible patterns Li[y].
Now, you examine column j. First, you compute that column's isolated scores for each pattern i. Call it Sj[i]. For each pattern i and compatible
pattern x = Li[y], you need to maximize the total score Cj such that Cj[x] = Cj-1[i] + Sj[x]. This is a simple array test and update (if bigger).
In addition, you store the pebbling pattern that led to each score. When you update Cj[x] (ie you increase its score from its present value) then remember the initial and subsequent patterns that caused the update as Pj[x] = i. That says "pattern x gave the best result, given the preceding pattern i".
When you are all done, just find the pattern i with the best score Cn[i]. You can then backtrack using Pj to recover the pebbling pattern from each column that led to this result. | 1 | 9 | 0 | I am trying to teach myself Dynamic Programming, and ran into this problem from MIT.
We are given a checkerboard which has 4 rows and n columns, and
has an integer written in each square. We are also given a set of 2n pebbles, and we want to
place some or all of these on the checkerboard (each pebble can be placed on exactly one square)
so as to maximize the sum of the integers in the squares that are covered by pebbles. There is
one constraint: for a placement of pebbles to be legal, no two of them can be on horizontally or
vertically adjacent squares (diagonal adjacency is ok).
(a) Determine the number of legal patterns that can occur in any column (in isolation, ignoring
the pebbles in adjacent columns) and describe these patterns.
Call two patterns compatible if they can be placed on adjacent columns to form a legal placement.
Let us consider subproblems consisting of the first k columns, 1 ≤ k ≤ n. Each subproblem can
be assigned a type, which is the pattern occurring in the last column.
(b) Using the notions of compatibility and type, give an O(n)-time dynamic programming algorithm for computing an optimal placement.
Ok, so for part a: There are 8 possible solutions.
For part b, I'm unsure, but this is where I'm headed:
SPlit into sub-problems. Assume i in n.
1. Define Cj[i] to be the optimal value by pebbling columns 0,...,i, such that column i has pattern type j.
2. Create 8 separate arrays of n elements for each pattern type.
I am not sure where to go from here. I realize there are solutions to this problem online, but the solutions don't seem very clear to me. | Pebbling a Checkerboard with Dynamic Programming | 1.2 | 0 | 0 | 10,228 |
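A sketch in Python of the dynamic program described in the answer above, for a board given as a list of n columns with 4 values each. The function and variable names are mine, not from the problem set; it returns only the optimal score, omitting the backtracking pointers the answer mentions.

```python
def best_pebbling(board):
    """board: list of columns, each a list of 4 integers (rows 0..3)."""
    # The 8 legal column patterns: 4-bit masks with no vertically adjacent bits.
    patterns = [m for m in range(16) if m & (m << 1) == 0]
    # Two patterns are compatible across adjacent columns iff they share no row.
    compat = {p: [q for q in patterns if p & q == 0] for p in patterns}

    def score(col, mask):
        return sum(col[r] for r in range(4) if mask >> r & 1)

    best = {p: score(board[0], p) for p in patterns}              # C_1
    for col in board[1:]:
        best = {p: score(col, p) + max(best[q] for q in compat[p])
                for p in patterns}                                # C_j from C_{j-1}
    return max(best.values())

print(len([m for m in range(16) if m & (m << 1) == 0]))  # 8 legal patterns
print(best_pebbling([[1, 2, 3, 4], [1, 1, 1, 1]]))       # 8
```

Each column does constant work (8 patterns times at most 8 compatible predecessors), so the whole run is O(n).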
18,957,189 | 2013-09-23T10:39:00.000 | 1 | 0 | 1 | 0 | python | 18,957,914 | 3 | false | 0 | 0 | How about cls.fields['key'] = 'value'. | 1 | 6 | 0 | How can I add a key to an object's dictionary with setattr()? Say I have the fields dictionary defined in my class. From another class I would like to add a key to this dictionary. How do I proceed? setattr(cls, 'fields', 'value') changes the attribute entirely. | Add key to dict with setattr() in Python | 0.066568 | 0 | 0 | 12,191 |
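The distinction behind the accepted answer, spelled out: mutate the existing dict attribute instead of rebinding the attribute with setattr.

```python
class Model:
    fields = {"id": 1}

# setattr(Model, "fields", "value") would *replace* the whole dict;
# to add a key, fetch the dict and mutate it in place:
getattr(Model, "fields")["name"] = "alice"
print(Model.fields)   # {'id': 1, 'name': 'alice'}
```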
18,960,242 | 2013-09-23T13:16:00.000 | 26 | 0 | 1 | 0 | python,exception | 18,960,337 | 6 | false | 0 | 0 | You don't want to break on every exception; idiomatic Python code uses exceptions heavily (EAFP) so you'd be continually breaking in unrelated code.
Instead, use pdb post-mortem: import pdb; pdb.pm(). This uses sys.last_traceback to inspect the stack including the locals at the throw point. | 1 | 28 | 0 | If one catches an exception outside of the function in which it was originally thrown, one loses access to the local stack. As a result one cannot inspect the values of the variables that might have caused the exception.
Is there a way to automatically break into the debugger (import pdb; pdb.set_trace()) whenever an exception is thrown, to inspect the local stack? | Is it possible to automatically break into the debugger when a exception is thrown? | 1 | 0 | 0 | 16,097 |
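A non-interactive illustration of what post-mortem debugging gives you: the traceback object still carries the locals of the frame that raised, even after the exception has propagated out.

```python
import sys

def compute():
    divisor = 0        # the local we want to inspect later
    return 1 / divisor

try:
    compute()
except ZeroDivisionError:
    tb = sys.exc_info()[2]
    while tb.tb_next:  # walk down to the frame that actually raised
        tb = tb.tb_next
    print(tb.tb_frame.f_locals["divisor"])   # 0
    # At this point, `import pdb; pdb.post_mortem(tb)` would drop you
    # into the debugger inside that same frame.
```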
18,962,170 | 2013-09-23T14:39:00.000 | 0 | 0 | 1 | 0 | python,outlook,exchange-server,gravatar,avatar | 18,970,588 | 1 | false | 0 | 0 | The data is actually stored in the Active Directory. I don't remember the AD attribute off the top of my head, but on the MAPI level, the property can be retrieved from the PR_EMS_AB_THUMBNAIL_PHOTO (PidTagThumbnailPhone) property. | 1 | 2 | 0 | Is it possible to fetch user profile photo on an MS Exchange network using Python?
Currently users signup with their company domain, and I'd like to fetch their profile photo automatically. Maybe something similar to gravatar, but for Microsoft networks? | Fetch Microsoft Exchange / Outlook profile photo using Python | 0 | 0 | 1 | 452 |
18,962,581 | 2013-09-23T15:00:00.000 | 4 | 0 | 1 | 0 | python | 18,962,633 | 4 | false | 0 | 0 | Because python is whitespace sensitive - if a line has four spaces and the next has one tab, python will NOT see them as being indented the same, giving you either compile errors, or even worse, code that executes in a way that you don't want. Because they look the same to you, you will not be able to tell the difference easily.
So set your editor to not use tabs, so that you can trust your eyes. | 1 | 1 | 0 | I have read in some python tutorials that it's better to use spaces than tabs. | Why is it recommended that I replace tabs by spaces? | 0.197375 | 0 | 0 | 1,752 |
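As a rough illustration of why this matters, Python 3's tokenizer rejects indentation whose meaning would depend on tab width; the snippet below is a sketch that deliberately compiles such a fragment.

```python
# One line indented with a tab, the next with eight spaces: they may look
# identical in an editor, but Python 3 refuses to guess.
src = "if True:\n\tx = 1\n        y = 2\n"
error = None
try:
    compile(src, "<mixed-indentation>", "exec")
except TabError as exc:
    error = exc
print(error)
```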
18,964,553 | 2013-09-23T16:42:00.000 | 2 | 0 | 1 | 1 | python,linux,shell | 18,964,952 | 5 | true | 0 | 0 | There’a subtle distinction here. If the target is a binary or begins with a #! shebang line, then the shell calls execv successfully. If the target is a text file without a shebang, then the call to execv will fail, and the shell is free to try launching it under /bin/sh or something else. | 3 | 3 | 0 | In Linux, we usually add a shebang in a script to invoke the respective interpreter. I tried the following example.
I wrote a shell script without a shebang and with executable permission. I was able to execute it using ./. But if I write a similar python program, without shebang, I am not able to execute it.
Why is this so? As far as I understand, the shebang is required to find the interpreter. So how do shell scripts work, but not a python script? | Use of shebang in shell scripts | 1.2 | 0 | 0 | 704
18,964,553 | 2013-09-23T16:42:00.000 | 4 | 0 | 1 | 1 | python,linux,shell | 18,964,586 | 5 | false | 0 | 0 | shell scripts will only work if you are in the shell you targeted ... there is not python shell ... as such python will never work without explicity calling python (via shebang or command line) | 3 | 3 | 0 | In Linux, we usually add a shebang in a script to invoke the respective interpreter. I tried the following example.
I wrote a shell script without a shebang and with executable permission. I was able to execute it using ./. But if I write a similar python program, without shebang, I am not able to execute it.
Why is this so? As far as I understand, the shebang is required to find the interpreter. So how do shell scripts work, but not a python script? | Use of shebang in shell scripts | 0.158649 | 0 | 0 | 704
18,964,553 | 2013-09-23T16:42:00.000 | 5 | 0 | 1 | 1 | python,linux,shell | 18,964,583 | 5 | false | 0 | 0 | My assumption is that a script without a shebang is executed in the current environment, which at the command line is your default shell, e.g. /bin/bash. | 3 | 3 | 0 | In Linux, we usually add a shebang in a script to invoke the respective interpreter. I tried the following example.
I wrote a shell script without a shebang and with executable permission. I was able to execute it using ./. But if I write a similar python program, without shebang, I am not able to execute it.
Why is this so? As far as I understand, the shebang is required to find the interpreter. So how do shell scripts work, but not a python script? | Use of shebang in shell scripts | 0.197375 | 0 | 0 | 704
18,968,569 | 2013-09-23T20:49:00.000 | 0 | 0 | 0 | 0 | python,html,code-coverage,python-sphinx,nose | 43,797,423 | 1 | false | 1 | 0 | I have a similar problem and I could solve it by doing the following. Note this is just for the HTML output.
Create an index.rst field with just the following:
======================
Javadoc of the API XXX
======================
As I am using the extension "sphinx.ext.autosectionlabel", this becomes a "level 1" section.
Modify the Makefile so that, once the HTML is generated, it replaces the index.html of the section "Javadoc of the API XXX" with the Javadoc of my API.
After this change, on the toctree I have a link to "Javadoc of the API XXX" and when you click it, you see the Javadoc.
I know it is not the proper way of doing it, but it is the only one I have come up with after many Google searches. | 1 | 4 | 0 | How do you include html docs generated by other tools such as nose, coverage, and pylint reports into Sphinx documentation. Should they go into the _static directory? and if so, how do you link to them?
I am trying to build a concise package of all of the code development tool documentation into a .pdf or html. | Integrate other html documentation into sphinx docs | 0 | 0 | 0 | 310 |
18,970,356 | 2013-09-23T23:08:00.000 | -1 | 0 | 0 | 1 | python,google-app-engine | 18,994,112 | 1 | false | 1 | 0 | Thank you for looking out but figured it out. Appending the object to a list did the trick but I am sure there is a better efficient way. | 1 | 2 | 0 | Is there a way to update an object and its properties within an ndb StructuredProperty object instead of appending? | GAE - updating structured properties ndb | -0.197375 | 0 | 0 | 356 |
18,970,452 | 2013-09-23T23:16:00.000 | -2 | 0 | 1 | 1 | python,multithreading,pydev | 19,946,760 | 3 | false | 0 | 0 | Given that Python doesn't really do threads properly (the GIL is bound to cock things up one way or another) I wouldn't be surprised if debugging them was a less than thrilling experience. If it comes to that it's not that good an experience either debugging C/C++ threads, even under the latest versions of GDB and CDT.
I don't actually know for sure but I've a hunch that adopting multiple processes in Python instead of multiple threads might make your experience better. If you arrange things so that a single instance of Eclipse/PyDev was debugging a single Python process you might end up with a lot of windows on your screen but it will be a much more flexible debugging experience.
That's what I used to do under VxWorks in C, where there were no threads or processes, just tasks. A consequence was that you could run a debugger for each task and it was wonderful. | 1 | 5 | 0 | I have a multithreaded Python application running on a Linux server. I can use PyDev's Debug Server to remotely debug into it, which seems like a very valuable debug resource. There is however a problem I'm seeing that's preventing it from being as helpful as I would like.
While my application is running on the server, I can go into Eclipse on the other box, suspend MainThread, get a nice stack trace of what it was up to at the time, then resume execution. It's great. However, when I try that on one of the child threads, the suspend button grays out but there's no stack trace and everything just keeps on running as normal. I can see in the Debug window that there IS a child thread and it's PID, but can't really control it or see what it is up to. Right-clicking and trying the helpful-sounding "copy stack" only gives me "Thread-4 - pid29848_seq5".
Breakpoints seem to work okay. If a child thread hits one of those, I can step through and watch variables and such. However, using that effectively requires me to already have a specific point of interest in the code. I'm really more looking to run my application and, when it gets into an unusual state, use PyDev to see what's up.
Do I have something wrong with my setup? Is this just a limitation of PyDev I'm up against? How can I see what's going on with the child threads? | Getting PyDev suspend to work on threads other than MainThread | -0.132549 | 0 | 0 | 2,402 |
18,971,258 | 2013-09-24T00:50:00.000 | 2 | 0 | 0 | 0 | python,pid,xvfb,pyvirtualdisplay | 19,010,607 | 1 | true | 0 | 0 | The answer to this situation is to actually use the .pid property on the display object, BUT, it is only present after the display has had the start method called on it. That means, if you need to get the PID for the process, the display has to be started first. | 1 | 4 | 0 | Trying to find the PID of a Display object when creating it using Pyvirtualdisplay. The display is an Xvfb virtual framebuffer.
We have tried looking at the .pid property, but it is not present. Also, the .process property is non existant. Both raise an AttributeError error when accessed.
Thanks very much! Any help will be appreciated! | How to get PID of process when using XVFB via Pyvirtualdisplay? | 1.2 | 0 | 0 | 684 |
18,972,662 | 2013-09-24T03:49:00.000 | 1 | 0 | 1 | 1 | python,installation,setup.py | 18,973,151 | 1 | false | 0 | 0 | Instead of using --user, why not use a virtualenv? they are much more flexible, and put its bin directory on the path when activated.
Otherwise, manually putting ~/.local/bin on your PATH, as you did, is what you need to do. | 1 | 1 | 0 | Say I have a python application that I want to install and if I run python setup.py install --user, everything gets put into ~/.local as expected (on linux), and inside of that the stuff in ~/.local/lib/python2.7/site-packages/
gets seen by the PYTHONPATH as expected; however, my executables that are created by setup.py (using either entry_points via setuptools or scripts via distutils) are correctly put into ~/.local/bin, but are not seen by the PATH at the command line.
Thus, I have to add $HOME/.local/bin to my PATH (via my .zshrc) to get these executables seen by my environment. I'm assuming this is the expected behaviour, but my question is, is there some way to get my executables "registered" with my PATH when I run the installation with the --user flag during the setup?
I believe this should work, as I see that ipython does something like this, where if it's installed with the --user flag (into ~/.local), then you don't have to add to your path ~/.local/bin to get the local install of ipython seen at the command line. I just can't figure out how ipython does it. Many thanks in advance. | confusion with results of `python setup.py install --user` | 0.197375 | 0 | 0 | 2,255 |
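For what it's worth, the stdlib site module exposes the user-scheme paths directly, so a small sketch can show exactly which directory needs to be on PATH (the bin subdirectory name is the POSIX layout; Windows uses Scripts instead):

```python
import os
import site

# Root of the per-user install scheme, e.g. ~/.local on Linux
print(site.USER_BASE)
# Packages land under USER_SITE, which is why imports work out of the box
print(site.USER_SITE)

# Scripts/entry points land next to it; this is the directory to put on PATH
user_bin = os.path.join(site.USER_BASE, "bin")
print(user_bin, "on PATH?",
      user_bin in os.environ.get("PATH", "").split(os.pathsep))
```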
18,973,574 | 2013-09-24T05:27:00.000 | 0 | 0 | 1 | 0 | python,applescript,iphoto | 18,975,788 | 4 | false | 0 | 0 | Simply invoke the system command which iPhoto (assuming that you can run iPhoto from a shell) and parse the output. | 1 | 0 | 0 | One of my desktop apps I need to know where the iPhoto Library is installed, programmatically. I do not want to pick it from predicted location (/Users/me/Pictures/iPhoto) since power user may have installed it somewhere else.
I'm developing app using Python and I guess Applescript might have way to figure out iPhoto location but I don't know how. | Need to know iPhoto Library location programmatically using Applescript | 0 | 0 | 0 | 444 |
18,973,863 | 2013-09-24T05:49:00.000 | 2 | 0 | 1 | 0 | python-2.7,scikit-learn,portable-python | 20,853,862 | 4 | false | 0 | 0 | you can easily download SciKit executable, extract it with python, copy SciKit folder and content to c:\Portable Python 2.7.5.1\App\Lib\site-packages\ and you'll have SciKit in your portable python.
I just had this problem and solved this way. | 1 | 2 | 1 | While I am trying to install scikit-learn for my portable python, its saying " Python 2.7 is not found in the registry". In the next window, it does ask for an installation path but neither am I able to copy-paste the path nor write it manually. Otherwise please suggest some other alternative for portable python which has numpy, scipy and scikit-learn by default. Please note that I don't have administrative rights of the system so a portable version is preferred. | How to install scikit-learn for Portable Python? | 0.099668 | 0 | 0 | 1,485 |
18,976,073 | 2013-09-24T08:03:00.000 | 6 | 1 | 0 | 0 | python,unit-testing,naming-conventions | 27,770,151 | 1 | false | 0 | 0 | The way you are doing it is the cleanest approach - as long as there is a clear location where people would expect to find the tests for a module level function then I think you are good. The stylistic difference between the function name and the test class name - although an annoyance - isn't significant enough to worry about. | 1 | 13 | 0 | When writing tests I usually name the modules prefixed with test_ for example spam.py and test_spam.py. This makes finding the tests easy. When testing classes in a module I create a unittest.TestCase derivative with a similar class name, postfixed with Test. e.g. Spam becomes SpamTest (not TestSpam as this sounds like it is a test implementation of Spam). Then class functions are tested by test functions that are prefixed with test_ and postfixed with _testcondition or some other descriptive postfix. I find that this works brilliantly as the original object names are included.
The problem occurs when I want to test module level functions. Following my regular structure I would create a unittest.TestCase derivative with the same name as the function, postfixed with Test. The problem with this is that class names are camel cased and function names are lower cased with underscores to separate words. Ignoring the naming convention some_function becomes SomeFunctionTest. I cannot help but feeling that this is ugly.
What would be a better fit? What is common practice? Is there a 'standard' like pep8 for this? What do you use? | Python unit test naming convention for module functions | 1 | 0 | 0 | 5,209 |
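A minimal sketch of the convention being discussed, with hypothetical names: the CamelCase test class simply wraps the snake_case function name with a "Test" postfix.

```python
import unittest

def some_function(x):
    # Hypothetical module-level function under test
    return x * 2

class SomeFunctionTest(unittest.TestCase):
    # CamelCased from some_function, postfixed with "Test"
    def test_some_function_doubles_its_input(self):
        self.assertEqual(some_function(21), 42)
```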
18,977,699 | 2013-09-24T09:27:00.000 | 0 | 0 | 1 | 1 | python,windows,remote-access | 18,977,803 | 1 | false | 0 | 0 | use command line program psexec.
If you need to control the remote computer through Python, I recommend installing rpyc on the remote computer. On the rpyc website there is documentation on how to use it. | 1 | 0 | 0 | I have a Executable file in remote windows machines.How can I execute .exe file remotely using python? or How can I get access to remote windows command line?.Please help.
I have credentials of remote windows machines?
PS: All remote windows machines are in the same network | How to execute .exe file in a remote windows machine using python | 0 | 0 | 0 | 1,224
18,977,772 | 2013-09-24T09:31:00.000 | 0 | 0 | 0 | 0 | android,python,ios,unity3d | 18,981,659 | 5 | false | 1 | 0 | You can use .net platform as an backend server. | 2 | 0 | 0 | Im building this app (using Unity3d) for a city hall and I need to split the content from the actual app since content must be easily changeable without having to update the app itself.
I want to host the content on a server and use http get/post messages to retrieve the data. I also need to have a web editor (kinda like a CMS) so that the client can change the content himself.
In the editor I would just have a list of "rooms", where each "room" would be one of three types (i.e. text screen, slideshow or audio). Depending on what type the room is, different parameters should be visible and editable.
What language you suggest I write the server in? (the server that the app would contact in order to obtain the up-to-date content) Python i'm guessing here?
What would be the easiest way to build the browser editor? Javascript and django? | Developing a server to host a smartphone app content | 0 | 0 | 0 | 747 |
18,977,772 | 2013-09-24T09:31:00.000 | 0 | 0 | 0 | 0 | android,python,ios,unity3d | 18,977,988 | 5 | false | 1 | 0 | Android get connected easily with cloud server.I don't know about others. You can connect using JSON and PHP for this. | 2 | 0 | 0 | Im building this app (using Unity3d) for a city hall and I need to split the content from the actual app since content must be easily changeable without having to update the app itself.
I want to host the content on a server and use http get/post messages to retrieve the data. I also need to have a web editor (kinda like a CMS) so that the client can change the content himself.
In the editor I would just have a list of "rooms", where each "room" would be one of three types (i.e. text screen, slideshow or audio). Depending on what type the room is, different parameters should be visible and editable.
What language do you suggest I write the server in? (the server that the app would contact in order to obtain the up-to-date content) Python, I'm guessing?
What would be the easiest way to build the browser editor? Javascript and django? | Developing a server to host a smartphone app content | 0 | 0 | 0 | 747 |
18,981,428 | 2013-09-24T12:18:00.000 | 1 | 0 | 0 | 0 | python,multithreading,python-2.7,pyqt,pyqt4 | 18,983,482 | 1 | false | 0 | 1 | That depends. Your PC is idle 99.9995% of the time while you type; so it has a lot of CPU power to spend on background tasks. Most people don't notice this since the virus scanner typically eats 5-20% of the performance. But typing or clicking a button barely registers in the CPU load.
OTOH, if you run a long task in the UI thread, then the UI locks up until the task is finished. So from a user perspective, the UI will be blocking while for the serial port, the world will be OK. Overall, this will be faster but it will feel sluggish.
Multithreading is generally slower than doing everything in a single thread due to synchronization or locking. But a single thread doesn't scale. Eventually, you hit a brick wall (performance wise) and no trick will make that single thread execute faster. | 1 | 1 | 0 | My simple Python app uses PyQt4 for its GUI and clicking a QPushButton causes the app to send a message via serial. GUI elements also update frequently.
Question: I did not know how to implement multithreading. Will not having multithreaded process cause the app to be less responsive and less consistent in sending the serial communication especially when a GUI element will be updated at the same time the serial message is being sent? | Is Multi-threading required for PyQt4 and Writing to Serial | 0.197375 | 0 | 0 | 105 |
18,986,290 | 2013-09-24T15:46:00.000 | 0 | 0 | 0 | 0 | python,html | 18,988,074 | 2 | false | 1 | 0 | I suggest that instead of creating your own templating language (which is what this task amounts to), you use one of the many which already exist, and use that to perform the necessary operations. Try Jinja2, Django Templates, or Cheetah to see what you fancy. There are also many others. | 1 | 2 | 0 | Just to be clear this is not a scraping question.
I'm trying to automate some editing of similar HTML files. This involves removing content between tags.
When editing HTML files locally, is it easier to open() the file then dump the content line by line into a string so it's easier to apply a regular expression?
Thanks | Python - editing local HTML files - Should I edit all of the content as a one string or as an array line by line? | 0 | 0 | 0 | 173 |
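For the simple tag-stripping case, reading the whole file into one string is usually easier, because a single re.DOTALL pattern can then match across line breaks. This is a rough sketch; regexes on HTML are fragile, which is why a templating engine or real parser (as suggested above) is the safer route.

```python
import re

html = """<div>keep</div>
<script>
alert(1)
</script>
<p>keep too</p>"""

# Non-greedy match that spans newlines thanks to re.DOTALL
cleaned = re.sub(r"<script>.*?</script>", "", html, flags=re.DOTALL)
print(cleaned)
```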
18,986,354 | 2013-09-24T15:49:00.000 | 2 | 1 | 0 | 0 | python,django,email,python-2.7,gmail | 18,986,649 | 1 | true | 1 | 0 | Make sure messages on different subjects have a different 'From' address | 1 | 0 | 0 | For users of our Django app that receive their emails through gmail they are finding that emails are getting grouped into conversations that shouldn't be.
I'm not sure what gmail expects in an email to consider it unique enough to not group as a conversation, but when I send plain text emails with DIFFERENT subjects using send_mail, or even try a multipart/alternative with EmailMultiAlternatives with an html body, gmail still assumes they are part of the same conversation.
Obviously this creates confusion when our application sends emails with different subjects and bodies to the same user and they are all grouped and gmail only shows the subject of the first message in the conversation.
I have 100% confirmed by looking at the raw original email messages to make sure the emails are different subjects and bodies.
I just want to know if I can change anything in how django creates the email message so it can play nice with gmail conversations.
I am using python 2.7.4, and can replicate the "issue" with Django 1.4 and 1.5. | Django's send_mail messages get grouped into a gmail converation | 1.2 | 0 | 0 | 261 |
18,991,447 | 2013-09-24T20:29:00.000 | 0 | 0 | 0 | 0 | python,django,apache,wamp,mod-wsgi | 18,998,207 | 2 | false | 1 | 0 | You don't need Apache at all at this point. For development, things work much better if you use the built in development server, as described in the tutorial. | 1 | 0 | 0 | To make my question clear:
I have had wamp installed, and it brought Apache. Will this Apache be used by others like Django?
If the wamp Apache is enough for the others, its Apache is in the wamp directory C:\wamp\bin\apache, not something like C:\Program Files... Is it OK for Django?
If I have to install Apache manually for Django, will the steps be: install Apache, then install mod_wsgi?
Any help would be greatly appreciated | Trying to use Django, do I need to install Apache manually if Wamp already did | 0 | 0 | 0 | 429 |
18,992,186 | 2013-09-24T21:16:00.000 | 0 | 1 | 1 | 0 | python,multithreading | 18,992,283 | 1 | true | 0 | 0 | How many cores do you have?
How parallelizable is the process?
Is the problem CPU bound?
If you have several cores and it's parallelizable across them, you're likely to get a speed boost. The overhead for multithreading isn't nearly 100% unless implemented awfully, so that's a plus.
On the other hand, if the slow part is CPU bound it might be a lot more fruitful to look into a C extension or Cython. Both of those at times can give a 100× speedup (sometimes more, often less, depending on how numeric the code is) for much less effort than a 2× speed-up with naïve usage of multiprocessing. Obviously the 100× speedup is only for the translated code.
But, seriously, profile. Chances are there are low hanging fruit that are much easier to access than any of this. Try a line profiler (say, the one called line_profiler [also called kernprof]) and the builtin cProfile. | 1 | 0 | 0 | I have a python script which gathers 10,000's of 'people' from an API and then goes on to request two other APIs to gather further data about them and then save the information to a local database, it takes around 0.9 seconds per person.
So at the moment it will take a very long time to complete. Would multi-threading help to speed this up? I tried a multi-threading test locally and it was slower, but this test was just a simple function without any API interaction or anything web/disk related.
thanks | Should i use multi-threading? (retrieving mass data from APIs) | 1.2 | 0 | 0 | 89 |
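For what it's worth, an I/O-bound workload like this (0.9 s per person, mostly waiting on APIs) is exactly where threads do help, because the GIL is released during network waits. Below is a hedged sketch using concurrent.futures (stdlib in Python 3; available to 2.7 as the futures backport), with a stand-in function instead of real API calls:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_person(person_id):
    # Stand-in for the two follow-up API requests plus the DB write;
    # time.sleep simulates network latency
    time.sleep(0.01)
    return {"id": person_id, "ok": True}

start = time.time()
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(fetch_person, range(100)))
elapsed = time.time() - start

# 100 x 0.01s of simulated latency finishes in well under a second of
# wall time, because up to 20 "requests" overlap
print(len(results), "people in %.2fs" % elapsed)
```

A purely CPU-bound loop would see no such gain from threads; that is the case the earlier local test likely measured.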
18,992,369 | 2013-09-24T21:26:00.000 | 6 | 0 | 0 | 1 | python,django,celery,gevent | 18,992,940 | 1 | true | 1 | 0 | In short you do need a celery.
Even if you use gevent and have concurrency, the problem becomes request timeout. Lets say your task takes 10 minutes to run however the typical request timeout is about up to a minute. So what will happen if you trigger the task directly within a view is that the server will start processing it however after a minute a client (browser) will probably disconnect the connection since it will think the server is offline. As a result, your data can become corrupt since you cannot be guaranteed what will happen when connection will close. Celery solves this because it will trigger a background process which will process the task independent of the view. So the user will get the view response right away and at the same time the server will start processing the task. That is a correct pattern to handle any scenarios which require lots of processing. | 1 | 5 | 0 | I am working on a django web app that has functions (say for e.g. sync_files()) that take a long time to return. When I use gevent, my app does not block when sync_file() runs and other clients can connect and interact with the webapp just fine.
My goal is to have the webapp responsive to other clients and not block. I do not expect a zillion users to connect to my webapp (perhaps max 20 connections), and I do not want to set this up to become the next twitter. My app is running on a vps, so I need something light weight.
So in my case listed above, is it redundant to use celery when I am using gevent? Is there a specific advantage to using celery? I prefer not to use celery since it is yet another service that will be running on my machine.
edit: found out that celery can run the worker pool on gevent. I think I am a little more unsure about the relationship between gevent & celery. | Do I need celery when I am using gevent? | 1.2 | 0 | 0 | 2,856
18,993,366 | 2013-09-24T22:45:00.000 | 0 | 0 | 1 | 0 | python,python-2.7 | 18,993,416 | 6 | false | 0 | 0 | You could either :
perform successive divisions by 10 on the corresponding integer taken as if it was in base 10.
use strings operations to find the last 1 and take everything after it
use regular expressions to get the 0s at the end and count them
look for operations to convert to binary and perform successive divisions by two. | 1 | 1 | 0 | I have this string: 11000000101010000000010000000000
I would like to count the 0s starting at the back until I hit 1 and stop there, determining the total number of 0s at the end. In this particular case it would give me 10 as an answer.
Any help greatly appreciated. | How to count ending 0s in binary string | 0 | 0 | 0 | 139 |
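A hedged sketch of the string approach from the list above: strip the trailing 0s and compare lengths.

```python
def trailing_zeros(bits):
    # Number of 0s after the last 1 (or the whole string if there is no 1)
    return len(bits) - len(bits.rstrip('0'))

print(trailing_zeros("11000000101010000000010000000000"))  # 10
```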
18,993,675 | 2013-09-24T23:15:00.000 | 0 | 1 | 0 | 0 | python,node.js,hmac,digest | 18,993,775 | 1 | true | 0 | 0 | Checked the node manuals as well. It looks correct to me. What about the ; in the end of the chain? | 1 | 0 | 0 | I want to generate a signature in Node.js. Here is a python example:
signature = hmac.new(SECRET, msg=message, digestmod=hashlib.sha256).hexdigest().upper()
I have this:
signature = crypto.createHmac('sha256', SECRET).update(message).digest('hex').toUpperCase()
What am I doing wrong? | Nodejs equivalent of Python HMAC signature? | 1.2 | 0 | 1 | 1,076 |
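For reference, the Python side can be reproduced standalone like this (the key and message are made up); the hex digest from Node's crypto.createHmac('sha256', SECRET).update(message).digest('hex') should match this value exactly, apart from the case conversion:

```python
import hashlib
import hmac

SECRET = b"my-secret-key"   # hypothetical key
message = b"hello world"    # hypothetical message

signature = hmac.new(SECRET, msg=message,
                     digestmod=hashlib.sha256).hexdigest().upper()
print(signature)  # 64 uppercase hex characters
```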
18,994,731 | 2013-09-25T01:24:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,import,scipy,cx-freeze | 71,558,533 | 3 | false | 0 | 0 | I had the same trouble, but nonlin was imported in "/scipy/optimize/init.py" file. It is marked as "# Deprecated namespaces, to be removed in v2.0.0".
You can just comment out the line in that file where import nonlin is. It worked for me. | 1 | 3 | 0 | I am creating a Windows EXE using cx_freeze, Python3 and the Scipy installation from lfd.uci.edu. Upon running the exe, I receive the error: ImportError: cannot import name nonlin.
The Scipy file line this references, in site-packages\scipy\optimize\_root.py: from . import nonlin.
I can load a console with Python, and successfully run import scipy.optimize.nonlin.
Adding scipy.optimize.nonlin to my setup.py includes doesn't solve the problem.
nonlin.py is located in the optimize directory in my scipy install, and its corresponding location as a compiled file in the library file cx_freeze generates. | Scipy in frozen Python: Cannot import name nonlin | 0.066568 | 0 | 0 | 458 |
18,994,787 | 2013-09-25T01:33:00.000 | 0 | 0 | 0 | 0 | python-2.7,scipy,svmlight | 44,480,375 | 2 | false | 0 | 0 | I also met this problem when I assigned numbers to a matrix.
like this:
Qmatrix[list2[0], list2[j]] = 1
the component may be a non-integer number, so I changed to this:
Qmatrix[int(list2[0]), int(list2[j])] = 1
and the warning removed | 1 | 12 | 1 | I'm running python 2.7.5 with scikit_learn-0.14 on my Mac OSX Mountain Lion.
Everything I run a svmlight command however, I get the following warning:
DeprecationWarning: using a non-integer number instead of an integer will result in an error >in the future | Python Svmlight Error: DeprecationWarning: using a non-integer number instead of an integer will result in an error in the future | 0 | 0 | 0 | 34,473 |
18,995,966 | 2013-09-25T04:00:00.000 | 0 | 0 | 0 | 0 | python,mongodb,pymongo | 18,998,582 | 2 | false | 0 | 0 | You would not end up with duplicate documents due to the operator you are using. You are actually using an atomic operator to update.
Atomic (not to be confused with SQL atomic operations of all or nothing here) operations are done in sequence so each process will never pick up a stale document or be allowed to write two ids to the same array since the document each $set operation picks up will have the result of the last $set.
The fact that you did get duplicate documents most likely means you have an error in your code. | 2 | 0 | 0 | in my program , ten process to write mongodb by update(key, doc, upsert=true)
the "key" is mongodb index, but is not unique.
query = {'hotelid':hotelid,"arrivedate":arrivedate,"leavedate":leavedate}
where = "data.%s" % sourceid
data_value_where = {where:value}
self.collection.update(query,{'$set':data_value_where},True)
the "query" id the not unique index
I found sometimes the update not update exists data, but create a new data.
I write a log for update method return, the return is " {u'ok': 1.0, u'err': None, u'upserted': ObjectId('5245378b4b184fbbbea3f790'), u'singleShard': u'rs1/192.168.0.21:10000,192.168.1.191:10000,192.168.1.192:10000,192.168.1.41:10000,192.168.1.113:10000', u'connectionId': 1894107, u'n': 1, u'updatedExisting': False, u'lastOp': 5928205554643107852L}"
I modify the update method to update(query, {'$set':data_value_where},upsert=True, safe=True), but three is no change for this question. | mongodb update(use upsert=true) not update exists data, insert a new data? | 0 | 1 | 0 | 820 |
18,995,966 | 2013-09-25T04:00:00.000 | 0 | 0 | 0 | 0 | python,mongodb,pymongo | 18,996,136 | 2 | false | 0 | 0 | You can call it "threadsafe", as the update itself is not done in Python, it's in the mongodb, which is built to cater many requests at once.
So in summary: You can safely do that. | 2 | 0 | 0 | in my program , ten process to write mongodb by update(key, doc, upsert=true)
the "key" is mongodb index, but is not unique.
query = {'hotelid':hotelid,"arrivedate":arrivedate,"leavedate":leavedate}
where = "data.%s" % sourceid
data_value_where = {where:value}
self.collection.update(query,{'$set':data_value_where},True)
the "query" id the not unique index
I found sometimes the update not update exists data, but create a new data.
I write a log for update method return, the return is " {u'ok': 1.0, u'err': None, u'upserted': ObjectId('5245378b4b184fbbbea3f790'), u'singleShard': u'rs1/192.168.0.21:10000,192.168.1.191:10000,192.168.1.192:10000,192.168.1.41:10000,192.168.1.113:10000', u'connectionId': 1894107, u'n': 1, u'updatedExisting': False, u'lastOp': 5928205554643107852L}"
I modify the update method to update(query, {'$set':data_value_where},upsert=True, safe=True), but three is no change for this question. | mongodb update(use upsert=true) not update exists data, insert a new data? | 0 | 1 | 0 | 820 |
18,997,814 | 2013-09-25T06:32:00.000 | 1 | 0 | 0 | 0 | python,django,python-2.7 | 18,998,172 | 1 | true | 1 | 0 | Set DEBUG = True in your settings.py and start your server with python manage.py runserver --traceback | 1 | 0 | 0 | Django is not giving me detailed information about the error.
For example, when I get an ImportError, I cannot see where the error is coming from (which file, which line). It only gives me ImportError: cannot import name ___. But that is not enough to find where the error is. It is not only about ImportError; many errors are given to me with a lack of detail like that. I am really tired of searching for where the error is, and it takes my time.
Is there a way to make it give me more information about errors in Django?
I am using python==2.7 and django==1.5.3. | Django lack detail of error | 1.2 | 0 | 0 | 42 |
19,003,111 | 2013-09-25T11:00:00.000 | 2 | 0 | 0 | 0 | python,api | 19,003,218 | 2 | false | 0 | 0 | I won't give you the solution. But you should:
Write and test a regular expression that breaks the line down into its parts, or use the CSV library.
Parse the numbers out so they're decimal numbers rather than strings
Collect the lines up by ID. Perhaps you could use a dict that maps IDs to lists of orders?
When all the input is finished, iterate over that dict and add up all orders stored in that list.
Make a string format function that outputs the line in the expected format.
Maybe feed the output back into the input to test that you get the same result. Second time round there should be no changes, if I understood the problem.
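The steps above can be sketched like this (a rough sketch, not a drop-in solution: it uses the csv module, assumes the PO is the 4th field and the cost the 6th as in the sample data, and leaves out the COMMENT lines for brevity):

```python
import csv
import io
from collections import OrderedDict

raw = """\
5090071648,2013-06-05,2013-09-05,P000001,1133997,223.010,20,2013-09-10,104,xxxxxx,AP
301067,2013-09-06,2013-09-11,P000002,1133919,42.000,20,2013-10-31,103,xxxxxx,AP
301067,2013-09-06,2013-09-11,P000002,1133919,359.400,20,2013-10-31,103,xxxxxx,AP
301067,2013-09-06,2013-09-11,P000003,1133910,23.690,20,2013-10-31,103,xxxxxx,AP
"""

orders = OrderedDict()  # PO number -> first row seen for it, with costs summed
for row in csv.reader(io.StringIO(raw)):
    po, cost = row[3], float(row[5])
    if po in orders:
        # same PO seen before: add this line's cost onto the stored one
        orders[po][5] = "%.3f" % (float(orders[po][5]) + cost)
    else:
        orders[po] = row

for row in orders.values():
    print(",".join(row))
```

Feeding the flattened output back through leaves it unchanged, which matches the self-test idea above.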
Good luck! | 1 | 0 | 0 | To start, I am a complete newcomer to Python and to programming anything other than web languages.
So, I have developed a script using Python as an interface between a piece of software called Spendmap and an online app called FreeAgent. This script works perfectly. It imports and parses the text file and pushes it through the API to the web app.
What I am struggling with is that Spendmap exports multiple lines per order, whereas FreeAgent wants one line per order. So I need to add the cost values from any orders spread across multiple lines and then 'flatten' the lines into one so it can be sent through the API. The 'key' field is the 'PO' field. So if the script sees any matching PO numbers, I want it to flatten them as per above.
This is a 'dummy' example of the text file produced by Spendmap:
5090071648,2013-06-05,2013-09-05,P000001,1133997,223.010,20,2013-09-10,104,xxxxxx,AP
COMMENT,002091
301067,2013-09-06,2013-09-11,P000002,1133919,42.000,20,2013-10-31,103,xxxxxx,AP
COMMENT,002143
301067,2013-09-06,2013-09-11,P000002,1133919,359.400,20,2013-10-31,103,xxxxxx,AP
COMMENT,002143
301067,2013-09-06,2013-09-11,P000003,1133910,23.690,20,2013-10-31,103,xxxxxx,AP
COMMENT,002143
The above has been formatted for easier reading and normally is just one line after the next with no text formatting.
The 'key' or PO field is the fourth field (shown in bold in the original post, e.g. P000001), and the sixth field is the cost to be totalled. So if this example were passed through the script, I'd expect the first row to be left alone, the second and third rows' costs to be added (as they're both from the same PO number), and the fourth line to be left alone.
Expected result:
5090071648,2013-06-05,2013-09-05,P000001,1133997,223.010,20,2013-09-10,104,xxxxxx,AP
COMMENT,002091
301067,2013-09-06,2013-09-11,P000002,1133919,401.400,20,2013-10-31,103,xxxxxx,AP
COMMENT,002143
301067,2013-09-06,2013-09-11,P000003,1133910,23.690,20,2013-10-31,103,xxxxxx,AP
COMMENT,002143
Any help with this would be greatly appreciated and if you need any further details just say.
Thanks in advance for looking! | How to 'flatten' lines from text file if they meet certain criteria using Python? | 0.197375 | 0 | 0 | 584 |
19,009,190 | 2013-09-25T15:34:00.000 | 4 | 0 | 0 | 0 | python,quickfix | 19,012,104 | 1 | true | 1 | 0 | I would comment with this, but I don't have enough reputation. So: I'm not sure why you can't log into your server, but are you sure that you don't have delimiters? Because if you're using \x01 as the delimiter in FIX, the tag-value pairs will usually just be displayed as "all mashed together," but a hex dump of the message reveals otherwise (coming from personal experience).
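To illustrate with a hand-written fragment (not real quickfix output): printed normally, SOH-delimited FIX looks mashed together, but repr() or a hex dump makes the \x01 delimiters visible:

```python
# A hand-made FIX-style fragment: tag=value pairs separated by SOH (\x01).
msg = "8=FIX.4.2\x019=12\x0135=A\x0110=031\x01"

print(msg)         # looks "all mashed together" in most terminals
print(repr(msg))   # the \x01 delimiters become visible
print(" ".join("%02x" % ord(c) for c in msg))  # hex dump: look for the 01 bytes
```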
Also, you might be getting downvoted because you haven't provided much context. If you provided the relevant bit of code or what your FIX output looks like, that might help. | 1 | 0 | 0 | I am using quickfix, compiled from source on a Linux box, set up to use the Python headers. Everything 'seems' fine when I run my code, but I can't log on to my FIX server, and I noticed that the messages I'm sending have no field/tag delimiters; all the fields and values are just mashed together...
What might be causing this? Am I missing some setup in 'FIX_Settings.txt'?
Thanks! | No field delimiter in outgoing FIX messages? | 1.2 | 0 | 0 | 584 |
19,011,091 | 2013-09-25T17:06:00.000 | 0 | 0 | 1 | 0 | python,saml,saml-2.0 | 19,993,471 | 2 | false | 0 | 0 | Someone would need to create a pypi package containing a xmlsec1 binary.
Such a package doesn't exist yet because:
it's quite unnatural - xmlsec1 is a C application, not a Python lib
it's hard - it has to be cross-platform, which is more hassle in C apps than in Python
Python bindings would have to be written around xmlsec1 for the package to be at least somewhat relevant to PyPI.
It shouldn't be impossible, and I'd love to be able to type "pip install xmlsec1" and see it do all the hard work. Unfortunately, so far no one has bothered implementing it. | 1 | 1 | 0 | Is it possible to automatically install the xmlsec1 requirement of PySAML2 using pip?
The current project requires many packages and all are installed using pip and a requirements.txt file. I am now starting a SAML SSO implementation and need to install PySAML2. However, all the docs state that xmlsec1 needs to be installed as a requirement, and the pip install did not install it.
Is it possible to install xmlsec1 using pip? I see that PIL and pycrypto can successfully install external libs, so I am wondering as to why xmlsec1 cannot be installed using pip as part of PySAML2 dependencies. | Installing pysaml2 with pip - the xmlsec1 requirement | 0 | 0 | 1 | 3,036 |
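Until such a package exists, xmlsec1 has to come from the OS package manager, but you can at least verify it is present before installing PySAML2; a small sketch (shutil.which needs Python 3.3+):

```python
import shutil

def has_binary(name):
    """Return the full path of an executable found on PATH, or None."""
    return shutil.which(name)

path = has_binary("xmlsec1")
if path is None:
    print("xmlsec1 not found; install it via the OS package manager, "
          "e.g. apt-get install xmlsec1 or brew install libxmlsec1")
else:
    print("xmlsec1 found at", path)
```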
19,011,517 | 2013-09-25T17:33:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,ftp,amazon-ec2,ftplib | 19,076,508 | 2 | false | 0 | 0 | If you're still having trouble could you try ruling out Amazon firewall problems. (I'm assuming you're not using a host based firewall.)
If your EC2 instance is in a VPC then in the AWS Management Console could you:
ensure you have an internet gateway
ensure that the subnet your EC2 instance is in has a default route (0.0.0.0/0) configured pointing at the internet gateway
in the Security Group for both inbound and outbound allow All Traffic from all sources (0.0.0.0/0)
in the Network ACLs for both inbound and outbound allow All Traffic from all sources (0.0.0.0/0)
If your EC2 instance is NOT in a VPC then in the AWS Management Console could you:
in the Security Group for inbound allow All Traffic from all sources (0.0.0.0/0)
Only do this in a test environment! (obviously)
This will open your EC2 instance up to all traffic from the internet. Hopefully you'll find that your FTPS is now working. Then you can gradually reapply the security rules until you find the cause of the problem. If it's still not working, then the AWS firewall is not the cause of the problem (or you have more than one problem). | 1 | 1 | 0 | I'm running Python 2.6.5 on EC2 and I've replaced the old ftplib with the newer one from Python 2.7 that allows importing FTP_TLS. Yet the following hangs up on me:
from ftplib import FTP_TLS
ftp = FTP_TLS('host', 'username', 'password')
ftp.retrlines('LIST') (Times out after 15-20 min)
I'm able to run these three lines successfully in a matter of seconds on my local machine, but it fails on ec2. Any idea as to why this is?
Thanks. | EC2 fails to connect via FTPS, but works locally | 0 | 0 | 1 | 659 |
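Two client-side tweaks help narrow this down (these are debugging assumptions, not part of the original post): a short timeout so the hang fails fast, and ftplib's debug output to see which command stalls. With FTPS, LIST opens a separate data connection, so the passive port range has to be reachable too, not just port 21:

```python
from ftplib import FTP_TLS

ftp = FTP_TLS(timeout=10)   # fail after 10 seconds instead of hanging
ftp.set_debuglevel(2)       # echo every FTP command and response

# Connection details below are placeholders for the real server:
# ftp.connect("host", 21)
# ftp.login("username", "password")
# ftp.prot_p()              # switch the data channel to TLS before LIST
# ftp.retrlines("LIST")
```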
19,012,700 | 2013-09-25T18:40:00.000 | 0 | 0 | 1 | 0 | python | 19,013,035 | 5 | false | 0 | 0 | When I'm not near my own PC, I use ideone.com. I like that it is a universal IDE, which for me means both C++ and Python. | 2 | 1 | 0 | I want to teach some students basic Python programming without having to teach them how to use the terminal. Recently, I was teaching a 2 hour intro session, and not only did teaching terminal stuff take a long time, but it also intimidated a lot of the students. I would like a solution where students wouldn't have to leave graphical user interfaces that they were comfortable with.
I also want the solution to let them execute a particular Python file (eg, not just using the interactive Python interpreter) and see the output from printing things.
Thanks! | How to Run Python without a Terminal | 0 | 0 | 0 | 289 |
19,012,700 | 2013-09-25T18:40:00.000 | 8 | 0 | 1 | 0 | python | 19,012,735 | 5 | true | 0 | 0 | Surely that's what IDLE is for? It's not much good as an IDE, but it does work well for exactly what you describe - opening modules and executing them, and running commands in an interactive shell. | 2 | 1 | 0 | I want to teach some students basic Python programming without having to teach them how to use the terminal. Recently, I was teaching a 2 hour intro session, and not only did teaching terminal stuff take a long time, but it also intimidated a lot of the students. I would like a solution where students wouldn't have to leave graphical user interfaces that they were comfortable with.
I also want the solution to let them execute a particular Python file (eg, not just using the interactive Python interpreter) and see the output from printing things.
Thanks! | How to Run Python without a Terminal | 1.2 | 0 | 0 | 289 |
19,021,174 | 2013-09-26T06:44:00.000 | 4 | 0 | 1 | 0 | python,csv | 19,021,269 | 2 | true | 0 | 0 | Yes, you need to read through the whole file before knowing how many lines are in it, although you don't have to hold it all in memory at once.
Just think of the file as one long string: Aaaaabbbbbbbcccccccc\ndddddd\neeeeee\n
To know how many 'lines' are in the string, you need to find how many \n characters are in it.
If you want an approximate number, what you can do is read a few lines (~20), see how many characters there are per line, and then derive an estimate from the file's total size (available via os.stat or os.path.getsize). | 1 | 4 | 0 | Does there exist a way of finding the number of lines in a csv file without actually loading the whole file in memory (in Python)?
I'd expect there to be some special optimized function for it. All I can imagine now is reading it line by line and counting the lines, but that kind of kills the whole point, since I only need the number of lines, not the actual content. | Find number of lines in csv without reading it | 1.2 | 0 | 0 | 8,245
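Both ideas from the accepted answer can be sketched as follows: an exact count that streams the file in chunks (the whole file is read, but never held in memory at once), and the size-based estimate; the demo file is a throwaway temp file:

```python
import os
import tempfile

def count_lines(path, chunk_size=1 << 20):
    """Exact count: stream the file in chunks, counting newline bytes."""
    count = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            count += chunk.count(b"\n")
    return count

def estimate_lines(path, sample_lines=20):
    """Rough estimate: average length of the first lines vs. total file size."""
    with open(path, "rb") as f:
        sample = [line for line in (f.readline() for _ in range(sample_lines)) if line]
    if not sample:
        return 0
    avg_len = sum(len(line) for line in sample) / float(len(sample))
    return int(os.path.getsize(path) / avg_len)

# Demo on a small throwaway CSV: 100 identical rows.
with tempfile.NamedTemporaryFile("wb", suffix=".csv", delete=False) as tmp:
    tmp.write(b"a,b,c\n" * 100)

exact = count_lines(tmp.name)
approx = estimate_lines(tmp.name)
print(exact, approx)   # the estimate happens to be exact here: uniform rows
os.remove(tmp.name)
```

Note that a file without a trailing newline will report one fewer line with the exact counter; whether that matters depends on the data.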
19,023,238 | 2013-09-26T08:35:00.000 | 1 | 0 | 1 | 1 | python,windows | 51,057,133 | 2 | false | 0 | 0 | Windows case insensitivity is a pain. Why would they do that? You can understand why searches should be case insensitive, but in most cases defined content should keep the exact value. Why? Well from experience it causes so many problems. I've never come across an issue where I've thought, "oh why wasn't that uppercased or lowercased?".
From a Python point of view, why would they do that? Windows stores the key case sensitively, I'm guessing it is only some functions that get the value in a case insensitive manner, because I know for a fact that not all access functions do. I think MKS can tell the difference.
Don't force platform-specific behaviour (and be careful with forcing other behaviour) in an interface. Provide an alternative method to force case insensitivity, if required. | 1 | 6 | 0 | Is there any reason why os.environ contains all environment variables uppercased on Windows? I don't understand why (only on Windows) it doesn't load them using the same case as they are defined.
Is there an equivalent implementation of os.environ that loads the environment variable information without modifying it on Windows?
thanks | Why python uppercases all environment variables in windows | 0.099668 | 0 | 0 | 2,886 |
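The behaviour is easy to observe (a sketch; the variable name is made up for the demo, and the Windows branch reflects how os.environ is documented to behave there, since this snippet has to run anywhere):

```python
import os

os.environ["MixedCase_Demo"] = "1"   # hypothetical variable, set just for the demo

if os.name == "nt":
    # On Windows os.environ upper-cases keys, so lookups behave
    # case-insensitively and the key is stored as "MIXEDCASE_DEMO".
    found = "MIXEDCASE_DEMO" in os.environ
else:
    # On POSIX the exact case is preserved.
    found = "MixedCase_Demo" in os.environ

print(found)   # True on both platforms, via different spellings
```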
19,025,952 | 2013-09-26T10:39:00.000 | 3 | 0 | 0 | 0 | python,redis,cassandra,cql,cqlengine | 19,033,019 | 1 | true | 0 | 0 | Instead of serializing your dictionaries into strings and storing them in a Redis LIST (which is what it sounds like you are proposing), you can store each dict as a Redis HASH. This should work well if your dicts are relatively simple key/value pairs. After creating each HASH you could add the key for the HASH to a LIST, which would provide you with an index of keys for the hashes. The benefits of this approach could be avoiding or lessening the amount of serialization needed, and may make it easier to use the data set in other applications and from other languages.
There are of course many other approaches you can take and that will depend on lots of factors related to what kind of data you are dealing with and how you plan to use it.
If you do go with serialization, you might want to at least consider a more language-agnostic serialization format, like JSON, BSON, YAML, or one of the many others. | 1 | 4 | 0 | I am considering serializing a big set of database records for caching in Redis, using Python and Cassandra. I can either serialize each record and persist it as a string in Redis, or create a dictionary for each record and persist a list of dictionaries in Redis.
Which way is faster: pickling each record, or creating a dictionary for each record?
And second: is there any method to fetch from the database as a list of dicts (instead of a list of model objects)? | Python - Redis : Best practice serializing objects for storage in Redis | 1.2 | 1 | 0 | 2,722
19,027,087 | 2013-09-26T11:33:00.000 | 0 | 0 | 0 | 1 | django,python-2.7,celery | 25,423,925 | 2 | false | 1 | 0 | Try installing python-dev. This is a common error when Python doesn't find the dependencies. | 2 | 4 | 0 | I am a beginner in Django Celery, so I am unaware of its deeper concepts. I have installed all the required packages like Celery and RabbitMQ, and set the permissions as well. After going through the Celery documentation I have written my code, but when I fire the command
./manage.py celery worker -c 2
I am getting--
ImportError: No module named tasks.
All the changes in settings.py are already done, and in tasks.py I am importing--
from celery.task import task.
I am not able to overcome this error.
thanks.. | django celery: Import error - no module named task | 0 | 0 | 0 | 2,154 |
19,027,087 | 2013-09-26T11:33:00.000 | 0 | 0 | 0 | 1 | django,python-2.7,celery | 19,028,425 | 2 | false | 1 | 0 | Try running ./manage.py startapp sitetasks, put your tasks.py inside the new app directory (sitetasks/), and then add sitetasks to your INSTALLED_APPS in settings.py.
Does that help? | 2 | 4 | 0 | I am a beginner in Django Celery, so I am unaware of its deeper concepts. I have installed all the required packages like Celery and RabbitMQ, and set the permissions as well. After going through the Celery documentation I have written my code, but when I fire the command
./manage.py celery worker -c 2
I am getting--
ImportError: No module named tasks.
All the changes in settings.py are already done, and in tasks.py I am importing--
from celery.task import task.
I am not able to overcome this error.
thanks.. | django celery: Import error - no module named task | 0 | 0 | 0 | 2,154 |
19,029,333 | 2013-09-26T13:14:00.000 | 0 | 0 | 1 | 0 | python,macos,numpy,installation,anaconda | 38,722,056 | 4 | false | 0 | 0 | I don't think the existing answer answers your specific question (about installing packages within Anaconda). When I install a new package via conda install <PACKAGE>, I then run conda list to ensure the package is now within my list of Anaconda packages. | 2 | 21 | 1 | I'm completely new to Python and want to use it for data analysis. I just installed Python 2.7 on my mac running OSX 10.8. I need the NumPy, SciPy, matplotlib and csv packages. I read that I could simply install the Anaconda package and get all in one. So I went ahead and downloaded/installed Anaconda 1.7.
However, when I type in:
import numpy as np
I get an error telling me that there is no such module. I assume this has to do with the location of the installation, but I can't figure out how to:
A. Check that everything is actually installed properly
B. Check the location of the installation.
Any pointers would be greatly appreciated!
Thanks | How to check that the anaconda package was properly installed | 0 | 0 | 0 | 69,121 |
19,029,333 | 2013-09-26T13:14:00.000 | 1 | 0 | 1 | 0 | python,macos,numpy,installation,anaconda | 41,600,022 | 4 | false | 0 | 0 | Though the question is not about a Windows environment, FYI for Windows: in order to use Anaconda modules outside Spyder or in the cmd prompt, try updating PYTHONPATH & PATH with C:\Users\username\Anaconda3\lib\site-packages.
Finally, restart the command prompt.
Additionally, Sublime has a plugin called 'anaconda' which can be used to make Sublime work with Anaconda modules. | 2 | 21 | 1 | I'm completely new to Python and want to use it for data analysis. I just installed Python 2.7 on my mac running OSX 10.8. I need the NumPy, SciPy, matplotlib and csv packages. I read that I could simply install the Anaconda package and get them all in one. So I went ahead and downloaded/installed Anaconda 1.7.
However, when I type in:
import numpy as np
I get an error telling me that there is no such module. I assume this has to do with the location of the installation, but I can't figure out how to:
A. Check that everything is actually installed properly
B. Check the location of the installation.
Any pointers would be greatly appreciated!
Thanks | How to check that the anaconda package was properly installed | 0.049958 | 0 | 0 | 69,121 |
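Both parts of the question ("is it installed" and "where does it live") can also be answered from inside Python itself; a sketch using only the standard library, demonstrated on the json module since numpy may be missing in the interpreter you happen to run:

```python
import importlib.util
import sys

def where_is(module_name):
    """Report whether a module is importable by this interpreter, and from where."""
    spec = importlib.util.find_spec(module_name)
    if spec is None:
        return "%s is NOT importable by %s" % (module_name, sys.executable)
    return "%s found at %s" % (module_name, spec.origin)

print(sys.executable)    # which Python is actually running (part A)
print(where_is("json"))  # swap in "numpy" to check the Anaconda install (part B)
```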
19,030,579 | 2013-09-26T14:08:00.000 | 2 | 0 | 0 | 0 | python,python-3.x,tkinter,python-3.3,importerror | 61,686,010 | 2 | false | 0 | 1 | ImportError: No module named 'Tkinter'
In Python 3, Tkinter was renamed to tkinter.
Try import tkinter as tk.
Hope it helps! | 1 | 8 | 0 | I'm running Windows 7 32-bit. I've installed Python 3.2.2 and selected every module for installation (including Tcl/Tk). On my computer, I can run a script by double-clicking the .py file and it will find my Tkinter import just fine. If I run it from a command line, it says ImportError: No module named 'Tkinter'. I passed this script on to a coworker who also installed the same way, and she can't run the script at all even with double-clicking. Same Tkinter problem. Our PATHs are identical with C:\Python33 being the first item and tkinter shows in the lib folder. I'm running out of ideas. What's going on? Why is Tkinter so finicky with existing?
Update:
Apparently Tcl/Tk do not include Tkinter. The reason it worked for me was that I had installed a special Python package via our company's download system that happened to include it. This version was linked to .py extensions. In the command prompt, however, my updated Python (with Tcl/Tk but without Tkinter) was the Python of choice as selected by my PATH variable. My coworker did not have this special package installed, so it did not work for her. I had thought it was my Python 3.3 that was running the script, but it was not, which is why it seemed like it worked for me. That said, if anyone else runs into this issue, check out sys.executable and sys.version as indicated below to figure out just what is going on! | Tkinter Not Found | 0.197375 | 0 | 0 | 31,781
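A version-agnostic import guard makes a script survive both spellings (a sketch; the actual import is wrapped in try/except because, as the question shows, tkinter can legitimately be absent from a given Python build):

```python
import sys

# Python 3 renamed the module: Tkinter -> tkinter.
module_name = "tkinter" if sys.version_info[0] >= 3 else "Tkinter"

try:
    tk = __import__(module_name)
    print("GUI toolkit available:", module_name)
except ImportError:
    # e.g. a Python build without Tcl/Tk support, as described above
    print("%s is not available in %s" % (module_name, sys.executable))
```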