Dataset schema (column: type, observed range):
Title: string, lengths 11 to 150
A_Id: int64, 518 to 72.5M
Users Score: int64, -42 to 283
Q_Score: int64, 0 to 1.39k
ViewCount: int64, 17 to 1.71M
Database and SQL: int64, 0 to 1
Tags: string, lengths 6 to 105
Answer: string, lengths 14 to 4.78k
GUI and Desktop Applications: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Networking and APIs: int64, 0 to 1
Other: int64, 0 to 1
CreationDate: string, lengths 23 to 23
AnswerCount: int64, 1 to 55
Score: float64, -1 to 1.2
is_accepted: bool, 2 classes
Q_Id: int64, 469 to 42.4M
Python Basics and Environment: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Web Development: int64, 1 to 1
Available Count: int64, 1 to 15
Question: string, lengths 17 to 21k
Why would the makemessages function for Django language localization ignore html files?
| 7,056,068 | 3 | 6 | 3,604 | 0 |
python,django,localization,internationalization
|
Your templates folder needs to be either inside an app listed in INSTALLED_APPS or in a directory listed in TEMPLATE_DIRS in your settings.py file.
| 0 | 0 | 0 | 0 |
2011-08-14T00:04:00.000
| 2 | 1.2 | true | 7,054,082 | 0 | 0 | 1 | 1 |
I am trying to run the Django language localization on a project, but makemessages always ignores the html templates in my templates folder.
I'm running python manage.py makemessages -a from the project root, and all of the strings that are marked for translation inside .py files anywhere in the project are successfully added to the .po file.
All of the strings in the html templates, e.g. {{ trans "String_to_translate" }}, are ignored and not added to the .po file, even though the necessary module is loaded at the top of the template with {% load i18n %}.
To test the possibility that the whole template folder was excluded from the makemessages function, I made a .py file and included a string for translation there, and it was successfully added to the .po file.
With all of that being said, does anyone know what could possibly be causing this problem?
Thanks in advance for your help!
EDIT: The solution was simply changing the syntax from {{ trans "string" }} to {% trans "string" %}.
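For reference, the two syntaxes side by side; makemessages only extracts strings from translation template tags, not from variable output:

```django
{{ trans "String_to_translate" }}   {# variable output: ignored by makemessages #}
{% trans "String_to_translate" %}   {# translation tag: extracted into the .po file #}
```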
|
Django Binary or BLOB model field
| 7,108,043 | 1 | 10 | 25,605 | 0 |
python,mysql,django,blob,django-blob
|
I have been dealing with this same issue: writing a PDF to a MEDIUMBLOB field in MySQL and retrieving it via Django. I set the MySQL field type to MEDIUMBLOB and the Django field type to TextField. I used a queryset and HttpResponse to view the PDF objects in a browser (but not directly in Django).
| 0 | 0 | 0 | 0 |
2011-08-15T11:03:00.000
| 2 | 1.2 | true | 7,064,197 | 0 | 0 | 1 | 1 |
I have a C# program that inserts a pdf inside a MySQL database. Now I want to retrieve that pdf via django but django's models.FileField needs an "Upload To" parameter which means behind the scenes it actually stores the File on the file system rather than in the database. Is there any way I can set up a django model so that I can store the pdf directly inside MySQL?
Regards
|
Why absolute paths to templates and css in django ? (Isn't that a bad practice ?)
| 7,064,785 | 1 | 0 | 612 | 0 |
python,django
|
Could you post a link to that piece of documentation, please?
In Django you configure, in settings.py, the search path for templates (through the TEMPLATE_DIRS variable). Then, inside a view, you render a template by naming its file relative to one of the paths included in TEMPLATE_DIRS. That way, whenever you move your template dir, you just need to modify your settings.py.
As for static files, like CSS documents, Django does not need to know anything about them (unless you are serving static files through Django itself, which is discouraged by Django's documentation): you only need to tell your web server where to find them.
| 0 | 0 | 0 | 0 |
2011-08-15T11:51:00.000
| 2 | 0.099668 | false | 7,064,564 | 0 | 0 | 1 | 1 |
In django, the documentation asks to use absolute paths and not relative paths.
How, then, do they manage portability?
If I have my template in the project folder, even a rename of the folder will cause breakage!
So what is the reason behind this practice?
Please explain.
|
Should I use orbited or gevent for integrating comet functionality into a django app
| 11,276,439 | 1 | 5 | 1,434 | 0 |
python,django,postgresql,comet,gevent
|
Instead of Apache + X-Sendfile you could use Nginx + X-Accel-Redirect. That way you can run a gevent/wsgi/django server behind Nginx with views that provide long-polling. No need for a separate websockets server.
I've used both Apache + X-Sendfile and Nginx + X-Accel-Redirect to serve (access-protected) content on Webfaction without any problems.
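A minimal sketch of the Nginx side of this setup (paths and port are hypothetical): the application location proxies to the gevent/Django backend, while the internal location serves files that a view points at with an X-Accel-Redirect response header.

```nginx
location / {
    # pass requests (including long-polling views) to the backend
    proxy_pass http://127.0.0.1:8000;
}

location /protected/ {
    # only reachable via an X-Accel-Redirect header sent by the
    # backend, never directly by clients
    internal;
    alias /srv/site/media/;
}
```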
| 0 | 1 | 0 | 0 |
2011-08-15T13:08:00.000
| 2 | 0.099668 | false | 7,065,283 | 0 | 0 | 1 | 1 |
I have been working with Django for some time now and have written several apps on a setup that uses Apache 2 mod_wsgi and a PostgreSQL database on ubuntu.
I have an app that uses X-Sendfile to serve files from Apache via a Django view, and also allows users to upload files via a form. All this is working great, but I now want to ramp up the features (and the complexity, I am sure) by allowing users to chat and to see when new files have been uploaded without refreshing their browser.
As I want this to be scale-able, I don't want to poll continually with AJAX as this is going to get very heavy with large numbers of users.
I have read more posts, sites and blogs then I can count on integrating comet functionality into a Django app but there are so many different opinions out there on how to do this that I am now completely confused.
Should I be using orbited, gevent, iosocket?
Where does Tornado fit into this debate?
I also want the messages to be stored in the database, so do I need any special configuration
to prevent my application from blocking when writing to the database?
Will running a chat server with Django have any impact on my ability to serve files from Apache?
|
Using MongoLab Database service vs Custom web service with MongoDB running on AWS
| 7,068,152 | 0 | 0 | 501 | 0 |
python,mongodb,amazon-web-services,cloud-hosting,mlab
|
One thing to consider is that you don't need to use MongoLab's REST API. You can connect directly via a driver as well.
So, if you need to implement business logic (which it sounds like you do), it makes sense to have a three tier architecture with an app server connecting to your MongoLab database via one of the drivers. In your case it sounds like this would be pymongo.
-will
| 0 | 0 | 0 | 1 |
2011-08-15T16:47:00.000
| 1 | 1.2 | true | 7,067,909 | 0 | 0 | 1 | 1 |
I am looking for feasible solutions for my Application to be backed with MongoDB. I am looking to host the MongoDB on the cloud with a python based server to interact with the DB and my app (either mobile/web). I am trying to understand how the architecture should look like.
Either I can host a MongoDB on the AWS cloud and have the server running there only.
I also tried using MongoLab, and accessing it via HTTP requests seemed simple, but I am not sure if it exposes all the essential features of MongoDB (whatever I can do using a pymongo driver)? Also, should I access the MongoLab service directly from my application, or should I still build a server in between?
I would prefer building a server in either case, as I want to do some processing before sending the data back to the application, but I am not sure in that case how my DB-server-app interaction design should look.
Any suggestions?
|
which is better for my project: Django, Plone, php, or ruby on rails
| 7,070,188 | 3 | 0 | 588 | 0 |
php,python,ruby-on-rails,django,plone
|
It does not really matter which language you make it in. If you are a good programmer, the application will work fine in any of those environments; if you are not a good programmer, it will always suck :) You could do it, for example, in Ruby on Rails, but what is the purpose if you will not be able to follow the MVC structure? It's a little bit risky to use technology that you do not know. There are many issues you can run into, N+1 queries for example. I wouldn't recommend switching the language, unless you really would like to learn another technology - but be aware that your application most probably will not be "pretty" :)
| 0 | 0 | 0 | 1 |
2011-08-15T20:02:00.000
| 2 | 0.291313 | false | 7,070,097 | 0 | 0 | 1 | 2 |
I have a php video hosting site, not a typical video hosting site, but I think you'd put it in that category.
I'm almost done with it, I'll launch it maybe next week. I created it in php because my partner wanted to get it done fast, and the fastest way for me to do it was php, because I know it could be done with php, mysql, ffmpeg and ffmpeg-php and some other multimedia packages. I don't know what it takes to do it in other languages.
Now I want to launch the site and re-write another, better version, because I don't like the current version. My partner likes it, but he's on the business side and I'm on the development side lol, so I decide :D
I don't know much about the php frameworks. I've seen them all, but I don't think they are that good for such websites.
I was thinking maybe Django, but Django is created mainly for CMS websites. I don't know if I can use it for my site.
I've never used Plone, but I think it might be good for my site. I don't have experience with it, so I would like to know what you think.
Ruby on Rails seems to be another option, but I've never seen any video hosting site using RoR.
So what is the best language to re-write my site in? Or should I stick with php?
edit
I already know python, so it's not hard for me to switch to a python framework if it's better for such a project... I have some RoR knowledge... and I'm gonna use php for now as you guys said, but I'm talking about the future.
I don't want to make this an argumentative question, because it will be closed, so let me ask another question: can I build such a website with django? I know django, but I have never used it for such a thing. I used it for a CMS, and I don't know if it supports ffmpeg, multimedia and such things.
Sorry for posting an argumentative topic, I hope my question is now better :D
|
which is better for my project: Django, Plone, php, or ruby on rails
| 7,070,199 | 2 | 0 | 588 | 0 |
php,python,ruby-on-rails,django,plone
|
I think you should stick with PHP :
you already know the language
you already have the tools for development
you already have a code base
I think you should see how your site behaves in production and find out what the most wanted user needs are before spending time rewriting everything, and during that time not being able to answer their needs correctly.
And while your site sends its first bits, it will give you a lot more info on what to expect from the version 2, from a technical point of view.
Edit: I am not pro-php, I just think you should stick with the language you already know, until your business gives you time to learn something else.
| 0 | 0 | 0 | 1 |
2011-08-15T20:02:00.000
| 2 | 1.2 | true | 7,070,097 | 0 | 0 | 1 | 2 |
|
Not receiving order placed notice in Satchmo
| 7,085,400 | 1 | 1 | 283 | 0 |
python,django,satchmo
|
I managed to find the solution for this. It was a setting under the Satchmo /settings/ page, under the Payment section:
The "Email Owner?" setting needs to be ticked.
Feel like a bit of an idiot for missing this, but the reason I skipped over the payment section is that I'm using a custom payment module so didn't think it would apply. I am now using this setting alongside my custom payment module and all is working fine.
| 0 | 0 | 0 | 1 |
2011-08-15T23:06:00.000
| 1 | 0.197375 | false | 7,071,829 | 0 | 0 | 1 | 1 |
I was wondering if there are any settings I need to do to enable Satchmo sending me an email (to the store config email address) each time an order is placed? I have set up the template:
templates/shop/email/order_placed_notice.html and enabled Send HTML Email in the settings.
The site is sending the order placed and order shipped emails to the customer no problems, but is not sending an email to the store email. I have searched through the Satchmo docs around the settings and couldn't find anything. Should I be changing something to the signals? I have gone through the signals.py, listeners.py and mail.py files and done reading on Django & Satchmo Signals but was reluctant to play around as my programming knowledge isn't too great.
Any help is appreciated.
Thanks!
|
App Engine, Python: setting Access-Control-Allow-Origin (or other headers) for static file responses
| 7,075,300 | 1 | 4 | 692 | 0 |
python,google-app-engine
|
You can't; the only thing you can do is stream these static files yourself, adding the Access-Control-Allow-Origin header to the response.
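A rough sketch of that approach, framework-agnostic rather than App Engine-specific (the helper name is made up): read the file yourself and attach the CORS header to the response you build.

```python
import mimetypes

def serve_static(path):
    """Read a file from disk and return (headers, body) so a request
    handler can stream it with a CORS header attached.
    Hypothetical helper: a real App Engine app would call this from a
    request handler instead of declaring the file as a static handler.
    """
    content_type = mimetypes.guess_type(path)[0] or "application/octet-stream"
    with open(path, "rb") as f:
        body = f.read()
    headers = {
        "Content-Type": content_type,
        "Access-Control-Allow-Origin": "*",
    }
    return headers, body
```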
| 0 | 1 | 1 | 0 |
2011-08-16T07:05:00.000
| 2 | 0.099668 | false | 7,074,662 | 0 | 0 | 1 | 1 |
Is there a way to set custom headers of responses to static file requests?
E.g. I'd want to set Access-Control-Allow-Origin: * when serving static files.
|
Generating dynamic graphs
| 7,078,262 | 0 | 0 | 228 | 0 |
graphics,python
|
If performance is such an issue and you don't need fancy graphs, you may be able to get by with not creating images at all. Render explicitly sized and colored divs for a simple bar chart in html. Apply box-shadow and/or a gradient background for eye candy.
I did this in some report web pages, displaying a small 5-bar (quintiles) chart in each row of a large table, with huge speed and almost no server load. The users love it for the early and succinct feedback.
Using canvas and javascript you could improve on this scheme for other chart types. I don't know if you could use Google's charting code for this without going through their API, but a few lines or circle segments should be easy to paint yourselves if not.
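A sketch of the div-based bar chart described above (the function name, colors, and pixel sizes are arbitrary choices): each value becomes an explicitly sized div, so no image is generated at all.

```python
def bar_chart_html(values, max_width=200, color="#4a90d9"):
    """Render a list of numbers as horizontal div bars.

    The widest bar gets max_width pixels; the rest are scaled
    proportionally. Pure string generation, no image libraries.
    """
    peak = max(values) or 1  # avoid division by zero on all-zero data
    rows = []
    for v in values:
        width = int(max_width * v / peak)
        rows.append(
            '<div style="width:%dpx;height:12px;background:%s;'
            'margin:2px 0"></div>' % (width, color)
        )
    return "\n".join(rows)
```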
| 0 | 0 | 0 | 0 |
2011-08-15T02:07:00.000
| 3 | 0 | false | 7,078,010 | 0 | 1 | 1 | 1 |
I'm building a web application in Django and I'm looking to generate dynamic graphs based on the data.
Previously I was using the Google Image Charts, but I ran into significant limitations with the api, including the URL length constraint.
I've switched to using matplotlib to create my charts. I'm wondering if this will be a bad decision for future scaling - do any production sites use matplotlib? It takes about 100ms to generate a single graph (much longer than querying the database for the data), is this simply out of the question in terms of scaling/handling multiple requests?
|
Django docutils not working
| 11,377,138 | 0 | 1 | 725 | 0 |
python,django,docutils
|
Did you restart the Django server? Django has to restart to recognize the newly installed admindocs.
| 0 | 0 | 0 | 0 |
2011-08-17T09:25:00.000
| 4 | 0 | false | 7,090,549 | 0 | 0 | 1 | 2 |
I'm trying to enable docutils for django on windows 7. I've enabled the links in urls.py and settings.py and downloaded and installed the latest snapshot of docutils. However, every time I try to access the documentation link in the admin, I get a page asking me to install docutils.
Any ideas?
Thanks
|
Django docutils not working
| 70,630,985 | 0 | 1 | 725 | 0 |
python,django,docutils
|
You might have installed the docutils module in the virtual env/path.
Uninstall it from the virtual path and re-install it in the global python installation folder. That should solve the problem.
Note: Do not install django-docutils, but just simply docutils
| 0 | 0 | 0 | 0 |
2011-08-17T09:25:00.000
| 4 | 0 | false | 7,090,549 | 0 | 0 | 1 | 2 |
|
Complex server logic and node.js
| 7,092,538 | 2 | 0 | 360 | 0 |
python,node.js
|
I think you have answered your own question. Python has greater maturity and by your own admission has the libraries you require. Could you narrow down your requirements a bit more?
| 0 | 0 | 1 | 0 |
2011-08-17T10:51:00.000
| 1 | 1.2 | true | 7,091,583 | 0 | 0 | 1 | 1 |
I love node.js, socket.io, the templating engines, etc: as a web framework, it's amazing.
A lot of my back-end work is with NLP, Machine Learning, and Data Mining, for which there exist hundreds of rock-solid Python libraries, but no Javascript libraries. If I were using Django, I'd just import the libraries and chug away.
What's the recommended approach for handling these complex tasks with node.js? Should I stick with Python web frameworks, is there a convention to dealing with these libraries, or some solution I'm missing?
|
How to see current context when debugging template errors?
| 9,424,701 | 3 | 1 | 1,407 | 0 |
python,django,django-templates
|
From the comments:
I think you need to walk up the stacktrace (in the django debug page) to actually see your context variables. I don't understand your problem exactly. If I have a template error I can inspect my context somewhere in the traceback.
Yes, setting a "breakpoint" in django can sometimes mean just inserting a non-defined variable at the point you want to inspect. The last entry in the traceback is usually the one for this variable. It will give you all context details in the traceback of the debug page.
| 0 | 0 | 0 | 0 |
2011-08-17T23:07:00.000
| 2 | 1.2 | true | 7,100,610 | 0 | 0 | 1 | 1 |
I am getting a template error during rendering, which I think would be easy to fix if I could just see what's in the context that is passed into the template that is being rendered. Django's debug error page provides a lot of information, but I'm not seeing my context anywhere. Am I missing something? Also, I am using Django-debug-toolbar, but that only seems to come up if the page successfully renders. Not being able to see the contents of the context that is passed to the template makes debugging some types of template errors hard! What do I need to do to be able to see it in this scenario? (Note that I'm not asking for a fix to my specific error, which is why I'm not providing more information about it).
|
Is there a library to benchmark my Django App for SQL requests?
| 7,111,827 | 4 | 1 | 293 | 0 |
python,django,performance,benchmarking,django-orm
|
This information is only available in debug mode. I'm not aware of any library to do exactly what you are trying to do, however you could probably rig something up pretty easily.
If you are in running in debug mode, then you can view all the queries that have been run with connection.queries. This is effectively how django debug toolbar works as well.
So alter your unit tests to look at the connection.queries list, and that should get you started pretty well.
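A sketch of that idea, with a stub standing in for django.db.connection so it is self-contained (in real code you would import connection from django.db and run with DEBUG=True): snapshot len(connection.queries) before and after the code under test.

```python
from contextlib import contextmanager

class StubConnection:
    """Stand-in for django.db.connection: in DEBUG mode Django appends
    one dict per executed SQL statement to .queries."""
    def __init__(self):
        self.queries = []

# real code: from django.db import connection
connection = StubConnection()

@contextmanager
def count_queries(conn):
    """Yield a list that, after the block exits, holds the number of
    queries executed inside the block as its single element."""
    start = len(conn.queries)
    counter = []
    try:
        yield counter
    finally:
        counter.append(len(conn.queries) - start)
```

In a test you would wrap the view call or ORM code in the context manager and assert the count stays below a budget, so regressions show up as failures.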
| 0 | 0 | 0 | 0 |
2011-08-18T16:53:00.000
| 1 | 1.2 | true | 7,111,149 | 0 | 0 | 1 | 1 |
I have a large complex Django App that we use that does lots of things. From looking at the Django Debug Toolbar some of our views do a lot of SQL requests. I want to increase the performance of it by reducing the number of SQL requests (e.g. by adding more select_related, more clever queries, etc.). I would like to be able to measure the improvement as I go along, to encourage me and to be able to say how much fat has been trimmed. I have a large set of django unit tests for this app that touch on lots of parts of the code.
Is there some library/programme/script can run the unittests and then print out how many SQL requests in total were executed? This way I can iterativly improve our app.
|
Why are some of my files being replicated?
| 7,112,366 | 2 | 0 | 52 | 0 |
python,html,css,django
|
Sounds like a temp file created by your editor to support restoring if you crash/forget to save/etc. I'm sure if I googled a bit I'd even be able to figure out which editor(s) use that format for their temp files.
Has nothing to do with django.
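Emacs, for one, uses exactly this trailing-tilde naming for its backup copies. If you want to find the leftovers, a small stdlib sketch (the helper name is made up):

```python
import os

def editor_backup_files(directory):
    """Return files in `directory` whose names end with '~', the
    backup-copy convention used by Emacs and some other editors."""
    return sorted(
        name for name in os.listdir(directory) if name.endswith("~")
    )
```

You could delete what it returns, or simply add `*~` to your VCS ignore rules so the copies never get committed.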
| 0 | 0 | 0 | 0 |
2011-08-18T18:31:00.000
| 3 | 0.132549 | false | 7,112,315 | 0 | 0 | 1 | 2 |
I'm running a django framework and notice that when I'm editing a file I will get another file in the form "filename.extension~". What exactly is that "~" doing there and why am I generating another file?
If it's a temp file, when does it go away?
|
Why are some of my files being replicated?
| 7,112,355 | 1 | 0 | 52 | 0 |
python,html,css,django
|
I'm not familiar with django, but I'm sure it's a temporary/backup file.
| 0 | 0 | 0 | 0 |
2011-08-18T18:31:00.000
| 3 | 0.066568 | false | 7,112,315 | 0 | 0 | 1 | 2 |
|
Skype python API
| 7,120,949 | 1 | 3 | 8,235 | 0 |
python,linux,web-applications,skype
|
SkypeKit has a Python API, too and does not require the Skype client running. However, you do need a helper application - but on the skype developer website you can compile it for various architectures and operating systems.
If you don't have access yet, it might take some time to get it. I signed up when it was new and it took about a year until I got access - but maybe it's faster now (they charge something between $5 and $15 to finally get access).
However, you REALLY need to upgrade your python version - 2.4 is ancient, that's almost like using IE5 nowadays...
| 0 | 0 | 0 | 0 |
2011-08-19T11:20:00.000
| 2 | 0.099668 | false | 7,120,806 | 0 | 0 | 1 | 1 |
Is there a Skype web API, or a Python API, that can be integrated into a Django app?
Server : Python 2.4, linux RHEL5
|
Should I create pipeline to save files with scrapy?
| 7,135,102 | 4 | 17 | 16,042 | 0 |
python,scrapy,web-crawler,pipeline
|
It's a perfect tool for the job. The way Scrapy works is that you have spiders that transform web pages into structured data (items). Pipelines are postprocessors, but they use the same asynchronous infrastructure as spiders, so they're perfect for fetching media files.
In your case, you'd first extract location of PDFs in spider, fetch them in pipeline and have another pipeline to save items.
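A stdlib sketch of the saving step such a pipeline could perform (the function name and the downloads root are made up; the Scrapy wiring via process_item is omitted): derive the local path from the URL path so the site's directory layout is mirrored on disk.

```python
import os
from urllib.parse import urlparse

def save_mirrored(url, body, root="downloads"):
    """Write `body` under `root`, reproducing the directory structure
    of the URL path (e.g. /docs/a/b.pdf -> downloads/docs/a/b.pdf).

    In a Scrapy pipeline this would run inside process_item with the
    fetched response body.
    """
    path = urlparse(url).path.lstrip("/")
    local = os.path.join(root, *path.split("/"))
    os.makedirs(os.path.dirname(local), exist_ok=True)
    with open(local, "wb") as f:
        f.write(body)
    return local
```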
| 0 | 0 | 0 | 0 |
2011-08-19T14:51:00.000
| 3 | 0.26052 | false | 7,123,387 | 0 | 0 | 1 | 1 |
I need to save a file (.pdf) but I'm unsure how to do it. I need to save .pdfs and store them in such a way that they are organized in directories much like they are stored on the site I'm scraping them from.
From what I can gather I need to make a pipeline, but from what I understand pipelines save "Items" and "items" are just basic data like strings/numbers. Is saving files a proper use of pipelines, or should I save file in spider instead?
|
Django statelessness?
| 7,129,066 | 2 | 3 | 4,009 | 0 |
python,django
|
Sure it's possible. But if you are writing a web application you probably won't want to do that because of threading issues.
| 0 | 0 | 0 | 0 |
2011-08-20T01:05:00.000
| 4 | 1.2 | true | 7,128,868 | 0 | 0 | 1 | 1 |
I'm just wondering if Django was designed to be a fully stateless framework?
It seems to encourage statelessness and external storage mechanisms (databases and caches), but I'm wondering if it is possible to store some things in the server's memory while my app is in development and runs via manage.py runserver.
|
How to get bandwidth quota usage with Google app engine api?
| 7,142,758 | 1 | 0 | 475 | 0 |
java,python,api,google-app-engine
|
No, but you can get a very close estimate of this by adding up the length of the request headers and body for incoming requests, and the response body and headers for responses.
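A sketch of that estimate (the function names and the header-serialization assumptions, noted in comments, are mine): sum the serialized size of the headers and body for both the request and the response.

```python
def approx_bytes(headers, body=b""):
    """Rough wire size of one message: 'Name: value\r\n' per header,
    a blank line, then the body. Ignores the request/status line."""
    size = sum(len(k) + len(v) + 4 for k, v in headers.items())  # ': ' + CRLF
    return size + 2 + len(body)  # +2 for the blank line before the body

def request_bandwidth(req_headers, req_body, resp_headers, resp_body):
    """Approximate bytes transferred for one request/response pair."""
    return approx_bytes(req_headers, req_body) + approx_bytes(resp_headers, resp_body)
```

Accumulating this per request in a handler gives a running total you can log or display, close to (though not exactly) the billed bandwidth.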
| 0 | 1 | 0 | 0 |
2011-08-20T12:48:00.000
| 2 | 0.099668 | false | 7,131,834 | 0 | 0 | 1 | 1 |
I want to know if Google App Engine supports using the google.appengine.api.quota package to get bandwidth usage, not CPU usage?
If so, how to get with Python or Java and print in webpage?
|
Django MySQLdb version doesn't match _mysql version Ubuntu
| 7,352,188 | 1 | 6 | 4,064 | 1 |
mysql,django,deployment,ubuntu,mysql-python
|
For those who come upon this question:
It turns out that the Ubuntu _mysql version was different from the one in my venv. Uninstalling that and re-installing it in my venv did the trick.
| 0 | 0 | 0 | 0 |
2011-08-21T08:27:00.000
| 2 | 1.2 | true | 7,137,214 | 0 | 0 | 1 | 1 |
I'm trying to get a django site deployed from a repository. I was almost there, and then changed something (I'm not sure what!!) and was back to square one.
Now I'm trying to run ./manage.py syncdb and get the following error:
django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: this is MySQLdb version (1, 2, 3, 'final', 0), but _mysql is version (1, 2, 2, 'final', 0)
I've searched forums for hours and none of the solutions presented helped. I tried uninstalling and re-installing MySQL-python and upgrading it. I get the same error when trying to import it from the python command line interpreter.
Does anyone have any suggestions?
|
Web content filter with Apache + mod_wsgi?
| 12,194,811 | 1 | 3 | 491 | 0 |
python,apache2,mod-wsgi,mod-python
|
You can use django middleware to intercept HTTP request/response traffic before it reaches your application (which might be in this case your graphical interface to fine tune your filter and/or database handling for storing your configurations or preset rules).
My initial picture of your application is that you will have a web interface for easy configuration and tuning of your system, and store those configurations and rules in the database. In the middleware, put code logic that will read the configurations and rules from the database and apply them to the outgoing/incoming traffic.
I much prefer this model than doing this in django's application itself (views).
You can also put all sorts of logging and monitoring in your middleware script, and don't forget to enable it of course to make it functional :-).
| 0 | 0 | 0 | 0 |
2011-08-21T14:57:00.000
| 1 | 0.197375 | false | 7,139,023 | 0 | 0 | 1 | 1 |
I'd like to write a simple web content filter with flexible filtering rules that are written in Python. The filter is to be used as a forward proxy.
Now, I have trouble choosing the right tools for this. What do you think would be a good set of tools? So far, I've been considering Apache HTTP server with mod_proxy and mod_python or mod_wsgi, but I got stuck with the setup (mod_python is poorly documented, IMO).
Btw, I am aware of and have experience with existing content filters such as squid and dansguardian. I am trying to write my own because the filtering capabilities of these content filters aren't sophisticated enough for my case.
|
initializing module on django+mod_wsgi+apache
| 7,147,377 | 2 | 2 | 775 | 0 |
python,django,apache,wsgi,django-wsgi
|
You are possibly just getting confused and are actually using a poor Apache/mod_wsgi configuration. Specifically, you are likely using embedded mode with the Apache prefork MPM. That is bad because Apache will use lots of single-thread processes, so the code has to be loaded in all of them. That is why you probably think it is happening on each request against the same process, when in reality each request is hitting a different process.
Ensure you are using daemon mode of mod_wsgi and that your code is thread safe, so you can use a single multithreaded process, and it shouldn't have the issue.
Edit your question and add your Apache/mod_wsgi configuration snippets from Apache configuration file and state what Apache MPM you are using.
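For reference, a daemon-mode configuration sketch along those lines (process/group name and paths are hypothetical): one multithreaded daemon process means the Java library is initialised once per process, rather than once per single-threaded prefork child.

```apache
WSGIDaemonProcess myapp processes=1 threads=15
WSGIProcessGroup myapp
WSGIScriptAlias / /srv/myapp/django.wsgi
```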
| 0 | 0 | 0 | 0 |
2011-08-22T01:11:00.000
| 1 | 1.2 | true | 7,142,284 | 0 | 0 | 1 | 1 |
My django application is running on apache+wsgi. One of the module in my django app needs to load a Java library via jpype and this Java library takes long time to initialize due to its application nature.
The problem is that, for each http request handled by django in apache+wsgi setup, this Java library is re-loaded. However, this does not happen when I run my same app in development web server (python manager.py runserver 8000). In development web server, it only loads the Java library only once.
Is there any way to change apache or mod_wsgi configuration or my django app so that it won't reload my Java library for every http request?
Many thanks.
Andy
|
apache server with mod_wsgi + python as backend, how can i be able to notified my connection status?
| 7,145,199 | 1 | 0 | 393 | 1 |
python,apache,webserver,mod-wsgi
|
You can't. It is a limitation of the API defined by the WSGI specification. So it's nothing to do with Apache or mod_wsgi really, as you will have the same issue with any WSGI server if you follow the WSGI specification.
If you search through the mod_wsgi mailing list on Google Groups you will find a number of discussions about this sort of problem in the past.
| 0 | 0 | 0 | 0 |
2011-08-22T06:52:00.000
| 1 | 1.2 | true | 7,144,011 | 0 | 0 | 1 | 1 |
I'm trying to build a web server using Apache as the HTTP server and mod_wsgi + Python as the logic handler. The server is supposed to handle long requests without returning, meaning I want to keep writing into the response.
The problem is that when the link is broken, the socket is in a CLOSE_WAIT status and Apache will NOT notify my Python program. This means I have to write something to get an exception saying the link is broken, but those messages are lost and can't be restored.
I tried to get the socket status before writing through /proc/net/tcp, but that could not prevent a quick connect/break of the connection.
Does anybody have any ideas? Please help, thanks very much in advance!
|
How can my HTML file pass JavaScript results back to a Python script that calls it?
| 7,158,707 | 0 | 2 | 367 | 0 |
javascript,python,html,javascript-engine
|
This would be very hard to accomplish without the use of external libraries. You'd need an HTML parser to start with, so you can actually make sense of the HTML. Then you'd need a Javascript parser/lexer/engine so you could do the actual calculations. I guess it would be possible to implement this in Python, but I'd recommend looking for an open source project which has already implemented it. You'd then have to parse/lex/interpret the javascript and pass the result back to python.
All in all I'd say it's easier to just port the Javascript calculation to Python, but that's just me.
| 0 | 0 | 0 | 0 |
2011-08-23T09:10:00.000
| 4 | 0 | false | 7,158,635 | 0 | 0 | 1 | 1 |
I have a Python script, and this script shall call an HTML file (i.e. a web page) stored locally on the computer. The HTML file does some calculations (jQuery, JavaScript and so on) and should pass the result back to the Python script.
I don't want to change the setup (Python script calls HTML file and the result is passed back to the Python script), so please don't ask why.
Could anyone tell me how to solve this? How can I pass the result from the HTML file to the calling Python function? This has been troubling me for 2 weeks.
Thanks!
|
How to make process wide editable variable (storage) in django?
| 7,171,161 | 0 | 1 | 398 | 0 |
python,django,data-storage
|
Before I give any specific advice, you need to be aware of the limitations of these systems.
ISSUES
Architectural Differences between Django and PHP/other popular languages.
PHP re-reads and re-evaluates the entire code tree every time a page is accessed. This allows PHP to re-read settings from DB/cache/whatever on every request.
Initialisation
Many django apps initialise their own internal cache of settings and parameters and perform validation on these parameters at module import time (server start) rather than on every request. This means you would need a server restart anyway when modifying any settings for non-db-settings-enabled apps, so why not just change settings.py and restart the server?
Multi-site/Multi-instance and in-memory settings
In Memory changes are strictly discouraged by the Django docs because they will only affect a single server instance. In the case of multiple sites (contrib.sites), only a single site will receive updates to shared settings. In the case of instanced servers (ep.io/gondor) any changes will only be made to a local instance, not every instance running for your site. Tread carefully.
PERFORMANCE
In Memory Changes
Changing settings values while the server is running is strictly discouraged by django docs for the reasons outlined above. However there is no performance hit with this option. USE ONLY WITHIN THE CONFINES OF SPECIFIC APPS MADE BY YOU AND SINGLE SERVER/INSTANCE.
Cache (redis-cache/memcached)
This is the intermediate-speed option: reasonably fast lookup of settings which can be deserialised into complex Python structures easily, which is great for dict-configs. Importantly, values are shared among sites/instances safely and are updated atomically.
DB (SLOOOOW)
Grabbing one setting at a time from the DB will be very, very slow unless you hack in connection pooling. Grabbing all settings at once is faster, but increases DB transfer on every single request. Settings are synched between sites/instances safely. Only for 1 or 2 settings would it be reasonable to use this.
CONCLUSION
Storing configuration values in database/cache/mem can be done, but you have to be aware of the caveats, and the potential performance hit. Creating a generic replacement for settings.py will not work, however creating a support app that stores settings for other apps could be a viable solution, as long as the other apps accept that settings must be reloaded every request.
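As a rough sketch of that last idea (a support app holding settings for other apps), with all names being illustrative and the loader standing in for your DB or cache query:

```python
import threading

class AppSettings:
    """Holds per-app settings in memory; reload() swaps them in atomically."""

    def __init__(self, loader):
        self._loader = loader         # callable returning a dict, e.g. one DB query
        self._lock = threading.Lock()
        self._data = {}
        self.reload()

    def reload(self):
        fresh = dict(self._loader())  # fetch everything in one round trip
        with self._lock:
            self._data = fresh        # atomic swap: readers never see a partial update

    def get(self, key, default=None):
        with self._lock:
            return self._data.get(key, default)
```

A per-request middleware (or a check against a version stamp kept in the cache) could decide when to call reload(), which is how every instance stays in sync without re-querying on every single lookup.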
| 0 | 0 | 0 | 0 |
2011-08-24T02:35:00.000
| 1 | 1.2 | true | 7,169,852 | 0 | 0 | 1 | 1 |
I want to create project-wide accessible storage for project/application settings.
What I want to achieve:
- Each app has its own app-specific settings stored in the db
- When you spawn a Django WSGI process, all settings are loaded into an in-memory storage and are available project wide
- Whenever you change any setting value in the db, there is a call to regenerate the storage from the db
So it works very close to a cache, but I can't use the cache mechanism because it serializes data. I could also use memcache for that purpose, but I want to develop a generic solution (you don't always have access to memcache).
If anyone has any ideas to solve my problem I would be really grateful for sharing.
|
Why does django-startproject (by lincoln loop) create app in conf/local?
| 10,638,394 | 0 | 2 | 309 | 0 |
python,django
|
That's because Django's manage.py is by default located at the root of the project, and by default the startapp command (which is not created by the Lincoln Loop guys) places the apps in the current folder (where manage.py is located). This is from the official docs:
startapp <appname> [destination]
django-admin.py startapp <appname> [destination]
Creates a Django app directory structure for the given app name in the current directory or the given destination.
you can explicitly specify where to place the app with the destination parameter.
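So with this template's layout, the workaround is to name the destination explicitly. A hypothetical invocation from the project root (app name and path are made up, and the destination directory must already exist):

```shell
mkdir apps/blog
python bin/manage.py startapp blog apps/blog
```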
| 0 | 0 | 0 | 0 |
2011-08-24T08:08:00.000
| 1 | 0 | false | 7,172,332 | 0 | 0 | 1 | 1 |
I have tested django-startproject (https://github.com/lincolnloop/django-startproject). I have read their docs and the Lincoln Loop best practices, but many of their choices are still unclear to me (the way they organize their folders, etc.).
Especially, I am quite confused by the way their bin/manage.py behaves.
When I execute python bin/manage.py startapp Test, it creates the app, but instead of putting it in my project (or in apps), the directory is created in conf/local.
Is this the intended behaviour?
|
TabularInlines readonly fields are deleteable
| 7,477,875 | 1 | 1 | 157 | 0 |
python,django,django-admin
|
I think what you want to do is override has_delete_permission(self, request, obj=None). That will allow you to decide when you can and cannot delete an entire inline.
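A sketch of that approach (field names and the locking condition are placeholders mirroring whatever already drives your get_readonly_fields); these methods would live on your TabularInline subclass:

```python
class ReadOnlyWhenLockedMixin:
    """Mix into a TabularInline subclass: hides the delete checkbox only
    when the inline is read-only. `is_locked` is a hypothetical predicate
    standing in for your real condition."""

    readonly_when_locked = ('name',)   # hypothetical field names

    def is_locked(self, obj):
        # placeholder condition -- use whatever get_readonly_fields checks
        return obj is not None and getattr(obj, 'is_published', False)

    def get_readonly_fields(self, request, obj=None):
        return self.readonly_when_locked if self.is_locked(obj) else ()

    def has_delete_permission(self, request, obj=None):
        # False hides the delete checkbox; True keeps it for editable rows
        return not self.is_locked(obj)
```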
| 0 | 0 | 0 | 0 |
2011-08-24T14:02:00.000
| 1 | 1.2 | true | 7,176,746 | 0 | 0 | 1 | 1 |
I'm using TabularInlines to administer many-to-many relationships in the Django admin. When some conditions are met, I want to make the inline read only. To achieve this I override the get_readonly_fields() method from BaseModelAdmin.
This works like a charm, with the only problem that the read-only fields are still deletable (the checkbox for deletion is still there and still works).
Of course I could set the can_delete field on the TabularInline to False, but this prevents deletion also for non-read-only cases.
My question: how can I set up the TabularInline so that deletion is prohibited for read-only fields and enabled when the fields are read/writeable?
EDIT: I use Django 1.3, but if the solution also works for 1.2 it would be perfect!
|
Django manage.py question
| 7,182,225 | 6 | 2 | 802 | 0 |
python,django,command-line
|
If you are using a recent version of Django, the manage.py file should be an "executable" file by default.
Please note, you cannot just type manage.py somecommand into the terminal as manage.py is not on the PATH, you will have to type ./ before it to run it from the current directory, i.e. ./manage.py somecommand.
If that does not work please be sure that the manage.py file has:
#!/usr/bin/env python
as its first line. And make sure it is executable: chmod +x manage.py
| 0 | 0 | 0 | 0 |
2011-08-24T20:56:00.000
| 3 | 1.2 | true | 7,182,165 | 0 | 0 | 1 | 1 |
Why is it that I have to run python manage.py somecommand and others simply run manage.py somecommand? I'm on OSX 10.6. Is this because there is a pre-set way to enable .py files to automatically run as Python scripts, and I've somehow disabled the functionality, or is that something that you explicitly enable?
|
Google App Engine python, GQL, select only one column from datastore
| 7,214,401 | 3 | 2 | 1,553 | 1 |
python,google-app-engine,gql,gqlquery
|
You can't. GQL is not SQL, and the datastore is not a relational database. An entity is stored as a single serialized protocol buffer, and it's impossible to fetch part of an entity; the whole thing needs to be deserialized.
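So the workaround is to fetch whole entities and project in Python. A hedged sketch (`db` is google.appengine.ext.db, only importable on App Engine, so the query itself is shown commented out):

```python
def titles_only(entities):
    """Project one attribute after the fetch: works on any iterable of
    entities exposing a .title attribute."""
    return [entity.title for entity in entities]

# Inside a handler:
#   books = db.GqlQuery('SELECT * FROM Books')
#   titles = titles_only(books)
```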
| 0 | 0 | 0 | 0 |
2011-08-27T10:36:00.000
| 2 | 0.291313 | false | 7,213,991 | 0 | 0 | 1 | 1 |
I'm trying to pull only one column from a datastore table.
I have a Books model with
id, key, title, author, isbn and price.
everything = db.GqlQuery('SELECT * FROM Books') gives me everything, but say I only want the title:
books = db.GqlQuery('SELECT title FROM Books')
I've tried everything people have suggested, but nothing seems to work.
Any help is much appreciated.
Thanks
|
South: run a migration for a column that is both unique and not null
| 7,226,563 | 13 | 16 | 2,209 | 0 |
python,django,django-models,django-south
|
Yes, this is the approach you should take. You should be doing schemamigration -> datamigration -> schemamigration for this. Unfortunately, if there is no way to do it in SQL, South cannot do it either.
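Sketched as commands (app, field and migration names are hypothetical):

```shell
./manage.py schemamigration myapp --auto         # 1. after adding the field with null=True
./manage.py migrate myapp
./manage.py datamigration myapp fill_unique_col  # 2. write unique values in forwards()
./manage.py migrate myapp
./manage.py schemamigration myapp --auto         # 3. after switching to unique=True, null=False
./manage.py migrate myapp
```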
| 0 | 0 | 0 | 0 |
2011-08-29T04:23:00.000
| 1 | 1.2 | true | 7,226,036 | 0 | 0 | 1 | 1 |
Using South/Django, I am running into a problem where I'm trying to add a UNIQUE and NOT NULL column for a model with existing rows in the database. South prompts me to specify a default for the column, since it is NOT NULL. But since it also has a UNIQUE constraint, I can't add a default to the field in models.py, nor can I specify a one-off value because it'll be the same on all the rows.
The only way I can think of to get around this is to create a nullable column first, apply the migration, run a script to populate the existing rows with unique values in that column, and then add another migration to add the UNIQUE constraint to that column.
But is there a better way of accomplishing the same thing?
|
Detecting unused Django template libraries
| 7,227,875 | 1 | 1 | 219 | 0 |
python,django
|
I guess you could fold your own load method into the Django template language and add a log method using Django logging. Just out of curiosity: is it the complexity of the template variety that makes a logger more convenient for keeping the helicopter view?
| 0 | 0 | 0 | 0 |
2011-08-29T08:25:00.000
| 2 | 0.099668 | false | 7,227,776 | 0 | 0 | 1 | 2 |
While rendering a template is there a way to make Django log something when unused tag libraries are loaded?
|
Detecting unused Django template libraries
| 7,233,337 | -1 | 1 | 219 | 0 |
python,django
|
Here is what I would do, although this may not be the best solution.
Write a script that goes through all of your urls.py files and builds up a tree of all the urls that exist in your project (I know there is a third-party Django plugin to list all urls, but I can't seem to remember what it is)
Build a unit test that takes the list generated in (1), using the test Client() to hit every url, and then queries the response objects for the templates which were rendered.
Perform a logical AND between the list of all HTML files in your template directory and the list generated by this unit test.
| 0 | 0 | 0 | 0 |
2011-08-29T08:25:00.000
| 2 | -0.099668 | false | 7,227,776 | 0 | 0 | 1 | 2 |
While rendering a template is there a way to make Django log something when unused tag libraries are loaded?
|
Python+Tornado vs Scala+Lift?
| 7,286,848 | 14 | 12 | 3,553 | 0 |
python,scala,comet,lift,tornado
|
I think Python and Tornado are a great team, for the following reasons
Tornado is really an IOLoop that happens to come with an HTTP implementation that runs on it (and a few helpers).
This means that it comes with everything you need to do web development with it.
It also means that if you find, down the road, that you need other back end services to help scale your application, tornado is very likely of good use in that area. I've actually written more back end services than front end ones in Tornado (but a coworker has the exact opposite experience with it -- he's more front-end oriented and finds it just as nice to work with). A bit off-topic, but we've also used their template module outside of tornado with great success. The code is very modular and there's almost no interdependence, so reusing its components is a breeze.
You can learn it, and know it well, very, very quickly.
It would take you all of a day to figure out. Its code is clean and unbelievably well-commented, and it has decent documentation besides. I was able to produce a production service with Tornado 0.2 (ca. 2009) in about a week, having never seen it before. The Tornado source code is very anti-magic.
It's fast, and stable. Under load.
I don't know if it's the absolute most blazing fast thing in existence, but in the projects I've used it in, it's taking on some very heavy load, in terms of both number of concurrent users, and in terms of data transfer (high-volume image uploads, for example), and it's been a) completely rock solid in terms of stability, and b) fast enough that I haven't had to consider scaling it horizontally or getting bigger hardware.
Python is extremely flexible and adaptable.
I use Python regularly for web development using Tornado (and other things too, including Django on occasion). However, I also use it for things completely unrelated to the web services themselves, like sysadmin/automation tasks, reporting & data munging (for example, I write hadoop jobs in Python), and other things, where the standard library modules (os, sys, shutil, itertools, collections, etc) make things blindingly fast to build. I can use Python for just about anything, in just about any environment, whether the output goes over a stream, into a browser, to a fat GUI, or a console.
It also has a fantastic community around it of really smart people who are also very friendly. I can't compare it to the scala community, but in comparison with lots of other communities, Python is easily my favorite and has a lot to do with why I became so attached at the hip with it. I'm a polyglot, but if I have a question, I would most like to pose that question to a Python community member :)
| 0 | 1 | 0 | 0 |
2011-08-29T08:33:00.000
| 3 | 1.2 | true | 7,227,850 | 0 | 0 | 1 | 3 |
I'm looking to start a Google Maps based web application.
My initial thoughts are that in the first phase the focus should be on the front-end, and the backend should be easy to write and to prototype, and should aid as much as possible the development of the frontend.
There will be no 'classic' pages, just a meebo.com style interface. javascript + jquery. (meaning, very few if none at all static pages).
My eye has caught the comet-style, server-push paradigm, and I'm really interested in doing some proofs of concept with this.
Do you have any recommendations or advantages and disadvantages or any experiences in working with :
Python + Tornado vs Scala + Lift ?
What other advantages or disadvantages in other areas of a web application might a choice bring?
Note : This is for max 2 developers, not a big distributed and changing team.
Thanks
|
Python+Tornado vs Scala+Lift?
| 7,231,443 | 6 | 12 | 3,553 | 0 |
python,scala,comet,lift,tornado
|
Scala is a substantially cleaner language and enables you to use object-oriented and functional paradigms as you see fit.
Python has much more syntactic sugar and embraces the "there is only one way to do it" philosophy.
Scala is usually used with IDEs like Eclipse/Idea - although support for vim/emacs also exists, too - and built with SBT. If you are not accustomed to these tools, it might take some effort to set them up the first time.
Python is often used with much more lightweight editors. Re-running an updated Python script is easier by default.
Lift is really targeted at web applications, enabling Desktop-like responsiveness and behavior. If you're just wanting to create a homepage, there are certainly other frameworks around, which don't make you learn as much as with Lift.
| 0 | 1 | 0 | 0 |
2011-08-29T08:33:00.000
| 3 | 1 | false | 7,227,850 | 0 | 0 | 1 | 3 |
I'm looking to start a Google Maps based web application.
My initial thoughts are that in the first phase the focus should be on the front-end, and the backend should be easy to write and to prototype, and should aid as much as possible the development of the frontend.
There will be no 'classic' pages, just a meebo.com style interface. javascript + jquery. (meaning, very few if none at all static pages).
My eye has caught the comet-style, server-push paradigm, and I'm really interested in doing some proofs of concept with this.
Do you have any recommendations or advantages and disadvantages or any experiences in working with :
Python + Tornado vs Scala + Lift ?
What other advantages or disadvantages in other areas of a web application might a choice bring?
Note : This is for max 2 developers, not a big distributed and changing team.
Thanks
|
Python+Tornado vs Scala+Lift?
| 7,231,067 | 3 | 12 | 3,553 | 0 |
python,scala,comet,lift,tornado
|
I would suggest going with Python for these reasons:
1. Debugging
What I find especially useful when writing Python code is the ability to easily debug (see the pdb module): all you need is a command prompt and a text editor to set your breakpoints.
With Scala, you will probably have to rely on an IDE to do all your debugging.
2. Easy to learn
As for the programming language, I don't know what your experience with either language is. If you are a beginner in both Python and Scala, my personal opinion is that you will learn Python faster.
| 0 | 1 | 0 | 0 |
2011-08-29T08:33:00.000
| 3 | 0.197375 | false | 7,227,850 | 0 | 0 | 1 | 3 |
I'm looking to start a Google Maps based web application.
My initial thoughts are that in the first phase the focus should be on the front-end, and the backend should be easy to write and to prototype, and should aid as much as possible the development of the frontend.
There will be no 'classic' pages, just a meebo.com style interface. javascript + jquery. (meaning, very few if none at all static pages).
My eye has caught the comet-style, server-push paradigm, and I'm really interested in doing some proofs of concept with this.
Do you have any recommendations or advantages and disadvantages or any experiences in working with :
Python + Tornado vs Scala + Lift ?
What other advantages or disadvantages in other areas of a web application might a choice bring?
Note : This is for max 2 developers, not a big distributed and changing team.
Thanks
|
Comparing web.py's Templator and Jinja2: strengths and weaknesses
| 7,269,896 | 1 | 1 | 1,277 | 0 |
python,templates,web.py,jinja2
|
What was hard for me in Templetor is template inheritance. Instead of the simple concept of blocks which is present e.g. in Jinja2, you have to select the base template once in the app code, then do weird attribute setting in the actual template while accessing it in the base template. You still have problems if you need more than one "big" block like the page body.
Real blocks are much more elegant, and the flexibility of Templetor's "real" Python is not really necessary, while it can probably be unsafe.
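For contrast, block-based inheritance in Jinja2 looks roughly like this (template names are made up):

```jinja
{# base.html #}
<title>{% block title %}Default title{% endblock %}</title>
<body>
  {% block body %}{% endblock %}
  {% block sidebar %}{% endblock %}   {# any number of "big" blocks #}
</body>

{# page.html #}
{% extends "base.html" %}
{% block title %}My page{% endblock %}
{% block body %}<p>Hello</p>{% endblock %}
```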
| 0 | 0 | 0 | 0 |
2011-08-30T14:18:00.000
| 2 | 0.099668 | false | 7,244,683 | 0 | 0 | 1 | 1 |
I'm adding a simple web interface to an already existing piece of software; web.py fits the job just right and that's what I'm using. Now I'm researching what templating engine to use and came down to two alternatives: either using web.py's own Templator or using Jinja2.
I already have both working in the app and I'm writing some very simple templates in both to explore them. I must say I find Templator easier to read, that's probably due to me being a programmer and not a web designer (who would probably find Jinja easier?).
While I'm only generating (non compliant ;) ugly HTML pages now, I'll also use the templating engine to generate emails and good old plain text files.
Both pieces of software are "fast enough" for any practical purpose. I'd like to ask people who have used one or the other (or both) extensively: what are their strengths and weaknesses in the areas of ease of use, code cleanliness, flexibility, and so on?
|
Saving select updating data points from an external webpage to a text file
| 7,249,558 | 0 | 1 | 74 | 0 |
php,python,html
|
For Python
For timed tasks for N minutes create an UNIX cron job or Windows equivalent which runs your .py script regularly
Download the weather data using urllib2 module in .py script
Parse HTML using BeautifulSoup or lxml libraries
Select the relevant bits of HTML using XPath selectors or CSS selectors (lxml)
Process data and write it to a text file
The actual implementation is left as an exercise to the reader :)
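Still, steps 2-5 can be sketched with nothing but the standard library (html.parser here instead of the suggested BeautifulSoup/lxml, purely so the snippet is self-contained; the table layout and output format are hypothetical):

```python
from html.parser import HTMLParser

class CellCollector(HTMLParser):
    """Collects the stripped text of every <td> cell in a page."""

    def __init__(self):
        HTMLParser.__init__(self)
        self.in_td = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == 'td':
            self.in_td = True

    def handle_endtag(self, tag):
        if tag == 'td':
            self.in_td = False

    def handle_data(self, data):
        if self.in_td and data.strip():
            self.cells.append(data.strip())

def scrape_cells(html):
    parser = CellCollector()
    parser.feed(html)
    return parser.cells

# Steps 1 and 5 would wrap this: fetch the page with urllib every 30 minutes,
# then append ','.join(scrape_cells(page)) to the text file.
```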
| 0 | 0 | 1 | 0 |
2011-08-30T20:30:00.000
| 3 | 0 | false | 7,249,412 | 0 | 0 | 1 | 1 |
I am trying to take updating weather data from a website that isn't mine and put a chunk of it into a generic text file every 30 minutes. The text file should not have any HTML tags or anything, but could be delimited by commas, periods or tabs. The website generating the data puts the data in a table with no class or id. What I need is the text from one tag and each of the individual tags within it. The tag is on the same line number every time, regardless of the updated data.
This seems a bit silly of a challenge, as the method for getting the data doesn't seem ideal. I'm open to suggestions for different methods for getting an updated (hourly to twice-daily-ish) temperature/dewpoint/time/etc. data point and for putting it in a text file.
With regards to automating it every 30 minutes or so, I have an automation program that can download webpages at any time interval.
I hope I was specific enough with this rather weird (to me at least) challenge. I'm not even sure where to start. I have lots of experience with HTML and basic knowledge of Python, JavaScript, PHP, and SQL, but I am open to taking code or learning the syntax of other languages.
|
Mercurial plugin for Eclipse can't find Python--how to fix?
| 7,278,773 | 3 | 1 | 1,116 | 0 |
python,mercurial,eclipse-plugin,osx-lion
|
Nobody answered me, but I figured out the answer. Maybe it will help someone.
I finally realized that since 'hg -y debuginstall' at the command line was giving me the same error message, it wasn't an Eclipse problem at all (duh). Reinstalling a newer version of Mercurial solved the problem.
| 0 | 1 | 0 | 1 |
2011-08-31T18:12:00.000
| 2 | 0.291313 | false | 7,261,451 | 0 | 0 | 1 | 2 |
I'm on Mac OS X 10.7.1 (Lion). I just downloaded a fresh copy of Eclipse IDE for Java EE Developers, and installed the Mercurial plugin. I get the following error message:
abort: couldn't find mercurial libraries in [...assorted Python directories...].
I do have Python 2.6.1 and 3.2.1 installed. I also have a directory System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7, which is on the list of places it looked for the Mercurial libraries. hg -y debuginstall gives me the same message.
What are these libraries named, where is Eclipse likely to have put them when I installed the plugin, and how do I tell Eclipse where they are (or where should I move them to)?
Thanks, Dave
Full error message follows:
abort: couldn't find mercurial libraries in
[/usr/platlib/Library/Python/2.6/site-packages /usr/local/bin
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages
/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload
/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC
/Library/Python/2.7/site-packages] (check your install and PYTHONPATH)
|
Mercurial plugin for Eclipse can't find Python--how to fix?
| 12,130,976 | 0 | 1 | 1,116 | 0 |
python,mercurial,eclipse-plugin,osx-lion
|
I had two installations of Mercurial on my Mac.
One was installed directly and another using MacPorts.
Removing the direct installation solved the problem.
Remove the direct installation using:
easy_install -m mercurial
Update the "Mercurial Executable" path to "/opt/local/bin/hg" under
Eclipse -> Preferences -> Team -> Mercurial
Restart Eclipse.
| 0 | 1 | 0 | 1 |
2011-08-31T18:12:00.000
| 2 | 0 | false | 7,261,451 | 0 | 0 | 1 | 2 |
I'm on Mac OS X 10.7.1 (Lion). I just downloaded a fresh copy of Eclipse IDE for Java EE Developers, and installed the Mercurial plugin. I get the following error message:
abort: couldn't find mercurial libraries in [...assorted Python directories...].
I do have Python 2.6.1 and 3.2.1 installed. I also have a directory System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7, which is on the list of places it looked for the Mercurial libraries. hg -y debuginstall gives me the same message.
What are these libraries named, where is Eclipse likely to have put them when I installed the plugin, and how do I tell Eclipse where they are (or where should I move them to)?
Thanks, Dave
Full error message follows:
abort: couldn't find mercurial libraries in
[/usr/platlib/Library/Python/2.6/site-packages /usr/local/bin
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages
/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload
/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC
/Library/Python/2.7/site-packages] (check your install and PYTHONPATH)
|
What is the difference between django.conf.settings and global_settings?
| 7,269,703 | 2 | 0 | 339 | 0 |
python,django
|
settings is a proxy object that you use in your code to access the settings.
global_settings is a module internal to Django, containing default settings, used when you leave out a variable out of project's settings. I.e. you don't touch it, unless you're changing Django core.
| 0 | 0 | 0 | 0 |
2011-09-01T11:31:00.000
| 1 | 1.2 | true | 7,269,669 | 0 | 0 | 1 | 1 |
What is the difference between django.conf.settings and django.conf.global_settings ?
|
Logger Entity in App engine
| 7,342,091 | 1 | 2 | 157 | 0 |
python,google-app-engine,nosql,google-cloud-datastore
|
There are a few ways to do this:
Accumulate logs and write them in a single datastore put at the end of the request. This is the highest latency option, but only slightly - datastore puts are fairly fast. This solution also consumes the least resources of all the options.
Accumulate logs and enqueue a task queue task with them, which writes them to the datastore (or does whatever else you want with them). This is slightly faster (task queue enqueues tend to be quick), but it's slightly more complicated, and limited to 100kb of data (which hopefully shouldn't be a limitation).
Enqueue a pull task with the data, and have a regular push task or a backend consume the queue and batch-and-insert into the datastore. This is more complicated than option 2, but also more efficient.
Run a backend that accumulates and writes logs, and make URLFetch calls to it to store logs. The urlfetch handler can write the data to the backend's memory and return asynchronously, making this the fastest in terms of added user latency (less than 1ms for a urlfetch call)! This will require waiting for Python 2.7, though, since you'll need multi-threading to process the log entries asynchronously.
You might also want to take a look at the Prospective Search API, which may allow you to do some filtering and pre-processing on the log data.
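Option 1 can be sketched without touching any App Engine API at all (datastore_put stands in for a single db.put() of a log entity; all names are illustrative):

```python
class RequestLogger:
    """Accumulates log lines in memory during a request; flush() turns
    them into one write instead of one write per line."""

    def __init__(self):
        self.lines = []

    def log(self, msg):
        self.lines.append(msg)          # in-memory append: no latency cost

    def flush(self, datastore_put):
        # one round trip per request; datastore_put would be a db.put()
        # (option 1) or a taskqueue enqueue (option 2)
        if self.lines:
            datastore_put('\n'.join(self.lines))
            self.lines = []
```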
| 0 | 1 | 0 | 0 |
2011-09-01T17:24:00.000
| 2 | 1.2 | true | 7,274,049 | 0 | 0 | 1 | 2 |
Is it viable to have a logger entity in App Engine for writing logs? I'll have an app with ~1500 req/sec and am thinking about doing it with a task queue. Whenever I receive a request, I would create a task and put it in a queue to write something to a log entity (with a date and string properties).
I need this because I have to put statistics on the site, and I think that doing it this way and reading the logs with a backend later would solve the problem. It would rock if I had programmatic access to the App Engine logs (from logging), but since that's unavailable, I don't see any other way to do it.
Feedback is very welcome
|
Logger Entity in App engine
| 7,332,700 | 0 | 2 | 157 | 0 |
python,google-app-engine,nosql,google-cloud-datastore
|
How about keeping a memcache data structure of request info (recorded as requests arrive) and then running a cron job every 5 minutes (or faster) that crunches the stats on the last 5 minutes of requests from the memcache and records those stats in the datastore for that 5-minute interval? The same (or a different) cron job could then clear the memcache too, so that it doesn't get too big.
Then you can run big-picture analysis based on the aggregate of 5 minute interval stats, which might be more manageable than analyzing hours of 1500req/s data.
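The crunching step might look roughly like this (the record shape and the chosen stats are invented for illustration):

```python
from collections import Counter

def crunch(requests):
    """requests: iterable of (path, latency_ms) tuples pulled from memcache
    for the last interval; returns one aggregate row to store."""
    by_path = Counter(path for path, _ in requests)
    latencies = [ms for _, ms in requests]
    return {
        'count': len(latencies),
        'avg_latency_ms': sum(latencies) / len(latencies) if latencies else 0,
        'top_paths': by_path.most_common(3),
    }
```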
| 0 | 1 | 0 | 0 |
2011-09-01T17:24:00.000
| 2 | 0 | false | 7,274,049 | 0 | 0 | 1 | 2 |
Is it viable to have a logger entity in App Engine for writing logs? I'll have an app with ~1500 req/sec and am thinking about doing it with a task queue. Whenever I receive a request, I would create a task and put it in a queue to write something to a log entity (with a date and string properties).
I need this because I have to put statistics on the site, and I think that doing it this way and reading the logs with a backend later would solve the problem. It would rock if I had programmatic access to the App Engine logs (from logging), but since that's unavailable, I don't see any other way to do it.
Feedback is very welcome
|
Passing data from Django to C++ application and back
| 7,275,284 | 2 | 6 | 3,443 | 0 |
c++,python,django,architecture,quickfix
|
Well, you have to use some IPC method. One that you don't mention here is having the C++ process listen on a socket. That would add flexibility (at a slight speed cost) in that the processes don't even need to be on the same machine.
I've been doing a sort of similar thing, coming from C++ but wanting to write UX in python. My computational backend is C++, and I compile a python module and generate html with flask for the UX. My C++ and python live in the same process so I haven't addressed your core question in practice yet.
One piece of advice I would give is to keep all of your IPC stuff in C++, and write a small Python module in C++ using Boost.Python. This will let the Python process do 95% of the work in a pythony world, but give you the bit-level confidence I would want as a C++ dev for the data you are sending over to C++. Boost.Python has made bridging C++ and Python web frameworks a breeze for me.
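For the socket option mentioned above, a minimal Python-side sketch (the 4-byte length-prefix framing is an assumption for illustration, not anything QuickFix mandates):

```python
import socket
import struct

def _recv_exact(sock, n):
    """Reads exactly n bytes or raises if the peer closed early."""
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('peer closed mid-message')
        buf += chunk
    return buf

def send_order(sock, payload):
    # 4-byte big-endian length prefix, then the serialized order bytes
    sock.sendall(struct.pack('!I', len(payload)) + payload)

def recv_order(sock):
    (length,) = struct.unpack('!I', _recv_exact(sock, 4))
    return _recv_exact(sock, length)
```

The C++ side would read the same prefix and payload off its accepted socket; keeping the framing this simple makes the cross-language contract easy to verify.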
| 0 | 0 | 0 | 0 |
2011-09-01T18:59:00.000
| 4 | 1.2 | true | 7,275,109 | 0 | 0 | 1 | 1 |
We are creating a trading application where the backend is entirely in C++ (using the QuickFIX engine). We would like to build a web application in Django on top of this backend, where the user can place his orders. Both the Django (Python) and the C++ application will be running in their own processes and address spaces. What do you think would be the best way to pass orders/messages from Django to C++?
Also, this is a trading application, so latency is the biggest concern. So I do not want to put orders into a database from Django and then fetch them from the C++ application.
I'm currently looking at doing it via shared memory or some other IPC mechanism. Is this a good idea?
|
What exactly are Django Apps
| 7,276,084 | 0 | 1 | 538 | 0 |
python,django,django-apps
|
A portal = a Django project
An ads system, a photo gallery, a catalog of products = apps
| 0 | 0 | 0 | 0 |
2011-09-01T20:18:00.000
| 4 | 0 | false | 7,276,011 | 0 | 0 | 1 | 2 |
I want to switch from Rails to Django, to broaden my mind, and a question has bobbed up in my mind.
My Rails app is quite a mess, since my hobby-based development approach is a patch-and-glue one. I saw very early that Django distinguishes between a project and an app. According to their site, a project is made of many apps, and one app can be used for many projects.
This intrigued me, since that would make the lines between my site's areas clearer. I tried to find some more examples and information on it, but I couldn't answer my question, which is:
How big/small is such an app? Are they able/supposed to interact closely?
Is it, for example, smart to have one app to deal with users' profiles, and another app to deal with blog posts and comments from those users? (In my site, a user can have several blogs, with different profiles.) Or are they meant to be used otherwise?
|
What exactly are Django Apps
| 7,276,061 | 6 | 1 | 538 | 0 |
python,django,django-apps
|
A django App is a fancy name for a python package. Really, that's it. The only thing that would distinguish a django app from other python packages is that it makes sense for it to appear in the INSTALLED_APPS list in settings.py, because it contains things like templates, models, or other features that can be auto-discovered by other django features.
A good django app will do just one thing, do it well, and not be tightly coupled to any other app that might use it. A wide variety of apps are provided with django in the contrib namespace that follow this convention.
In your example, a good way to devise apps is to have one for user profiles (or use one of the many existing profile apps), one app for blog posts (or one of the many that already do this), one app for comments, separate from blog posts (again, you can use an existing app for this), and finally, a very tiny app that ties the three together, since they don't and shouldn't depend on each other directly.
| 0 | 0 | 0 | 0 |
2011-09-01T20:18:00.000
| 4 | 1.2 | true | 7,276,011 | 0 | 0 | 1 | 2 |
I want to switch from Rails to Django, to broaden my mind, and a question has bobbed up in my mind.
My Rails app is quite a mess, since my hobby-based development approach is a patch-and-glue one. I saw very early that Django distinguishes between a project and an app. According to their site, a project is made of many apps, and one app can be used for many projects.
This intrigued me, since that would make the lines between my site's areas clearer. I tried to find some more examples and information on it, but I couldn't answer my question, which is:
How big/small is such an app? Are they able/supposed to interact closely?
Is it, for example, smart to have one app to deal with users' profiles, and another app to deal with blog posts and comments from those users? (In my site, a user can have several blogs, with different profiles.) Or are they meant to be used otherwise?
|
Memory model for apache/modwsgi application in python?
| 7,293,404 | 0 | 4 | 193 | 1 |
python,apache,memory-management,mod-wsgi
|
All Python globals are created when the module is imported. When the module is re-imported, the same globals are used.
Python web servers typically do not use threading, but pre-forked processes. Thus there are no threading issues with Apache.
The lifecycle of Python processes under Apache depends on configuration. Apache has settings for how many child processes are spawned, kept in reserve, and killed. This means that you can use globals in Python processes for caching (an in-process cache), but the process may terminate after any request, so you cannot put any persistent data in the globals. The process does not necessarily need to terminate, though, and in this regard Python is much more efficient than PHP (the source code is not parsed for every request, but you need to have the server in reload mode to pick up source code changes during development).
Since globals are per-process and there can be N processes, the processes share "web server global" state using mechanisms like memcached.
Usually Python globals only contain:
Configuration variables set during process initialization
Cached data (session/user neutral)
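As a tiny illustration of the in-process cache idea (all names here are invented), a module-level dict survives across requests handled by the same worker process, but each Apache child has its own copy and it vanishes when the process is recycled:

```python
# Module-level global: one copy per worker process.
_cache = {}

def get_expensive_value(key, compute):
    # computed once per process, then reused by later requests
    if key not in _cache:
        _cache[key] = compute()
    return _cache[key]

calls = []
value = get_expensive_value("answer", lambda: calls.append(1) or 42)
value = get_expensive_value("answer", lambda: calls.append(1) or 42)
print(value, len(calls))  # 42 1 -> the second call hit the cache
```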
| 0 | 0 | 0 | 0 |
2011-09-03T13:09:00.000
| 2 | 0 | false | 7,293,290 | 0 | 0 | 1 | 1 |
In a regular application (like on Windows), when objects/variables are created on a global level it is available to the entire program during the entire duration the program is running.
In a web application written in PHP for instance, all variables/objects are destroyed at the end of the script so everything has to be written to the database.
a) So what about python running under apache/modwsgi? How does that work in regards to the memory?
b) How do you create objects that persist between web page requests and how do you ensure there isn't threading issues in apache/modwsgi?
|
Session database table cleanup
| 70,458,388 | 0 | 44 | 26,514 | 0 |
python,django,django-sessions
|
I know this post is old but I tried this command/attribute and it worked for me.
In the 'base.html' file, I inserted:
{{ request.session.clear_expired }}
This clears expired records from the django_session table when the user clicks on any link in the template after the session expires.
Even so, it is necessary to create a routine that clears expired records on a longer schedule, since this only runs when a user clicks a link; it will not catch sessions left behind when a user closes the browser with sessions still open.
I used Django 3.2.4
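A sketch of such a cleanup routine, run here against a bare sqlite3 stand-in for the django_session table (columns simplified for illustration; recent Django versions also ship a clearsessions management command that does this for you):

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE django_session (session_key TEXT, expire_date TEXT)")

# one stale session and one still-valid session (ISO timestamps compare as text)
now = datetime.now()
rows = [("stale", (now - timedelta(days=1)).isoformat()),
        ("live", (now + timedelta(days=1)).isoformat())]
conn.executemany("INSERT INTO django_session VALUES (?, ?)", rows)

# the periodic cleanup: drop every row whose expiry is in the past
conn.execute("DELETE FROM django_session WHERE expire_date < ?", (now.isoformat(),))
remaining = [r[0] for r in conn.execute("SELECT session_key FROM django_session")]
print(remaining)  # ['live']
```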
| 0 | 0 | 0 | 0 |
2011-09-03T22:10:00.000
| 5 | 0 | false | 7,296,159 | 0 | 0 | 1 | 1 |
Does this table need to be purged or is it taken care of automatically by Django?
|
How to use Python's pip to download and keep the zipped files for a package?
| 62,803,853 | 0 | 144 | 173,682 | 0 |
python,download,pip,zip
|
On RHEL I would prefer: pip download package==version --no-deps --no-binary=:all:
| 0 | 0 | 0 | 0 |
2011-09-04T15:42:00.000
| 8 | 0 | false | 7,300,321 | 1 | 0 | 1 | 3 |
If I want to use the pip command to download a package (and its dependencies), but keep all of the zipped files that get downloaded (say, django-socialregistration.tar.gz) - is there a way to do that?
I've tried various command-line options, but it always seems to unpack and delete the zipfile - or it gets the zipfile, but only for the original package, not the dependencies.
|
How to use Python's pip to download and keep the zipped files for a package?
| 48,927,464 | 9 | 144 | 173,682 | 0 |
python,download,pip,zip
|
Use pip download <package1 package2 package n> to download all the packages including dependencies
Use pip install --no-index --find-links . <package1 package2 package n> to install all the packages including dependencies.
The install step reads all the files from the current working directory and will not download anything.
| 0 | 0 | 0 | 0 |
2011-09-04T15:42:00.000
| 8 | 1 | false | 7,300,321 | 1 | 0 | 1 | 3 |
If I want to use the pip command to download a package (and its dependencies), but keep all of the zipped files that get downloaded (say, django-socialregistration.tar.gz) - is there a way to do that?
I've tried various command-line options, but it always seems to unpack and delete the zipfile - or it gets the zipfile, but only for the original package, not the dependencies.
|
How to use Python's pip to download and keep the zipped files for a package?
| 60,539,440 | 3 | 144 | 173,682 | 0 |
python,download,pip,zip
|
Installing python packages offline
For Windows users:
To download into a folder, open your cmd and follow this:
cd <*the file path where you want to save it*>
pip download <*package name*>
The package and its dependencies will be downloaded into the current working directory.
To install from the current working directory:
Set the folder where you downloaded the files as the cwd, then follow this:
pip install <*the package name which was downloaded as .whl*> --no-index --find-links <*the file location where the files were downloaded*>
This will search for dependencies in that location.
| 0 | 0 | 0 | 0 |
2011-09-04T15:42:00.000
| 8 | 0.07486 | false | 7,300,321 | 1 | 0 | 1 | 3 |
If I want to use the pip command to download a package (and its dependencies), but keep all of the zipped files that get downloaded (say, django-socialregistration.tar.gz) - is there a way to do that?
I've tried various command-line options, but it always seems to unpack and delete the zipfile - or it gets the zipfile, but only for the original package, not the dependencies.
|
What kind of tests should one write in Django
| 7,301,849 | 2 | 4 | 288 | 0 |
python,django,unit-testing,testing,django-testing
|
Yes, Django unit tests, using the Client feature, are capable of testing whether or not your routes and forms are correct.
If you want full-blown behavior-driven testing from the outside, you can use a BDD framework like Zombie.
As for which tests you need, Django author Jacob Kaplan-Moss answered the question succinctly: "All of them."
My general testing philosophy is to work until something stupid happens, then write a test to make sure that stupid thing never happens again.
| 0 | 0 | 0 | 1 |
2011-09-04T19:41:00.000
| 1 | 1.2 | true | 7,301,681 | 0 | 0 | 1 | 1 |
Let's say I have a Django app. Users can sign up, get an activation mail, activate their accounts and log in. After logging in, users can create, update and delete objects through a custom Form which uses the Manager to handle the Model.
What should I be testing here: should I use the request framework to make requests and test the whole chain via the Views and Forms, or should I be writing unit tests to test the Manager and the Model?
When testing the whole chain, I get to see that the URLs are configured properly, the Views work as expected, the Form cleans the data properly, and it would also test the Models and Managers. It seems that the Django test framework is more geared toward unit testing than this kind of test. (Is this something that should be tested with Twill and Selenium?)
When writing unit tests, I would get to test the Manager and the Models, but the URLs and the Forms don't really come into play, do they?!
A really basic question but I'd like to get some of the fundamentals correct.
Thank you everyone.
|
Is there an equivalent of python's pulldom for java?
| 7,309,549 | 1 | 0 | 180 | 0 |
java,python,xml,sax,stax
|
The Java approach seems to be that you get either a streaming parser or a DOM parser, but not both, while Python allows mixing the two.
| 0 | 0 | 1 | 0 |
2011-09-05T11:30:00.000
| 1 | 0.197375 | false | 7,307,423 | 0 | 0 | 1 | 1 |
StAX seems to be a pulling parser (like SAX but without inversion of control). But I didn't find the equivalent of python's expandNode which is what I was interested in in the first place, I don't care about inversion of control.
For those who don't know pulldom, it's a S(t)AX parser where at any point you can obtain the current subtree as a DOM Node.
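For readers unfamiliar with it, the pulldom pattern described above looks like this (standard library only): stream events like SAX, then call expandNode() at any point to turn the current element into a full DOM subtree.

```python
from xml.dom.pulldom import parseString, START_ELEMENT

xml = "<orders><order id='1'><qty>5</qty></order><order id='2'/></orders>"
doc = parseString(xml)
ids = []
for event, node in doc:
    if event == START_ELEMENT and node.tagName == "order":
        doc.expandNode(node)  # node is now a full DOM Element with children
        ids.append(node.getAttribute("id"))
print(ids)  # ['1', '2']
```

Because expandNode() consumes the element's nested events, the loop only sees each top-level order once.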
|
Haystack in INSTALLED_APPS results in Error: cannot import name openProc
| 18,619,990 | 3 | 16 | 4,169 | 0 |
python,django-haystack
|
I had run pip install haystack and got this error; then I ran pip install django-haystack and the problem was solved!
| 0 | 0 | 0 | 0 |
2011-09-05T20:14:00.000
| 5 | 0.119427 | false | 7,312,374 | 1 | 0 | 1 | 4 |
I am pretty stuck right now. I have a Django project that's been working great until I tried to add Haystack/Whoosh for search. I've had this same stack in other projects working fine.
Whenever I have "haystack" in my settings.INSTALLED_APPS and I try manage.py runserver or manage.py shell I get 'Error: cannot import name openProc'
I thought that this might be a dependency of Haystack that didn't get installed correctly, so I removed Haystack from site-packages and reinstalled, but the same thing keeps happening. Googling openProc and related keywords has turned up nothing.
I'm hoping that someone else has run into this error, or at least that now there will be something in Google that might have an answer! I know these cannot import name <something> errors can be tricky, but this one has me especially stumped because it's related to an external package.
|
Haystack in INSTALLED_APPS results in Error: cannot import name openProc
| 7,312,455 | 17 | 16 | 4,169 | 0 |
python,django-haystack
|
It turns out I was able to get it working by installing the latest source code using pip install git+git://github.com/toastdriven/django-haystack.git
Something was wrong with the version I got doing pip install haystack
| 0 | 0 | 0 | 0 |
2011-09-05T20:14:00.000
| 5 | 1 | false | 7,312,374 | 1 | 0 | 1 | 4 |
I am pretty stuck right now. I have a Django project that's been working great until I tried to add Haystack/Whoosh for search. I've had this same stack in other projects working fine.
Whenever I have "haystack" in my settings.INSTALLED_APPS and I try manage.py runserver or manage.py shell I get 'Error: cannot import name openProc'
I thought that this might be a dependency of Haystack that didn't get installed correctly, so I removed Haystack from site-packages and reinstalled, but the same thing keeps happening. Googling openProc and related keywords has turned up nothing.
I'm hoping that someone else has run into this error, or at least that now there will be something in Google that might have an answer! I know these cannot import name <something> errors can be tricky, but this one has me especially stumped because it's related to an external package.
|
Haystack in INSTALLED_APPS results in Error: cannot import name openProc
| 10,380,456 | 0 | 16 | 4,169 | 0 |
python,django-haystack
|
Installing a past version with pip install haystack==0.10 worked for me, but I think when I have time I'm going to try to migrate to Haystack 2.0.
| 0 | 0 | 0 | 0 |
2011-09-05T20:14:00.000
| 5 | 0 | false | 7,312,374 | 1 | 0 | 1 | 4 |
I am pretty stuck right now. I have a Django project that's been working great until I tried to add Haystack/Whoosh for search. I've had this same stack in other projects working fine.
Whenever I have "haystack" in my settings.INSTALLED_APPS and I try manage.py runserver or manage.py shell I get 'Error: cannot import name openProc'
I thought that this might be a dependency of Haystack that didn't get installed correctly, so I removed Haystack from site-packages and reinstalled, but the same thing keeps happening. Googling openProc and related keywords has turned up nothing.
I'm hoping that someone else has run into this error, or at least that now there will be something in Google that might have an answer! I know these cannot import name <something> errors can be tricky, but this one has me especially stumped because it's related to an external package.
|
Haystack in INSTALLED_APPS results in Error: cannot import name openProc
| 22,312,453 | 1 | 16 | 4,169 | 0 |
python,django-haystack
|
I had this issue as well, and noticed it was because I had the old config vars in settings.py - namely HAYSTACK_SITECONF. Once it was removed, the error went away.
| 0 | 0 | 0 | 0 |
2011-09-05T20:14:00.000
| 5 | 0.039979 | false | 7,312,374 | 1 | 0 | 1 | 4 |
I am pretty stuck right now. I have a Django project that's been working great until I tried to add Haystack/Whoosh for search. I've had this same stack in other projects working fine.
Whenever I have "haystack" in my settings.INSTALLED_APPS and I try manage.py runserver or manage.py shell I get 'Error: cannot import name openProc'
I thought that this might be a dependency of Haystack that didn't get installed correctly, so I removed Haystack from site-packages and reinstalled, but the same thing keeps happening. Googling openProc and related keywords has turned up nothing.
I'm hoping that someone else has run into this error, or at least that now there will be something in Google that might have an answer! I know these cannot import name <something> errors can be tricky, but this one has me especially stumped because it's related to an external package.
|
Pythonic thread-safe object
| 7,325,389 | 5 | 3 | 1,682 | 0 |
python,multithreading,object,global
|
If you don't want to lock, then either don't use globals, or use thread-local storage (in a webapp, you can be fairly sure that a request won't cross a thread boundary). If global state can be avoided, it should be avoided. This makes multi-threading much easier to implement and debug.
I also disagree that passing objects around makes the application harder to maintain; it's usually the other way around: global state hides dependencies in addition to requiring careful synchronisation.
There are also lock-free approaches, like STM, but they are probably overkill for a web application.
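As a small illustration of the thread-local option (names invented): each worker thread sees its own attribute on the shared name, so "global" per-request data needs no locks.

```python
import threading

state = threading.local()  # one private namespace per thread
results = {}

def handle_request(name, uri):
    state.uri = uri            # visible only to this thread
    results[name] = state.uri  # no other thread can have overwritten it

threads = [threading.Thread(target=handle_request, args=(n, u))
           for n, u in [("a", "/home"), ("b", "/about")]]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # {'a': '/home', 'b': '/about'}
```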
| 0 | 0 | 0 | 0 |
2011-09-06T15:24:00.000
| 1 | 1.2 | true | 7,322,299 | 1 | 0 | 1 | 1 |
After reading a lot on this subject and discussing on IRC, the response seems to be: stay away from threads. Sorry for repeating this question, my intention is to go deeper in the subject by not accepting the "threading is evil" answer, with the hope to find a common solution.
EDIT: Just Say No to the combined evils of locking, deadlocks, lock granularity, livelocks, nondeterminism and race conditions. --Guido van Rossum
I'm developing a Python web application, and I'd like to create a global object for each user which is only accessible by the current user. (for example the requested URI)
The suggested way is to pass the object around, which IMO makes the application harder to maintain, and not beautiful code if I need the same value in different places (some might be 3rd party plugins).
I see that many popular frameworks (Django, CherryPy, Flask) use Python threading locks to solve the issue.
If all these frameworks go against the Pythonic way and feel the need to create a globally accessible object, it means that the community needs this sort of thing. And me too.
Is the "best" way to pass objects around?
Is the only alternative solution to use the "evil" threading locks?
Would it be more Pythonic to store this information in a database or memcached?
Thanks in advance!
|
How do you disable javascript in python's mechanize?
| 9,391,655 | 0 | 5 | 663 | 0 |
javascript,python,html,mechanize
|
Mechanize doesn't deal with Javascript at all; it only takes care of HTML, so there is no Javascript running under Mechanize to disable. If you need Javascript behaviour, you will have to find some other solution.
| 0 | 0 | 0 | 1 |
2011-09-06T23:07:00.000
| 1 | 0 | false | 7,327,182 | 0 | 0 | 1 | 1 |
I don't know why, but I cant find it anywhere. All i need is the command to disable javascript in python's mechanize.
|
Referencing an external library in a Python appengine project, using Pydev/Eclipse
| 7,329,481 | 4 | 5 | 979 | 0 |
python,google-app-engine,pydev
|
The dev_appserver and the production environment don't have any concept of projects or libraries, so you need to structure your app so that all the necessary libraries are under the application's root. The easiest way to do this, usually, is to symlink them in as subdirectories, or worst-case, to copy them (or, using version control, make them sub-repositories).
How that maps to operations in your IDE depends on the IDE, but in general, it's probably easiest to get the app structured as you need it on disk, and work backwards from that to get your IDE set up how you like it.
| 0 | 1 | 0 | 0 |
2011-09-07T04:46:00.000
| 1 | 1.2 | true | 7,328,959 | 0 | 0 | 1 | 1 |
It's been a couple of months since I started developing in Python, coming from a C# and Java background.
I'm currently working on 2 different python/appengine applications, and as often happens in those cases, both application share common code - so I would like to refactor and move the common/generic code into a shared place.
In either Java or C# I'd just create a new library project, move the code into the new project and add a reference to the library from the main projects.
I tried the same in Python, but I am unable to make it work.
I am using Eclipse with Pydev plugin.
I've created a new Pydev project, moved the code, and attempted to:
reference the library project from the main projects (using Project Properties -> Project References)
add the library src folder into the main projects (in this case I get an error - I presume it's not possible to leave the project boundaries when adding an existing source folder)
add as external library (pretty much the same as google libraries are defined, using Properties -> External libraries)
Import as link (from Import -> File System and enabling "Create links in workspace")
In all cases I am able to reference the library code while developing, but when I start debugging, the appengine development server throws an exception because it can't find what I have moved into a separate library project.
Of course I've searched for a solution a lot, but it looks like nobody has experienced the same problem - or maybe nobody doesn't need to do the same :)
The closest solution I've been able to find is to add an ant script to zip the library sources and copy in the target project - but this way debugging is a pain, as I am unable to step into the library code.
Any suggestion?
Needless to say, the proposed solution must take into account that the library code has to be included in the upload process to appengine...
Thanks
|
What makes other languages faster than Java in terms of Rapid Development?
| 7,332,862 | 0 | 0 | 970 | 0 |
java,php,.net,python,ruby
|
For rapid prototyping, the more dynamic the language, the better. Something like Excel is good for rapid prototyping: you can have a formula and a graph within a dozen clicks.
However, in the long run you may need to migrate your system to something more enterprise friendly. This doesn't always mean you should start that way.
Even if you start in Java you may find you want to migrate some of your code to C for performance reasons.
| 0 | 0 | 0 | 1 |
2011-09-07T11:01:00.000
| 4 | 0 | false | 7,332,758 | 0 | 0 | 1 | 1 |
Many people say developing in Python, Ruby, PHP ...etc is much faster than Java.
Question is, why? In terms of coding, IDEs, available libraries... etc. Or is the speed in making the first prototype only?
I'm interested in answers from people who worked long time on Java and long time on other languages.
Note: I have developed for .Net before Java and yes it was faster to make some apps, but on the long run (large web projects) it will become like Java.
|
Separate Django sites with a common authentication/registration backend
| 7,335,351 | 2 | 6 | 227 | 0 |
python,django,django-sites
|
For this case, you would have 2 settings files called settings_A.py and settings_B.py, each starting with from settings import *
settings_A.py would have SITE=1 and settings_B.py would have SITE=2. You can then select these files in your apache configs by setting the environment variable for each virtual host: DJANGO_SETTINGS_MODULE=settings_A and DJANGO_SETTINGS_MODULE=settings_B
Then you set up the contrib.sites app with your 2 domain names bound to the appropriate site ID, and your flatpages will be able to be bound to either or both sites.
Lastly, in settings_A.py and settings_B.py you either specify separate root urlconfs, or you use settings.SITE in your urlconfs to enable and disable groups of urls for each site.
Hope this helps
EDIT: To clarify: as long as you use the same database and SECRET_KEY between both sites, you can use the same user accounts between them too. If the sites are of the form example.com and private.example.com then setting SESSION_COOKIE_DOMAIN to .example.com will allow the session to carry over between both sites.
| 0 | 0 | 0 | 0 |
2011-09-07T13:16:00.000
| 3 | 0.132549 | false | 7,334,498 | 0 | 0 | 1 | 1 |
I need split my current Django application into two sites.
Site A will contain the public facing site which would contain all the static pages and the registration system.
The other site, Site B, is the site for registered users. They can also log in to the application through Site B.
If I'm not mistaken, I can use the django.contrib.sites framework to accomplish the task of having multiple sites, but can I have a common authentication/registration backend?
How can I accomplish this?
Thanks.
|
Testing REST API with database backend
| 7,355,552 | 8 | 11 | 7,898 | 0 |
python,unit-testing,testing,rest,flask
|
There are 2 standard ways of approaching a test that depends on something else (object, function call, etc).
You can use mocks in place of the objects the code you are testing depends on.
You can load a fixture or do the creation/call in the test setup.
Some people like "classical" unit tests where only the "unit" of code is tested. In these cases you typically use mocks and stubs to replace the dependencies.
Others like more integrative tests where most or all of the call stack is tested. In these cases you use a fixture, or possibly even do the creations/calls in a setup function.
Generally you would not make one test depend on another. All tests should:
clean up after themselves
be runnable in isolation
be runnable as part of a suite
be consistent and repeatable
If you make one test depend on another, they cannot be run in isolation and you are also forcing an order on the test run. Enforcing order in tests isn't good; in fact many people feel you should randomize the order in which your tests are run.
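Following the setup/teardown points above, here is a self-contained sketch; the API calls are replaced with in-memory stand-ins (all names invented), where a real suite would go through the Flask test client instead:

```python
import unittest

# In-memory stand-ins for the real REST API (illustrative only).
_authors = {}
_posts = []

def create_author(name):
    author_id = len(_authors) + 1
    _authors[author_id] = name
    return author_id

def create_post(author_id, title):
    post = {"author": author_id, "title": title}
    _posts.append(post)
    return post

class BlogPostTests(unittest.TestCase):
    def setUp(self):
        # the prerequisite resource is created fresh for every test,
        # instead of depending on another test having run first
        self.author_id = create_author("alice")

    def tearDown(self):
        # clean up so each test is repeatable and order-independent
        _authors.clear()
        del _posts[:]

    def test_create_post(self):
        post = create_post(self.author_id, "hello")
        self.assertEqual(post["author"], self.author_id)
```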
| 0 | 0 | 0 | 1 |
2011-09-07T15:03:00.000
| 2 | 1.2 | true | 7,336,101 | 0 | 0 | 1 | 1 |
I want to know the best/different ways to test a REST API which uses a database backend. I've developed my API with Flask in Python and want to use unittest or nose.
But my problem, is that some resources require another resource to create them in the first place. Is there a way to say that to test the creation of a blog post requires that another test involving the creation of the author was successful?
|
Python: Conditional variables based on whether nosetest is running
| 7,341,671 | 10 | 8 | 1,720 | 0 |
python,nose,peewee
|
Perhaps examining sys.argv[0] to see what command is running?
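A sketch combining that suggestion with the sys.modules check from the question (the database file names are illustrative):

```python
import os
import sys

def running_under_nose():
    # either the runner script is nosetests, or nose has been imported
    runner = os.path.basename(sys.argv[0])
    return runner.startswith("nosetests") or "nose" in sys.modules

# pick the database before the peewee models are defined
DATABASE_NAME = "test.db" if running_under_nose() else "live.db"
print(DATABASE_NAME)
```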
| 0 | 0 | 0 | 0 |
2011-09-07T21:57:00.000
| 2 | 1.2 | true | 7,341,005 | 1 | 0 | 1 | 1 |
I'm running nosetests, which has a setup function that needs to load a different database than the production database. The ORM I'm using is peewee, which requires that the database for a model is set in its definition.
So I need to set a conditional variable but I don't know what condition to use in order to check if nosetest is running the file.
I read on Stack Overflow that you can check for nose in sys.modules but I was wondering if there is a more exact way to check if nose is running.
|
Django referrer question
| 7,346,457 | 5 | 3 | 2,659 | 0 |
python,regex,django,referrer
|
In the META dictionary of the request there is an HTTP_REFERER value; I think that can help you.
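To sketch how the captured group reaches the view, here is a Django-free stand-in; in the real app the (\w+) group from the second URL pattern is passed to index() as a positional argument:

```python
import re

urlpatterns = [
    re.compile(r"^$"),        # bare URL: referrer falls back to the default
    re.compile(r"^(\w+)/$"),  # referrer captured from the URL
]

def index(request, referrer="google"):
    # in the real view this dict would go to
    # render_to_response('index.html', {'referrer': referrer}, ...)
    return {"referrer": referrer}

def resolve(path):
    for pattern in urlpatterns:
        match = pattern.match(path)
        if match:
            return index(None, *match.groups())

print(resolve(""))          # {'referrer': 'google'}
print(resolve("partner/"))  # {'referrer': 'partner'}
```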
| 0 | 0 | 0 | 0 |
2011-09-08T10:09:00.000
| 1 | 1.2 | true | 7,346,342 | 0 | 0 | 1 | 1 |
I want to know who refers my webpage, so in my models I have:
referrer = models.CharField(max_length=30, default='google',
    verbose_name=_('referrer'), help_text=_('Referrer'))
These are the URLs for my page:
url(r'^$', app_views.index, name='index_default')
url(r'^(\w+)/$', app_views.index, name='index_default2')
I want to send my referrer parameter to the Flash embedded within my HTML along with flashvars:
<param name="FlashVars" value="referrer={{ referrer }}" />
How should the view look in order to catch the referrer matched by the regular expression? Something like:
def index(request):
    return render_to_response('index.html',
        {
            'referrer': referrer,
        },
        context_instance=RequestContext(request))
|
Nosetests & Combined Coverage
| 24,001,681 | 1 | 11 | 3,218 | 0 |
python,unit-testing,nose
|
nosetests --with-coverage -i project1/*.py -i project2/*.py
| 0 | 0 | 0 | 1 |
2011-09-08T17:45:00.000
| 2 | 0.099668 | false | 7,352,319 | 0 | 0 | 1 | 1 |
I have many projects that I'm running programmatically:
nosetest --with-coverage --cover-html-dir=happy-sauce/
The problem is that for each project, the coverage module overwrites the index.html file, instead of appending to it. Is there a way to generate a combined super-index.html file, that contains the results for all my projects?
Thanks.
|
Mocking a Django Queryset in order to test a function that takes a queryset
| 7,363,032 | -20 | 38 | 24,744 | 0 |
python,django,unit-testing,django-queryset,django-testing
|
Not that I know of, but why not use an actual queryset? The test framework is all set up to allow you to create sample data within your test, and the database is re-created on every test, so there doesn't seem to be any reason not to use the real thing.
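That said, if you do want to avoid the database, one lightweight option is a fake that implements only the QuerySet methods your utility actually calls (a sketch, not a drop-in for the full QuerySet API):

```python
from collections import namedtuple

Row = namedtuple("Row", "name age")  # stands in for a model instance

class FakeQuerySet(list):
    def all(self):
        return self

    def filter(self, **lookups):
        # supports exact-match field lookups only
        return FakeQuerySet(
            row for row in self
            if all(getattr(row, field) == value
                   for field, value in lookups.items())
        )

qs = FakeQuerySet([Row("ann", 30), Row("bob", 25)])
print([r.name for r in qs.filter(age=30)])  # ['ann']
```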
| 0 | 0 | 0 | 0 |
2011-09-09T14:10:00.000
| 9 | 1.2 | true | 7,362,952 | 0 | 0 | 1 | 1 |
I have a utility function in my Django project; it takes a queryset, gets some data from it and returns a result. I'd like to write some tests for this function. Is there any way to 'mock' a QuerySet? I'd like to create an object that doesn't touch the database, which I can provide with a list of values to use (i.e. some fake rows), and which then acts just like a queryset, allowing field lookups/filter/get/all etc.
Does anything like this exist already?
|
Encoding issue between Django and client
| 7,365,053 | 0 | 0 | 426 | 0 |
python,django,character-encoding
|
Check the file encoding for your Python files and make sure they're UTF-8. Also make sure that the client side uses UTF-8.
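A quick sanity check of the escape form from the question: both spellings name the same string. The usual fix is to save the source file as UTF-8 (declaring # -*- coding: utf-8 -*- at the top in Python 2) and to serve the response with a UTF-8 charset:

```python
# The escape form avoids any dependence on the source file's encoding.
s = u"Soci\u00e9t\u00e9"
print(s)                    # Société
print(s.encode("utf-8"))    # what the client should receive, with a UTF-8 charset header
```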
| 0 | 0 | 0 | 0 |
2011-09-09T17:02:00.000
| 2 | 0 | false | 7,365,035 | 0 | 0 | 1 | 1 |
I'm trying to send latin characters (é, è...) to the client side using Django, and I can't get it to work. In Django I tried to write latin characters directly in my python files, but I got errors. I then used unicode escapes (writing 'Soci\u00E9t\u00E9' for 'Société'), but when sending it to the client side I get the raw unicode escape characters.
Can anybody help?
Julien
|
Django-lettuce: where to keep language file
| 7,370,790 | 0 | 0 | 163 | 0 |
python,django,bdd,lettuce
|
I do not know of lettuce, but I am guessing that you can include your file languages.py anywhere python can find it.
I do this by appending the directory containing my files to sys.path. You could set PYTHONPATH instead. A directory containing python code should have an __init__.py file.
I am not sure if you already tried the above. If you did, you can ignore my answer.
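A sketch of that approach (the features/ layout is hypothetical): keep languages.py in your own project directory and make that directory importable, rather than editing the copy shipped with lettuce.

```python
import os
import sys

# hypothetical layout: <project>/features/languages.py
project_dir = os.path.join(os.getcwd(), "features")
if project_dir not in sys.path:
    # prepend so `import languages` finds your file first
    sys.path.insert(0, project_dir)
print(sys.path[0])
```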
| 0 | 0 | 0 | 0 |
2011-09-10T09:09:00.000
| 1 | 0 | false | 7,370,757 | 0 | 0 | 1 | 1 |
I've added lettuce to my django project. Where should I keep the languages.py file within my django project, instead of modifying lettuce itself?
Sultan
|
Python/Django - Web Application Performance
| 7,374,796 | 1 | 0 | 1,518 | 0 |
php,python,django,performance
|
You can think of PostgreSQL as comparable to Oracle. From what I've found on the internet (I am also a beginner), here is the order of DBs from smallest projects to biggest:
SQLite
MySql
PostgreSQL
Oracle
| 0 | 0 | 0 | 1 |
2011-09-10T17:18:00.000
| 3 | 0.066568 | false | 7,373,299 | 0 | 0 | 1 | 3 |
I'm currently working on a social web application using python/django. Recently I heard about PHP's weaknesses on large scale projects, and how HipHop for PHP helped Facebook to overcome this barrier. Considering a python social web application with a lot of utilization, could you please tell me if a similar custom tool could help this python application? In what way? I mean, which portion (or layer) of the application would need to be written, for example, in C++? I know that it's a general question, but someone with relevant experience I think could help me.
Thank you in advance.
|
Python/Django - Web Application Performance
| 7,376,098 | 1 | 0 | 1,518 | 0 |
php,python,django,performance
|
Don't try to scale too early! Of course you can try to be prepared, but most times you cannot really know where you need to scale, and you will therefore spend a lot of time and money in the wrong direction before you recognize it.
Start your webapp and see how it goes (agreeing with Spacedman here).
Though from my experience, the language of your web app is unlikely to be the bottleneck. Most of the time it starts with the database. Many times it is simply a wrong line of code (maybe just a for loop), and many other times it is something like forgetting to use memcached or task management. As said, find out where it is. In most cases it's better to check everything else before blaming the language's speed (since it is most likely not the problem!).
| 0 | 0 | 0 | 1 |
2011-09-10T17:18:00.000
| 3 | 0.066568 | false | 7,373,299 | 0 | 0 | 1 | 3 |
I'm currently working on a social web application using python/django. Recently I heard about the PHP's weakness on large scale projects, and how hippo-php helped Facebook to overcome this barrier. Considering a python social web application with lot of utilization, could you please tell me if a similar custom tool could help this python application? In what way? I mean which portion (or layer) of application need to be written for example in c++? I know that it's a general question but someone with relevant experience I think that could help me.
Thank you in advance.
|
Python/Django - Web Application Performance
| 7,373,467 | 3 | 0 | 1,518 | 0 |
php,python,django,performance
|
The portion to rewrite in C++ is the portion that is too slow in Python. You need to figure out where your bottleneck is, which you can do by load testing or just waiting until users complain.
Of course, even rewriting in C++ might not help. Your bottleneck might be the database (move to a separate, faster DB server or use sharding), or disk, or memory, or anything. Find the bottleneck, work out how to eliminate it, implement. With 'test' in between all those phases. General advice.
There's normally no magic bullet, and I imagine Facebook did a LOT of testing and analysis of their bottlenecks before they tried anything.
| 0 | 0 | 0 | 1 |
2011-09-10T17:18:00.000
| 3 | 1.2 | true | 7,373,299 | 0 | 0 | 1 | 3 |
I'm currently working on a social web application using python/django. Recently I heard about the PHP's weakness on large scale projects, and how hippo-php helped Facebook to overcome this barrier. Considering a python social web application with lot of utilization, could you please tell me if a similar custom tool could help this python application? In what way? I mean which portion (or layer) of application need to be written for example in c++? I know that it's a general question but someone with relevant experience I think that could help me.
Thank you in advance.
|
Low memory and fastest querying database for a Python project
| 7,377,444 | 1 | 3 | 830 | 1 |
python,database,nosql,rdbms
|
I would recommend PostgreSQL, because it does what you want, can scale, is fast, is rather easy to work with, and is stable.
It is exceptionally fast at the example queries given, and could be even faster with document querying.
| 0 | 0 | 0 | 0 |
2011-09-10T23:49:00.000
| 2 | 1.2 | true | 7,375,415 | 0 | 0 | 1 | 1 |
I'm migrating a GAE/Java app to Python (non-GAE) due new pricing, so I'm getting a little server and I would like to find a database that fits the following requirements:
Low memory usage (or to be tuneable or predictible)
Fastest querying capability for simple document/tree-like data identified by key (I don't care about performance on writing and I assume it will have indexes)
Bindings with Pypy 1.6 compatibility (or Python 2.7 at least)
My data goes something like this:
Id: short key string
Title
Creators: an array of another data structure which has an id - used as key -, a name, a site address, etc.
Tags: array of tags. Each of them can has multiple parent tags, a name, an id too, etc.
License: a data structure which describes its license (CC, GPL, ... you say it) with name, associated URL, etc.
Addition time: when it was add in our site.
Translations: pointers to other entries that are translations of one creation.
My queries are very simple. Usual cases are:
Filter by tag ordered by addition time.
Select a few (pagination) ordered by addition time.
(Maybe, not done already) filter by creator.
(Not done but planned) some autocomplete features in forms, so I'm going to need search if some fields contains a substring ('LIKE' queries).
The data volume is not big. Right now I have about 50MB of data but I'm planning to have a huge dataset around 10GB.
Also, I want to rebuild this from scratch, so I'm open to any option. What database do you think can meet my requirements?
Edit: I want to do some benchmarks around different options and share the results. I have selected, so far, MongoDB, PostgreSQL, MySQL, Drizzle, Riak and Kyoto Cabinet.
|
how to make web framework based on Python like django?
| 7,378,820 | 1 | 14 | 12,556 | 0 |
python,django
|
I would also browse the documentation for Paste and read a bit from Ian Bicking. He lays out the conceptual blocks, so to speak, quite well and has blogged the lessons learned as it was developed.
The Web2py documentation is also worth a look.
But yes, as UKU said: WSGI is a modern requirement.
| 0 | 0 | 0 | 0 |
2011-09-11T13:11:00.000
| 4 | 0.049958 | false | 7,378,398 | 0 | 0 | 1 | 1 |
I'm just wondering what knowledge or techniques are needed to make a web framework like django.
so the webframework is able to serve as a cloud computing (a website can be scaled horizontally by sending some stuffs that must be solved to other server.) when needed, and can be used to build a website fast like django if a developer want to build a just simple website.
Sorry. my English is very awkward cause I'm an Korean.
just give me some approaches or instructions about what techniques are needed to build a web framework or what I have to do or learn.
Thanks a lot.
|
Pyaudio for external interfaces (Mac OSX)
| 8,441,627 | 1 | 3 | 629 | 0 |
macos,audio,python
|
A couple of shots in the dark - verify that you're opening the device correctly. It looks like the Fireface can be either half or full duplex (preference-pane configurable?), and pyaudio apparently cares (i.e. you can't specify an output if you specify an input, or vice versa.)
Another thing to check out is the audio routing - under /Applications/Utilities/Audio Midi Setup.app, depending on how you have the signals coming in you might be connecting to the wrong one and not realizing it.
| 0 | 0 | 0 | 1 |
2011-09-06T09:50:00.000
| 1 | 0.197375 | false | 7,379,439 | 0 | 0 | 1 | 1 |
Using Python and PyAudio, I can't seem to record sound to a wav file from an external audio interface (RME Fireface), but i am able to do so with the in built mic on my iMac. I set the default device to Fireface in System preferences, and when i run the code, the wav file is created but no sound comes out when i play it. The code is as given on the PyAudio webpage. Is there any way to rectify this?
|
In which cases can I reuse a django project for multiple applications?
| 7,396,202 | 1 | 2 | 95 | 0 |
python,django,web-applications
|
A website is usually a project. In that website, you may have multiple features (a blog, a wiki, etc.). Each of those features should be an application in the project.
| 0 | 0 | 0 | 0 |
2011-09-13T02:26:00.000
| 2 | 1.2 | true | 7,396,170 | 0 | 0 | 1 | 1 |
I just finished doing the tutorial django app. I now want to build my own app. Should I just create a new app within the tutorial project folder or should I create a new project folder with a new app?
I am unsure in which cases it makes sense to re-use a project and create multiple apps under that project vs. making new projects for each new app
|
Fastest/most efficient in App Engine, local file read or memcache hit?
| 7,400,621 | 6 | 5 | 478 | 0 |
python,performance,google-app-engine
|
If they are just a few kilobytes I would load them into instance memory; among the storage choices (memcache, datastore, blobstore and so on) on Google App Engine, the instance-memory option should be the fastest.
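A minimal sketch of the instance-memory option, assuming the assets can be loaded lazily (get_asset and the loader callable are illustrative names, not a GAE API):

```python
# Module-level dict: it lives as long as the instance does, so repeated
# requests handled by the same instance skip any storage round-trip.
_asset_cache = {}

def get_asset(name, loader):
    # loader is only called the first time this instance sees the name
    if name not in _asset_cache:
        _asset_cache[name] = loader(name)
    return _asset_cache[name]
```

Note that each App Engine instance has its own copy of this dict, which is fine for read-only assets.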
| 0 | 1 | 0 | 0 |
2011-09-13T09:58:00.000
| 1 | 1.2 | true | 7,399,965 | 0 | 0 | 1 | 1 |
I have a couple of smaller asset files (text templates typically 100 - a few K bytes) in my app that I'm considering caching using memcached. But does anyone here know if loading a local file or requesting it from memcache is the fastest/most resource efficient?
(I'll be using the Python version of App Engine)
|
django flatpages works with DEBUG=True, doesn't work with DEBUG=False
| 7,405,517 | 1 | 0 | 474 | 0 |
python,django,django-flatpages
|
When DEBUG is set to False, Django renders the 500.html template instead of the debug stack trace.
It might be that on an HTTP 404 (not found) exception it tries to render the 404.html template, and if that's not found it falls back to 500.html (internal error).
This is not a problem in itself, just a matter of configuration.
| 0 | 0 | 0 | 0 |
2011-09-13T10:12:00.000
| 1 | 1.2 | true | 7,400,156 | 0 | 0 | 1 | 1 |
I can see that django looks for 500.html when DEBUG is False. What could be the problem?
|
How to allow dynamically created file to be downloaded in CherryPy?
| 7,401,164 | 2 | 1 | 1,404 | 0 |
python,cherrypy
|
Add a 'Content-Disposition: attachment; filename="<file>"' header to the response.
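A minimal sketch of what that header value looks like; the handler wiring shown in the comments is illustrative (in a CherryPy handler you would assign the result to cherrypy.response.headers):

```python
def content_disposition(filename):
    # 'attachment' tells the browser to download the body instead of
    # rendering it inline; filename is the suggested save-as name.
    return 'attachment; filename="%s"' % filename

# In a CherryPy page handler (sketch, not run here):
#   cherrypy.response.headers['Content-Disposition'] = content_disposition('report.csv')
#   cherrypy.response.headers['Content-Type'] = 'application/octet-stream'
print(content_disposition('report.csv'))
```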
| 0 | 0 | 1 | 0 |
2011-09-13T10:47:00.000
| 3 | 1.2 | true | 7,400,601 | 0 | 0 | 1 | 1 |
I'm trying to use CherryPy for a simple website, having never done Python web programming before.
I'm stuck trying to allow the download of a file that is dynamically created. I can create a file and return it from the handler, or call serve_fileobj() on the file, but in either case the contents of the file are simply rendered to the screen, rather than downloaded.
Does CherryPy offer any useful methods here? How can this be accomplished?
|
Force Django to use a period for specific decimals
| 7,403,926 | 1 | 0 | 123 | 0 |
python,django,django-templates
|
The only way that works is casting the number to a string before sending it to the template.
I'd like to know if there's a better way to do it.
Nope.
| 0 | 0 | 0 | 0 |
2011-09-13T14:48:00.000
| 1 | 1.2 | true | 7,403,836 | 0 | 0 | 1 | 1 |
I've got an issue with the Django template localisation. In the German locale, it forces decimals to use a comma as the separator, which is fine for display. However I also need to modify some JS tracking to send a price to the reporting engine.
Django is also localising that value (It's inline with the HTML, bad I know, but I didn't write it & I can't change that right now)
Is there any way to force the template to use the period for that particular number? I've tried:
{{ price|floatformat:2 }}
but that doesn't work.
The only way that does is casting the number to a string before I send it to the template, which works well enough but I'd like to know if there's a better way to do it.
|
Dynamically serve static content based on server name in Django
| 7,409,817 | 0 | 1 | 417 | 0 |
python,django
|
For example, Apache has mod_rewrite, which you can use to rewrite URLs based on the requested host:
RewriteCond %{HTTP_HOST} ^www\.domain1\.com$ [NC]
RewriteRule ^/static/(.+)$ /static/domain1/$1 [L]
RewriteCond %{HTTP_HOST} ^www\.domain2\.com$ [NC]
RewriteRule ^/static/(.+)$ /static/domain2/$1 [L]
(this is untested)
Other servers have similar functionality.
Just make sure your django application emits static urls that are site-independent and can be correctly rewritten.
| 0 | 0 | 0 | 0 |
2011-09-13T23:23:00.000
| 3 | 0 | false | 7,409,666 | 0 | 0 | 1 | 1 |
I'm writing a web application in Django that is accessed from multiple domains to the same IP address. The idea is that each domain the application is accessed from will receive unique branding.
So for example, if there were two domains, reseller.com and oem.com, and you went to oem.com, it would take you to to the same website as reseller.com, but with differently themed content (say, sent from /static/oem.com/{files} instead of /static/reseller.com/{files}).
Basically my idea has been to define a custom template tag, that receives the SERVER_NAME as an argument, which would return the location of the content.
Are there any alternatives, or simply easier options?
Edit: I should probably add that I'm using MongoDB for this project, and as such it's more than likely Django's ORM won't be used for the project.
Edit again: More clarification; I'm using nginx.
|
How to access a static file in controller in turbogears
| 7,521,808 | 0 | 0 | 249 | 0 |
python,turbogears,turbogears2
|
The simplest way I know: just get the dirname of the project's base package and join the path from there:
filename = os.path.join(os.path.dirname(myproject.__file__), 'public', 'xml', 'file.xml')
(here myproject is your top-level package; remember to import os and myproject first)
| 0 | 0 | 0 | 0 |
2011-09-14T14:24:00.000
| 1 | 0 | false | 7,417,964 | 0 | 0 | 1 | 1 |
I have an XML file in /my_project/public/xml/file.xml, that I want to read and parse it in one method in controller. The file can be easily accessed through a template, but I have no experience with accessing files in controller.
|
Wildcards in getElementsByTagName (xml.dom.minidom)
| 7,421,454 | 0 | 0 | 897 | 0 |
python
|
As getElementsByTagName returns a list of DOM elements, you could simply concatenate the two lists.
Alternatively, XPath supports and/or operators, so you could use that. It would require using the ElementTree or lxml modules instead.
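A sketch of the concatenation approach with xml.dom.minidom, using the text:p / text:h tag names from the question (note that concatenating loses the interleaved document order):

```python
from xml.dom.minidom import parseString

doc = parseString(
    '<root xmlns:text="urn:example">'
    '<text:p>para</text:p><text:h>head</text:h><text:p>more</text:p>'
    '</root>'
)
# getElementsByTagName accepts a single name, so query twice and concatenate.
nodes = (list(doc.getElementsByTagName('text:p')) +
         list(doc.getElementsByTagName('text:h')))
print(len(nodes))  # 3
```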
| 0 | 0 | 1 | 0 |
2011-09-14T18:42:00.000
| 2 | 0 | false | 7,421,351 | 0 | 0 | 1 | 1 |
I'm trying to parse a ODF-document with xml.dom.minidom. I would like to get all elements that are text:p OR text:h. Seems like there would be a way to add a wildcard in the getElementsByTagName method. Or is it?
Is there a better way to parse a odf-document without uno?
|
Plone Content Type-Specific Portlet Assignment
| 7,435,407 | 1 | 5 | 911 | 0 |
python,plone,portlet
|
Do the assignment to your portal type live on a site via Site Setup (control panel) -> Types -> "Manage portlets assigned to this content type".
Then export the configuration via ZMI -> portal_setup -> Export-Tab -> select 'Portlets' -> click 'export' on bottom.
Extract the types/YourType.xml-file and copy the relevant parts in your package's profiles/default/types/YourType.xml.
| 0 | 0 | 0 | 1 |
2011-09-15T14:15:00.000
| 2 | 0.099668 | false | 7,432,317 | 0 | 0 | 1 | 1 |
I'm developing a content type for Plone 4, and I'd like to block all user, group, and context portlets it may inherit from its parent object. I'm thoroughly confused by the documentation at this point–in portlets.xml, <blacklist/> only seems to address path-specific blocking. <assignment/> seems like what I want, but it seems too specific–I don't want to manage the assignment for all possible portlets on my content type.
There are hints that I've found that customizing an ILeftColumn and IRightColumn portlet manager specific to the content type, but I can't find any good examples. Does anyone have any hints or suggestions? I feel like I'm missing something dead simple.
|
i am getting error "no schemaLocation attribute in import" when using Python client accessing JIRA via SOAP
| 7,438,706 | 0 | 1 | 549 | 0 |
python,soap,jira,soappy
|
I changed the JIRA Python CLI code to use suds instead of SOAPpy a while ago and haven't looked back. SOAPpy is pretty old and seems unsupported now.
| 0 | 0 | 1 | 0 |
2011-09-15T16:53:00.000
| 1 | 1.2 | true | 7,434,578 | 0 | 0 | 1 | 1 |
here is the sample code :
#!/usr/bin/env python
# Sample Python client accessing JIRA via SOAP. By default, accesses
# http://jira.atlassian.com with a public account. Methods requiring
# more than basic user-level access are commented out. Change the URL
# and project/issue details for local testing.
#
# Note: This Python client only works with JIRA 3.3.1 and above (see
# http://jira.atlassian.com/browse/JRA-7321)
#
# Refer to the SOAP Javadoc to see what calls are available:
import SOAPpy, getpass, datetime
soap = SOAPpy.WSDL.Proxy('http://jira.company.com:8080/rpc/soap/jirasoapservice-v2?wsdl')
jirauser='username'
passwd='password'
# This prints available methods, but the WSDL doesn't include argument
# names so its fairly useless. Refer to the Javadoc URL above instead
#print 'Available methods: ', soap.methods.keys()
def listSOAPmethods():
for key in soap.methods.keys():
print key, ': '
for param in soap.methods[key].inparams:
print '\t', param.name.ljust(10), param.type
for param in soap.methods[key].outparams:
print '\tOut: ', param.name.ljust(10), param.type
auth = soap.login(jirauser, passwd)
issue = soap.getIssue(auth, 'QA-79')
print "Retrieved issue:", issue
print "Done!"
The complete error is as follows , in order to provide the complete context:
IMPORT: http://service.soap.rpc.jira.atlassian.com
no schemaLocation attribute in import
IMPORT: http://jira.mycompany.com:8080/rpc/soap/jirasoapservice-v2
no schemaLocation attribute in import
IMPORT: http://exception.rpc.jira.atlassian.com
no schemaLocation attribute in import
IMPORT: http://schemas.xmlsoap.org/soap/encoding/
no schemaLocation attribute in import
/usr/local/lib/python2.6/dist-packages/wstools-0.3-py2.6.egg/wstools/XMLSchema.py:3107: DeprecationWarning: object.__init__() takes no parameters
tuple.__init__(self, args)
IMPORT: http://service.soap.rpc.jira.atlassian.com
no schemaLocation attribute in import
IMPORT: http://beans.soap.rpc.jira.atlassian.com
no schemaLocation attribute in import
IMPORT: http://jira.mycompany.com:8080/rpc/soap/jirasoapservice-v2
no schemaLocation attribute in import
IMPORT: http://schemas.xmlsoap.org/soap/encoding/
no schemaLocation attribute in import
IMPORT: http://service.soap.rpc.jira.atlassian.com
no schemaLocation attribute in import
IMPORT: http://beans.soap.rpc.jira.atlassian.com
no schemaLocation attribute in import
IMPORT: http://exception.rpc.jira.atlassian.com
no schemaLocation attribute in import
IMPORT: http://schemas.xmlsoap.org/soap/encoding/
no schemaLocation attribute in import
IMPORT: http://beans.soap.rpc.jira.atlassian.com
no schemaLocation attribute in import
IMPORT: http://jira.mycompany.com:8080/rpc/soap/jirasoapservice-v2
no schemaLocation attribute in import
IMPORT: http://exception.rpc.jira.atlassian.com
no schemaLocation attribute in import
IMPORT: http://schemas.xmlsoap.org/soap/encoding/
no schemaLocation attribute in import
|
How to duplicate virtualenv
| 7,441,158 | 0 | 162 | 113,368 | 0 |
python,django,virtualenv
|
Can you not simply:
Copy the existing virtual env directory to a new one
Update to the new Django?
| 0 | 0 | 0 | 0 |
2011-09-15T23:43:00.000
| 8 | 0 | false | 7,438,681 | 1 | 0 | 1 | 2 |
I have an existing virtualenv with a lot of packages but an old version of Django.
What I want to do is duplicate this environment so I have another environment with the exact same packages but a newer version of Django. How can I do this?
|
How to duplicate virtualenv
| 66,396,104 | 1 | 162 | 113,368 | 0 |
python,django,virtualenv
|
In case you use pip "venv". I copy pasted the folder holding the virtual environment and manually changed the files in the bin folder of the copied folder.
I don't know if it's efficient, but it works!
| 0 | 0 | 0 | 0 |
2011-09-15T23:43:00.000
| 8 | 0.024995 | false | 7,438,681 | 1 | 0 | 1 | 2 |
I have an existing virtualenv with a lot of packages but an old version of Django.
What I want to do is duplicate this environment so I have another environment with the exact same packages but a newer version of Django. How can I do this?
|
virtual environment from Mac to Linux
| 7,450,904 | 3 | 2 | 2,982 | 0 |
python,django,linux,macos,virtualenv
|
You can just recreate the virtual environment on Ubuntu. The virtual env will have the python binary which will be different on a different system.
| 0 | 1 | 0 | 0 |
2011-09-16T22:01:00.000
| 2 | 1.2 | true | 7,450,835 | 1 | 0 | 1 | 1 |
I recently made a django project using virtualenv on my mac. That mac broke, but I saved the files and now I want to work on my project using my linux computer. I am now having some difficulty running the virtual environment in Ubuntu.
Does it even make sense to try and use a virtual env made in Mac OS on Ubuntu?
Thanks
|
GAE Lookup Table Incompatible with Transactions?
| 7,466,485 | 1 | 0 | 143 | 1 |
python,google-app-engine,transactions,google-cloud-datastore,entity-groups
|
If you can, try and fit the data into instance memory. If it won't fit in instance memory, you have a few options available to you.
You can store the data in a resource file that you upload with the app, if it only changes infrequently, and access it off disk. This assumes you can build a data structure that permits easy disk lookups - effectively, you're implementing your own read-only disk based table.
Likewise, if it's too big to fit as a static resource, you could take the same approach as above, but store the data in blobstore.
If your data absolutely must be in the datastore, you may need to emulate your own read-modify-write transactions. Add a 'revision' property to your records. To modify it, fetch the record (outside a transaction), perform the required changes, then inside a transaction, fetch it again to check the revision value. If it hasn't changed, increment the revision on your own record and store it to the datastore.
Note that the underlying RPC layer does theoretically support multiple independent transactions (and non-transactional operations), but the APIs don't currently expose any way to access this from within a transaction, short of horrible (and I mean really horrible) hacks, unfortunately.
One final option: You could run a backend provisioned with more memory, exposing a 'SpellCheckService', and make URLFetch calls to it from your frontends. Remember, in-memory is always going to be much, much faster than any disk-based option.
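The revision scheme from the third option can be sketched datastore-free; here a plain dict stands in for the datastore, and the "inside a transaction" step is the re-fetch-and-compare (all names are illustrative):

```python
class ConflictError(Exception):
    """Raised when the record changed between read and write; caller retries."""

def update_with_revision(store, key, modify):
    # 1. Read outside the transaction and compute the change on a copy.
    record = dict(store[key])
    seen_revision = record['revision']
    modify(record)
    # 2. "Transaction": re-fetch, compare revisions, then commit the write.
    if store[key]['revision'] != seen_revision:
        raise ConflictError('record changed underneath us')
    record['revision'] = seen_revision + 1
    store[key] = record
```

On a real ConflictError the caller would simply retry the whole read-modify-write cycle.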
| 0 | 1 | 0 | 0 |
2011-09-16T22:55:00.000
| 2 | 1.2 | true | 7,451,163 | 0 | 0 | 1 | 2 |
My Python High Replication Datastore application requires a large lookup table of between 100,000 and 1,000,000 entries. I need to be able to supply a code to some method that will return the value associated with that code (or None if there is no association). For example, if my table held acceptable English words then I would want the function to return True if the word was found and False (or None) otherwise.
My current implementation is to create one parentless entity for each table entry, and for that entity to contain any associated data. I set the datastore key for that entity to be the same as my lookup code. (I put all the entities into their own namespace to prevent any key conflicts, but that's not essential for this question.) Then I simply call get_by_key_name() on the code and I get the associated data.
The problem is that I can't access these entities during a transaction because I'd be trying to span entity groups. So going back to my example, let's say I wanted to spell-check all the words used in a chat session. I could access all the messages in the chat because I'd give them a common ancestor, but I couldn't access my word table because the entries there are parentless. It is imperative that I be able to reference the table during transactions.
Note that my lookup table is fixed, or changes very rarely. Again this matches the spell-check example.
One solution might be to load all the words in a chat session during one transaction, then spell-check them (saving the results), then start a second transaction that would spell-check against the saved results. But not only would this be inefficient, the chat session might have been added to between the transactions. This seems like a clumsy solution.
Ideally I'd like to tell GAE that the lookup table is immutable, and that because of this I should be able to query against it without its complaining about spanning entity groups in a transaction. I don't see any way to do this, however.
Storing the table entries in the memcache is tempting, but that too has problems. It's a large amount of data, but more troublesome is that if GAE boots out a memcache entry I wouldn't be able to reload it during the transaction.
Does anyone know of a suitable implementation for large global lookup tables?
Please understand that I'm not looking for a spell-check web service or anything like that. I'm using word lookup as an example only to make this question clear, and I'm hoping for a general solution for any sort of large lookup tables.
|
GAE Lookup Table Incompatible with Transactions?
| 7,452,303 | 1 | 0 | 143 | 1 |
python,google-app-engine,transactions,google-cloud-datastore,entity-groups
|
First, if you're under the belief that a namespace is going to help avoid key collisions, it's time to take a step back. A key consists of an entity kind, a namespace, a name or id, and any parents that the entity might have. It's perfectly valid for two different entity kinds to have the same name or id. So if you have, say, a LookupThingy that you're matching against, and have created each member by specifying a unique name, the key isn't going to collide with anything else.
As for the challenge of doing the equivalent of a spell-check against an unparented lookup table within a transaction, is it possible to keep the lookup table in code?
Or can you think of an analogy that's closer to what you need? One that motivates the need to do the lookup within a transaction?
| 0 | 1 | 0 | 0 |
2011-09-16T22:55:00.000
| 2 | 0.099668 | false | 7,451,163 | 0 | 0 | 1 | 2 |
My Python High Replication Datastore application requires a large lookup table of between 100,000 and 1,000,000 entries. I need to be able to supply a code to some method that will return the value associated with that code (or None if there is no association). For example, if my table held acceptable English words then I would want the function to return True if the word was found and False (or None) otherwise.
My current implementation is to create one parentless entity for each table entry, and for that entity to contain any associated data. I set the datastore key for that entity to be the same as my lookup code. (I put all the entities into their own namespace to prevent any key conflicts, but that's not essential for this question.) Then I simply call get_by_key_name() on the code and I get the associated data.
The problem is that I can't access these entities during a transaction because I'd be trying to span entity groups. So going back to my example, let's say I wanted to spell-check all the words used in a chat session. I could access all the messages in the chat because I'd give them a common ancestor, but I couldn't access my word table because the entries there are parentless. It is imperative that I be able to reference the table during transactions.
Note that my lookup table is fixed, or changes very rarely. Again this matches the spell-check example.
One solution might be to load all the words in a chat session during one transaction, then spell-check them (saving the results), then start a second transaction that would spell-check against the saved results. But not only would this be inefficient, the chat session might have been added to between the transactions. This seems like a clumsy solution.
Ideally I'd like to tell GAE that the lookup table is immutable, and that because of this I should be able to query against it without its complaining about spanning entity groups in a transaction. I don't see any way to do this, however.
Storing the table entries in the memcache is tempting, but that too has problems. It's a large amount of data, but more troublesome is that if GAE boots out a memcache entry I wouldn't be able to reload it during the transaction.
Does anyone know of a suitable implementation for large global lookup tables?
Please understand that I'm not looking for a spell-check web service or anything like that. I'm using word lookup as an example only to make this question clear, and I'm hoping for a general solution for any sort of large lookup tables.
|
Is there a JavaScript (ECMAScript) implementation written in Python?
| 7,531,060 | 2 | 18 | 2,858 | 0 |
javascript,python,interpreter,vm-implementation
|
I would recommend that you just stick to node.js on your local development box, translate your CoffeeScript files over to JavaScript, and deploy the translated scripts with your apps.
I get that you want to avoid having node.js on your servers; that's all well and good. Jumping through hoops with Python invoking JavaScript to translate CoffeeScript seems like more hassle to me than it's worth.
| 0 | 0 | 0 | 0 |
2011-09-17T00:25:00.000
| 6 | 0.066568 | false | 7,451,619 | 1 | 0 | 1 | 1 |
Are there any JavaScript (ECMAScript) implementations written in pure Python? It is okay even if its implementation is very slow.
|
Django / general caching question
| 7,455,582 | 2 | 1 | 728 | 0 |
python,django,caching,memcached
|
It's generally difficult to delete large categories of keys. A better approach is for each site to have a generation number associated with it. Start the generation at 1. Use the generation number in the cache keys for that site. When you make a fundamental change, or any other time you want to invalidate the entire cache for the site, increment the site's generation number. Now all cache accesses will be misses, until everything is cached anew. All the old data will still be in the cache, but it will be discarded as it ages, and it isn't accessed any more.
This scheme is extremely efficient, since it doesn't require finding or touching all the old data at all. It can also be generalized to any class of cache contents, it doesn't have to be per-site.
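A sketch of the generation-number pattern; a plain dict stands in for memcached, and in practice the per-site generation would itself be a cheap cached read:

```python
cache = {}        # stands in for memcached
generations = {}  # site_id -> current generation number

def site_key(site_id, key):
    # The generation is baked into every cache key for that site.
    gen = generations.setdefault(site_id, 1)
    return 'site:%d:gen:%d:%s' % (site_id, gen, key)

def invalidate_site(site_id):
    # Old entries are never deleted; they just stop being addressed
    # and age out of the cache on their own.
    generations[site_id] = generations.get(site_id, 1) + 1

cache[site_key(45, 'homepage')] = '<html>old</html>'
invalidate_site(45)
print(cache.get(site_key(45, 'homepage')))  # None: every lookup is now a miss
```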
| 0 | 0 | 0 | 0 |
2011-09-17T14:03:00.000
| 2 | 1.2 | true | 7,455,325 | 0 | 0 | 1 | 1 |
One common pattern when caching with Django is to use the current site's ID in each cache key, in order to, in essence, namespace your keys. The problem I have is that I'd love to be able to delete all values in cache under a namespace (e.g. Delete all cache values for site 45 because they've made some fundamental change). The current pattern to dealing with this send signals all over the place, etc. I've used the Site.id cache key example because that is a common pattern that others may recognize, but the way I'm using cache for a custom multi-tenant application makes this problem even more profound, so my question: Is there a cache back-end and/or pattern that works well for deleting objects in a namespaced way, or pseudo-namespaced way, that is not extraordinarily expensive (ie, not looping through all possible cache keys for a given namespace, deleting the cache for each)? I would prefer to use memecached, but am open to any solution that works well, plug-in or not.
|
Python Flask - login with email address, not username
| 7,456,386 | 2 | 0 | 573 | 0 |
python,email,login,flask
|
You can use Flask's flaskr example (it lives in the Flask git repo under examples) and rename the username field there to email.
| 0 | 0 | 0 | 0 |
2011-09-17T15:30:00.000
| 1 | 1.2 | true | 7,455,843 | 0 | 0 | 1 | 1 |
I want users to login with their email address, not with a username. How easy is it to do that with Flask ? Where can I find an example ?
|
Store images temporary in Google App Engine?
| 7,461,775 | 4 | 2 | 424 | 0 |
python,google-app-engine,hosting
|
GAE has a Blobstore API, which can work pretty much as file storage, but that's probably not what you want. Actually, the right answer depends on what kind of API you're using - it may support file-like objects, so you could pass a urllib response object, or it may accept URLs, or have tons of other interesting features.
| 0 | 1 | 0 | 0 |
2011-09-18T11:18:00.000
| 2 | 1.2 | true | 7,461,111 | 0 | 0 | 1 | 2 |
I'm writing an app with Python, which will check for updates on a website(let's call it A) every 2 hours, if there are new posts, it will download the images in the post and post them to another website(call it B), then delete those images.
Site B provide API for upload images with description, which is like:
upload(image_path, description), where image_path is the path of the image on your computer.
Now I've finished the app, and I'm trying to make it run on Google App Engine(because my computer won't run 7x24), but it seems that GAE won't let you write files on its file system.
How can I solve this problem? Or are there other choices for free Python hosting and providing "cron job" feature?
|
Store images temporary in Google App Engine?
| 7,465,743 | 1 | 2 | 424 | 0 |
python,google-app-engine,hosting
|
You shouldn't need to use temporary storage at all - just download the image with urlfetch into memory, then use another urlfetch to upload it to the destination site.
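A sketch of that in-memory relay; fetch_bytes and upload are stand-ins for urlfetch and site B's API, which I'm assuming can take a file-like object rather than only a path:

```python
import io

def relay_image(fetch_bytes, upload, url, description):
    # On GAE the first step would be urlfetch.fetch(url).content;
    # wrapping the bytes in BytesIO gives the upload API a file-like
    # object without ever touching the (read-only) filesystem.
    data = fetch_bytes(url)
    upload(io.BytesIO(data), description)
```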
| 0 | 1 | 0 | 0 |
2011-09-18T11:18:00.000
| 2 | 0.099668 | false | 7,461,111 | 0 | 0 | 1 | 2 |
I'm writing an app with Python, which will check for updates on a website(let's call it A) every 2 hours, if there are new posts, it will download the images in the post and post them to another website(call it B), then delete those images.
Site B provides an API for uploading images with a description, like:
upload(image_path, description), where image_path is the path of the image on your computer.
Now I've finished the app, and I'm trying to make it run on Google App Engine(because my computer won't run 7x24), but it seems that GAE won't let you write files on its file system.
How can I solve this problem? Or are there other choices for free Python hosting and providing "cron job" feature?
|
Multiple python web applications running on multiple domains (virtualhost)?
| 7,469,888 | 1 | 0 | 459 | 0 |
python,hosting,python-3.x,web-hosting,cherrypy
|
Apache/mod_wsgi can do what is required. Each web application mounted under mod_wsgi runs in a distinct sub-interpreter within the same process, so each can use a different code base. Better still, use mod_wsgi's daemon mode and delegate each web application to a distinct process, so there is no risk of them interfering with each other.
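A hedged sketch of what that daemon-mode setup might look like; all paths, process names, and domains below are placeholders for your own:

```apache
# Two VirtualHosts, each delegating its app to its own daemon process,
# so MyApp v1 and MyApp v2 are imported in completely separate interpreters.
<VirtualHost *:80>
    ServerName app1.com
    WSGIDaemonProcess myapp_v1 processes=2 threads=15
    WSGIProcessGroup myapp_v1
    WSGIScriptAlias / /srv/myapp-v1/app.wsgi
</VirtualHost>

<VirtualHost *:80>
    ServerName app2.com
    WSGIDaemonProcess myapp_v2 processes=2 threads=15
    WSGIProcessGroup myapp_v2
    WSGIScriptAlias / /srv/myapp-v2/app.wsgi
</VirtualHost>
```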
| 0 | 0 | 0 | 0 |
2011-09-19T09:01:00.000
| 2 | 1.2 | true | 7,468,544 | 0 | 0 | 1 | 1 |
I am stuck and desperate.
Is it possible to serve multiple Python web applications on multiple different domains using virtual hosts on CherryPy? Hmm wait... I will answer myself: yes, it is possible with the virtual host dispatcher, until I require this:
I need to use more instances of same application but in different versions. This means that I need to somehow split the namespace for the python import for these applications.
Example:
I have application MyApp and there are two versions of it. I have got two domains app1.com and app2.com.
When I access app1.com I would like to get the application MyApp in version 1. When I access app2.com, it should be MyApp in version 2.
I am now using the VirtualHostDispatcher of CherryPy 3.2, and the problem is that when MyApp version 2 has been loaded first and I then import from the methods of MyApp version 1, Python will reuse the already imported module (because of the module cache).
Yes, it is possible to wrap the import and clear the Python module cache every time (I use this for the top-level application object instantiation), but that seems quite unclean to me, and I think it is also inefficient.
So, what do you recommend?
I was thinking about running CherryPy under Apache 2 with mod_wsgi, but it seems that this does not solve the import problem, because there is still one Python process for all apps together.
Maybe I am thinking about the whole problem completely wrong and need to re-think it. I am open to every idea or tip. The only limitation is that I want to use Python 3; anything else is still open for discussion :-)
Thank you for every response!
|
Force commit of nested save() within a transaction
| 7,473,401 | 1 | 3 | 839 | 1 |
python,sql,django
|
No, both your main saves and the status bar updates will be conducted over the same database connection, so they will be part of the same transaction.
I can see two options to avoid this.
You can either create your own, separate database connection and save the status bar updates using that.
Or don't save the status bar updates to the database at all and instead use a cache to store them. As long as you don't use the database cache backend (ideally you'd use memcached), this will work fine.
My preferred option would be the second one. You'd need to delve into the Django internals to get your own database connection, so that code is likely to end up fragile and messy.
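A minimal sketch of option 2: track progress in a cache instead of the session, so updates are visible outside the open transaction. A plain dict stands in for the cache here; in Django you would use django.core.cache.cache.set/get with a non-database backend such as memcached. The function and key names are illustrative.

```python
cache = {}

def save_many(items, job_id):
    total = len(items)
    for i, item in enumerate(items, start=1):
        # item.save() would go here, inside the transaction;
        # the cache write below is NOT part of that transaction
        cache[f"progress:{job_id}"] = int(100 * i / total)

def poll_progress(job_id):
    # the view hit by the JavaScript poller just reads the cache
    return cache.get(f"progress:{job_id}", 0)

save_many(["a", "b", "c", "d"], job_id=42)
print(poll_progress(42))  # 100
```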
| 0 | 0 | 0 | 0 |
2011-09-19T14:15:00.000
| 1 | 0.197375 | false | 7,472,348 | 0 | 0 | 1 | 1 |
I have a function where I save a large number of models (thousands at a time); this takes several minutes, so I have written a progress bar to display progress to the user. The progress bar works by polling a URL (from JavaScript) and looking at a request.session value to see the state of the first call (the one that is saving).
The problem is that the first call is within a @transaction.commit_on_success decorator, and because I am using database-backed sessions, when I try to force request.session.save(), instead of committing immediately it is appended to the ongoing transaction. This results in the progress bar only being updated once all the saves are complete, thus rendering it useless.
My question is (and I'm 99.99% sure I already know the answer): can you commit individual statements within a transaction without committing the whole lot? i.e. I need to commit just the request.session.save() while leaving all of the others pending.
Many thanks, Alex
|
What migration order does South follow across different apps?
| 7,474,845 | 12 | 8 | 1,935 | 0 |
python,django,django-south
|
South migrates apps in the order they appear in the INSTALLED_APPS tuple in settings.py. So just make sure App-B comes before App-A in your settings.py, and it should work :)
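A sketch of the relevant settings.py fragment: the app that is depended on (App-B) is listed before the app whose models reference it (App-A). The app labels here are placeholders for your real app names.

```python
INSTALLED_APPS = (
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "south",
    "app_b",  # migrated first: App-A's ForeignKey target lives here
    "app_a",  # migrated second, once app_b's tables exist
)

# South walks this tuple in order, so app_b must come before app_a
print(INSTALLED_APPS.index("app_b") < INSTALLED_APPS.index("app_a"))  # True
```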
| 0 | 0 | 0 | 0 |
2011-09-19T17:18:00.000
| 2 | 1 | false | 7,474,745 | 0 | 0 | 1 | 1 |
I've recently begun using South for migrations in my Django project. All was going well until recently when I ran into a peculiar issue.
I have two apps in my project, say App-A and App-B. A model in App-A has a foreign key to a model in App-B. When I tried to build my system, I ran syncdb, which created all the auth_ and south_ tables. Then I ran migrate, which threw errors: when it tried creating the model from App-A, which references a model from App-B, the App-B model hadn't been migrated/created yet, hence the error.
In order to resolve this, I had to manually migrate App-B first and then App-A. Am i doing something wrong here? How is South supposed to know the migration order across apps?
Thanks.
|
Netbeans 7 not starting up after python plugin installation
| 8,114,362 | 1 | 1 | 2,129 | 0 |
python,macos,netbeans
|
I was having a problem with Netbeans 7 not starting. Netbeans had first errored out with no error message; after that it wouldn't start or give me an error at all. I looked in the .netbeans directory in my user directory and found a 'lock' file there. When I first tried to delete it, it said it was in use, so in Task Manager I went to the Processes tab, found the netbeans process, and killed that task. I was then able to delete 'lock', and Netbeans started.
| 0 | 1 | 0 | 0 |
2011-09-19T17:31:00.000
| 6 | 0.033321 | false | 7,474,887 | 0 | 0 | 1 | 5 |
I went to Tools, Plugins, then chose to install the three Python items that show up. After installation, I chose the restart Netbeans option. But instead of restarting, Netbeans just closed, and now it is not opening. Any ideas how to fix this? I normally develop Java on my Netbeans 7 install.
I am using Mac OS X.
I see there are no takers, so let me ask this: Is there a way to revert to before the new plugin install?
|