Dataset columns (dtype and value/length range):
Title: string, lengths 11 to 150
A_Id: int64, 518 to 72.5M
Users Score: int64, -42 to 283
Q_Score: int64, 0 to 1.39k
ViewCount: int64, 17 to 1.71M
Database and SQL: int64, 0 to 1
Tags: string, lengths 6 to 105
Answer: string, lengths 14 to 4.78k
GUI and Desktop Applications: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Networking and APIs: int64, 0 to 1
Other: int64, 0 to 1
CreationDate: string, lengths 23 to 23
AnswerCount: int64, 1 to 55
Score: float64, -1 to 1.2
is_accepted: bool, 2 classes
Q_Id: int64, 469 to 42.4M
Python Basics and Environment: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Web Development: int64, 1 to 1
Available Count: int64, 1 to 15
Question: string, lengths 17 to 21k
Maintain a large dictionary in memory for Django-Python?
| 10,597,896 | 1 | 10 | 2,666 | 0 |
python,django,dictionary,memcached,redis
|
5 MB isn't that large. You could keep it in memory in process, and I recommend that you do, until it becomes clear from profiling and testing that this approach isn't meeting your needs. Always do the simplest thing possible.
Socket communication doesn't in itself introduce much overhead. You could probably pare it back a little by using a Unix domain socket. In any case, if you're not keeping your data in process, you're going to have to talk over some kind of pipe.
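As a rough illustration of the in-process approach (the file name and structure are assumptions, not from the question), the dump can be loaded once at import time so that every lookup afterwards is a plain dictionary access:
import json

with open("data_dump.json") as fh:      # hypothetical dump file
    LOOKUP = json.load(fh)              # loaded once per process

def get_value(key, default=None):
    # Plain in-memory lookup, no socket round trip involved.
    return LOOKUP.get(key, default)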
| 0 | 0 | 0 | 0 |
2012-05-15T06:13:00.000
| 4 | 0.049958 | false | 10,595,058 | 1 | 0 | 1 | 3 |
I have a big key-value pair dump that I need to look up from my Django-Python webapp.
So, I have the following options:
Store it as json dump and load it as a python dict.
Store it in a dump.py and import the dict from it.
Use some targeted systems for this problem: [ Are these really meant for this usecase ? ]
Mem-cache
Redis
Any other option ?
Which from above is the right way to go ?
How will you compare memcache and redis ?
Update:
My dictionary is about 5 MB in size and will grow over time.
Using Redis/Memcache adds the overhead of hitting a socket every time, so would dump.py be better? It would take time to load into memory, but after that it would only do memory lookups.
My dictionary needs to be updated every day. Considering that, dump.py would be a problem, since we have to restart the Django server to reload it, whereas I guess the change would be reflected on the fly in Redis and Memcache.
One uses a system like Redis only when you have a large amount of data and have to look it up very frequently; in that case the socket adds overhead, so how do we achieve the advantage?
Please share your experiences on this !
|
Maintain a large dictionary in memory for Django-Python?
| 10,595,177 | 1 | 10 | 2,666 | 0 |
python,django,dictionary,memcached,redis
|
In the past, for a similar problem, I have used the idea of a dump.py. I would think that all of the other data stores would require a layer to convert their objects into Python objects. However, I would still think that this depends on the data size and the amount of data you are handling. Memcache and Redis should have better indexing and lookup when it comes to really large data sets and things like regex-based lookups. So my recommendation would be:
json -- if you are serving the data over HTTP to some other service
python file -- if the data structure is not too large and you don't need any special kind of lookups
memcache and redis -- if the data becomes really large
| 0 | 0 | 0 | 0 |
2012-05-15T06:13:00.000
| 4 | 0.049958 | false | 10,595,058 | 1 | 0 | 1 | 3 |
I have a big key-value pair dump that I need to look up from my Django-Python webapp.
So, I have the following options:
Store it as json dump and load it as a python dict.
Store it in a dump.py and import the dict from it.
Use some targeted systems for this problem: [ Are these really meant for this usecase ? ]
Mem-cache
Redis
Any other option ?
Which from above is the right way to go ?
How will you compare memcache and redis ?
Update:
My dictionary is about 5 MB in size and will grow over time.
Using Redis/Memcache adds the overhead of hitting a socket every time, so would dump.py be better? It would take time to load into memory, but after that it would only do memory lookups.
My dictionary needs to be updated every day. Considering that, dump.py would be a problem, since we have to restart the Django server to reload it, whereas I guess the change would be reflected on the fly in Redis and Memcache.
One uses a system like Redis only when you have a large amount of data and have to look it up very frequently; in that case the socket adds overhead, so how do we achieve the advantage?
Please share your experiences on this !
|
Maintain a large dictionary in memory for Django-Python?
| 10,595,172 | 2 | 10 | 2,666 | 0 |
python,django,dictionary,memcached,redis
|
Memcached, though a great product, is trumped by Redis in my book. It offers lots of things that memcached doesn't, like persistence.
It also offers more complex data structures like hashes. What is your particular data dump? How big is it, and how large / what type of values?
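For the hash structures mentioned here, a minimal redis-py sketch (assuming a local Redis server; key and field names are illustrative) looks like this:
import redis

r = redis.StrictRedis(host="localhost", port=6379, db=0)

# Store the dump as one Redis hash and read individual fields back.
r.hset("my_dump", "some_key", "some_value")
print(r.hget("my_dump", "some_key"))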
| 0 | 0 | 0 | 0 |
2012-05-15T06:13:00.000
| 4 | 0.099668 | false | 10,595,058 | 1 | 0 | 1 | 3 |
I have a big key-value pair dump that I need to look up from my Django-Python webapp.
So, I have the following options:
Store it as json dump and load it as a python dict.
Store it in a dump.py and import the dict from it.
Use some targeted systems for this problem: [ Are these really meant for this usecase ? ]
Mem-cache
Redis
Any other option ?
Which from above is the right way to go ?
How will you compare memcache and redis ?
Update:
My dictionary is about 5 MB in size and will grow over time.
Using Redis/Memcache adds the overhead of hitting a socket every time, so would dump.py be better? It would take time to load into memory, but after that it would only do memory lookups.
My dictionary needs to be updated every day. Considering that, dump.py would be a problem, since we have to restart the Django server to reload it, whereas I guess the change would be reflected on the fly in Redis and Memcache.
One uses a system like Redis only when you have a large amount of data and have to look it up very frequently; in that case the socket adds overhead, so how do we achieve the advantage?
Please share your experiences on this !
|
python google app engine programming
| 10,603,141 | 3 | 1 | 116 | 0 |
python,google-app-engine
|
On App Engine, I think the best way to do this is to store the total pages inside the Shelf. Add an IntegerProperty field to the shelf, I'll call it totalPages. Every time you add or remove a book to the shelf, update totalPages appropriately. Note that this will need to be done in a transaction.
Then it's easy to search the Shelf objects by totalPages.
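A rough sketch of this idea with the old db API the question uses (model and field names are assumptions):
from google.appengine.ext import db

class Shelf(db.Model):
    name = db.StringProperty()
    totalPages = db.IntegerProperty(default=0)

class Book(db.Model):
    shelf = db.ReferenceProperty(Shelf)
    pages = db.IntegerProperty()

def add_pages(shelf_key, pages):
    # Keep the running total consistent by updating it inside a transaction.
    def txn():
        shelf = db.get(shelf_key)
        shelf.totalPages += pages
        shelf.put()
    db.run_in_transaction(txn)

# Top 25 shelves by total page count:
top_shelves = Shelf.all().order("-totalPages").fetch(25)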
| 0 | 0 | 0 | 0 |
2012-05-15T08:38:00.000
| 1 | 1.2 | true | 10,597,019 | 0 | 0 | 1 | 1 |
I have 2 models, Shelf and Book, in my models.py. The model Book has a ReferenceProperty(Shelf) field; it also has an IntegerProperty field that stores the number of pages in the book. What I am trying to achieve is a list of the top 25 shelf names ordered by the highest number of pages (which should be the sum of the pages of all the books on that shelf) in descending order.
I am a beginner with Python programming. Please advise me.
|
How to access a specific start_url in a Scrapy CrawlSpider?
| 10,611,818 | 3 | 7 | 5,316 | 0 |
python,django,scrapy
|
If I understand the problem correctly, you can get the URL from response.url and then write it to item['url'].
In the spider: item['url'] = response.url
And in the pipeline: url = item['url'].
Or put response.url into meta as warvariuc wrote.
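A rough sketch of that approach (the item class and pipeline names are illustrative, not from the question):
# On your CrawlSpider subclass:
def parse_item(self, response):
    item = WebLinkItem()           # hypothetical Item class
    item["url"] = response.url     # remember which page this item came from
    return item

# And in a pipeline:
class SaveLinkPipeline(object):
    def process_item(self, item, spider):
        url = item["url"]          # available here for the database lookup
        return item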
| 0 | 0 | 0 | 0 |
2012-05-15T10:22:00.000
| 4 | 0.148885 | false | 10,598,691 | 0 | 0 | 1 | 2 |
I'm using Scrapy, in particular Scrapy's CrawlSpider class to scrape web links which contain certain keywords. I have a pretty long start_urls list which gets its entries from a SQLite database which is connected to a Django project. I want to save the scraped web links in this database.
I have two Django models, one for the start urls such as http://example.com and one for the scraped web links such as http://example.com/website1, http://example.com/website2 etc. All scraped web links are subsites of one of the start urls in the start_urls list.
The web links model has a many-to-one relation to the start url model, i.e. the web links model has a Foreignkey to the start urls model. In order to save my scraped web links properly to the database, I need to tell the CrawlSpider's parse_item() method which start url the scraped web link belongs to. How can I do that? Scrapy's DjangoItem class does not help in this respect as I still have to define the used start url explicitly.
In other words, how can I pass the currently used start url to the parse_item() method, so that I can save it together with the appropriate scraped web links to the database? Any ideas? Thanks in advance!
|
How to access a specific start_url in a Scrapy CrawlSpider?
| 43,817,221 | 1 | 7 | 5,316 | 0 |
python,django,scrapy
|
Looks like warvariuc's answer requires a slight modification as of Scrapy 1.3.3: you need to override _parse_response instead of parse. Overriding make_requests_from_url is no longer necessary.
| 0 | 0 | 0 | 0 |
2012-05-15T10:22:00.000
| 4 | 0.049958 | false | 10,598,691 | 0 | 0 | 1 | 2 |
I'm using Scrapy, in particular Scrapy's CrawlSpider class to scrape web links which contain certain keywords. I have a pretty long start_urls list which gets its entries from a SQLite database which is connected to a Django project. I want to save the scraped web links in this database.
I have two Django models, one for the start urls such as http://example.com and one for the scraped web links such as http://example.com/website1, http://example.com/website2 etc. All scraped web links are subsites of one of the start urls in the start_urls list.
The web links model has a many-to-one relation to the start url model, i.e. the web links model has a Foreignkey to the start urls model. In order to save my scraped web links properly to the database, I need to tell the CrawlSpider's parse_item() method which start url the scraped web link belongs to. How can I do that? Scrapy's DjangoItem class does not help in this respect as I still have to define the used start url explicitly.
In other words, how can I pass the currently used start url to the parse_item() method, so that I can save it together with the appropriate scraped web links to the database? Any ideas? Thanks in advance!
|
debugging and intellisensing pyramid application
| 10,601,595 | 2 | 1 | 616 | 0 |
python,debugging,intellisense,pyramid
|
To my mind there are two viable options out there.
I use both, actually.
Eclipse + Aptana Studio + PyDev, or Aptana Studio
Pros
Free
Decent auto completion (IntelliSense-like system)
More plug-ins (since it's based on Eclipse)
Supports Django templates
Cons
Relatively poor HTML editor
No Mako or Jinja2 support (as far as I know)
PyCharm
Pros
Better auto completion
Supports Mako, Jinja2 and Django templates
Good HTML editor
Cons
Not free
Both support debugging without too many problems.
| 0 | 0 | 0 | 0 |
2012-05-15T12:48:00.000
| 1 | 0.379949 | false | 10,601,010 | 1 | 0 | 1 | 1 |
There is a wonderful video on youtube where it is explained how to debug Django applications with Python Tools for Visual Studio.
I wonder if the same thing is possible with the Pyramid applications? Moreover I would love to use VS' IntelliSense (hinting system) while writing for the Pyramid framework.
Or may be there are another ways to achieve the same debug+IntelliSense effect. I'd be glad to hear any suggestions.
|
How is twisted's Deferred implemented?
| 10,624,853 | 3 | 2 | 673 | 0 |
python,asynchronous,twisted,twisted.web
|
As others have said, a Deferred on its own is just a promise of a value, and a list of things to do when the value arrives (or when there is a failure getting the value).
How they work is like this: some function sees that the value it wants to return is not yet ready. So it prepares a Deferred, and then arranges somehow for that Deferred to be called back ("fired") with the value once it's ready. That second part is what may be causing your confusion; Deferreds on their own don't control when or how they are fired. It's the responsibility of whatever created the Deferred.
In the context of a whole Twisted app, nearly everything is event-based, and events are managed by the reactor. Say your code used twisted.web.client.getPage(), so it now has a Deferred that will be fired with the result of the http fetch. What that means is that getPage() started up a tcp conversation with the http server, and essentially installed handlers in the reactor saying "if you see any traffic on this tcp connection, call a method on this Protocol object". And once the Protocol object sees that it has received the whole page you asked for, it fires your Deferred, whereupon your own code is invoked via that Deferred's callback chain.
So everything is callbacks and hooks, all the way down. This is why you should never have blocking code in a Twisted app unless it is on a separate thread, because it will stop everything else from being handled too.
Does that help?
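A minimal, self-contained sketch of that lifecycle (assuming Twisted is installed; the delayed call stands in for whatever asynchronous event would normally fire the Deferred):
from twisted.internet import reactor, defer

def fetch_answer():
    d = defer.Deferred()
    # Simulate a result arriving later: the reactor fires the Deferred after
    # two seconds, and nothing blocks in the meantime.
    reactor.callLater(2, d.callback, "the result")
    return d

def on_result(result):
    print("got: %s" % result)
    reactor.stop()

fetch_answer().addCallback(on_result)
reactor.run()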
| 0 | 1 | 0 | 0 |
2012-05-15T15:58:00.000
| 2 | 0.291313 | false | 10,604,523 | 0 | 0 | 1 | 1 |
Does it spawn a new thread underneath? A classical web server spawns a thread to serve an HTTP request, and with Twisted web I have to create a Deferred() each time I want to query MySQL - where's the gain? It doesn't seem to make sense if it spawned a thread, so how is it implemented?
|
Compare two couchdb databases
| 10,616,421 | 1 | 1 | 685 | 1 |
couchdb,replication,couchdb-python
|
If you want to make sure they're exactly the same, write a map job that emits the document path as the key, and the document's hash (generated any way you like) as the value. Do not include the _rev field in the hash generation.
You cannot reduce to a single hash because order is not guaranteed, but you can feed the resultant JSON document to a good diff program.
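A rough client-side equivalent with couchdb-python (the server URL and database names are assumptions): hash each document with _rev stripped, using key-sorted JSON so property order doesn't matter, then compare the two fingerprint dicts.
import couchdb, hashlib, json

server = couchdb.Server("http://localhost:5984/")

def fingerprints(db_name):
    db = server[db_name]
    result = {}
    for doc_id in db:
        doc = dict(db[doc_id])
        doc.pop("_rev", None)   # ignore revision history
        canonical = json.dumps(doc, sort_keys=True)
        result[doc_id] = hashlib.md5(canonical.encode("utf-8")).hexdigest()
    return result

print(fingerprints("a") == fingerprints("b"))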
| 0 | 0 | 0 | 0 |
2012-05-16T09:45:00.000
| 1 | 1.2 | true | 10,615,980 | 0 | 0 | 1 | 1 |
I have a couchdb instance with database a and database b. They should contain identical sets of documents, except that the _rev property will be different, which, AIUI, means I can't use replication.
How do I verify that the two databases really do contain the same documents which are all otherwise 'equal'?
I've tried using the python-based couchdb-dump tool with a lot of sed magic to get rid of the _rev and MD5 and ETag headers, but then it still seems that property order in the JSON structure is slightly random, which means I still can't compare the output easily with something like diff.
Is there a better approach here? Have other people wanted to solve a similar problem?
|
Django HTML quality and tutorials
| 10,629,574 | 0 | 1 | 282 | 0 |
html,django,python-3.x
|
I can share my experience with you as I have recently learned Django.
Instead of following any book, you should try to use the Django documentation, and also don't be afraid to look at the source code; it will help you to understand how things work behind the scenes.
| 0 | 0 | 0 | 0 |
2012-05-16T16:18:00.000
| 2 | 0 | false | 10,622,581 | 0 | 0 | 1 | 1 |
Hey, I recently heard about Django, and will hopefully be moving on to learn an HTML-type platform. I am currently learning Python 3 and wanted to know if Django, especially its recent editions, is the "best" (sorry about the arbitrariness of that).
Plus I was hoping to find any good books / tutorials for Django, or any other framework that you believe is more versatile, easy, etc. Most books don't seem to be up to date on Django, as there have apparently been big changes from 1.0 to 1.1 and another leap in 1.3, from what I've read.
Thanks a lot!
|
convert python time.time() to java.nanoTime()
| 10,627,094 | 0 | 5 | 3,163 | 0 |
java,python,nanotime
|
Divide the output of System.nanoTime() by 10^9. This is because it is in nanoseconds, while the output of time.time() is in seconds.
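A small sketch of the unit conversion on the Python side (note this only lines up the units; the two clocks do not necessarily share the same origin):
import time

seconds = time.time()            # seconds since the epoch, as a float
nanos = int(seconds * 1e9)       # same instant expressed in nanoseconds
# Conversely, dividing System.nanoTime() by 1e9 on the Java side gives seconds.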
| 0 | 0 | 0 | 0 |
2012-05-16T21:36:00.000
| 3 | 0 | false | 10,627,055 | 0 | 0 | 1 | 1 |
Java's System.nanoTime() seems to give a long: 1337203874231141000L
while Python's time.time() will give something like 1337203880.462787.
How can I convert time.time()'s value to something that matches up with System.nanoTime()?
|
Digest authentication in django
| 10,635,489 | -1 | 1 | 1,753 | 0 |
python,django,authentication,hash,digest
|
AES is not a hashing algorithm. It's an encryption algorithm.
You can use hashing algorithms like SHA1 or MD5.
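If you stick with a keyed hash rather than encryption, the server-side counterpart of the client's CryptoJS.HmacSHA256("password", "key") call is Python's hmac module; a minimal sketch:
import hashlib
import hmac

digest = hmac.new(b"key", b"password", hashlib.sha256).hexdigest()
print(digest)  # should match the hex output of the CryptoJS call above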
| 0 | 0 | 0 | 0 |
2012-05-17T11:50:00.000
| 2 | -0.099668 | false | 10,635,212 | 0 | 0 | 1 | 1 |
As far as I know, in digest authentication a client does an irreversible computation, using the password and a random value supplied by the server as input values. The result is transmitted to the server, which does the same computation and authenticates the client if it arrives at the same value. Since the computation is irreversible, an eavesdropper can't obtain the password.
Keeping the above definition in mind, I used CryptoJS.HmacSHA256("password", "key") in JavaScript to send the information to the Django server. Now the problem is:
I need to check that on the server using the same logic, but Django has already hashed the password in its own format, for example using pbkdf2_sha256.
Should I use some reversible algorithm like AES? I don't think it is possible to replicate Django's hashing algorithm and write the same for the client side.
|
Python psycopg2 + mod_wsgi: connection is very slow and automatically close
| 10,645,670 | 1 | 0 | 450 | 1 |
python,web-services,apache2,mod-wsgi,psycopg2
|
If you are using mod_wsgi in embedded mode, especially with the prefork MPM for Apache, then it is likely that Apache is killing off the idle processes. Try using mod_wsgi daemon mode, which keeps the processes persistent, and see if it makes a difference.
| 0 | 0 | 0 | 1 |
2012-05-17T13:12:00.000
| 1 | 0.197375 | false | 10,636,409 | 0 | 0 | 1 | 1 |
I have made a Python Ladon webservice and I run it on Ubuntu with Apache2 and mod_wsgi. (I use Python 2.6.)
The webservice connects to a PostgreSQL database with the psycopg2 Python module.
My problem is that the psycopg2 connection is closed (or destroyed) automatically after a little time (after about 1 or 2 minutes).
The other hand if I run the server with
ladon2.6ctl testserve
command (http://ladonize.org/index.php/Python_Configuration)
then the server works and the connection is not closed automatically.
I can't understand why the connection is closed with Apache + mod_wsgi, and why in this case the webserver is also very slow.
Can anyone help me?
|
Creating a KhanAcademy clone via Google App Engine - issues with application name in app.yaml
| 10,637,991 | 3 | 2 | 1,290 | 0 |
python,google-app-engine,clone
|
The problem is that your 'clone' application does not have access to Khans Academy's AppEngine datastore so there is no content to display. Even if you do use all of the code for their application, you are still going to have to generate all of your own content.
Even if you are planning to 'clone' their content, too, you are going to have to do a lot of probably manual work to get it in to your application's datastore.
| 0 | 1 | 0 | 0 |
2012-05-17T14:27:00.000
| 1 | 0.53705 | false | 10,637,637 | 0 | 0 | 1 | 1 |
I'm trying to create a KhanAcademy (KA) clone on Google App Engine (GAE). I downloaded the offline version of KA (http://code.google.com/p/khanacademy/downloads/list) for Mac, and set it up with GoogleAppEngineLauncher (https://developers.google.com/appengine/). Because KA was produced on Python 2.5, I have the setup running through the Python 2.5 included in the KA offline version download, and I added these extra flags to the app (to essentially duplicate the functionality of the included Run file):
--datastore_path=/Users/Tadas/KhanAcademy/code/datastore --use_sqlite
As is, GAELauncher is able to get that up and running perfectly fine on a localhost. However, to get it up on my Google appspot domain, I need to change the application name in app.yaml. When I change "application: khan-academy" in app.yaml to a new name and try to run the local version via GAELauncher (or the included Run file), the site comes up but all the content (exercises, etc.) has disappeared (essentially, the site loses most of its functionality). If I try to "Deploy" the app in this state, I received a 500 Server Error when I try to go on the appspot website. Any ideas as to what could be going wrong?
Thanks.
|
Localhost Server Refusing Connection
| 10,641,540 | 11 | 6 | 31,050 | 0 |
python,django,localhost,port
|
You need to know your server's IP or domain address. If you used example.com to access your server over SSH, then launching Django with
./manage.py runserver 0.0.0.0:8002
and accessing it with http://example.com:8002 should work. But if you only know the IP, then launch Django with that IP instead of
0.0.0.0
and access it with http://YOUR-IP:8002
| 0 | 0 | 0 | 0 |
2012-05-17T17:41:00.000
| 2 | 1.2 | true | 10,640,720 | 0 | 0 | 1 | 1 |
I just set up Django on a Dreamhost server. I ran through everything, but can't seem to get the welcome page. I got to the point where it says "Development server is running at 127.0.0.1:8002" (I tried 8000 but got the "that port is already in use" error). When I try to access that address in my browser in Chrome I get Error 102 (net::ERR_CONNECTION_REFUSED): The server refused the connection.
Any idea why this is happening? I am stuck in a loop, I have no clue what is going on. Help is sincerely appreciated.
|
Django path in Mac OS X
| 10,643,577 | 1 | 0 | 906 | 0 |
python,django
|
It needs to go in $PYTHONPATH instead. Create that variable if it's not already defined.
| 0 | 0 | 0 | 0 |
2012-05-17T21:05:00.000
| 1 | 1.2 | true | 10,643,506 | 1 | 0 | 1 | 1 |
I installed Django and want to specify its path on my Mac, so I put the path into .profile and also checked $PATH to ensure it is specified. However, when I go to Python's environment and type import django, it cannot be found. I have no idea why. Any suggestions?
|
Flask SQLAlchemy not picking up changed records
| 15,194,364 | 1 | 1 | 326 | 1 |
python,flask-sqlalchemy
|
Your app's SELECT probably runs within its own transaction / session, so changes submitted by another session (e.g. a MySQL Workbench connection) are not yet visible to your SELECT. You can easily verify this by enabling the MySQL general log or by setting echo=True in your create_engine(...) definition. Chances are you're starting your SQLAlchemy session in SET AUTOCOMMIT = 0 mode, which requires an explicit commit or rollback (when you restart / reload, Flask-SQLAlchemy does it for you automatically). Try either starting your session in autocommit=True mode or adding an explicit commit/rollback before calling your SELECT.
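As a minimal sketch of the explicit commit/refresh option (assuming a standard Flask-SQLAlchemy db object and a hypothetical MyModel):
db.session.commit()         # or db.session.rollback(): end the current transaction
db.session.expire_all()     # drop any cached object state held by the session
rows = MyModel.query.all()  # this query now sees changes made by other connections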
| 0 | 0 | 0 | 0 |
2012-05-18T01:57:00.000
| 1 | 0.197375 | false | 10,645,793 | 0 | 0 | 1 | 1 |
I'm seeing some unexpected behaviour with Flask-SQLAlchemy, and I don't understand what's going on:
If I make a change to a record using e.g. MySQL Workbench or Sequel Pro, the running app (whether running under WSGI on Apache, or from the command line) doesn't pick up the change. If I reload the app by touching the WSGI file, or by reloading it (command line), I can see the changed record. I've verified this by running an all() query in the interactive shell, and it's the same - no change until I quit the shell and start again. I get the feeling I'm missing something incredibly obvious here - it's a single table, no joins etc. Running MySQL 5.5.19 and SQLAlchemy 0.7.7 on Python 2.7.3.
|
Mark Out Multiple Delivery Zones on Google Map and Store in Database
| 10,648,479 | 1 | 0 | 667 | 1 |
python,django,postgresql,google-maps,postgis
|
"Using Python and Django" only, you're not going to do this. Obviously you're going to need Javascript.
So you may as well dump Google Maps and use an open-source web mapping framework. OpenLayers has a well-defined Javascript API which will let you do exactly what you want. Examples in the OpenLayers docs show how.
You'll thank me later - specifically when Google come asking for a fee for their map tiles and you can't switch your Google Maps widget to OpenStreetMap or some other tile provider. This Actually Happens.
| 0 | 0 | 0 | 0 |
2012-05-18T06:01:00.000
| 2 | 0.099668 | false | 10,647,482 | 0 | 0 | 1 | 1 |
Background:
I'm trying to use a Google Map as an interface to mark out multiple polygons, that can be stored in a Postgres Database.
The Database will then be queried with a geocoded Longitude Latitude Point to determine which of the Drawn Polygons encompass the point.
Using Python and Django.
Question
How do I configure the Google Map to allow a user to click around and specify multiple polygon areas?
|
python web extraction of iframe (ajax ) content
| 10,663,615 | 1 | 1 | 1,272 | 0 |
python,web-scraping
|
Splinter (http://splinter.cobrateam.info - uses Selenium) makes browsing iframe elements easy, at least as long as the iframe tag has an id attribute.
| 0 | 0 | 1 | 0 |
2012-05-18T10:10:00.000
| 2 | 0.099668 | false | 10,650,676 | 0 | 0 | 1 | 1 |
I am working on python web scraping
The web page is populated using an iframe and the content is filled in by Ajax (jQuery).
I have tried using the src of the iframe (with lxml, etc.) but it's of no use.
How can I extract the content of the iframe using Python modules?
Thanks
|
Simulating the passing of time in unittesting
| 10,653,559 | 4 | 20 | 3,901 | 0 |
python,testing,mocking,integration-testing,celery
|
Without the use of a special mock library, I propose preparing the code to run in a mock-up mode (probably controlled by a global variable). In mock-up mode, instead of calling the normal time function (like time.time() or whatever), you could call a mock-up time function which returns whatever you need in your special case.
I would vote against changing the system time. That does not seem like a unit test but rather a functional test, as it cannot be done in parallel with anything else on that machine.
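A minimal sketch of that idea (module and variable names are illustrative): route every time lookup through one function and let tests override the clock.
import time

_fake_now = None  # tests set this; it stays None in production

def current_time():
    return _fake_now if _fake_now is not None else time.time()

# In a test, simulating X + 1 days passing could look like:
# clock._fake_now = time.time() + (X + 1) * 24 * 3600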
| 0 | 1 | 0 | 1 |
2012-05-18T11:48:00.000
| 3 | 0.26052 | false | 10,652,097 | 0 | 0 | 1 | 1 |
I've built a paywalled CMS + invoicing system for a client and I need to get more stringent with my testing.
I keep all my data in a Django ORM and have a bunch of Celery tasks that run at different intervals to make sure that new invoices and invoice reminders get sent and access is cut off when users don't pay their invoices.
For example I'd like to be able to run a test that:
Creates a new user and generates an invoice for X days of access to the site
Simulates the passing of X + 1 days, and runs all the tasks I've got set up in Celery.
Checks that a new invoice for another X days has been issued to the user.
The KISS approach I've come up with so far is to do all the testing on a separate machine and actually manipulate the date/time at the OS-level. So the testing script would:
Set the system date to day 1
Create a new user and generate the first invoice for X days of access
Advance the system date by 1 day. Run all my Celery tasks. Repeat until X + 1 days have "passed"
Check that a new invoice has been issued
It's a bit clunky but I think it might work. Any other ideas on how to get it done?
|
Django: How to manage development and production settings?
| 69,156,705 | 0 | 184 | 106,539 | 0 |
python,django
|
You're probably going to use the wsgi.py file for production (this file is created automatically when you create the django project). That file points to a settings file. So make a separate production settings file and reference it in your wsgi.py file.
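For example, a production wsgi.py could point at a dedicated settings module (the module path here is an assumption about your project layout):
import os
from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings.production")
application = get_wsgi_application()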
| 0 | 0 | 0 | 0 |
2012-05-19T10:12:00.000
| 19 | 0 | false | 10,664,244 | 0 | 0 | 1 | 2 |
I have been developing a basic app. Now at the deployment stage it has become clear I have need for both a local settings and production settings.
It would be great to know the following:
How best to deal with development and production settings.
How to keep apps such as django-debug-toolbar only in a development environment.
Any other tips and best practices for development and deployment settings.
|
Django: How to manage development and production settings?
| 71,316,114 | 0 | 184 | 106,539 | 0 |
python,django
|
What we do here is to have a .env file for each environment. This file contains a lot of variables, like ENV=development.
The settings.py file is basically a bunch of os.environ.get() calls, like ENV = os.environ.get('ENV').
So when you need to access that, you can do ENV = settings.ENV.
You would have to have a .env file for each of production, testing and development.
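As a small sketch of what such a settings.py fragment could look like (variable names are illustrative):
import os

ENV = os.environ.get('ENV', 'development')
DEBUG = ENV == 'development'   # e.g. only enable debug tooling outside production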
| 0 | 0 | 0 | 0 |
2012-05-19T10:12:00.000
| 19 | 0 | false | 10,664,244 | 0 | 0 | 1 | 2 |
I have been developing a basic app. Now at the deployment stage it has become clear I have need for both a local settings and production settings.
It would be great to know the following:
How best to deal with development and production settings.
How to keep apps such as django-debug-toolbar only in a development environment.
Any other tips and best practices for development and deployment settings.
|
Django: How do you organise static pages in apps?
| 10,668,094 | 0 | 1 | 1,228 | 0 |
python,django,static
|
I put them in a 'business' app. I feel there is no need to overthink it really.
| 0 | 0 | 0 | 0 |
2012-05-19T13:11:00.000
| 3 | 0 | false | 10,665,475 | 0 | 0 | 1 | 1 |
It's said to keep each functionality as an app and keep it as pluggable as possible.
So,
How do you organise pages like :
Homepage
About Us
Contact Us
etc
These are not exactly functionality, so how do Django devs manage these?
|
Eclipse plugin that just runs a python script
| 10,856,306 | 1 | 2 | 1,884 | 0 |
python,eclipse-plugin,eclipse-pde
|
You can already create an External Launch config from Run > External Tools > External Tools Configurations. You are basically calling the program from Eclipse. Any output should then show up in the Eclipse Console view. External launch configs can also be turned into External Builders and attached to projects.
If you are looking to run your Python script within your JVM then you need an implementation of Python in Java ... is that what you are looking for?
| 0 | 1 | 0 | 1 |
2012-05-19T13:55:00.000
| 1 | 1.2 | true | 10,665,768 | 0 | 0 | 1 | 1 |
I want to generate an Eclipse plugin that just runs an existing Python script with parameters.
While this sounds very simple, I don't think it's easy to implement. I can generate a Eclipse plugin. My issue is not how to use PDE. But:
can I call the existing Python script from Java, from an Eclipse plugin?
it needs to run from the embedded console with some parameters
Is this reasonably easy to do? And I don't plan to reimplement it in any way. Calling it from command-line works very well. My question is: can Eclipse perform this, too?
Best,
Marius
|
Include needed for Python/Django
| 10,670,224 | 0 | 0 | 119 | 0 |
python,django,include
|
This isn't rocket science.
Create a constants file, say constants.txt. Put name/value pairs in that file in an easily-parseable format. For example name:value. Write a small program in your language of choice (Python would be great for this). This program reads in constants.txt, and then writes out an appropriate file for each of the languages you will be working with (like constants.py, constants.h, etc.)
For example, if constants.txt contained an entry of 'MODEL1_FIELD1_MAX_LENGTH: 20', then constants.h could contain an entry of the form '#define MODEL1_FIELD1_MAX_LENGTH 20', but constants.py would contain an entry of the form 'MODEL1_FIELD1_MAX_LENGTH=20'. You get the picture.
Now make that little program run automatically as part of your project's build process any time constants.txt is changed.
There you go--your constants kept in one file, yet always synchronized and available for any language you use.
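A rough sketch of such a generator, assuming constants.txt holds one "NAME: value" pair per line:
def generate():
    pairs = []
    with open("constants.txt") as src:
        for line in src:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            name, value = [part.strip() for part in line.split(":", 1)]
            pairs.append((name, value))
    with open("constants.py", "w") as py, open("constants.h", "w") as h:
        for name, value in pairs:
            py.write("%s = %s\n" % (name, value))
            h.write("#define %s %s\n" % (name, value))

if __name__ == "__main__":
    generate()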
| 0 | 0 | 0 | 0 |
2012-05-19T23:46:00.000
| 3 | 0 | false | 10,669,677 | 0 | 0 | 1 | 1 |
First, let me preface this by saying I am brand new to both Python and Django. I would like to be using a language I already know, like and prefer, alas the frameworks simply don't exist for them. Bottom line, I'm no "pythonista."
At any rate, I'm on the first couple of pages of a Django tutorial, and am at the point of creating the data model. Right away I see that the example hardcodes things like the max length of character fields right there in the model. This is something I simply won't do, as this information will not only change often and be required in many places, but it will also be used when I code up backend applications in another programming language.
The critical issue is, I won't be using python for backend stuff. I will be using another language. Programs in that language will need access to things like the max length of character fields.
In any of the other languages I use, this is a simple matter. I simply stick something like a max length in a file called MAXLENGTH, and include that file wherever I need it. If max length ever needs to change (and it will), I change it in one place. It is then changed in all other places, no matter what other languages are used.
I need this capability in Python/Django, or something which will achieve similar effect with minimal hassle. I did find an import statement, but it doesn't seem to do exactly what I want (it seems to import Python code, but I can't use a Python-only solution here).
Note that I'm not likely to entertain exotic, complicated solutions involving lots of complicated declarations of classes and what not. It's a simple problem, I need a simple solution.
Also, I would accept a solution in either Python, or Django (if Django has some special capability in this regard).
Much thanks.
|
Where are dependent libraries stored in Heroku?
| 10,670,227 | 4 | 5 | 1,969 | 0 |
python,django,heroku,dependencies
|
The short answer is that the site packages live in /app/.heroku/venv/lib/python2.7/site-packages. If you want to take a look around you can open a remote shell using heroku run --app [app_name] bash. However, you probably don't want to just edit the packages in place, since there's no guarantee that Heroku won't wipe that clean and start fresh using your requirements.txt for another instance. Instead, if you need to customize a package, a good strategy is to create your own fork of the project's code and then specify your customized fork in requirements.txt.
For example, I use django-crowdsourcing for one of my sites, but needed to customize the code. So I created my own fork on google code and pointed to this custom fork using the following entry in requirements.txt:
-e hg+https://[email protected]/r/evangrim-django-crowdsourcing/@b824d8f377b5bc2706d9755650e3f35061f3e309#egg=django_crowdsourcing-dev
This tells pip to checkout a copy of my fork and use it to install the package into heroku's virtualenv for my app. Doing things this way is much more robust since it works within the pip installation framework that heroku expects you to use.
| 0 | 0 | 0 | 0 |
2012-05-20T01:08:00.000
| 2 | 1.2 | true | 10,670,022 | 1 | 0 | 1 | 1 |
I've launched my app on Heroku but need to change one of the files of one the dependent libs that I installed in the requirements.txt file.
On my local machine, this would just be in my virtual environment in lib > python2.7 > site-packages etc.
Where are these dependencies stored within Heroku's file structure? When I go into the python folder in lib the site-packages doesn't seem to have my libraries there.
|
Python module for google search in GAE
| 10,671,099 | 1 | 0 | 143 | 0 |
python,google-app-engine,python-module
|
There is no official library for what you're trying to do, and the Google Terms of Service prohibit using automated tools to 'scrape' search results.
| 0 | 1 | 0 | 0 |
2012-05-20T02:26:00.000
| 1 | 1.2 | true | 10,670,325 | 0 | 0 | 1 | 1 |
I am trying to build an application on GAE using Python. What I need to do is take the query received from the user, give it to Google search, and return the answer to the user in a formatted way. I found lots of questions asked here, but couldn't get a clear answer regarding my requirements. My needs are:
It needs to process a large number of links. Many of the Google APIs described give only the top four links.
Which module is best for my requirement? Do I need to go for something like Mechanize or urllib? I don't know whether they work in GAE. I also found a Google API, but it gives only a few results.
|
How to gracefully shutdown any WSGI server?
| 10,671,041 | 4 | 3 | 1,133 | 0 |
python,http,wsgi
|
HTTPd has the graceful-stop predicate for -k that will allow it to bring down any workers after they have completed their request. mod_wsgi is required to make it a WSGI container.
| 0 | 0 | 0 | 0 |
2012-05-20T05:24:00.000
| 1 | 0.664037 | false | 10,671,004 | 0 | 0 | 1 | 1 |
I've been experimenting with several WSGI servers and am unable to find a way for them to gracefully shut down. What I mean by graceful is that the server stops listen()'ing for new requests, but finishes processing all connections that have been accept()'ed. The server process then exits.
So far I have spent some time with FAPWS, Cherrypy, Tornado, and wsgiref. It seems like no matter what I do, some of the clients receive a "Connection reset by peer".
Can someone direct me to a WSGI server that handles this properly? Or know of a way to configure one of these servers to doing a clean shutdown? I think my next step is to mock up a simple http server that does what I want.
|
Which database model should I use for dynamic modification of entities/properties during runtime?
| 10,792,940 | 6 | 20 | 4,158 | 1 |
python,database,dynamic,sqlalchemy,redis
|
What you're asking about is a common requirement in many systems -- how to extend a core data model to handle user-defined data. That's a popular requirement for packaged software (where it is typically handled one way) and open-source software (where it is handled another way).
The earlier advice to learn more about RDBMS design generally can't hurt. What I will add to that is, don't fall into the trap of re-implementing a relational database in your own application-specific data model! I have seen this done many times, usually in packaged software. Not wanting to expose the core data model (or permission to alter it) to end users, the developer creates a generic data structure and an app interface that allows the end user to define entities, fields etc. but not using the RDBMS facilities. That's usually a mistake because it's hard to be nearly as thorough or bug-free as what a seasoned RDBMS can just do for you, and it can take a lot of time. It's tempting but IMHO not a good idea.
Assuming the data model changes are global (shared by all users once admin has made them), the way I would approach this problem would be to create an app interface to sit between the admin user and the RDBMS, and apply whatever rules you need to apply to the data model changes, but then pass the final changes to the RDBMS. So for example, you may have rules that say entity names need to follow a certain format, new entities are allowed to have foreign keys to existing tables but must always use the DELETE CASCADE rule, fields can only be of certain data types, all fields must have default values etc. You could have a very simple screen asking the user to provide entity name, field names & defaults etc. and then generate the SQL code (inclusive of all your rules) to make these changes to your database.
Some common rules & how you would address them would be things like:
-- if a field is not null and has a default value, and there are already existing records in the table before that field was added by the admin, update existing records to have the default value while creating the field (multiple steps -- add the field allowing null; update all existing records; alter the table to enforce not null w/ default) -- otherwise you wouldn't be able to use a field-level integrity rule
-- new tables must have a distinct naming pattern so you can continue to distinguish your core data model from the user-extended data model, i.e. core and user-defined have different RDBMS owners (dbo. vs. user.) or prefixes (none for core, __ for user-defined) or somesuch.
-- it is OK to add fields to tables that are in the core data model (as long as they tolerate nulls or have a default), and it is OK for admin to delete fields that admin added to core data model tables, but admin cannot delete fields that were defined as part of the core data model.
In other words -- use the power of the RDBMS to define the tables and manage the data, but in order to ensure whatever conventions or rules you need will always be applied, do this by building an app-to-DB admin function, instead of giving the admin user direct DB access.
If you really wanted to do this via the DB layer only, you could probably achieve the same by creating a bunch of stored procedures and triggers that would implement the same logic (and who knows, maybe you would do that anyway for your app). That's probably more of a question of how comfortable are your admin users working in the DB tier vs. via an intermediary app.
So to answer your questions directly:
(1) Yes, add tables and columns at run time, but think about the rules you will need to have to ensure your app can work even once user-defined data is added, and choose a way to enforce those rules (via app or via DB / stored procs or whatever) when you process the table & field changes.
(2) This issue isn't strongly affected by your choice of SQL vs. NoSQL engine. In every case, you have a core data model and an extended data model. If you can design your app to respond to a dynamic data model (e.g. add new fields to screens when fields are added to a DB table or whatever) then your app will respond nicely to changes in both the core and user-defined data model. That's an interesting challenge but not much affected by choice of DB implementation style.
Good luck!
| 0 | 0 | 0 | 0 |
2012-05-20T11:16:00.000
| 4 | 1 | false | 10,672,939 | 0 | 0 | 1 | 2 |
I am thinking about creating an open source data management web application for various types of data.
A privileged user must be able to
add new entity types (for example a 'user' or a 'family')
add new properties to entity types (for example 'gender' to 'user')
remove/modify entities and properties
These will be common tasks for the privileged user. He will do this through the web interface of the application. In the end, all data must be searchable and sortable by all types of users of the application. Two questions trouble me:
a) How should the data be stored in the database? Should I dynamically add/remove database tables and/or columns during runtime?
I am no database expert. I am stuck with the imagination that in terms of relational databases, the application has to be able to dynamically add/remove tables (entities) and/or columns (properties) at runtime. And I don't like this idea. Likewise, I am thinking if such dynamic data should be handled in a NoSQL database.
Anyway, I believe that this kind of problem has an intelligent canonical solution, which I just did not find and think of so far. What is the best approach for this kind of dynamic data management?
b) How to implement this in Python using an ORM or NoSQL?
If you recommend using a relational database model, then I would like to use SQLAlchemy. However, I don't see how to dynamically create tables/columns with an ORM at runtime. This is one of the reasons why I hope that there is a much better approach than creating tables and columns during runtime. Is the recommended database model efficiently implementable with SQLAlchemy?
If you recommend using a NoSQL database, which one? I like using Redis -- can you imagine an efficient implementation based on Redis?
Thanks for your suggestions!
Edit in response to some comments:
The idea is that all instances ("rows") of a certain entity ("table") share the same set of properties/attributes ("columns"). However, it will be perfectly valid if certain instances have an empty value for certain properties/attributes.
Basically, users will search the data through a simple form on a website. They query for e.g. all instances of an entity E with property P having a value V higher than T. The result can be sorted by the value of any property.
The datasets won't become too large. Hence, I think even the stupidest approach would still lead to a working system. However, I am an enthusiast and I'd like to apply modern and appropriate technology as well as I'd like to be aware of theoretical bottlenecks. I want to use this project in order to gather experience in designing a "Pythonic", state-of-the-art, scalable, and reliable web application.
I see that the first comments tend to recommending a NoSQL approach. Although I really like Redis, it looks like it would be stupid not to take advantage of the Document/Collection model of Mongo/Couch. I've been looking into mongodb and mongoengine for Python. By doing so, do I take steps into the right direction?
Edit 2 in response to some answers/comments:
From most of your answers, I conclude that the dynamic creation/deletion of tables and columns in the relational picture is not the way to go. This already is valuable information. Also, one opinion is that the whole idea of the dynamic modification of entities and properties could be bad design.
As exactly this dynamic nature should be the main purpose/feature of the application, I don't give up on this. From the theoretical point of view, I accept that performing operations on a dynamic data model must necessarily be slower than performing operations on a static data model. This is totally fine.
Expressed in an abstract way, the application needs to manage
the data layout, i.e. a "dynamic list" of valid entity types and a "dynamic list" of properties for each valid entity type
the data itself
I am looking for an intelligent and efficient way to implement this. From your answers, it looks like NoSQL is the way to go here, which is another important conclusion.
|
Which database model should I use for dynamic modification of entities/properties during runtime?
| 10,707,420 | 3 | 20 | 4,158 | 1 |
python,database,dynamic,sqlalchemy,redis
|
So, if you conceptualize your entities as "documents," then this whole problem maps onto a no-sql solution pretty well. As commented, you'll need to have some kind of model layer that sits on top of your document store and performs tasks like validation, and perhaps enforces (or encourages) some kind of schema, because there's no implicit backend requirement that entities in the same collection (parallel to table) share schema.
Allowing privileged users to change your schema concept (as opposed to just adding fields to individual documents - that's easy to support) will pose a little bit of a challenge - you'll have to handle migrating the existing data to match the new schema automatically.
Reading your edits, Mongo supports the kind of searching/ordering you're looking for, and will give you the support for "empty cells" (documents lacking a particular key) that you need.
If I were you (and I happen to be working on a similar, but simpler, product at the moment), I'd stick with Mongo and look into a lightweight web framework like Flask to provide the front-end. You'll be on your own to provide the model, but you won't be fighting against a framework's implicit modeling choices.
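If you go the Mongo route, one way to sketch this with mongoengine (class and field names are assumptions, not a recommendation of a final design) is a DynamicDocument, which accepts attributes that were never declared on the class:
from mongoengine import DynamicDocument, StringField, connect

connect("mydatabase")  # hypothetical database name

class Entity(DynamicDocument):
    entity_type = StringField(required=True)  # only the type is fixed up front

user = Entity(entity_type="user")
user.gender = "female"   # a property the admin defined at runtime
user.age = 35
user.save()

# Query/sort on a dynamic property, e.g. all users with age greater than T:
results = Entity.objects(entity_type="user", age__gt=30).order_by("-age")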
| 0 | 0 | 0 | 0 |
2012-05-20T11:16:00.000
| 4 | 1.2 | true | 10,672,939 | 0 | 0 | 1 | 2 |
I am thinking about creating an open source data management web application for various types of data.
A privileged user must be able to
add new entity types (for example a 'user' or a 'family')
add new properties to entity types (for example 'gender' to 'user')
remove/modify entities and properties
These will be common tasks for the privileged user. He will do this through the web interface of the application. In the end, all data must be searchable and sortable by all types of users of the application. Two questions trouble me:
a) How should the data be stored in the database? Should I dynamically add/remove database tables and/or columns during runtime?
I am no database expert. I am stuck with the imagination that in terms of relational databases, the application has to be able to dynamically add/remove tables (entities) and/or columns (properties) at runtime. And I don't like this idea. Likewise, I am thinking if such dynamic data should be handled in a NoSQL database.
Anyway, I believe that this kind of problem has an intelligent canonical solution, which I just did not find and think of so far. What is the best approach for this kind of dynamic data management?
b) How to implement this in Python using an ORM or NoSQL?
If you recommend using a relational database model, then I would like to use SQLAlchemy. However, I don't see how to dynamically create tables/columns with an ORM at runtime. This is one of the reasons why I hope that there is a much better approach than creating tables and columns during runtime. Is the recommended database model efficiently implementable with SQLAlchemy?
If you recommend using a NoSQL database, which one? I like using Redis -- can you imagine an efficient implementation based on Redis?
Thanks for your suggestions!
Edit in response to some comments:
The idea is that all instances ("rows") of a certain entity ("table") share the same set of properties/attributes ("columns"). However, it will be perfectly valid if certain instances have an empty value for certain properties/attributes.
Basically, users will search the data through a simple form on a website. They query for e.g. all instances of an entity E with property P having a value V higher than T. The result can be sorted by the value of any property.
The datasets won't become too large. Hence, I think even the stupidest approach would still lead to a working system. However, I am an enthusiast and I'd like to apply modern and appropriate technology as well as I'd like to be aware of theoretical bottlenecks. I want to use this project in order to gather experience in designing a "Pythonic", state-of-the-art, scalable, and reliable web application.
I see that the first comments tend to recommending a NoSQL approach. Although I really like Redis, it looks like it would be stupid not to take advantage of the Document/Collection model of Mongo/Couch. I've been looking into mongodb and mongoengine for Python. By doing so, do I take steps into the right direction?
Edit 2 in response to some answers/comments:
From most of your answers, I conclude that the dynamic creation/deletion of tables and columns in the relational picture is not the way to go. This already is valuable information. Also, one opinion is that the whole idea of the dynamic modification of entities and properties could be bad design.
As exactly this dynamic nature should be the main purpose/feature of the application, I don't give up on this. From the theoretical point of view, I accept that performing operations on a dynamic data model must necessarily be slower than performing operations on a static data model. This is totally fine.
Expressed in an abstract way, the application needs to manage
the data layout, i.e. a "dynamic list" of valid entity types and a "dynamic list" of properties for each valid entity type
the data itself
I am looking for an intelligent and efficient way to implement this. From your answers, it looks like NoSQL is the way to go here, which is another important conclusion.
|
Sync message to twitter in background in a web application
| 10,674,012 | 0 | 1 | 56 | 0 |
python
|
Either you choose zrxq's solution, or you can do that with a thread, if you take care of two things:
you don't tamper with objects from the main thread (be careful of iterators),
you take good care of killing your thread once the job is done.
Something that would look like:
import threading

class TwitterThreadQueue(threading.Thread):
    queue = []

    def run(self):
        while len(self.queue) != 0:
            post_on_twitter(self.queue.pop())  # here is your code to post on twitter

    def add_to_queue(self, msg):
        self.queue.append(msg)

and then you instantiate it in your code:
tweetQueue = TwitterThreadQueue()
# ...
tweetQueue.add_to_queue(message)
tweetQueue.start()  # you can check if it's not already started
# ...
| 0 | 0 | 0 | 1 |
2012-05-20T12:06:00.000
| 1 | 1.2 | true | 10,673,245 | 0 | 0 | 1 | 1 |
I'm writing a web app. Users can post text, and I need to store it in my DB as well as sync it to a Twitter account.
The problem is that I'd like to respond to the user immediately after inserting the message into the DB, and run the "sync to twitter" process in the background.
How could I do that? Thanks
|
Can django-tastypie display a different set of fields in the list and detail views of a single resource?
| 24,058,617 | 0 | 11 | 5,626 | 0 |
python,django,tastypie
|
You can also use the dehydrate(self, bundle) method.
def dehydrate(self, bundle):
    del bundle.data['attr-to-del']
    return bundle
| 0 | 0 | 0 | 0 |
2012-05-21T22:14:00.000
| 4 | 0 | false | 10,693,379 | 0 | 0 | 1 | 1 |
I would like for a particular django-tastypie model resource to have only a subset of fields when listing objects, and all fields when showing a detail. Is this possible?
|
queues remain unknown or just don't know how to call them
| 10,698,246 | 1 | 1 | 178 | 0 |
python,google-app-engine
|
If you are running a unit test and using init_taskqueue_stub(), you need to pass the path of your queue.yaml when calling it, via the root_path parameter.
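A rough sketch with the testbed API (the path is an assumption; root_path should be the directory that contains queue.yaml):
from google.appengine.ext import testbed

tb = testbed.Testbed()
tb.activate()
tb.init_taskqueue_stub(root_path=".")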
| 0 | 1 | 0 | 0 |
2012-05-22T07:32:00.000
| 1 | 0.197375 | false | 10,697,651 | 0 | 0 | 1 | 1 |
I've added a new queue to a python GAE app, and would like to add tasks to it, but always get an UnknownQueueError when I run my tests. On the other hand, I see the queue present in the GAE admin console (both local and remote). So the question is (1) do I miss something when I add a task to my queue? (2) if not, then how can I run custom queues in a test?
Here is my queue.yaml
queue:
- name: requests
  rate: 20/s
  bucket_size: 100
  retry_parameters:
    task_age_limit: 60s
and my python call is the following:
taskqueue.add(queue_name="requests", url=reverse('queue_request', kwargs={"ckey":ckey}))
any ideas?
|
is it possible to have two submit button for one form?
| 10,697,999 | 0 | 3 | 2,325 | 0 |
python,html,web2py
|
Yes, it is possible: the incoming request contains the name of the pressed button.
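A rough sketch of how this could look in a web2py controller (names are illustrative; the view would contain two submit inputs with different names):
# In the view, for example:
#   <input type="submit" name="search" value="Search" />
#   <input type="submit" name="lucky" value="I'm Feeling Lucky" />
def index():
    if request.vars.search:
        return do_search(request.vars.q)   # hypothetical helper functions
    if request.vars.lucky:
        return do_lucky(request.vars.q)
    return dict()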
| 0 | 0 | 0 | 0 |
2012-05-22T07:44:00.000
| 3 | 0 | false | 10,697,843 | 0 | 0 | 1 | 1 |
I am using web2py to write a search-engine-like app. Is it possible to implement two submit buttons for one form, the way Google has the two buttons "search" and "I'm feeling lucky"? Thanks in advance.
|
Get started with pystache
| 10,703,762 | 3 | 1 | 617 | 0 |
python
|
Pystache is a template library, not an HTTP server! If you want to make a webapp, try using a ready-made web framework like Django or Pyramid.
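That said, producing a static .html file from a .mustache template is a short script; a rough sketch (file names are assumptions):
import pystache

with open("index.mustache") as f:
    template = f.read()

html = pystache.render(template, {"title": "Hello"})

with open("index.html", "w") as out:
    out.write(html)   # any web server can now serve this file as index.html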
| 0 | 0 | 0 | 0 |
2012-05-22T14:00:00.000
| 1 | 1.2 | true | 10,703,616 | 0 | 0 | 1 | 1 |
This is really a newbie question, but I don't know how to search for answers to it. I want to use pystache, and I am able to execute the .py file to print out some rendered output from a .mustache file, but how exactly do I convert this into an .html file? Specifically, how do I put it on the server so that the browser would be directed to the .html file, like index.html?
|
Is there feature in Pyramid to specify a route in the template like Django templates?
| 10,730,436 | 10 | 3 | 1,122 | 0 |
python,pyramid
|
The brackets depend on the templating engine you are using, but request.route_url('home') is the Python code you need inside.
For example, in your desired template file:
jinja2--> {{ request.route_url('home') }}
mako/chameleon--> ${ request.route_url('home') }
If your route definition includes pattern matching, such as config.add_route('sometestpage', '/test/{pagename}'), then you would do request.route_url('sometestpage', pagename='myfavoritepage')
| 0 | 0 | 0 | 0 |
2012-05-23T21:44:00.000
| 1 | 1.2 | true | 10,728,333 | 0 | 0 | 1 | 1 |
For example in Django, if I have a URL named 'home' then I can put {% url home %} in the template and it will navigate to that URL. I couldn't find anything specific in the Pyramid docs, so I am looking to you, Stack Overflow.
Thanks
|
Using a Ruby gem from a Django application
| 10,736,225 | 7 | 2 | 333 | 0 |
python,ruby-on-rails,ruby,django,interop
|
I suggest you either:
Expose a ruby service using REST or XML-RPC.
or
Shell out to a ruby script from Django.
To transfer data between Python and Ruby I suggest you use JSON, XML or plain text (depending on what kind of data you need to transfer).
I would recommend option 2 (starting a Ruby script from the Python process), as this introduces fewer moving parts to the solution.
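A minimal sketch of option 2 (the script name and output format are assumptions): run the Ruby script as a child process and exchange data as JSON on stdout.
import json
import subprocess

output = subprocess.check_output(["ruby", "my_gem_wrapper.rb", "some-argument"])
result = json.loads(output)   # the Ruby script is assumed to print JSON
print(result)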
| 0 | 0 | 0 | 0 |
2012-05-24T10:50:00.000
| 2 | 1.2 | true | 10,735,998 | 0 | 0 | 1 | 2 |
Lets say I have a few Ruby gems that I'd like to use from my Python (Django) application. I know this isn't the most straightforward question but let's assume that rewriting the Ruby gem in Python is a lot of work, how can I use it?
Should I create an XML-RPC wrapper around it using Rails and call it? Is there something like a ruby implementation in Python within which I could run my gem code?
Are there other methods that I may have missed? I've never tackled anything like this before, so I was a bit lost in this area.
Thanks
|
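A minimal sketch of the shell-out option from the answer above, exchanging JSON over stdin/stdout; gem_wrapper.rb is a hypothetical wrapper script you would write around the gem:
import json
import subprocess

def call_ruby_gem(payload):
    # gem_wrapper.rb is a hypothetical Ruby script that reads JSON on stdin
    # and prints a JSON result on stdout
    proc = subprocess.Popen(['ruby', 'gem_wrapper.rb'],
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = proc.communicate(json.dumps(payload))
    return json.loads(out)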
Using a Ruby gem from a Django application
| 10,737,263 | 3 | 2 | 333 | 0 |
python,ruby-on-rails,ruby,django,interop
|
It depends a little on what you need to do. The XML-RPC suggestion has already been made.
You might actually be able to use them together in a JVM, assuming you can accept running Django with jython and use jruby. But that is a bit of work, which may or may not be worth the effort.
It would perhaps be easier if you described exactly what the Ruby gem is and what problem it is supposed to solve. You might get suggestions that could help you avoid the problem altogether.
| 0 | 0 | 0 | 0 |
2012-05-24T10:50:00.000
| 2 | 0.291313 | false | 10,735,998 | 0 | 0 | 1 | 2 |
Lets say I have a few Ruby gems that I'd like to use from my Python (Django) application. I know this isn't the most straightforward question but let's assume that rewriting the Ruby gem in Python is a lot of work, how can I use it?
Should I create an XML-RPC wrapper around it using Rails and call it? Is there something like a ruby implementation in Python within which I could run my gem code?
Are there other methods that I may have missed? I've never tackled anything like this before, so I was a bit lost in this area.
Thanks
|
Do CSRF attacks apply to API's?
| 16,702,510 | 19 | 69 | 32,350 | 0 |
python,django,api,security
|
They do apply if you're also using your API to support a website.
In this case you still need some form of CSRF protection to prevent someone embedding requests in other sites to have drive-by effects on an authenticated user's account.
Chrome seems to deny cross-origin POST requests by default (other browsers may not be so strict), but allows GET requests cross-origin so you must make sure any GET requests in your API don't have side-effects.
| 0 | 0 | 0 | 0 |
2012-05-24T16:14:00.000
| 5 | 1 | false | 10,741,339 | 0 | 0 | 1 | 2 |
I'm writing a Django RESTful API to back an iOS application, and I keep running into Django's CSRF protections whenever I write methods to deal with POST requests.
My understanding is that cookies managed by iOS are not shared by applications, meaning that my session cookies are safe, and no other application can ride on them. Is this true? If so, can I just mark all my API functions as CSRF-exempt?
|
Do CSRF attacks apply to API's?
| 10,741,650 | 72 | 69 | 32,350 | 0 |
python,django,api,security
|
That's not the purpose of CSRF protection. CSRF protection is to prevent direct posting of data to your site. In other words, the client must actually post through an approved path, i.e. view the form page, fill it out, submit the data.
An API pretty much precludes CSRF, because its entire purpose is generally to allow 3rd-party entities to access and manipulate data on your site (the "cross-site" in CSRF). So, yes, I think as a rule any API view should be CSRF exempt. However, you should still follow best practices and protect every API-endpoint that actually makes a change with some form of authentication, such as OAuth.
| 0 | 0 | 0 | 0 |
2012-05-24T16:14:00.000
| 5 | 1.2 | true | 10,741,339 | 0 | 0 | 1 | 2 |
I'm writing a Django RESTful API to back an iOS application, and I keep running into Django's CSRF protections whenever I write methods to deal with POST requests.
My understanding is that cookies managed by iOS are not shared by applications, meaning that my session cookies are safe, and no other application can ride on them. Is this true? If so, can I just mark all my API functions as CSRF-exempt?
|
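A minimal sketch of the approach in the accepted answer: the API view is exempted from CSRF checks and the caller is authenticated by some other means. The token comparison below is only a placeholder for whatever scheme (OAuth, etc.) you actually use:
from django.http import HttpResponse, HttpResponseForbidden
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def api_orders(request):
    if request.method != 'POST':
        return HttpResponse(status=405)
    # placeholder auth check -- replace with OAuth or your real authentication
    if request.META.get('HTTP_AUTHORIZATION') != 'Token sekrit':
        return HttpResponseForbidden()
    return HttpResponse(status=201)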
Python multiple processes instead of threads?
| 10,743,293 | 1 | 2 | 624 | 0 |
python,multithreading,multiprocess
|
First, profile your code to determine what is bottlenecking your performance.
If each of your threads are frequently writing to your MySQL database, the problem may be disk I/O, in which case you should consider using an in-memory database and periodically write it to disk.
If you discover that CPU performance is the limiting factor, then consider using the multiprocessing module instead of the threading module. Use a multiprocessing.Queue object to push your tasks. Also make sure that your tasks are big enough to keep each core busy for a while, so that the granularity of communication doesn't kill performance. If you are currently using threading, then switching to multiprocessing would be the easiest way forward for now.
| 0 | 0 | 0 | 0 |
2012-05-24T17:56:00.000
| 2 | 0.099668 | false | 10,742,820 | 1 | 0 | 1 | 1 |
I am working on a web backend that frequently grabs realtime market data from the web, and puts the data in a MySQL database.
Currently I have my main thread push tasks into a Queue object. I then have about 20 threads that read from that queue, and if a task is available, they execute it.
Unfortunately, I am running into performance issues, and after doing a lot of research, I can't make up my mind.
As I see it, I have 3 options:
Should I take a distributed task approach with something like Celery?
Should I switch to JPython or IronPython to avoid the GIL issues?
Or should I simply spawn different processes instead of threads using processing?
If I go for the latter, how many processes is a good amount? What is a good multi process producer / consumer design?
Thanks!
|
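A minimal producer/consumer sketch with the multiprocessing module, as the answer suggests; handle_task is a placeholder for whatever work the market-data workers actually do:
import multiprocessing

def handle_task(task):
    pass  # placeholder: fetch data, write to MySQL, etc.

def worker(queue):
    while True:
        task = queue.get()
        if task is None:      # sentinel value: shut this worker down
            break
        handle_task(task)

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    workers = [multiprocessing.Process(target=worker, args=(queue,)) for _ in range(4)]
    for w in workers:
        w.start()
    for task in ['job-1', 'job-2', 'job-3']:
        queue.put(task)
    for w in workers:
        queue.put(None)       # one sentinel per worker
    for w in workers:
        w.join()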
Access Win32 dll on Google App Engine?
| 10,743,291 | 6 | 0 | 191 | 0 |
python,winapi,google-app-engine,licensing
|
Nope, App Engine's python runtime only supports pure python modules. Wrapped native code modules won't work.
| 0 | 1 | 0 | 0 |
2012-05-24T18:21:00.000
| 1 | 1.2 | true | 10,743,158 | 0 | 0 | 1 | 1 |
BACKGROUND:
I work on a small team in a large company where I'm currently revamping the licensing system for a suite of mixed .Net and Win32 products that I update annually. Each product references a win32 .dll for product validation. I only have the binary file and the header file for the licensing module (so no hash algorithm). Somehow customers are able to purchase software on our website and receive a disk in the mail with a serial key. Keys are product specific, so disks and keys can be easily shared.
GOALS:
Modify the hash input so keys are now based on major version number (done).
Implement a web service using App Engine (it's just me so I don't want to maintain any hardware) whereby a user can purchase a serial that is automatically generated and delivered via email.
Use the existing licensing module or replicate the hash/API (I would like whoever is sending out serial keys to continue to do so except for maybe a minor change to their work flow, like adding the version number).
QUESTIONS:
Is there any way to write wrap this win32 library in a python module and use it on Google's App Engine?
Are there any tools to discover the hashing algorithm being used? The library exports a generatekey function?
Any other comments or suggestions are greatly appreciated.
Cheers,
Tom
|
My page loads fine on pc or Mac but totally disappears on iPad or iPhone with iOS 5.1.1
| 10,743,917 | 1 | 0 | 428 | 0 |
iphone,python,html,ios,ipad
|
Use safari under developer mode as an iOS device to determine the root cause. After looking at what is happening, I bet your social loading code has changed something remotely, specifically, the fb-root tag that is warned about in the error console. Start there by disabling the social network stuff and start debugging.
update: I just disabled javascript on my phone and got the page up so it is definitely a JS bug somewhere.
| 0 | 0 | 0 | 0 |
2012-05-24T19:14:00.000
| 1 | 1.2 | true | 10,743,884 | 0 | 0 | 1 | 1 |
After upgrading to ios 5.1.1 from 4.2.1, my page www.zolkan.com loads and immediately disappears showing a blank gray page. I did not change anything on my page for over a year.
Before upgrading my iPhone from 4.2.1 to 5.1.1, it loaded fine. Same was with my iPad running 5.0.1... After going to 5.1.1, the page loads and disappears.
It seems that only the dynamic page (generated by python CGI) is doing this... The rest of the static pages behave normally.
Any ideas?
|
How can I bring all the used ASCII characters of a file into a dictionary/array/list and assign each character a value?
| 10,748,050 | 1 | 0 | 417 | 0 |
java,python
|
Each byte is a number from 0 to 255. An array containing those numbers is, precisely, an array containing the contents of the file. I'm not at all clear on what you want to do with this array (or dictionary, etc) but making it is going to be easy.
| 0 | 0 | 0 | 1 |
2012-05-25T03:12:00.000
| 3 | 0.066568 | false | 10,748,021 | 1 | 0 | 1 | 2 |
I basically want to read a file (could be an mp3 file or whatever). Scan the file for all the used ASCII characters of the file and put them into an dictionary, array or list. And then from there assign each character a number value.
For example:
Let's say I load in the file blabla.mp3
(Obviously this type of file is encoded so it won't be just plain english characters.)
This is it's contents:
╤dìúúH»╓╒:φººMQ╤╤╤╤┤i↔↔←GGGΦ⌠i←E::2E┤tti←╙╤ΦΦ⌠·:::::%Fæ╤╤:6Å⌠tSN│èëåD¿╢ÄÄÄÄÄÄÄÄÅO^↔:::.ÄÄÄÄÄÄèHΦΦ■ï»ó⌐╙-↔→E┤tttttttt}▲î╤╤dì"Ü:::)ú$tm‼º╤╓q╤╙·:.ñǰ"V├╡ΦPa↨/úúúúúúΦ╞îHΦ║*ÄèúóΦΦΦΦ»DΦΦ·tΘ○_Nïúkî►"DëÜ)#ú»→·:4Äïúúúúúó¿║:( ·:ç↑PR"$RGH◄◘úúó¿ΦΦΦΦ┌&HΦΦ┌+⌠WºGG ╤m→GF╘±"¿ΦñïúúúóΦò↨FæTtt╓ìú⌠ΦΦΦ⌠z:::=:::::≥E╤╤╤╤╤╤╤Tm↔↔▬Hªèi⌠ztz:::tt
I want to figure out what characters are being used and assign each one a value from 0 - 255 and each value will be unique to that character.
So ╤ = 0; Φ = 56; ú = 25 etc etc etc
Now I've been searching the python and java docs and I'm not so sure I know what I'm searching for. And I don't know if I should be worrying about ASCII characters or HEX or the raw bytes of the file.
I just need someone to point me in the right direction. Any help?
|
How can I bring all the used ASCII characters of a file into a dictionary/array/list and assign each character a value?
| 10,748,060 | 0 | 0 | 417 | 0 |
java,python
|
Each byte you read in already is a value between 0 and 255 (thus a byte). Is there a reason you can't just use that?
| 0 | 0 | 0 | 1 |
2012-05-25T03:12:00.000
| 3 | 0 | false | 10,748,021 | 1 | 0 | 1 | 2 |
I basically want to read a file (could be an mp3 file or whatever). Scan the file for all the used ASCII characters of the file and put them into an dictionary, array or list. And then from there assign each character a number value.
For example:
Let's say I load in the file blabla.mp3
(Obviously this type of file is encoded so it won't be just plain english characters.)
This is it's contents:
╤dìúúH»╓╒:φººMQ╤╤╤╤┤i↔↔←GGGΦ⌠i←E::2E┤tti←╙╤ΦΦ⌠·:::::%Fæ╤╤:6Å⌠tSN│èëåD¿╢ÄÄÄÄÄÄÄÄÅO^↔:::.ÄÄÄÄÄÄèHΦΦ■ï»ó⌐╙-↔→E┤tttttttt}▲î╤╤dì"Ü:::)ú$tm‼º╤╓q╤╙·:.ñǰ"V├╡ΦPa↨/úúúúúúΦ╞îHΦ║*ÄèúóΦΦΦΦ»DΦΦ·tΘ○_Nïúkî►"DëÜ)#ú»→·:4Äïúúúúúó¿║:( ·:ç↑PR"$RGH◄◘úúó¿ΦΦΦΦ┌&HΦΦ┌+⌠WºGG ╤m→GF╘±"¿ΦñïúúúóΦò↨FæTtt╓ìú⌠ΦΦΦ⌠z:::=:::::≥E╤╤╤╤╤╤╤Tm↔↔▬Hªèi⌠ztz:::tt
I want to figure out what characters are being used and assign each one a value from 0 - 255 and each value will be unique to that character.
So ╤ = 0; Φ = 56; ú = 25 etc etc etc
Now I've been searching the python and java docs and I'm not so sure I know what I'm searching for. And I don't know if I should be worrying about ASCII characters or HEX or the raw bytes of the file.
I just need someone to point me in the right direction. Any help?
|
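A minimal Python 2 sketch of what both answers are saying: every byte read from the file is already a number between 0 and 255, so the "assignment" is just ord() of each distinct character:
with open('blabla.mp3', 'rb') as f:
    data = f.read()

# map each distinct character that actually occurs in the file to its byte value
values = {}
for ch in data:          # in Python 2, ch is a one-character str; in Python 3 it is already an int
    values[ch] = ord(ch)

print len(values), 'distinct byte values used'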
python eclipse dependency plugin - m2eclipse like
| 10,803,047 | 0 | 0 | 195 | 0 |
python,eclipse,pydev
|
The only one I have found available so far is PyFlakes; it does some level of dependency checking and import validation.
| 0 | 1 | 0 | 0 |
2012-05-25T12:32:00.000
| 2 | 0 | false | 10,754,496 | 0 | 0 | 1 | 1 |
Is there any eclipse plugin for python dependency management? just like what M2Eclipse does for maven project? so I can resolve all the dependencies and get ride off all the errors when I develop python using pydev.
If there is no such plugin, how do I resolve the dependencies, do I have to install the dependency modules locally?
|
SHA 512 Password with webapp2 and App Engine?
| 10,780,404 | 2 | 1 | 827 | 0 |
python,google-app-engine,webapp2
|
As you observe, the default User model doesn't provide any way to customize the hash function being used. You could subclass it and redefine the problematic methods to take a hash parameter, or file a feature request with the webapp2 project.
Webapp2's password hashing has much bigger issues, though, as it doesn't do password stretching. While it optionally(!) salts the hash, it doesn't iterate it, making brute force attacks more practical than they should be for an attacker. It should implement a proper password primitive such as PBKDF2, SCrypt, or BCrypt.
To answer your question about relative strengths of hash functions, while SHA1 is showing some weakness, nobody has successfully generated a collision, much less a preimage. Further, the HMAC construction can result in secure HMACs even with a hash function that's weak against collision attacks; arguably even MD5 would work here.
Of course, attacks only ever get better, never worse, so it's a good idea to prepare for the future. If you're concerned about security, though, you should be much more concerned about the lack of stretching than the choice of hash function. And if you're really concerned about security, you shouldn't be doing authentication yourself - you should be using the Users API or OAuth, so someone else can have the job of securely storing passwords.
| 0 | 1 | 0 | 0 |
2012-05-27T05:55:00.000
| 1 | 1.2 | true | 10,771,973 | 0 | 0 | 1 | 1 |
If you are using webapp2 with Google App Engine, you can see there is only one way to create a user: the "create_user" method [auth/models.py line:364].
But that method calls the "security.generate_password_hash" method, in which it is not possible to use SHA 512.
Q1: I would like to know what is the best way to create a SHA 512 Password with webapp2 and App Engine Python?
Q2: Is good idea use SHA 512 instead of encryption offered by webapp2 (SHA1), or it's enough?
|
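A minimal sketch of the stretching the answer recommends, using PBKDF2 with SHA-512 from the standard library; note hashlib.pbkdf2_hmac needs Python 2.7.8+ or 3.4+, so on older runtimes you would fall back to a library such as passlib:
import hashlib
import os

def hash_password(password, salt=None, iterations=100000):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha512', password.encode('utf-8'), salt, iterations)
    return salt, digest

def check_password(password, salt, expected, iterations=100000):
    return hashlib.pbkdf2_hmac('sha512', password.encode('utf-8'), salt, iterations) == expected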
PyDev for App Engine - re-import External Libs
| 10,773,714 | 0 | 0 | 149 | 0 |
python,eclipse,google-app-engine,pydev
|
If you create a new project, you get all the new libs. Move your existing (imported) sources to this new project.
| 0 | 1 | 0 | 0 |
2012-05-27T11:33:00.000
| 1 | 1.2 | true | 10,773,667 | 0 | 0 | 1 | 1 |
I have a project which I created 2 years ago. I need to work on it again, and didn't have it in my Eclipse Workspace so I downloaded it from git and did an import existing projects into workspace. All worked well, except I notice the External Libraries do not contain all the new libraries added to the SDK since I created the project (and there's loads now compared to then). It would be useful if I could select the GAE root dir and let Eclipse automatically pull in all the libs for me, as it does when you create a new project. I don't see a way of doing this other than adding them 1 by 1. Does anyone have any tips?!
|
In app engine, can I call "get_or_insert" from inside a transaction?
| 10,791,742 | 2 | 2 | 308 | 1 |
python,google-app-engine
|
No. get_or_insert is syntactic sugar for a transactional function that fetches or inserts a record. You can implement it yourself trivially, but that will only work if the record you're operating on is in the same entity group as the rest of the entities in the current transaction, or if you have cross-group transactions enabled.
| 0 | 1 | 0 | 0 |
2012-05-28T21:01:00.000
| 2 | 0.197375 | false | 10,790,381 | 0 | 0 | 1 | 1 |
In google app engine, can I call "get_or_insert" from inside a transaction?
The reason I ask is because I'm not sure if there is some conflict with having this run its own transaction inside an already running transaction.
Thanks!
|
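A rough sketch of what "implement it yourself" can look like when called from code that is already inside a transaction; MyModel is a stand-in model, and this is only safe if the entity is in the current entity group (or cross-group transactions are enabled):
from google.appengine.ext import db

class MyModel(db.Model):            # stand-in model for illustration
    counter = db.IntegerProperty(default=0)

def get_or_insert_inside_txn(key_name, **kwds):
    # assumes the caller is already running inside db.run_in_transaction(...)
    entity = MyModel.get_by_key_name(key_name)
    if entity is None:
        entity = MyModel(key_name=key_name, **kwds)
        entity.put()
    return entity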
Django & AJAX Changing Div Contents
| 10,795,821 | 1 | 0 | 522 | 0 |
python,ajax,django,web-applications
|
Simply return the rendered template fragment. You don't need to do anything special. Your Javascript can then just insert it into the DOM at the relevant point.
| 0 | 0 | 0 | 0 |
2012-05-29T04:59:00.000
| 1 | 1.2 | true | 10,793,272 | 0 | 0 | 1 | 1 |
My website has submenus for sections. What I want to do is: when users click a submenu, the content changes accordingly. For example, if a user clicks "Pen", the content should be a list of pens; if they click "Eraser", it should be a list of erasers.
How can I achieve this by using Django template and ajax? I know that I could retrieve the information as JSON data and parse it to update the div, but that requires a lot of work and I cannot use the Django template functionality.
I managed to pass the AJAX request to the server and process the list, but how can I return the rendered template as AJAX result?
|
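A minimal sketch of "return the rendered fragment": the view renders a partial template and hands back the HTML, which the JavaScript then inserts into the div. Item and the template path are assumptions for illustration only:
from django.http import HttpResponse
from django.template.loader import render_to_string

def item_list(request):
    kind = request.GET.get('kind', 'pen')
    items = Item.objects.filter(kind=kind)          # Item is a hypothetical model
    html = render_to_string('shop/_item_list.html', {'items': items})
    return HttpResponse(html)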
Where do I put cleanup code in a Flask application?
| 10,798,159 | 3 | 8 | 4,055 | 0 |
python,flask
|
The atexit module allows you to register program termination callbacks. Its callbacks won't be called however if the application is terminated by a signal. If you need to handle those cases, you can register the same callbacks with the signal module (for instance you might want to handle the SIGTERM signal).
I may have misunderstood what exactly you want to cleanup, but resources such as file handles or database connections will be closed anyway at interpreter shutdown, so you shouldn't have to worry about those.
| 0 | 0 | 0 | 0 |
2012-05-29T07:51:00.000
| 1 | 1.2 | true | 10,795,095 | 0 | 0 | 1 | 1 |
I'm new to web development in Python and I've chosen Flask to start my web application. I have some resources to free before application shutdown, but I couldn't find where to put my cleanup code.
Flask provides some decorators like before_request and teardown_request to register callbacks before and after request processing. Is there something similar to register a callback to be called before the application stops?
Thanks.
|
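A minimal sketch of the atexit/signal combination described in the answer; cleanup() is a placeholder for whatever resources your app needs to release:
import atexit
import signal
import sys

def cleanup():
    pass  # placeholder: close files, connections, etc.

atexit.register(cleanup)

def handle_sigterm(signum, frame):
    sys.exit(0)   # raising SystemExit lets the atexit handlers run

signal.signal(signal.SIGTERM, handle_sigterm)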
Python: Printing Html in PDF using Reportlab Canvas
| 10,826,660 | 2 | 3 | 4,178 | 0 |
python,pdf-generation,reportlab
|
If this is what you are trying to do you should look at using Platypus with ReportLab, a built-in set of classes in ReportLab for building documents out of objects representing page elements. Or, if you want really simple, xhtml2pdf would probably be better.
| 0 | 0 | 1 | 0 |
2012-05-30T07:21:00.000
| 1 | 1.2 | true | 10,811,720 | 0 | 0 | 1 | 1 |
For one of my python project I am using reportlab's canvas feature to generate pdf document.
Can anyone please help me to print small subset of html (p, strong, ul, ol, li, img, alignments) on reportlab canvas?
Thanks in advance...
|
Have user sign up with Google and get redirected back to the site afterwards?
| 10,818,524 | 1 | 0 | 688 | 0 |
python,google-app-engine
|
When your user goes to the login url, there is a red SIGN UP button on the top. They can go sign up there.
It took me a second to find too, unfortunately you can't change the login page.
| 0 | 1 | 0 | 0 |
2012-05-30T11:22:00.000
| 2 | 0.099668 | false | 10,815,286 | 0 | 0 | 1 | 1 |
I am using GAE with python and I can ask users to sign in with Google using:
loginURL = (users.create_login_url(self.request.path))
This gives me a link that lets users sign in and get redirected to my site.
However some users do not have a Google ID,
Is there any way to let them sign up for one and be redirected to my site?
I know there is no:
signupURL = (users.create_signup_url(self.request.path))
That is the kind of thing I am looking for, asking the user to sign up and have her quickly redirected when she is done.
Thank you very much for any insight.
|
Setting up a Python environment in a Rails project
| 10,817,852 | 1 | 0 | 303 | 0 |
python,ruby-on-rails,ruby,deployment,scrapy
|
Are all your (DTAP) environments using the same operating system and processor architecture?
If not, I wouldn't recommend shipping the Python interpreter with your project. Why don't you compile a more recent version of Python on your environments and install it in some non-standard path, like /opt/python27/ (or similar).
Then, just create a virtualenv on all environments using that interpreter.
Next, you deploy your project from your virtualenv (without the bin, include, etc.) to the virtualenv of the target environment.
I've never used Capistrano (Python dev myself), but I'm assuming it can just copy over directories from one environment (or VCS) to the other.
| 0 | 0 | 0 | 0 |
2012-05-30T13:09:00.000
| 1 | 1.2 | true | 10,816,962 | 1 | 0 | 1 | 1 |
I have a Ruby on Rails project, using Python + Scrapy to scrape the web, and I would like to distribute and deploy the Rails project with all Python executables and libraries installed automatically.
The deployment environment ships by default a Python version lower than 2.6, and I would like users not to depend on OS and installed Python executable.
So, basically I want to achieve a Python virtualenv inside my Rails project.
Any ideas on how do that?
I use Capistrano for deploying my Rails project.
|
Database migrations on django production
| 10,872,504 | 1 | 22 | 11,397 | 1 |
python,mysql,django,migration,django-south
|
South isn't used everywhere. In my organization, for example, we have 3 levels of code testing: a local dev environment, a staging environment, and production.
Local dev is in the developers' hands, where they can play according to their needs. Then comes staging, which is kept identical to production - of course, until a db change has to be done on the live site. We do the db changes on staging first, check that everything is working fine, and then manually change the production db, making it identical to staging again.
| 0 | 0 | 0 | 0 |
2012-05-31T01:12:00.000
| 6 | 0.033321 | false | 10,826,266 | 0 | 0 | 1 | 2 |
From someone who has a django application in a non-trivial production environment, how do you handle database migrations? I know there is south, but it seems like that would miss quite a lot if anything substantial is involved.
The other two options (that I can think of or have used) is doing the changes on a test database and then (going offline with the app) and importing that sql export. Or, perhaps a riskier option, doing the necessary changes on the production database in real-time, and if anything goes wrong reverting to the back-up.
How do you usually handle your database migrations and schema changes?
|
Database migrations on django production
| 70,559,647 | 0 | 22 | 11,397 | 1 |
python,mysql,django,migration,django-south
|
If it's not trivial, you should have a pre-prod database/app that mimics the production one, to avoid downtime on production.
| 0 | 0 | 0 | 0 |
2012-05-31T01:12:00.000
| 6 | 0 | false | 10,826,266 | 0 | 0 | 1 | 2 |
From someone who has a django application in a non-trivial production environment, how do you handle database migrations? I know there is south, but it seems like that would miss quite a lot if anything substantial is involved.
The other two options (that I can think of or have used) is doing the changes on a test database and then (going offline with the app) and importing that sql export. Or, perhaps a riskier option, doing the necessary changes on the production database in real-time, and if anything goes wrong reverting to the back-up.
How do you usually handle your database migrations and schema changes?
|
Intercepting all logging messages
| 10,827,794 | 3 | 1 | 1,549 | 0 |
python,logging
|
logging uses a hierarchy of loggers. Add a handler to the root logger and it will receive logged messages from child loggers, too.
To access the root logger use logging.getLogger().
| 0 | 0 | 0 | 1 |
2012-05-31T05:09:00.000
| 2 | 1.2 | true | 10,827,751 | 0 | 0 | 1 | 1 |
I'm working with an application where just about every module and every class emits logging messages.
I need a way to capture every single one of those messages without explicitly attaching a handler via .addHandler() to each logging instance (which is what I'm doing right now).
Is there any way to attach a handler to every logging instance at once?
|
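A minimal sketch of attaching one handler to the root logger so that messages from every module-level logger propagate to it:
import logging

root = logging.getLogger()            # the root logger, parent of all other loggers
root.setLevel(logging.DEBUG)

handler = logging.FileHandler('everything.log')
handler.setFormatter(logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s'))
root.addHandler(handler)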
migrating data from tomcat .dbx files
| 10,833,110 | 0 | 0 | 126 | 0 |
python,tomcat,jetty,data-migration,web-inf
|
The ".dbx" suffix has been used by various softwares over the years so it could be almost anything. The only way to know what you really have here is to browse the source code of the legacy java app (or the relevant doc or ask the author etc).
wrt/ scraping, it's probably going to be a lot of a pain for not much results, depending on the app.
| 0 | 0 | 0 | 0 |
2012-05-31T09:25:00.000
| 1 | 0 | false | 10,830,829 | 0 | 0 | 1 | 1 |
I want to migrate data from an old Tomcat/Jetty website to a new one which runs on Python & Django. Ideally I would like to populate the new website by directly reading the data from the old database and storing them in the new one.
Problem is that the database I was given comes in the form of a bunch of WEB-INF/data/*.dbx and I didn't find any way to read them. So, I have a few questions.
Which format do the WEB-INF/data/*.dbx use?
Is there a python module for directly reading from the WEB-INF/data/*.dbx files?
Is there some external tool for dumpint the WEB-INF/data/*.dbx to an ascii format that will be parsable by python?
If someone has attempted a similar data migration, how does it compare against scraping the data from the old website? (assuming that all important data can be scraped)
Thanks!
|
Django post_save preventing recursion without overriding model save()
| 19,936,271 | 39 | 38 | 22,986 | 0 |
python,django,django-signals
|
Don't disconnect signals. If a new instance of the same model is created while the signal is disconnected, the handler function won't be fired for it. Signals are global across Django, and several requests can run concurrently, so some would skip the post_save handler while others run it.
| 0 | 0 | 0 | 0 |
2012-05-31T19:27:00.000
| 10 | 1 | false | 10,840,030 | 0 | 0 | 1 | 3 |
There are many Stack Overflow posts about recursion using the post_save signal, to which the comments and answers are overwhelmingly: "why not override save()" or a save that is only fired upon created == True.
Well I believe there's a good case for not using save() - for example, I am adding a temporary application that handles order fulfillment data completely separate from our Order model.
The rest of the framework is blissfully unaware of the fulfillment application and using post_save hooks isolates all fulfillment related code from our Order model.
If we drop the fulfillment service, nothing about our core code has to change. We delete the fulfillment app, and that's it.
So, are there any decent methods to ensure the post_save signal doesn't fire the same handler twice?
|
Django post_save preventing recursion without overriding model save()
| 22,560,210 | 4 | 38 | 22,986 | 0 |
python,django,django-signals
|
You could also check the raw argument in post_save and then call save_base instead of save.
| 0 | 0 | 0 | 0 |
2012-05-31T19:27:00.000
| 10 | 0.07983 | false | 10,840,030 | 0 | 0 | 1 | 3 |
There are many Stack Overflow posts about recursion using the post_save signal, to which the comments and answers are overwhelmingly: "why not override save()" or a save that is only fired upon created == True.
Well I believe there's a good case for not using save() - for example, I am adding a temporary application that handles order fulfillment data completely separate from our Order model.
The rest of the framework is blissfully unaware of the fulfillment application and using post_save hooks isolates all fulfillment related code from our Order model.
If we drop the fulfillment service, nothing about our core code has to change. We delete the fulfillment app, and that's it.
So, are there any decent methods to ensure the post_save signal doesn't fire the same handler twice?
|
Django post_save preventing recursion without overriding model save()
| 10,840,333 | 91 | 38 | 22,986 | 0 |
python,django,django-signals
|
you can use update instead of save in the signal handler
queryset.filter(pk=instance.pk).update(....)
| 0 | 0 | 0 | 0 |
2012-05-31T19:27:00.000
| 10 | 1 | false | 10,840,030 | 0 | 0 | 1 | 3 |
There are many Stack Overflow posts about recursion using the post_save signal, to which the comments and answers are overwhelmingly: "why not override save()" or a save that is only fired upon created == True.
Well I believe there's a good case for not using save() - for example, I am adding a temporary application that handles order fulfillment data completely separate from our Order model.
The rest of the framework is blissfully unaware of the fulfillment application and using post_save hooks isolates all fulfillment related code from our Order model.
If we drop the fulfillment service, nothing about our core code has to change. We delete the fulfillment app, and that's it.
So, are there any decent methods to ensure the post_save signal doesn't fire the same handler twice?
|
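A minimal sketch of the update() trick from the top answer, keeping the handler in the separate fulfillment app; Order, the fulfillment_ref field and submit_to_fulfillment_service are assumptions standing in for the real models and services:
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=Order)          # Order imported from the core app (assumed)
def start_fulfillment(sender, instance, created, **kwargs):
    if not created:
        return
    ref = submit_to_fulfillment_service(instance)    # hypothetical call
    # .update() goes straight to SQL, so save() and post_save are not triggered again
    Order.objects.filter(pk=instance.pk).update(fulfillment_ref=ref)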
Page returned by POST handler ignored - get blank response (web.py)
| 10,842,122 | 1 | 1 | 257 | 0 |
python,web.py,uwsgi
|
To answer my own question: you need to call web.input(), otherwise the returned data will be ignored (who knows why? is it a bug?).
| 0 | 0 | 1 | 0 |
2012-05-31T21:57:00.000
| 1 | 0.197375 | false | 10,841,854 | 0 | 0 | 1 | 1 |
This is using web.py with uwsgi.
When I return page data from a POST handler, the browser receives a blank page instead. GET handlers are working fine for me. The handler is being called correctly, and redirects (web.seeother) will work.
|
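A minimal web.py sketch of the workaround the author found: the POST handler calls web.input() before returning the page body:
import web

urls = ('/submit', 'Submit')

class Submit(object):
    def POST(self):
        data = web.input()                 # reading the input first, per the answer above
        return "you sent: %s" % dict(data)

app = web.application(urls, globals())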
Change Plone workflow
| 12,753,017 | 2 | 0 | 164 | 0 |
python,plone
|
You should try this:
Go to portal_workflow -> contents
Copy & Paste simple_publication_workflow
Rename copy_of_simple_publication_workflow to something else (eg. my_files_workflow)
Go to my_files_workflow -> States -> published -> Permissions
Uncheck the unwanted Permissions
Now you can assign this new workflow to your files.
| 0 | 0 | 0 | 0 |
2012-06-01T08:44:00.000
| 1 | 1.2 | true | 10,847,097 | 0 | 0 | 1 | 1 |
By default, Files have no workflow,
so I put them under the simple publication workflow (private -> submit -> publish).
I want authenticated users to be unable to modify the file once it is published
(on Plone 4.0.7)
|
Set Time Constraint for generated URL of uploaded files
| 10,850,488 | 1 | 0 | 73 | 0 |
python,django,django-forms
|
You can add a DateTimeField as an additional column and expire the upload when that time has passed (a minimal sketch follows this entry).
| 0 | 0 | 0 | 0 |
2012-06-01T11:38:00.000
| 2 | 1.2 | true | 10,849,550 | 0 | 0 | 1 | 2 |
I am trying to build an application (using Django) which uploads files and generates a corresponding URL. Is there some way to set a time constraint on the URL, i.e. the uploaded file should only be reachable for a limited time, after which the URL should give an error?
I would be using the default Django server; in that case, what would be the possible ways to tackle the time-constraint problem? I would be glad if you answer for both cases, global and individual files - or even a single solution is good :)
~Newbie up with a Herculean Task! Thank You :)
|
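A minimal sketch of the DateTimeField approach, assuming a hypothetical Upload model and that the files are served through a Django view (so the check actually runs on every request):
import datetime
from django.http import Http404

def serve_upload(request, upload_id):
    upload = Upload.objects.get(pk=upload_id)     # Upload with an expires_at DateTimeField (assumed)
    if upload.expires_at and datetime.datetime.now() > upload.expires_at:
        raise Http404("This link has expired")
    # ...return the file contents here...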
Set Time Constraint for generated URL of uploaded files
| 10,849,796 | 1 | 0 | 73 | 0 |
python,django,django-forms
|
If your uploaded files are being served by the Django app itself, then it's quite easy (and can be solved in different ways depending on whether the "time constraint" is global to all files/URLs or not).
Else - that is, if the files are served by Apache or anything similar - you'll have to resort to some async mechanism to collect and delete "obsolete" files, either the quick-and-dirty way (using a cron job) or with some help from Celery.
| 0 | 0 | 0 | 0 |
2012-06-01T11:38:00.000
| 2 | 0.099668 | false | 10,849,550 | 0 | 0 | 1 | 2 |
I am trying to build an application (using Django) which uploads files and generates a corresponding URL. Is there some way to set a time constraint on the URL, i.e. the uploaded file should only be reachable for a limited time, after which the URL should give an error?
I would be using the default Django server; in that case, what would be the possible ways to tackle the time-constraint problem? I would be glad if you answer for both cases, global and individual files - or even a single solution is good :)
~Newbie up with a Herculean Task! Thank You :)
|
google.com web pages with ".py" extension?
| 10,852,170 | 2 | 1 | 1,523 | 0 |
python,webserver
|
Yes, this just accesses a resource called answer.py on the server. This naming convention is entirely up to the server - it could run a python script (most likely what is going on), or it could even be in a completely different language. In any case, all that that browser cares about is the information that is returned from that resource - HTML, CSS, XML, etc...
| 0 | 0 | 0 | 0 |
2012-06-01T14:27:00.000
| 2 | 0.197375 | false | 10,852,130 | 0 | 0 | 1 | 2 |
Take, for instance, support.google.com/mail/bin/answer.py?hl=en&answer=13287. What is that answer.py? How does this work? I'm 99.99% sure that browsers don't have the ability to interpret python code (yet) like javascript / PHP. So what is this? Is it some Python webframework?
|
google.com web pages with ".py" extension?
| 10,852,194 | 2 | 1 | 1,523 | 0 |
python,webserver
|
Most likely you're requesting a file ending in .py that spits out the usual HTML et al. The file is executed on the server side, not in your browser.
But again, that's a URL and it could point to any resource. It could be anything. A lot of websites use pretty URLs to point you to something, except in this case it's not pretty. (Behind the scenes there are routers, rewrite rules sometimes, etc. to do this.)
| 0 | 0 | 0 | 0 |
2012-06-01T14:27:00.000
| 2 | 1.2 | true | 10,852,130 | 0 | 0 | 1 | 2 |
Take, for instance, support.google.com/mail/bin/answer.py?hl=en&answer=13287. What is that answer.py? How does this work? I'm 99.99% sure that browsers don't have the ability to interpret python code (yet) like javascript / PHP. So what is this? Is it some Python webframework?
|
UDP communication between JavaScript and Python
| 10,858,211 | 4 | 0 | 302 | 0 |
javascript,python,udp,multiplayer
|
I recommend doing the dumbest, simplest thing to get your project to work, meaning probably HTTP and JSON. Then deal with any performance problems. Otherwise you'll spend much of your project on a hard optimization problem that might not really matter.
| 0 | 0 | 1 | 0 |
2012-06-01T22:38:00.000
| 2 | 0.379949 | false | 10,858,172 | 1 | 0 | 1 | 2 |
I am planning to make a multiplayer game with a JavaScript based Client UI and Python on the server side. The game will be dynamic, so communication speed is very important - consequently I have decided to use UDP. Does anyone have any tips on implementations I could utilize. What tools would you recommend for this project?
|
UDP communication between JavaScript and Python
| 10,858,488 | 1 | 0 | 302 | 0 |
javascript,python,udp,multiplayer
|
I've been using SockJS + Tornado for this sort of thing. Easy to get started with, and well supported in modern browsers.
| 0 | 0 | 1 | 0 |
2012-06-01T22:38:00.000
| 2 | 0.099668 | false | 10,858,172 | 1 | 0 | 1 | 2 |
I am planning to make a multiplayer game with a JavaScript based Client UI and Python on the server side. The game will be dynamic, so communication speed is very important - consequently I have decided to use UDP. Does anyone have any tips on implementations I could utilize. What tools would you recommend for this project?
|
Django Handle big files ( imageblob )
| 10,971,247 | 0 | 0 | 140 | 0 |
python,django,blob,sorl-thumbnail
|
It seems like the issue was the jQuery plugin that I was using to upload multiple files. The plugin was the one that split the file into chunks, which were then sent individually as POST requests, so Django didn't know that blob1, blob2, blob3 and blob4 were the same file in chunks.
| 0 | 0 | 0 | 0 |
2012-06-02T04:10:00.000
| 1 | 1.2 | true | 10,859,714 | 0 | 0 | 1 | 1 |
I am writing a small gallery app and after extensive testing i submitted a 3mb image.
Basically the gallery app relies on another app that creates an UploadedFile instance for every image, however i see that for this specific image it has created 4 instances ( rows in db ) that belong to the same 3mb image, each image has "blob" at the end of its name.
My question is, how can i handle an image as big as this and be able to refer to the whole image ? in a html tag or django templatetag like sorl-thumbnail's ?
Im using python 2.7.2, Django 1.3.1 and MySQL 5.1
|
Can Django run on Gunicorn alone (no Apache or nginx)?
| 10,862,770 | 7 | 15 | 17,822 | 0 |
python,django,nginx,amazon-ec2,gunicorn
|
If you are already using amazon web services, you can use s3 buckets to host your static content and deploy your app to ec2 using gunicorn (or whatever you want). That way, you don't have to worry about setting up your own static file server at all.
| 0 | 0 | 0 | 0 |
2012-06-02T12:16:00.000
| 5 | 1 | false | 10,862,259 | 0 | 0 | 1 | 1 |
I have tried just about every django + nginx tutorial on the web and I cannot get an image file to display on the screen. It's always the old story - 404 PAGE NOT FOUND. The web page loads fine but django.png in my /static/ folder does not. Not sure if it's a problem in settings.py or with nginx.
I am so frustrated with it that I refuse to look at another "How to get nginx/django tutorial". If I deploy a website in the near future will Gunicorn suffice to run a Django site and serve static files simultaneously without using Apache or nginx? Is there a big benefit to having a reverse proxy in the first place?
|
how to mechnize the urls inside a returned page by mechanize in python?
| 10,869,296 | 0 | 0 | 104 | 0 |
python,mechanize
|
You can rewrite the URLs, either by parsing the HTML with lxml, Beautiful Soup, etc., rewriting them, and re-dumping the DOM to a string before sending it to the user, or by searching for URLs with regular expressions and returning the rewritten HTML.
Keep in mind that doing it properly, with links generated by JavaScript, etc., is almost impossible.
That's why people use proxy servers.
| 0 | 0 | 1 | 0 |
2012-06-03T09:29:00.000
| 1 | 0 | false | 10,869,211 | 0 | 0 | 1 | 1 |
I am working on a project to build a web-based proxy using Python and
mechanize. I have a problem:
the page that mechanize returns has URLs that are not
mechanized, and if the user clicks one, they will go through the link from
their own computer's IP (not the server my code is installed on). Is there any way to fix that?
|
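A naive sketch of the regex approach from the answer, rewriting href attributes so that clicks go back through the proxy; it deliberately ignores JavaScript-generated links, which the answer warns are nearly impossible to handle, and the /proxy?url= prefix is a placeholder:
import re

def rewrite_links(html, proxy_prefix='/proxy?url='):
    # rewrite only plain href="http..." attributes
    return re.sub(r'href="(https?://[^"]+)"',
                  lambda m: 'href="%s%s"' % (proxy_prefix, m.group(1)),
                  html)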
Best way for single worker implementation in Flask
| 10,872,890 | 0 | 1 | 2,057 | 0 |
python,multithreading,multiprocessing,flask,celery
|
You could supervise the process using multiprocessing or subprocess, then just hand the handle around via the session.
| 0 | 0 | 0 | 0 |
2012-06-03T17:09:00.000
| 2 | 0 | false | 10,872,287 | 0 | 0 | 1 | 1 |
I have a spider that downloads pages and stores data in a database. I have created a Flask application with an admin panel (via the Flask-Admin extension) that shows the database.
Now I want to add a function to my Flask app for controlling the spider state: switching it on/off.
I think it is possible with threads or multiprocessing. Celery is not a good choice because the whole program must use minimal memory.
Which method should I choose to implement this function?
|
performing backend operations / tasks after specific intervals of time on Google App Engine (python)
| 10,873,056 | 1 | 0 | 203 | 0 |
python,google-app-engine,scheduled-tasks,backend,task-queue
|
Take a look at the Cron service, or add a task to a task queue with a specific ETA (a minimal cron.yaml sketch follows this entry).
| 0 | 1 | 0 | 0 |
2012-06-03T19:00:00.000
| 2 | 1.2 | true | 10,873,049 | 0 | 0 | 1 | 1 |
I want my GAE app to do some back-end processing and uploading/updating results to data-store after specific intervals of time (say every 6 hours). So whenever a user uses my app (and basically requests those values from the data-store) they would get the recent/updated values from the data-store.
How would this be implemented in google app engine? I'd really appreciate if someone could guide me in the right direction and/or provide me with information pertinent to doing something like this in python.
|
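For the cron route mentioned above, a minimal cron.yaml sketch; the /tasks/refresh URL is a placeholder for whatever handler recomputes the values and writes them to the datastore:
cron:
- description: recompute cached values
  url: /tasks/refresh
  schedule: every 6 hours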
GAE DataStore python - fetch() vs run()
| 10,874,870 | 2 | 1 | 576 | 0 |
python,google-app-engine,google-cloud-datastore
|
You can run (with run) multiple datastore queries in parallel to improve latency. This has nothing to do with your resulting HTML. The resulting HTML should be the same.
| 0 | 1 | 0 | 0 |
2012-06-03T22:09:00.000
| 2 | 1.2 | true | 10,874,312 | 0 | 0 | 1 | 2 |
I saw there are two methods for getting data from the datastore:
fetch() and run()
Regarding fetch the documentation says:
Note: You should rarely need to use this method; it is almost always better to use run() instead.
I don't understand the difference between the two.
I am new to GAE and Python, please help me understand.
Thanks
It says that run() is asynchronous which I don't understand cause unlike JavaScript, once you run the Python script for the site, the html is frozen, right?
|
GAE DataStore python - fetch() vs run()
| 10,883,221 | 3 | 1 | 576 | 0 |
python,google-app-engine,google-cloud-datastore
|
Beginner's advice: until you appreciate the difference, stick with fetch(). There are many other things you probably ought to get comfortable with first before this subtle distinction will bother you.
| 0 | 1 | 0 | 0 |
2012-06-03T22:09:00.000
| 2 | 0.291313 | false | 10,874,312 | 0 | 0 | 1 | 2 |
I saw there are two methods for getting data from the datastore:
fetch() and run()
Regarding fetch the documentation says:
Note: You should rarely need to use this method; it is almost always better to use run() instead.
I don't understand the difference between the two.
I am new to GAE and Python, please help me understand.
Thanks
It says that run() is asynchronous which I don't understand cause unlike JavaScript, once you run the Python script for the site, the html is frozen, right?
|
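A small sketch of the difference, assuming a hypothetical db.Model called Item: fetch() blocks and hands back a list, while run() returns an iterable that starts retrieving results asynchronously in the background:
query = Item.all().filter('active =', True)   # Item is a hypothetical db.Model

items = query.fetch(20)        # synchronous: waits and returns a list of at most 20 entities

for item in query.run():       # asynchronous prefetching: results arrive as you iterate
    do_something(item)         # do_something is a placeholder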
How do you find the CPU consumption for a piece of Python?
| 10,906,462 | 2 | 6 | 1,360 | 0 |
python,django,performance,profiling,stress-testing
|
You could try configuring your test to ramp up slowly, slow enough so that you can see the CPU gradually increase and then run the profiler before you hit high CPU. There's no point trying to profile code when the CPU is maxed out because at this point everything will be slow. In fact, you really only need a relatively light load to get useful data from a profiler.
Also, by gradually increasing the load you will be better able to see if there is a gradual increase in CPU (suggesting a CPU bottleneck) or if there is a sudden jump in CPU (suggesting perhaps another type of problem, one that would not necessarily be addressed by more CPU).
Try using something like a Constant Throughput Timer to pace the requests; this will prevent JMeter getting carried away and over-loading the system.
| 0 | 0 | 0 | 1 |
2012-06-04T06:16:00.000
| 2 | 1.2 | true | 10,877,048 | 0 | 0 | 1 | 1 |
Background
I have a Django application, it works and responds pretty well on low load, but on high load like 100 users/sec, it consumes 100% CPU and then due to lack of CPU slows down.
Problem:
Profiling the application gives me time taken by functions.
This time increases on high load.
Time consumed may be due to complex calculation or for waiting for CPU.
So, how to find the CPU cycles consumed by a piece of code ?
Since reducing the CPU consumption will increase the response time.
I might have written extremely efficient code and need to add more CPU power
OR
I might have some stupid code taking the CPU and causing the slow down ?
Update
I am using Jmeter to profile my web app, it gives me a throughput of 2 requests/sec. [ 100 users]
I get a average time of 36 seconds on 100 request vs 1.25 sec time on 1 request.
More Info
Configuration Nginx + Uwsgi with 4 workers
No database used, using a responses from a REST API
On 1st hit the response of REST API gets cached, therefore doesn't makes a difference.
Using ujson for json parsing.
Curious to know:
Python-Django is used by so many orgs for so many big sites, then there must be some high end Debug / Memory-CPU analysis tools.
All those I found were casual snippets of code that perform profiling.
|
Loading external python modules for Pig UDFs on Amazon EMR
| 10,922,348 | 0 | 3 | 1,681 | 0 |
python,amazon,apache-pig,emr
|
could you manually hack sys.path inside of your jython script?
| 0 | 1 | 0 | 0 |
2012-06-04T17:15:00.000
| 1 | 0 | false | 10,885,312 | 0 | 0 | 1 | 1 |
I've created a python UDF to convert datetimes into different timezones. The script uses pytz which doesn't ship with python (or jython). I've tried a couple things:
Bootstrapping PIG to install it's own jython and including pytz in
that jython installation. I can't get PIG to use the newly installed
jython, it keeps reverting to Amazon's jython.
Setting PYTHONPATH to a local directory where the new modules have been installed
Setting HADOOP_CLASSPATH/PIG_CLASSPATH to the new installation of jython
Each of these ends up with "ImportError: No module named pytz" when I try to load the UDF script. The script loads fine if I remove pytz so it's definitely the external module that's giving it problems.
Edit: Originally put this as a comment but I thought I'd just make it an edit:
I've tried every way I know of to get PIG to recognize another jython jar. That hasn't worked. Amazon's jython is here: /home/hadoop/.versions/pig-0.9.2/lib/pig/jython.jar, with is recognizing this sys.path: /home/hadoop/lib/Lib. I can't figure out how to build external libraries against this jar.
|
Automating HTTP navigation and HTML printing using Python
| 10,899,256 | 0 | 2 | 773 | 0 |
javascript,python,html,automation
|
I think it will be easier for you to use a program like AutoIt.
| 0 | 0 | 1 | 0 |
2012-06-05T14:26:00.000
| 3 | 0 | false | 10,899,192 | 0 | 0 | 1 | 1 |
Every Monday at Work, I have the task of printing out Account analysis (portfolio analysis) and Account Positions for over 50 accounts. So i go to the page, click "account analysis", enter the account name, click "format this page for printing", Print the output (excluding company disclosures), then I go back to the account analysis page and click "positions" instead this time, the positions for that account comes up. Then I click "format this page for printing", Print the output (excluding company disclosures).Then I repeat the process for the other 50 accounts.
I haven't taken any programming classes in the past but I heard using python to automate a html response might help me do this faster. I was wondering if that's true, and if so, how does it work? Also, are there any other programs that could enable me automate this process and save time?
Thank you so much
|
Compiling and running code as dmg or exe
| 10,906,453 | 0 | 0 | 1,925 | 0 |
python,exe,dmg
|
If you mean specifically with Python, as I gather from the tagging of your question, it won't simply run the same way as Java will, because there's no equivalent Virtual Machine.
If the user has a Python interpreter on their system, then they can simply run the .py file.
If they do not, you can bundle the interpreter and needed libraries into an executable using Py2Exe, cxFreeze, or bbFreeze. For replacing a dmg, App2Exe does something similar.
However, the three commands you listed are not Python-related, and rely on functionality that is not necessarily available on Windows or Mac, so it might not be entirely possible.
| 0 | 1 | 0 | 1 |
2012-06-05T23:01:00.000
| 1 | 0 | false | 10,906,198 | 0 | 0 | 1 | 1 |
Newbie question I am finding it hard to get my head around.
If I wanted to use one of the many tools out there like rsync, lsync or s3cmd, how can you build these into a program for non-computer-savvy people to use?
Ie I am comfortable opening terminal and running s3cmd which Is developed in python how would I go about developing this as a dmg file for mac or exe file for windows?
So a user could just install the dmg or exe then they have s3cmd lsync or rsync on their computer.
I can open up eclipse code a simple app in java and then export as a dmg or exe I cannot figure out how you do this for other languages say write a simple piece of code that I cam save as a dmg or exe and that after installed will add a folder to my desktop or something simple like that to get me started?
|
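As a hedged illustration of the Windows half of the answer, a minimal py2exe setup.py that bundles a console script together with the interpreter (s3cmd here, as in the question, assumed to sit next to setup.py); you would run it with python setup.py py2exe:
# setup.py -- bundles the script plus a Python interpreter into dist/
from distutils.core import setup
import py2exe

setup(console=['s3cmd'])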
In Pyramid, is it safe to have a python global variable that stores the db connection?
| 10,907,158 | 2 | 2 | 877 | 1 |
python,pyramid
|
Pyramid has nothing to do with it. The global needs to handle whatever mechanism the WSGI server is using to serve your application.
For instance, most servers use a separate thread per request, so your global variable needs to be threadsafe. gunicorn and gevent are served using greenlets, which is a different mechanic.
A lot of engines/orms support a threadlocal connection. This will allow you to access your connection as if it were a global variable, but it is a different variable in each thread. You just have to make sure to close the connection when the request is complete to avoid that connection spilling over into the next request in the same thread. This can be done easily using a Pyramid tween or several other patterns illustrated in the cookbook.
| 0 | 0 | 0 | 0 |
2012-06-05T23:41:00.000
| 1 | 0.379949 | false | 10,906,477 | 0 | 0 | 1 | 1 |
It looks like this is what e.g. MongoEngine does. The goal is to have model files be able to access the db without having to explicitly pass around the context.
|
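A minimal sketch of the thread-local pattern described in the answer; create_connection() is a placeholder for your real connection factory, and close_connection() is what you would call from a Pyramid tween when the request ends:
import threading

_local = threading.local()

def get_connection():
    # each thread lazily gets (and reuses) its own connection
    if not hasattr(_local, 'connection'):
        _local.connection = create_connection()    # hypothetical factory
    return _local.connection

def close_connection():
    conn = getattr(_local, 'connection', None)
    if conn is not None:
        conn.close()
        del _local.connection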
Google Finance recognizes my Python script as a bot and blocks it
| 10,908,773 | 1 | 1 | 1,173 | 0 |
python,bots,google-finance
|
Well, you have finally reached a quite challenging realm: decoding the captcha.
There do exist OCR approaches that decode simple captchas, but they don't seem to work on Google's captcha.
I heard there are some companies that provide manual captcha decoding services; you can try one of those. ^_^ LOL
OK, to be serious: if Google doesn't want you to do it that way, then it is not easy to decode those captchas. After all, why rely on Google for finance data? There are a lot of other providers, right? Try to scrape those websites instead.
| 0 | 0 | 1 | 0 |
2012-06-06T05:42:00.000
| 1 | 0.197375 | false | 10,908,715 | 0 | 0 | 1 | 1 |
I wrote a script that retrieves stock data on google finance and prints it out, nice and simple. It always worked, but since this morning I only get a page that tells me that I'm probably an automated script instead of the stock data. Of course, being a script, I can't pass the captcha. What can I do?
|
How to migrate large data from any file system to a plone site plone 4.1?
| 10,913,922 | 0 | 1 | 280 | 0 |
python,plone
|
If you're running out of other ideas, you can copy them in using WebDAV access. (Be aware, though, to pack the database afterwards: while Plone4 has blob support, I think files uploaded via WebDAV leave a stale copy in the database.)
| 0 | 0 | 0 | 0 |
2012-06-06T07:20:00.000
| 3 | 0 | false | 10,909,812 | 0 | 0 | 1 | 1 |
I want to implement DMS for the existing files on my File system. How do I import such existing files / images into my Plone DMS. I don't wish to use Products.Reflecto as I am unable to add any version control/ edit the uploaded files, images in it.
|
Migrating GAE app from python 2.5 to 2.7
| 10,910,709 | 2 | 3 | 335 | 0 |
google-app-engine,python-2.7
|
Put a main file in the top-level directory and import all your handlers there, then reference them via that file
| 0 | 1 | 0 | 0 |
2012-06-06T08:21:00.000
| 2 | 0.197375 | false | 10,910,591 | 0 | 0 | 1 | 1 |
I am trying to migrate my app and everything worked fine until I changed in app.yaml
from threadsafe: false to threadsafe: true.
The error I was receiving was:
threadsafe cannot be enabled with CGI handler: a/b/xyz.app
After some googling I found:
Only scripts in the top-level directory work as handlers, so if you have any in subdirectories, they'll need to be moved, and the script reference changed accordingly:
- url: /whatever
# This doesn't work ...
# script: lib/some_library/handler.app
# ... this does work
script: handler.app
Is there any workaround for this(if above research is valid), as I don't want to change my project hirarchy?
|
Embed Flask page in another without code duplication?
| 10,926,171 | 1 | 5 | 909 | 0 |
python,twitter-bootstrap,flask
|
To re-use a chunk of HTML, you can use Jinja's {% include %} tag. If that's too limiting, Jinja macros are also well suited. You can define your macros in a separate file and import them with {% import "path/to/macros.html" as my_macros %}.
Flask-Assets can help with the organisation of your assets.
As for using Blueprints, yes you should use them. But they mostly apply to Python code and HTML templates are organised in a different realm, so maybe their use is unrelated here.
You can't always remove all duplication though. If your game needs to affect three distant locations of the server-generated HTML, that's bits of template code to copy in every template that includes your game.
| 0 | 0 | 0 | 0 |
2012-06-06T09:52:00.000
| 1 | 0.197375 | false | 10,911,878 | 0 | 0 | 1 | 1 |
I have a page (located at /games/compare/) and it's a mini image comparison game. Users are shown two images and asked to pick between them in response to a question. This page can get images from the database, render a template with javascript and css inside and communicate back to the database using AJAX.
Now what if I wanted to embed this voting game onto the main page without duplicating any code? Ideally, I'd update the game and all the pages that "feature" the game will also reflect the changes.
I'm getting hung up on how to manage the assets for the entire site in a coherent and organized way. Some pages have css, javascript and I'm also using frameworks like bootstrap and a GIS framework.
Would I set the game up as a blueprint? How would I organize the assets (Javascript and CSS) so that there is no duplication?
Currently, I have a page rendering a template (main.html) which extends another (base.html). Base.html includes header.html, nav.html and footer.html with blocks set up for body and others.
My current approach is to strip everything out at the lowest level and reassemble it at a highest common level, which makes coding really slow. For instance, I have that voting game and right now it's located in a page called voting_game.html and has everything in it needed to play the game (full page html, styles and javascript included). Now if I want to include that game on another page, like the root index, the only solution I know of is to strip out the style, js and full page html from voting_game.html, leaving only the html necessary for the game to run. When I'm creating the index now, I'll import the html from voting_game.html but I'll separately have to import the style and javascript. This means I have to build every page twice, which is twice the work I need to be doing. This process also leaves little files all over the place, as I'm constantly refactoring and it makes development just a bookkeeping nightmare.
There has to be a way to do this and stay organized but I need your help understanding the best way to do this.
Thanks,
Phil
Edit: The embedded page should also be able to communicate with its parent page (the one it is being embedded into), or with other embedded pages within the same parent (children of a parent should be able to talk. So when someone plays the embedded game, they earn points, which should show up on another part other page, which would update reflecting the users current points.
This "Score board" would also be a separate widget/page/blueprint that can be embedded and will look for certain pieces of data in order to function.
|
Internationalizing images in django
| 10,912,731 | 4 | 3 | 1,446 | 0 |
python,django,image,internationalization
|
You could pass a language parameter to your page template and use it as part of your media file URL.
This would require you to host all media files for, e.g., English in a folder SITE_MEDIA/english, while other, e.g., Japanese images would be available from SITE_MEDIA/japanese.
Inside your page templates, you could then use {{MEDIA_URL}}{{language}}/my-image.jpg...
| 0 | 0 | 0 | 0 |
2012-06-06T10:48:00.000
| 5 | 0.158649 | false | 10,912,706 | 0 | 0 | 1 | 1 |
How would I implement different images from static folder based on language?
For example, when visiting the main site the layout will load in english but when changed to japanese the logo and images attached to the layout will change based on the requested language. please help.....
|
Twisted Server Sent Events accessing using Internet Explorer
| 10,949,657 | 2 | 2 | 431 | 0 |
python,internet-explorer,real-time,twisted,server-sent-events
|
Answering my own question is a little weird, but I just found the answer: I had to go with long polling. It looks like I have to write a framework which falls back to long polling when server-sent events are not supported. Answering just in case anyone comes here for reference in the future.
| 0 | 1 | 0 | 0 |
2012-06-06T13:01:00.000
| 1 | 1.2 | true | 10,914,740 | 0 | 0 | 1 | 1 |
I am working on a project that requires real-time updates, so, long ago, I decided to go with the Twisted SSE handler (cyclone.sse). The project is at an end, and all the pub/sub stuff works on all browsers except Internet Explorer. IE doesn't support SSE. How do I get pub-sub working on IE without changing the server-side code? Also, long polling will not help as I am using cyclone.sse.
|
Monkey patch form in INSTALLED_APPS
| 11,048,847 | 0 | 1 | 259 | 0 |
python,django,django-forms,monkeypatching
|
The solution here was to copy the package to my application folder and patch it locally.
| 0 | 0 | 0 | 0 |
2012-06-06T16:10:00.000
| 2 | 1.2 | true | 10,918,002 | 0 | 0 | 1 | 2 |
I have an app included into INSTALLED_APPS that needs to be monkey-patched.
The problem is that I don't explicitly import modules from this app (django-allauth).
Is there any way to get some access at the point when Django imports an application
and monkey patch one of its internal forms?
Which in my case would be socialaccount.forms.DisconnectForm.clean = smth
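For reference, the patch itself would be something like the following (a sketch; the exact import path depends on how allauth is installed). My problem is purely where to put it so that it runs once Django has imported the app:

    from allauth.socialaccount.forms import DisconnectForm

    _original_clean = DisconnectForm.clean

    def patched_clean(self):
        cleaned_data = _original_clean(self)
        # ... extra validation goes here ...
        return cleaned_data

    DisconnectForm.clean = patched_clean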
|
Monkey patch form in INSTALLED_APPS
| 10,918,221 | -1 | 1 | 259 | 0 |
python,django,django-forms,monkeypatching
|
Put import ipdb; ipdb.set_trace() in the __init__ of the module, and type the character "w" to see the trace.
| 0 | 0 | 0 | 0 |
2012-06-06T16:10:00.000
| 2 | -0.099668 | false | 10,918,002 | 0 | 0 | 1 | 2 |
I have an app included into INSTALLED_APPS that needs to be monkey-patched.
The problem is that I don't explicitly import modules from this app (django-allauth).
Is there any way to get some access at the point when Django imports an application
and monkey patch one of its internal forms?
Which in my case would be socialaccount.forms.DisconnectForm.clean = smth
|
Cron-like scheduler, something between cron and celery
| 10,918,986 | 1 | 3 | 1,859 | 0 |
python,django,cron,celery
|
In my personal opinion, I would learn how to use cron. This won't take more than 5 to 10 minutes, and it's an essential tool when working on a Linux server.
What you could do is set up a cronjob that requests one page of your django instance every minute, and have the django script figure out what time it is and what needs to be done, depending on the configuration stored in your database. This is the approach I've seen in other similar applications.
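As a sketch of that idea, the crontab line would be something like * * * * * curl -s http://localhost/cron/tick/, and the view it hits could look like this (ScheduledTask and its run()/reschedule() methods are hypothetical, standing in for whatever configuration you store in the database):

    # views.py -- a sketch; the model and its methods are hypothetical
    from datetime import datetime
    from django.http import HttpResponse

    def cron_tick(request):
        from myapp.models import ScheduledTask   # hypothetical schedule model
        for task in ScheduledTask.objects.filter(next_run__lte=datetime.now()):
            task.run()          # execute whatever command is configured
            task.reschedule()   # compute and save the next run time
        return HttpResponse("ok")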
| 0 | 1 | 0 | 0 |
2012-06-06T17:11:00.000
| 2 | 1.2 | true | 10,918,905 | 0 | 0 | 1 | 1 |
I'd like to run periodic tasks on my django project, but I don't want all the complexity of celery/django-celery (with celerybeat) bundled in my project.
I'd like, also, to store the config with the times and which command to run within my SCM.
My production machine is running Ubuntu 10.04.
While I could learn and use cron, I feel like there should be a higher level (user friendly) way to do it. (Much like UFW is to iptables).
Is there such thing? Any tips/advice?
Thanks!
|
As part of development, I am committing to github and pulling down and executing elsewhere. It feels wrong
| 10,919,450 | 3 | 1 | 97 | 0 |
python,github,jenkins
|
The solution is quite simple: make cleaner commits (fix typos before committing, only commit changes that belong together, not for too small edits). It's a bit odd that you don't take the time to fix typos (by running/testing locally) but wish to reduce the number of commits by some other means.
| 0 | 1 | 0 | 0 |
2012-06-06T17:40:00.000
| 2 | 0.291313 | false | 10,919,301 | 0 | 0 | 1 | 2 |
I have a standard-ish setup. Call it three servers - www, app and db, all fed from fabric scripts, and the whole on github.
I have a local laptop with the repo clone. I change a file locally, and push it to github then deploy using jenkins - which pulls from github and does its business. The problem here is I can put a dozen rubbish commits up till I manage to fix all my typos.
It's not so much the round trip to github that matters, but the sheer number of commits - I cannot squash them as they have been pushed. It looks ugly. It works, sure, but it is ugly.
I don't think I can edit on the servers directly - the file are spread out a lot, and I cannot make each directory on three servers a clone of github and hope to keep things sane.
And trying to write scripts that will synch the servers with my local repo is insane - fabric files took long enough.
I cannot easily git pull from jenkins, because I still have to commit to have jenkins pull, and we still get ugly ugly commit logs.
I cannot see a graceful way to do this - ideas anyone.
|
As part of development, I am committing to github and pulling down and executing elsewhere. It feels wrong
| 10,920,164 | 0 | 1 | 97 | 0 |
python,github,jenkins
|
The solution is to not use github / jenkins to deploy to the servers.
The servers should be seen as part of the 'local' deployment (local being pre-commit)
So use the fab files directly, from my laptop.
That was harder because of preprocessing occurring on Jenkins, but that is replicable.
So, I shall take Jeff Atwood's advice here:
embrace the suck, in public.
Well I certainly sucked at that - but hey I learnt.
Will put brain in the right way tomorrow.
| 0 | 1 | 0 | 0 |
2012-06-06T17:40:00.000
| 2 | 1.2 | true | 10,919,301 | 0 | 0 | 1 | 2 |
I have a standard-ish setup. Call it three servers - www, app and db, all fed from fabric scripts, and the whole on github.
I have a local laptop with the repo clone. I change a file locally, and push it to github then deploy using jenkins - which pulls from github and does its business. The problem here is I can put a dozen rubbish commits up till I manage to fix all my typos.
It's not so much the round trip to github that matters, but the sheer number of commits - I cannot squash them as they have been pushed. It looks ugly. It works, sure, but it is ugly.
I don't think I can edit on the servers directly - the file are spread out a lot, and I cannot make each directory on three servers a clone of github and hope to keep things sane.
And trying to write scripts that will synch the servers with my local repo is insane - fabric files took long enough.
I cannot easily git pull from jenkins, because I still have to commit to have jenkins pull, and we still get ugly ugly commit logs.
I cannot see a graceful way to do this - ideas anyone.
|
What to do with pyc files when Django or python is used with Mercurial?
| 10,920,888 | 5 | 7 | 8,119 | 1 |
python,django,mercurial,pyc
|
Usually you are safe, because *.pyc are regenerated if the corresponding *.py changes its content.
It is problematic if you delete a *.py file and you are still importing from it in another file. In this case you are importing from the *.pyc file if it still exists. But this will be a bug in your code and is not really related to your Mercurial workflow.
Conclusion: every well-known Python library ignores its *.pyc files, so just do it ;)
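For reference, a minimal .hgignore along those lines could be:

    syntax: glob
    *.pyc
    *.pyo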
| 0 | 0 | 0 | 0 |
2012-06-06T18:58:00.000
| 4 | 0.244919 | false | 10,920,423 | 0 | 0 | 1 | 2 |
Just started to use Mercurial. Wow, nice application. I moved my database file out of the code directory, but I was wondering about the .pyc files. I didn't include them on the initial commit. The documentation about the .hgignore file includes an example to exclude *.pyc, so I think I'm on the right track.
I am wondering about what happens when I decide to roll back to an older fileset. Will I need to delete all the .pyc files then? I saw some questions on Stack Overflow about the issue, including one gentleman that found old .pyc files were being used. What is the standard way around this?
|
What to do with pyc files when Django or python is used with Mercurial?
| 10,920,511 | 0 | 7 | 8,119 | 1 |
python,django,mercurial,pyc
|
Sure, if you have a .pyc file from an older version of the same module, Python will use that. Many times I have wondered why my program wasn't reflecting the changes I made, and realized it was because I had old .pyc files.
If this means that the .pyc files are not reflecting your current version, then yes, you will have to delete all .pyc files.
If you are on Linux you can run find . -name "*.pyc" -delete (quote the pattern so the shell doesn't expand it).
| 0 | 0 | 0 | 0 |
2012-06-06T18:58:00.000
| 4 | 0 | false | 10,920,423 | 0 | 0 | 1 | 2 |
Just started to use Mercurial. Wow, nice application. I moved my database file out of the code directory, but I was wondering about the .pyc files. I didn't include them on the initial commit. The documentation about the .hgignore file includes an example to exclude *.pyc, so I think I'm on the right track.
I am wondering about what happens when I decide to roll back to an older fileset. Will I need to delete all the .pyc files then? I saw some questions on Stack Overflow about the issue, including one gentleman that found old .pyc files were being used. What is the standard way around this?
|
Django: how to properly handle a database connection error
| 10,935,789 | 1 | 1 | 2,536 | 1 |
python,mysql,django
|
You could use a middleware with a process_view method and a try / except wrapping your call.
Or you could decorate your views and wrap the call there.
Or you could use class-based views with a base class that has a method decorator on its dispatch method, or an overridden dispatch.
Really, you have plenty of solutions.
Now, as said above, you might want to modify your Desktop application too!
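As a sketch of the middleware option (an old-style middleware class; it catches the MySQL driver's exception from the traceback above, and the retry count and delay are arbitrary):

    # middleware.py -- a sketch of the process_view approach
    import time
    from MySQLdb import OperationalError

    class RetryOnLostConnectionMiddleware(object):
        max_retries = 5

        def process_view(self, request, view_func, view_args, view_kwargs):
            for attempt in range(self.max_retries):
                try:
                    # returning a response here short-circuits Django's normal view call
                    return view_func(request, *view_args, **view_kwargs)
                except OperationalError:
                    if attempt == self.max_retries - 1:
                        raise              # give up; the usual 500 handling takes over
                    time.sleep(2)          # lag is acceptable, wait for connectivity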
| 0 | 0 | 0 | 0 |
2012-06-07T11:00:00.000
| 2 | 0.099668 | false | 10,930,459 | 0 | 0 | 1 | 1 |
I have a desktop application that send POST requests to a server where a django app store the results. DB server and web server are not on the same machine and it happens that sometimes the connectivity is lost for a very short time but results in a connection error on some requests:
OperationalError: (2003, "Can't connect to MySQL server on 'xxx.xxx.xxx.xxx' (110)")
On a "normal" website I guess you'd not worry too much: the browser displays a 500 error page and the visitor tries again later.
In my case losing info posted by a request is not an option, and I am wondering how to handle this. I'd like to catch this exception, wait for the connectivity to come back (lag is not a problem) and then continue the process. But as the exception can occur almost anywhere in the code, I'm a bit stuck on how to proceed.
Thanks for your advice.
|
MongoDB: Embedded users into comments
| 10,932,004 | 1 | 3 | 919 | 1 |
python,mongodb,mongoalchemy,nosql
|
What I would do with mongodb would be to embed the user id into the comments (which are part of the structure of the "post" document).
Three simple hints for better performances:
1) Make sure to ensure an index on the user_id
2) Use a comment pagination method to avoid querying the database 200 times
3) Caching is your friend
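A rough sketch of the first two hints plus a batched user lookup, using pymongo (MongoAlchemy can express the same things; collection and field names are illustrative, and caching would sit on top of this):

    from pymongo import MongoClient, ASCENDING

    db = MongoClient().blog

    # 1) index the embedded user_id so per-user lookups stay cheap
    db.posts.create_index([("comments.user_id", ASCENDING)])

    def comments_page(post_id, page=0, per_page=20):
        # 2) pull one page of comments instead of all 200 of them
        post = db.posts.find_one(
            {"_id": post_id},
            {"comments": {"$slice": [page * per_page, per_page]}})
        comments = post.get("comments", [])

        # resolve all commenters in one extra query instead of one per comment
        user_ids = list(set(c["user_id"] for c in comments))
        users = dict((u["_id"], u) for u in db.users.find({"_id": {"$in": user_ids}}))
        return comments, users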
| 0 | 0 | 0 | 0 |
2012-06-07T12:34:00.000
| 4 | 0.049958 | false | 10,931,889 | 0 | 0 | 1 | 1 |
I can't find the "best" solution for a very simple problem (or maybe not so simple).
I have a classical set of data: posts attached to users, comments attached to posts and to users.
Now I can't decide how to build the schema/classes.
One way is to store user_id inside comments and inside posts.
But what happens when I have 200 comments on a page?
Or when I have N posts on a page?
I mean it would take 200 additional requests to the database to display user info (such as name, avatar).
Another solution is to embed user data into each comment and each post.
But first -> it is a huge overhead, second -> the model system gets corrupted (using mongoalchemy), third -> a user can change his info (like avatar). And what then? As I understand it, an update operation on huge collections of comments or posts is not a simple operation...
What would you suggest? Are 200 requests per page to MongoDB OK (I must aim for performance)?
Or maybe I am just missing something...
|
How many concurrent requests does a single Flask process receive?
| 10,943,523 | 39 | 184 | 143,304 | 0 |
python,flask,wsgi,gunicorn
|
Flask will process one request per thread at the same time. If you have 2 processes with 4 threads each, that's 8 concurrent requests.
Flask doesn't spawn or manage threads or processes. That's the responsibility of the WSGI gateway (e.g. gunicorn).
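For example, with a gunicorn version that supports threaded workers, something like the following gives the 2 x 4 = 8 concurrent requests mentioned above (module and app names are illustrative):

    gunicorn --workers 2 --threads 4 myapp:app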
| 0 | 0 | 0 | 0 |
2012-06-07T19:12:00.000
| 4 | 1 | false | 10,938,360 | 0 | 0 | 1 | 2 |
I'm building an app with Flask, but I don't know much about WSGI and its HTTP base, Werkzeug. When I start serving a Flask application with gunicorn and 4 worker processes, does this mean that I can handle 4 concurrent requests?
I do mean concurrent requests, and not requests per second or anything else.
|
How many concurrent requests does a single Flask process receive?
| 10,942,272 | 9 | 184 | 143,304 | 0 |
python,flask,wsgi,gunicorn
|
No, you can definitely handle more than that.
It's important to remember that, deep down, assuming you are running a single-core machine, the CPU really only runs one instruction* at a time.
Namely, the CPU can only execute a very limited set of instructions, and it can't execute more than one instruction per clock tick (many instructions even take more than 1 tick).
Therefore, most concurrency we talk about in computer science is software concurrency.
In other words, there are layers of software implementation that abstract the bottom level CPU from us and make us think we are running code concurrently.
These "things" can be processes, which are units of code that get run concurrently in the sense that each process thinks it's running in its own world with its own, non-shared memory.
Another example is threads, which are units of code inside processes that allow concurrency as well.
The reason your 4 worker processes will be able to handle more than 4 requests is that they will fire off threads to handle more and more requests.
The actual request limit depends on the HTTP server chosen, I/O, OS, hardware, network connection, etc.
Good luck!
*Instructions are the very basic commands the CPU can run. Examples: add two numbers, jump from one instruction to another.
| 0 | 0 | 0 | 0 |
2012-06-07T19:12:00.000
| 4 | 1 | false | 10,938,360 | 0 | 0 | 1 | 2 |
I'm building an app with Flask, but I don't know much about WSGI and its HTTP base, Werkzeug. When I start serving a Flask application with gunicorn and 4 worker processes, does this mean that I can handle 4 concurrent requests?
I do mean concurrent requests, and not requests per second or anything else.
|
Scraping a website in python with firebug?
| 10,942,524 | 0 | 0 | 1,907 | 0 |
python,firebug,web-scraping
|
If the answer's not in the source code (possibly obfuscated, encoded, etc), then it was probably retrieved after the page loaded with an XmlHTTPRequest. You can use the 'network' panel in Firebug to see what other pieces of data the page loaded, and what requests it made to load them.
(You may have to enable the network panel and then reload the page/start over)
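Once the Network panel reveals the request, you can usually reproduce it directly from Python -- a sketch (the endpoint and response format here are hypothetical):

    import json
    import urllib2   # Python 2; on Python 3 this lives in urllib.request

    # hypothetical XHR endpoint spotted in Firebug's Network panel
    url = "http://example.com/ajax/listing?page=1"
    data = json.loads(urllib2.urlopen(url).read())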
| 0 | 0 | 1 | 0 |
2012-06-08T02:44:00.000
| 2 | 0 | false | 10,942,469 | 0 | 0 | 1 | 1 |
I am trying to scrape a website, but the thing that I want to get is not in the source code. It does appear, however, when I use Firebug. Is there a way to scrape from the code Firebug shows, as opposed to the page source?
|
Python: moving from dev server to production server
| 10,950,787 | 1 | 1 | 307 | 0 |
python
|
In general your application is using a WSGI-compliant framework, and you shouldn't be afraid of the multi-threaded / single-threaded server side. It's meant to work transparently and has to behave the same way regardless of what kind of server it is, as long as it is WSGI compliant.
Every code block before bottle.run() will be run only once. As such, every connection (database, memcached) will be instantiated only once and shared.
When you call bottle.run(), bottlepy starts a WSGI server for you. Every request to that server fires some WSGI callable inside the bottlepy framework. You are not really interested in whether it is a single- or multi-threaded environment, as long as you don't do something strange.
By strange I mean, for instance, synchronizing something through global variables. (The exception here is the global request object, for which bottlepy ensures that it contains the proper request in the proper context.)
And in response to the first question on the list: a request may be handled in a newly spawned thread or in a thread from a pool of threads (CherryPy is thread-pooled).
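A sketch of how this typically looks (assuming python-memcached, pymongo and Mako; bottle ships a CherryPy server adapter):

    import bottle
    import memcache                          # python-memcached
    from pymongo import MongoClient
    from mako.lookup import TemplateLookup

    # Everything at module level runs exactly once per worker process,
    # no matter which WSGI server ends up serving the app.
    mc = memcache.Client(["127.0.0.1:11211"])
    mongo = MongoClient()
    lookup = TemplateLookup(directories=["templates"])

    app = bottle.Bottle()

    @app.route("/")
    def index():
        # Under CherryPy this handler runs in one of the pooled threads;
        # all threads share the connections created above.
        return lookup.get_template("index.html").render()

    if __name__ == "__main__":
        bottle.run(app, server="cherrypy", host="0.0.0.0", port=8080)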
| 0 | 0 | 0 | 0 |
2012-06-08T12:12:00.000
| 1 | 1.2 | true | 10,948,636 | 0 | 0 | 1 | 1 |
I am developing an application with the bottlepy framework. I am using the standard library WSGIRefServer() to run a development server. It is a single threaded server.
Now when going into production, I will want to move to a multi-threaded production server, and there are many choices. Let's say I choose CherryPy.
Now, in my code, I am initializing a single wsgi application. Other than that, I am also initializing other things...
Memcached connection
Mako templates
MongoDB connection
Since standard library wsgiref is a single threaded server, and I am creating only a single wsgi app (wsgi callable), everything works just fine.
What I want to know is: when I move to the multi-threaded server, how will my WSGI app, initialization code, connections to different servers, etc. behave?
Will a multi-threaded server create a separate instance of the WSGI app for every thread? And will a new thread be spawned for each new request (which would then mean a new WSGI app for each request)?
Will my connections to memcached, MongoDB, etc., be shared across threads or not? What else will be shared between threads?
Please explain the request-response cycle for a threaded server
|