| Column | Type | Range / lengths |
|---|---|---|
| Title | string | lengths 11 to 150 |
| A_Id | int64 | 518 to 72.5M |
| Users Score | int64 | -42 to 283 |
| Q_Score | int64 | 0 to 1.39k |
| ViewCount | int64 | 17 to 1.71M |
| Database and SQL | int64 | 0 to 1 |
| Tags | string | lengths 6 to 105 |
| Answer | string | lengths 14 to 4.78k |
| GUI and Desktop Applications | int64 | 0 to 1 |
| System Administration and DevOps | int64 | 0 to 1 |
| Networking and APIs | int64 | 0 to 1 |
| Other | int64 | 0 to 1 |
| CreationDate | string | lengths 23 to 23 |
| AnswerCount | int64 | 1 to 55 |
| Score | float64 | -1 to 1.2 |
| is_accepted | bool | 2 classes |
| Q_Id | int64 | 469 to 42.4M |
| Python Basics and Environment | int64 | 0 to 1 |
| Data Science and Machine Learning | int64 | 0 to 1 |
| Web Development | int64 | 1 to 1 |
| Available Count | int64 | 1 to 15 |
| Question | string | lengths 17 to 21k |
Ruby on Rails frontend and server side processing in Python or Java.. HOW, What, Huh?
| 33,303,096 | 0 | 0 | 694 | 0 |
python,ruby-on-rails,server-side
|
Are you sure your database is well maintained and efficient (good indexes, normalised, clean, etc.)?
Or could you make use of messaging queues? You keep your Rails CRUD app, and the jobs are just added to a queue. Python scripts on the backend (or on a different machine) read from the queue, process the work, then insert the results back into the database or add them to a results queue, wherever you want to read them from.
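A rough sketch of such a worker, assuming Redis is used as the queue and that process_submission() stands in for the real data crunching (queue names and payload format are illustrative, not part of the original answer):

```python
import json
import redis  # pip install redis

r = redis.StrictRedis(host='localhost', port=6379)

def process_submission(payload):
    # Placeholder for the heavy data crunching.
    return {'rows': len(payload.get('rows', []))}

while True:
    # blpop blocks until the Rails app pushes a job onto the "jobs" list.
    _key, raw = r.blpop('jobs')
    job = json.loads(raw)
    result = process_submission(job)
    # Push the result onto a results queue for the Rails app to pick up.
    r.rpush('results', json.dumps(result))
```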
| 0 | 0 | 0 | 0 |
2012-06-08T22:21:00.000
| 2 | 0 | false | 10,956,683 | 0 | 0 | 1 | 1 |
I am a data scientist and database veteran but a total rookie in web development, and I have just finished developing my first Ruby on Rails app. This app accepts data from users submitting it on my frontend webpage and returns stats on the data submitted. Some users have been submitting way too much data - it's getting slow, and I think I had better push the data crunching to a backend Python or Java app, not the database. I don't even know where to start. Any ideas on how to best architect this application? The job flow is: data is submitted from the frontend app, which pushes it to the backend for my server app to process and send back to my Ruby on Rails page. Any good tutorials that cover this? Please help!
What should I be reading up on?
|
Dajaxice performance measures for high-traffic site
| 10,959,121 | 0 | 0 | 218 | 0 |
python,django,performance,jquery,dajaxice
|
In my experience, the main load in a web application lies on the database, not on frameworks or template engines.
| 0 | 0 | 0 | 0 |
2012-06-09T06:54:00.000
| 1 | 0 | false | 10,959,051 | 0 | 0 | 1 | 1 |
If this question has already been answered by someone on this site, please point me there.
Is it a good option to use Dajaxice for a high-traffic website, assuming millions of hits per day? Has anybody faced performance issues in terms of load-times for web pages with multiple AJAX calls to server?
What are the alternatives for Python+Django projects? Is it better to use just jQuery?
|
What is the Google Appengine Ndb GQL query max limit?
| 10,974,037 | 9 | 5 | 1,106 | 1 |
python,google-app-engine,gql,app-engine-ndb
|
This depends on lots of things, like the size of the entities and the number of values that need to be looked up in the index, so it's best to benchmark it for your specific application. Also beware that if you find that on a sunny day it takes e.g. 10 seconds to load all your items, that probably means that some small fraction of your queries will run into a timeout due to natural variations in datastore performance, and occasionally your app will hit the timeout all the time when the datastore is having a bad day (it happens).
| 0 | 1 | 0 | 0 |
2012-06-10T11:51:00.000
| 2 | 1 | false | 10,968,439 | 0 | 0 | 1 | 2 |
I am looking around trying to find out what the maximum number of results is that I can get from a GQL query on NDB on Google App Engine. I am using an implementation with cursors, but it would be much faster if I could retrieve them all at once.
|
What is the Google Appengine Ndb GQL query max limit?
| 10,969,575 | 7 | 5 | 1,106 | 1 |
python,google-app-engine,gql,app-engine-ndb
|
Basically, you don't have the old limit of 1,000 entities per query anymore, but consider using a reasonable limit, because you can hit the timeout error, and it's better to get entities in batches so users won't wait during load time.
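For example, a minimal sketch of batched fetching with NDB cursors (the Item model and batch size are placeholders):

```python
from google.appengine.ext import ndb

class Item(ndb.Model):
    name = ndb.StringProperty()

def iter_items(batch_size=200):
    """Yield all items one batch at a time instead of fetching everything."""
    cursor = None
    more = True
    while more:
        results, cursor, more = Item.query().fetch_page(
            batch_size, start_cursor=cursor)
        for item in results:
            yield item
```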
| 0 | 1 | 0 | 0 |
2012-06-10T11:51:00.000
| 2 | 1.2 | true | 10,968,439 | 0 | 0 | 1 | 2 |
I am looking around trying to find out what the maximum number of results is that I can get from a GQL query on NDB on Google App Engine. I am using an implementation with cursors, but it would be much faster if I could retrieve them all at once.
|
How to do this kind of session related task in App Engine Python?
| 10,973,713 | 1 | 0 | 42 | 0 |
python,google-app-engine,web-applications
|
You'd use the datastore to create a union as an entity class, with a description and a name. If your image is small you can store it in your entity; if it's large, you may store it in the blobstore and keep a link to it inside your entity.
You can use the Python Users API for authentication. You don't really need any special session work if you're using the Users API.
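A minimal sketch of such an entity using ndb, with illustrative property names (the answer only prescribes the general shape, not these exact fields):

```python
from google.appengine.api import users
from google.appengine.ext import ndb

class Union(ndb.Model):
    owner = ndb.UserProperty()
    name = ndb.StringProperty()
    description = ndb.TextProperty()
    image = ndb.BlobProperty()              # small images stored inline
    image_blob_key = ndb.BlobKeyProperty()  # large images kept in the blobstore

def unions_for_current_user():
    # The Users API takes care of authentication; no custom session handling.
    user = users.get_current_user()
    return Union.query(Union.owner == user).fetch()
```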
| 0 | 1 | 0 | 0 |
2012-06-11T00:21:00.000
| 1 | 0.197375 | false | 10,973,432 | 0 | 0 | 1 | 1 |
First than all, I don't even know if this is a session related question. But I could not think a better way to describe it in the title.
I'm developing a web application for registered users so they can create and manage trade unions.
A user can create several unions. Each union can store an image, a description and a name.
The index page shows the list of unions created by the currently registered user.
When the user clicks on a union from the list, all the pages of the application must show in their headers the corresponding name and image stored for that union.
Also, all the options of the application must refer to the currently selected union.
That is the process for every selected union.
How could I do this on App Engine Python? What technique could I use? Is it something
related to sessions? I do the authentication process with the Gmail service.
I hope I explained myself clearly.
Thanks in advance!
|
"ImportError: No module named cv2" when running Django project from PyCharm IDE
| 19,462,559 | 1 | 3 | 20,383 | 0 |
python,django,opencv,pycharm
|
I'm not quite sure if this works for you, but it works for me. In my case, it seems that I installed OpenCV to work with the default Python that ships with OS X. I remember I tried to install Python 2.7.5 and Python 3 on my Mac as well; I see them when I choose my Python interpreter for PyCharm, and none of them let me import the cv2 module. So I changed to the default Python 2.7.2 (/System/Library/Frameworks/Python.framework/Versions/2.7/bin/python). Then, in File/Default Settings/Project Interpreter/Python Interpreter, click on the Python interpreter that's been added (Python 2.7.2), click on Paths, locate "/usr/local/bin/python2.7/site-packages" and add it. Click the blue refresh button, apply and OK. Then it works, both with import and autocompletion.
Regards,
| 0 | 0 | 0 | 0 |
2012-06-11T21:33:00.000
| 2 | 0.099668 | false | 10,987,834 | 1 | 0 | 1 | 2 |
I'm running a Django project from PyCharm with the configuration set up to use the Python interpreter from a virtualenv which has a dependency on opencv. The site works fine locally when I run django-admin.py runserver, however I keep getting an "ImportError: No module named cv2" error when I try to run the project directly from the PyCharm IDE.
Has anyone else had this issue with PyCharm and opencv?
|
"ImportError: No module named cv2" when running Django project from PyCharm IDE
| 10,992,173 | 10 | 3 | 20,383 | 0 |
python,django,opencv,pycharm
|
In the end I had to set an environment variable directly in PyCharm's Edit Configurations -> Run/Debug Configurations -> Environment Variables panel. After hitting the edit button, I added the following option: set the name to PYTHONPATH and the value to /usr/local/lib/python2.7/site-packages:$PYTHONPATH, which should display in the input box after editing as PYTHONPATH=/usr/local/lib/python2.7/site-packages:$PYTHONPATH. Also, I made sure to log out of OS X and log back in, which also fixed a couple of other path-related issues.
| 0 | 0 | 0 | 0 |
2012-06-11T21:33:00.000
| 2 | 1.2 | true | 10,987,834 | 1 | 0 | 1 | 2 |
I'm running a Django project from PyCharm with the configuration set up to use the Python interpreter from a virtualenv which has a dependency on opencv. The site works fine locally when I run django-admin.py runserver, however I keep getting an "ImportError: No module named cv2" error when I try to run the project directly from the PyCharm IDE.
Has anyone else had this issue with PyCharm and opencv?
|
python project-specific modules installation approach
| 10,993,067 | 0 | 2 | 223 | 0 |
python,module
|
Technically, for any Python module to be "installed" you just have to add it to the sys.path variable so that Python can find and import it.
The same goes for Django apps, which are Python modules. As long as Python can find and import the Django application, you just have to add it to INSTALLED_APPS in settings (and maybe do a few more steps usually described in the application's docs, e.g. adding URLs, etc.).
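For example, a minimal sketch of keeping such modules inside the project (the "lib" directory name is just an illustrative convention); this would go near the top of manage.py or settings.py:

```python
import os
import sys

# Add a project-local "lib" directory to sys.path so packages dropped there
# are importable without a system-wide install.
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(PROJECT_ROOT, 'lib'))
```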
| 0 | 0 | 0 | 1 |
2012-06-12T08:27:00.000
| 2 | 0 | false | 10,992,976 | 1 | 0 | 1 | 1 |
I'm experienced in PHP and recently started studying Python, and right now I'm creating a small web project using Django. I have a conceptual question about the approach to installing modules in Python and Django.
Example: based on the expected needs of my project, I googled and downloaded the django_openid module, and obviously I want to install it into my project.
However, when I do it the prescribed way (python setup.py install) it goes into the Python directory as a Python module. Thus the module becomes not project-specific, but system-wide.
So, what is the generally accepted approach to installing project-specific modules in Python?
Based on my PHP experience it looks strange to install high-level functional modules into Python itself. I'd rather expect them to be installed in the project library and included in the project at runtime.
Or am I losing something important here?
I've googled around, but since this is rather a conceptual question, keyword search doesn't work well in this case.
|
App shows up as "not synced" after convert_to_south
| 11,009,395 | 2 | 2 | 236 | 0 |
python,django,database-schema,database-migration,django-south
|
As long as you are not getting any errors, this is fine. There are two ways to create a table in Django/South:
Running syncdb which automatically creates the initial tables of Django.
Running an initial migration of an app which also creates the tables of that app.
These are different approaches: tables that were 'synced' are not created with a migration or vice versa. So if South has made the tables with an initial migration then it is correct that they are not 'synced'.
To check whether it has worked correctly, you need: an entry in the south_migrationhistory table (i.e., South knows that the migration has been done) and the table(s) with the proper structure in the database. If that's the case then there's nothing to worry about.
| 0 | 0 | 0 | 0 |
2012-06-13T06:05:00.000
| 1 | 1.2 | true | 11,009,083 | 0 | 0 | 1 | 1 |
I ran the convert_to_south command on my app. Everything seems to have gone fine: the migration is in the south_migrationhistory table, and migrate --list shows the migration as applied. BUT when I do syncdb, the app still shows as "Not Synced". It suggests I migrate those apps (which does nothing, since there is nothing to migrate).
Is this behaviour expected?
|
Django vs webapp2 on App Engine
| 11,020,530 | 24 | 11 | 9,275 | 0 |
python,django,google-app-engine,python-2.7,webapp2
|
Choosing between Django and webapp2 really depends on what you're using it for. In your question you haven't given any of the parameters for your decision making, so it's impossible to tell which is "better". Describing them both as "web frameworks" shows you haven't done much research into what they are.
Webapp2 is essentially a request handler. It directs HTTP requests to handlers that you write. It's also very small.
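For illustration, a minimal webapp2 sketch of that request-handler role (the route and handler names are made up):

```python
import webapp2

class HelloHandler(webapp2.RequestHandler):
    def get(self):
        # webapp2's job: route the HTTP request to this handler and little else.
        self.response.write('Hello from webapp2')

app = webapp2.WSGIApplication([('/', HelloHandler)], debug=True)
```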
Django has a request handler. It also has a template engine. It also has a forms processor. It also has an ORM, which you may choose to use, or not. Note that you can use the ORM on Cloud SQL, but you'll need Django-nonrel if you want to use the ORM on the HRD. It also has a library of plugins that you can use, but they'll only work if you're using the Django ORM. It also has a bunch of third-party libraries, which will also require the Django ORM.
If you have portability in mind the Django ORM would help a lot.
You'll have to make your decision comparing what you actually need.
| 0 | 1 | 0 | 0 |
2012-06-13T08:29:00.000
| 1 | 1.2 | true | 11,010,953 | 0 | 0 | 1 | 1 |
I would like to know your opinion of which of these two web frameworks (Django & webapp2) is better for using on App Engine Platform, and why?
Please don't say that both are completely different, because Django is much more complete. Both are the "web frameworks" you can use in App Engine.
|
python: how to have a dictionary which can be accessed from all the app
| 11,013,947 | -2 | 3 | 576 | 0 |
python,google-app-engine,python-2.7
|
This is impossible (without an external service). DBs are made for this: to store data longer than one request. What you could do is save the dict "in" the user's session, but I don't recommend that. Unless you have millions of entries, every DB is fast enough, even SQLite.
| 0 | 0 | 0 | 0 |
2012-06-13T11:28:00.000
| 3 | -0.132549 | false | 11,013,911 | 1 | 0 | 1 | 1 |
I am new to Python and have been studying its fundamentals for 3 months now, learning types, functions and algorithms. Now I have started practicing web app development with the GAE framework.
Goal: have a very large dictionary, which can be accessed from all .py files throughout the web app without having it stored more than once or re-created each time when someone visits a URL of the app.
I want to render a simple DB table to a dictionary, with hopes of speed gain as it will be in memory.
Also, I am planning on creating an in-memory DAWG/trie.
I don't want this dictionary to be created each time a page is called, I want it to be stored in memory once, kept there and used and accessed by all sessions and if possible modified too.
How can I achieve this? Like a simple in memory DB but actually a Python dictionary?
Thank you.
|
Maintaining data integrity in mysql when different applications are accessing it
| 11,014,025 | 1 | 2 | 296 | 1 |
python,mysql,ruby-on-rails,database,triggers
|
Yes, refactor the code to put a data web service in front of the database and let the Ruby and Python apps talk to the service. Let it maintain all integrity and business rules.
"Don't Repeat Yourself" - it's a good rule.
| 0 | 0 | 0 | 0 |
2012-06-13T11:31:00.000
| 1 | 1.2 | true | 11,013,976 | 0 | 0 | 1 | 1 |
Okay. We have a Rails webapp which stores data in a MySQL database. The table design was not read-efficient, so we resorted to creating a separate set of read-only tables in MySQL and made all our internal API calls use those tables for reads. We used callbacks to keep the data in sync between both sets of tables. Now we have another Python app which is going to mess with the same database - how do we proceed while maintaining data integrity?
Active Record callbacks can't be used anymore. We know we can do it with triggers, but is there any other elegant way to do this? How do people maintain the integrity of such derived data?
|
Shopify Python API and textiles inventory
| 11,042,475 | 1 | 1 | 579 | 0 |
python,shopify
|
Your best bet is to start by pulling down all products and variants from each shop into a db on your side. After that, you can listen for products/update webhooks and orders/paid webhooks to be alerted of any changes you should make.
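A rough sketch of building an SKU-to-inventory map with the Shopify Python API; the credentials URL, the page-based pagination, and the field names are assumptions that may need adjusting for your shop and API version:

```python
import shopify

def sku_inventory(shop_api_url):
    # shop_api_url is assumed to be of the private-app form
    # "https://API_KEY:PASSWORD@your-shop.myshopify.com/admin".
    shopify.ShopifyResource.set_site(shop_api_url)
    inventory = {}
    page = 1
    while True:
        products = shopify.Product.find(limit=250, page=page)
        if not products:
            break
        for product in products:
            for variant in product.variants:
                inventory[variant.sku] = variant.inventory_quantity
        page += 1
    return inventory

# Comparing two shops then becomes a dict comparison:
# a, b = sku_inventory(url_a), sku_inventory(url_b)
# shared = set(a) & set(b)
```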
| 0 | 0 | 0 | 0 |
2012-06-14T00:25:00.000
| 1 | 0.197375 | false | 11,025,222 | 0 | 0 | 1 | 1 |
We are a small-scale fair-trade textiles importer and recently made the internal switch to OpenERP for our inventory management. We have two shops on Shopify (in two different languages).
In the longer term, I have two goals: 1) to synchronise the inventory of the two shops and 2) to build a Shopify plugin for OpenERP that imports a sale upon reception of an email from Shopify.
Since OpenERP itself is written in Python, I would like to work with the Shopify Python API.
And since we're working with textiles, which usually come in different styles and sizes, we're working with SKUs and variants in Shopify.
As a start, I would like to be able to sync the inventories between the two shops at midnight each day. If the inventory count of Shop A is lower than in Shop B, Shop B should get the count of Shop A, and the other way around.
My biggest problem at the moment seems to be to get a simple list of SKUs and inventory counts with the Python API. Ideally, I would like to get two simple lists of SKUs and their inventory counts, check if the variant from Shop A exists in Shop B, and then check the inventory and propagate the needed changes between the two.
However, I can't seem to get such a list and the documentation is extremely limited. Is the only possibility really to get all products first, then, for each product, to get the variants, and then to list these variants individually? So I would actually need to construct a whole database organisation around a task that I considered quite simple?
Does somebody have any experience with such a task? Is there any further documentation or examples that I could have a look at?
Thank you very much,
Knut-Otto
|
How to upgrade a django package only in a python virtual environment?
| 11,027,060 | 2 | 2 | 3,260 | 0 |
python,django,installation,package,virtualenv
|
After you switch to the virtual environment with the activate script, just use pip install Django==1.4; no sudo is needed.
Alternatively, you can use pip install -E=/path/to/my/virtual/env Django==1.4, in which case you don't need to switch to the virtual environment first.
| 0 | 0 | 0 | 0 |
2012-06-14T05:13:00.000
| 3 | 1.2 | true | 11,027,009 | 1 | 0 | 1 | 1 |
I needed a virtual environment with all the global packages included. I created one, and the global Django version is 1.3.1. Now, I need to upgrade the Django version to 1.4 only in my virtual environment. I switched to my environment by activating it, and tried
sudo pip install Django=1.4
It was installed, not in the virtual env but in the global dist-packages.
How to install a package only in the virtual environment?
|
How to deploy a python project to an environment that has not install some third-party libraries?
| 16,770,213 | 0 | 3 | 698 | 0 |
python,deployment,egg
|
I found that PYTHONPATH can be set to include the egg files. So now I can just put the egg files in a directory and add those egg files to PYTHONPATH.
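For example, a minimal bootstrap sketch along those lines (the "lib" directory name is an illustrative assumption, similar to a Java lib/ directory on the CLASSPATH):

```python
import glob
import os
import sys

# Add every .egg in a project-local lib/ directory to sys.path at startup.
LIB_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'lib')
for egg in glob.glob(os.path.join(LIB_DIR, '*.egg')):
    sys.path.insert(0, egg)

# ...the third-party packages inside the eggs can now be imported normally.
```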
| 0 | 0 | 0 | 0 |
2012-06-14T07:11:00.000
| 2 | 0 | false | 11,028,318 | 1 | 0 | 1 | 1 |
I have a project that uses some third-party libraries. My question is how to deploy my project to an environment that has not installed these third-party libraries. In Java, I can just put all the jars in a "lib" directory and write a bootstrap shell script that sets the CLASSPATH to contain the jars. I want a clean solution like this that has little influence on the environment.
|
Print a dictionary and save it to a file, then copy it to a py file to use?
| 11,040,555 | 0 | 1 | 441 | 0 |
python,google-app-engine,python-2.7,webapp2
|
To be honest, I can't see why you would want to try to do this, so I can't come up with an idea that might help.
Can you clarify what you're trying to do instead of what you're wanting to do?
If I am understanding you correctly, what you're wanting to do is get around resource usage. There is no way to avoid using GAE resources if you're using the platform. No matter what you do, you're going to hit some type of resource usage on App Engine. You either put the dictionary in the datastore, blobstore, or memcache. You can send the data to another URL, you can download and upload the data, but you are still using resources.
| 0 | 0 | 0 | 0 |
2012-06-14T17:44:00.000
| 6 | 0 | false | 11,038,531 | 1 | 0 | 1 | 1 |
Using a DB, I want to create a very large dictionary. If I save it to disk, when pickled, it takes about 10 MB of space.
What I want to do is:
Save this dictionary as-is to disk, so that I can open that text document and copy it into another .py file. That way I won't have to re-generate it each time, and whenever the .py module is called via the web app, the dictionary is iterable.
How can I do this?
PS: My app is running on Google App Engine, and I want to solve this issue this way to refrain from DB and other resource usage.
|
How to have different py files to handle different routes?
| 11,040,602 | 3 | 3 | 3,072 | 0 |
python,python-2.7,flask,webapp2
|
In my experience with Flask, you cannot declare route configurations in a central file; route handling is done via route declarations. In the Python frameworks I know, route handling is done at a more granular function level rather than a file level. Even in the frameworks that do have a more central route configuration setup, the routes are tied to a specific view/controller function, not simply a Python file.
As stated, each framework handles it differently. The three frameworks I have looked at in any detail - Django, Pyramid, and Flask - all handle it differently. The closest to what you are looking for is Django, which has a urls.py file that you place all of your URL configurations in, but again it points to function-level items, not higher-level .py files. Pyramid does a mix, with part of a URL declaration being put into the __init__.py file of the main module and the use of a decorator to correlate that named route to a function. And then you have Flask, which you mentioned having looked at, which appears to use just decorators for "simplicity's sake", as they are trying to reduce the number of files and configuration files that need to be edited to get an application from concept into served space.
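For illustration, a minimal Flask sketch of the decorator-based, function-level routing described above (route paths and function names are made up):

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    # The route lives on the function, not in a central routes file.
    return 'home page'

@app.route('/users/<int:user_id>')
def show_user(user_id):
    return 'user %d' % user_id

if __name__ == '__main__':
    app.run()
```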
| 0 | 0 | 0 | 0 |
2012-06-14T18:35:00.000
| 2 | 0.291313 | false | 11,039,287 | 0 | 0 | 1 | 1 |
I have been studying Python for quite some time now, and very recently I decided to get into the web development side of things. I have experience with PHP and PHP frameworks, along with Ruby, where:
Routes are defined in a (single) file and then in that file, each route is assigned to a model (py file) which will uniquely handle incoming requests matching that route.
How can I achieve this with flask AND webapp2?
I read the documentation and tutorial in full, but it got me very confused. I just want a file where all routes, and how they should be handled, are set, and then each route request is handled by its own model (Python file).
All the examples lead to single file apps.
Thank you VERY MUCH, really. Please teach, kindly, in a simple way.
|
django application loading very slow in IE
| 11,044,764 | 1 | 0 | 383 | 0 |
python,django
|
Django is not responsible for your site being slow in IE. The following might be the reasons:
1) You might have heavy images/JavaScript on your page. Use YSlow/PSO to debug it.
2) Try serving with a web server like Apache and not with Django.
| 0 | 0 | 0 | 0 |
2012-06-14T23:57:00.000
| 1 | 0.197375 | false | 11,042,970 | 0 | 0 | 1 | 1 |
Our Django application is working without problems in Chrome but it is tiresome when using IE.
Running the application using manage.py runserver works fine but in our production site, it is very slow. Navigating from page to page is very slow.
How can we improve the app's performance in IE? We've already tried reducing our js and css lines and optimizing our js and css code but that hasn't helped.
|
Django login not working
| 11,046,189 | 5 | 3 | 2,498 | 0 |
python,django,django-admin,django-authentication
|
I got it: I had set SESSION_COOKIE_SECURE to True in my settings.py, but since I'm using the development server, SSL isn't enabled, so it would just redirect to the same page. Thanks for the help, guys; you got me searching around.
| 0 | 0 | 0 | 0 |
2012-06-15T01:21:00.000
| 1 | 1.2 | true | 11,043,508 | 0 | 0 | 1 | 1 |
I have a really weird issue here. I'm using my local development server right now, and I'm working on the user account aspect of my site.
I thought I had it worked out, but when I try to access @login_required views, I fill in the login information and am redirected back to the login page every time. When I try to log in to the admin site (to verify everything is good on the backend) the same thing happens: I put in a correct username and password, and am redirected back to the login page.
I verified via the shell that the username I'm using for the admin site is a super user, is staff, and is active. In my settings I have Authentication and Session middleware enabled, as well as django.contrib.auth and django.contrib.sessions in my installed apps. Any ideas? Thanks in advance!
|
redirect old php urls to new flask urls
| 11,052,322 | 1 | 2 | 735 | 0 |
php,python,redirect,flask
|
If using apache, putting rewrite rules into either the directory section of the httpd.conf file or into an .htaccess file would probably be the easiest way to do this.
| 0 | 0 | 0 | 0 |
2012-06-15T14:05:00.000
| 3 | 0.066568 | false | 11,052,200 | 0 | 0 | 1 | 1 |
I had a website which was written in php containing urls such as example.com/index.php?q=xyz .
Now I have rewritten it in flask ( python ) which has urls such as example.com/search?q=xyz .
What is the best way to do it?
One approach I can think of is to write an index.php with PHP code to redirect. But is it possible to achieve the same thing using the application (Flask) only?
Update: the parameters sent are not exactly the same; there is some customization there too.
|
Documentation when not using getters and setters
| 11,053,683 | 1 | 0 | 162 | 0 |
java,php,python,setter,getter
|
In my opinion, there shouldn't be any undocumented attributes in a class. PHP and other languages allow you to just stick attributes on a class from anywhere, whether they've been defined in the class or not. I think that's bad practice for the reasons you describe and more:
It's hard to read.
It makes it harder for other programmers (including your future self) to understand what's going on.
It prevents auto-complete functionality in IDEs from working.
It often makes the domain layer too dependent on the persistence layer.
Whether you use getters and setters to access the defined attributes of a class is a little more fungible to me. I like things to be consistent, so if I have a class that has a getChildren() method to lazy load some array of objects, then I don't make the $children attribute public, and I tend to make other attributes private as well. I think that's a little more a matter of taste, but I find it annoying to access some attributes in a class directly ($object->name;) and others by getters/setters.
| 0 | 0 | 0 | 0 |
2012-06-15T15:18:00.000
| 1 | 1.2 | true | 11,053,550 | 0 | 0 | 1 | 1 |
In Java I use getters/setters when I have simple models/pojos. I find that the code becomes self-documenting when you do it this way. Calling getName() will return a name, I don't need to care how it's mapped to some database and so on.
Problems arise when using languages where getters and setters start feeling clunky, like in Python, and I often hear people say that they are bad. For example, some time ago I had a PHP project in which some of the data was just queried from the database and the column values were mapped to objects/dictionaries. What I found was that code like this was annoyingly hard to read: you can't really just read the code; you read it, then you notice that the values are fetched from the database, and now you have to look through the database schema all the time to understand it. Whereas with getters all you have to do is look at the class definition, knowing that there won't be any undocumented magic keys there.
So my question is how do you guys document code without getters and setters?
|
Django guests vote only once poll
| 11,063,081 | 1 | 5 | 1,201 | 0 |
python,django
|
If it is that important that people can only vote once, consider creating a basic registration / login system anyway. A guest can always use multiple computers to skew the voting while account registration at least allows you to track which e-mail addresses are being used to vote. It also takes a bit more effort to skew the voting that way. If it's important but not of life-saving importance then I would use the cookie approach for anonymous guests.
| 0 | 0 | 0 | 0 |
2012-06-15T23:02:00.000
| 2 | 0.099668 | false | 11,059,191 | 0 | 0 | 1 | 1 |
I'm new to Django but am working on the tutorial on the Django website for creating a poll.
What is the best way to make it so guests (no registration / login) can only vote once on a poll?
IP (Don't want IP because people sharing a network can only vote once).
Cookie (User can delete the cookie but seems like the best approach).
Session (If the user closes the browser the session will change).
I'm guessing that Cookie would be the best approach but is there a better way for Django?
|
Should I use Screen Scrapers or API to read data from websites
| 11,061,160 | 7 | 1 | 356 | 0 |
python,html,screen-scraping,web-scraping
|
Using the website's public API, when it exists, is by far the best solution. That is precisely why the API exists; it is the way the website administrators say "use our content". Scraping may work one day and break the next, and it does not imply the website administrators' consent to have their content reused.
| 0 | 0 | 1 | 0 |
2012-06-16T05:41:00.000
| 2 | 1.2 | true | 11,061,135 | 0 | 0 | 1 | 1 |
I am building a web application as college project (using Python), where I need to read content from websites. It could be any website on internet.
At first I thought of using screen scrapers like BeautifulSoup or lxml to read the content (data written by authors), but I am unable to extract the content with one piece of logic, as each website is built to different standards.
Thus I thought of using RSS/Atom (with Universal Feed Parser), but I could only get the content summary, and I want all the content, not just the summary.
So, is there a way to have one piece of logic by which we can read a website's content using libraries like BeautifulSoup, lxml, etc.?
Or should I use the APIs provided by the websites?
My job becomes easy if it's a Blogger blog, as I can use the Google Data API, but the trouble is: would I need to write code for every different API to do the same job?
What is the best solution?
|
Django model tags
| 11,065,539 | 0 | 1 | 404 | 0 |
python,django,django-models,tags,django-admin
|
Both are good. I have used django-tagging in over 30 projects and have yet to find an issue. Is there a specific challenge you are facing?
| 0 | 0 | 0 | 0 |
2012-06-16T16:49:00.000
| 1 | 0 | false | 11,065,308 | 0 | 0 | 1 | 1 |
I am working on a django app and need to add a tags field to one of my models.
In the admin interface I need it to work like WordPress tagging (comma-separated entry, auto-creation of new tags, and autocomplete).
There are two tagging libraries I found, django-tagging and django-taggit; both also have an autocomplete extension.
The problem is that both of them are very old (the last update was 2 years ago), unmaintained, and need some work to bring them up to speed.
Is there any good, recent tagging module that I didn't find?
|
Simulating threads scheduling in java (stackless java?)
| 11,284,527 | 1 | 3 | 865 | 0 |
java,scheduling,python-stackless,stackless
|
Scala actor frameworks like Akka do this. Each thread handles many actors; that's how they are made so efficient. I recommend taking a look at their source code.
| 0 | 0 | 0 | 0 |
2012-06-16T17:28:00.000
| 3 | 0.066568 | false | 11,065,582 | 1 | 0 | 1 | 2 |
For some academic research I need to simulate several threads running on a single processor.
I want to be able to insert *call_scheduler()* calls inside my code, in which the current "thread" will pause (remembering in which code line it is) and some scheduling function will decide which thread to let go.
In python, this could be implemented neatly using stackless python. Is there a java alternative?
I could implement it using real threads and some messaging queues (or pipes) that will force only one thread to run at a time - but this is an ugly and problematic solution.
|
Simulating threads scheduling in java (stackless java?)
| 11,065,738 | 0 | 3 | 865 | 0 |
java,scheduling,python-stackless,stackless
|
Your question:
I could implement it using real threads and some messaging queues (or pipes) that will force only one thread to run at a time - but this is an ugly and problematic solution
Well, if you want only a single thread to run at a time, controlling the threads' access to the object in a cleaner way, then use a Semaphore from the java.util.concurrent package:
Semaphore sem = new Semaphore(1); // the 1 here means that only one thread can have access at a time
Use sem.acquire() to get the permit for the object and, when it's done, use sem.release(); only then will another thread get access to the object.
| 0 | 0 | 0 | 0 |
2012-06-16T17:28:00.000
| 3 | 0 | false | 11,065,582 | 1 | 0 | 1 | 2 |
For some academic research I need to simulate several threads running on a single processor.
I want to be able to insert *call_scheduler()* calls inside my code, in which the current "thread" will pause (remembering in which code line it is) and some scheduling function will decide which thread to let go.
In python, this could be implemented neatly using stackless python. Is there a java alternative?
I could implement it using real threads and some messaging queues (or pipes) that will force only one thread to run at a time - but this is an ugly and problematic solution.
|
Creating shopping list web application
| 11,065,909 | 3 | 0 | 1,212 | 0 |
javascript,jquery,python,django
|
Here's how I'd do it. But note that there will be no right answer to this question.
Let's call the page where you click on foods index.php.
On index, I'd have all the foods that could be clicked, as well as an empty div with the id "ingredients" -- eg. <div id="ingredients"></div>
When foods were clicked, they'd call jQuery handlers in main.js.
The onClick handlers would:
Keep track of the currently selected foods
Send a json or otherwise formatted list to a get_ingredients.php page via AJAX.
(This page would return the HTML I'd want to display in the "Ingedients" section)
Set the content of the ingredients div with the html returned by the AJAX call.
More explicitly, get_ingredients.php would:
Parse the GET/POST list of foods that was sent (via AJAX) by index
Query the database to see what ingredients were necessary for the selected foods
Construct HTML corresponding to the query results that should be put into the "ingredients" div on the user-facing page.
"display it" via echo/print/printf/etc, this isn't really displayed with AJAX, but rather sent as the AJAX response.
This way, you only have to keep track of the selected foods, and not deal with adding/subtracting individual ingredient quantities when foods are selected and deselected.
This has the downside of re-transmitting things "you already know", namely the ingredients of foods that were previously selected, however it eliminates a lot of the work that would otherwise be required.
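Since the question is about Django, a hypothetical Django counterpart of the get_ingredients.php handler might look like the sketch below; the Food/Ingredient models, the app name, and the "foods" GET parameter are assumptions:

```python
import json

from django.http import HttpResponse

from myapp.models import Food  # hypothetical app and model


def get_ingredients(request):
    """AJAX endpoint: return the ingredient names for the selected foods."""
    food_ids = request.GET.getlist('foods')
    ingredients = set()
    # Assumption: Food has a many-to-many "ingredients" relation with a name field.
    for food in Food.objects.filter(id__in=food_ids):
        for ingredient in food.ingredients.all():
            ingredients.add(ingredient.name)
    return HttpResponse(json.dumps(sorted(ingredients)),
                        content_type='application/json')
```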
| 0 | 0 | 0 | 0 |
2012-06-16T18:03:00.000
| 2 | 1.2 | true | 11,065,828 | 0 | 0 | 1 | 2 |
I'm learning Python, Django, Javascript and jQuery by trying to build a simple shopping list web application. I'd like to hear from experienced developers if my approach is correct:
Functionality:
Web page will display a list of foods. When I click on a certain food I want a list of ingredients to get displayed (this will be my shopping list). Clicking another food will add more ingredients to the shopping list.
My approach:
I have built a simple page with django. I'm able to display a list of all foods that have been entered to the database. What I need now is to start building a list of ingredients as I click on food items.
I was thinking I would load a list of ingredients from the database on the first page load but hide them with css. When I click on a food item I would un-hide the ingredients associated with the food item by using jquery/css.
This approach seems quite clumsy to me. Could you give me some advice how I could create my shopping list application using the technologies mentioned above? Is my approach correct?
|
Creating shopping list web application
| 11,065,889 | 0 | 0 | 1,212 | 0 |
javascript,jquery,python,django
|
@jedwards' comment is correct. Instead of loading everything for every item at the beginning (which could cause slow initial page loads), you should add an on-click event in jQuery that makes an AJAX call back to the server, passing the name or ID of the object in your DB, and returns its ingredients; then you populate the div that contains the ingredients with them.
| 0 | 0 | 0 | 0 |
2012-06-16T18:03:00.000
| 2 | 0 | false | 11,065,828 | 0 | 0 | 1 | 2 |
I'm learning Python, Django, Javascript and jQuery by trying to build a simple shopping list web application. I'd like to hear from experienced developers if my approach is correct:
Functionality:
Web page will display a list of foods. When I click on a certain food I want a list of ingredients to get displayed (this will be my shopping list). Clicking another food will add more ingredients to the shopping list.
My approach:
I have built a simple page with django. I'm able to display a list of all foods that have been entered to the database. What I need now is to start building a list of ingredients as I click on food items.
I was thinking I would load a list of ingredients from the database on the first page load but hide them with css. When I click on a food item I would un-hide the ingredients associated with the food item by using jquery/css.
This approach seems quite clumsy to me. Could you give me some advice how I could create my shopping list application using the technologies mentioned above? Is my approach correct?
|
Do I really need to know more than one language if i want to make webapps?
| 11,070,833 | 1 | 0 | 96 | 0 |
python
|
I'd stick with one server-side language for now - and Python (or any of the other languages you listed) is a perfectly good choice for that.
A basic notion of what JavaScript can do would be important, I think, as well as of how Ajax-type technology works and what it can do.
However, stretching the definition of "language" a little, I think you should develop a reasonable understanding of HTML and CSS, as these are integral to web development.
| 0 | 0 | 0 | 1 |
2012-06-17T11:02:00.000
| 1 | 0.197375 | false | 11,070,805 | 0 | 0 | 1 | 1 |
I'm a liberal arts major and have a few ideas for some web apps. I've saved up enough money to hire someone to do the coding for me, but I want to pick up at least basic coding skills on my own. I'd rather not be the clueless founder.
I've started off with Python. So far, so good. Despite my liberal arts background, I've always been pretty mathematically inclined, even taking some advanced calculus classes in college.
My question is: if my goal is to make web apps and not actually land a job, is it really necessary to learn more than one programming language? I'm starting off with Python and I've found it flexible and powerful enough to meet most of my needs. Do I need to expand my oeuvre to PHP, Ruby, Java, etc.?
|
Calling a python web server within a script
| 11,071,410 | 2 | 0 | 268 | 0 |
python,web-services,monitoring,status,cherrypy
|
I would break it apart further:
script A, on port a
script B, on port b
web script C, which checks on A and B (by making simple requests to them) and returns the results in a machine-friendly format, i.e. JSON or XML
web page D, which calls C and formats the results for people, i.e. an HTML table
There are existing programs which do this - Nagios springs to mind.
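As a rough illustration of "script C", here is a minimal CherryPy sketch that checks whether the two listener scripts accept connections on their ports and reports the result as JSON; the hosts and ports are placeholders:

```python
import json
import socket

import cherrypy

SERVICES = {'script_a': ('127.0.0.1', 9001), 'script_b': ('127.0.0.1', 9002)}

def is_up(host, port, timeout=2):
    """Return True if something is accepting connections on host:port."""
    try:
        sock = socket.create_connection((host, port), timeout)
        sock.close()
        return True
    except socket.error:
        return False

class Status(object):
    @cherrypy.expose
    def index(self):
        cherrypy.response.headers['Content-Type'] = 'application/json'
        return json.dumps({name: is_up(*addr)
                           for name, addr in SERVICES.items()})

if __name__ == '__main__':
    cherrypy.quickstart(Status(), '/')
```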
| 0 | 0 | 1 | 0 |
2012-06-17T11:09:00.000
| 2 | 0.197375 | false | 11,070,842 | 0 | 0 | 1 | 1 |
I would like to have a web server displaying the status of 2 of my python scripts.
These scripts listen for incoming data on a specific port. What I would like is that when a script is running, the web server returns an HTTP 200, and when the script is not running, a 500. I have had a look at CherryPy and other such Python web servers, but I could not get them to run first and then, while the web server is running, continue with the rest of my code. I would like it so that if the script crashes, so does the web server. Alternatively, the web server could display a blank webpage with just a 1 in the HTML if the script is running, or a 0 if it is not.
Any suggestions?
Thanks in advance.
|
Integration of pyjamas and tornado
| 11,453,702 | 0 | 0 | 178 | 0 |
python,tornado,pyjamas
|
The route I have chosen is to combine pyjs (old pyjamas) with web2py, via JSONRPC. So far it is working fine.
| 1 | 0 | 0 | 0 |
2012-06-17T12:28:00.000
| 2 | 1.2 | true | 11,071,287 | 0 | 0 | 1 | 1 |
Is it possible to write an application using the pyjamas widgets, together with the tornado server model? What I have in mind is to provide a desktop-like frontend for my web application with pyjamas, and do server side logic with tornado.
Specifically, I want to trigger events in the web application generated in the server side, and be able to display those events using the pyjamas widgets.
Does somebody have a working example of this?
|
appengine DateTimeProperty auto_now=True unexpected behavior
| 11,094,276 | 2 | 1 | 327 | 0 |
python,google-app-engine
|
No, this is not due to the HRD -- auto_now is implemented purely in the client library. After you write the entity, the property's value does not correspond to what's written to the datastore, but to what was last read. I'm not sure what you'll see for a brand new entity but it's probably still not the same as what was written.
If you switch to NDB you'll find that auto_now behaves much more reasonably. :-)
| 0 | 1 | 0 | 0 |
2012-06-17T14:42:00.000
| 1 | 1.2 | true | 11,072,138 | 0 | 0 | 1 | 1 |
I use a last_touch_date DateTimeProperty as a means for revisioning entities in my application's datastore using the auto_now=True flag.
When a user posts an entity it receives its last_touch_date as a reference for future updates.
However, when I check the entity's last_touch_date afterwards, I always find a slight delta between this property as read right after writing and as read soon afterwards. I have a feeling this is a result of the high consistency model.
Is this known behavior? Is there a workaround besides managing this property myself?
|
Django signals for new entry only
| 11,079,716 | 0 | 14 | 2,985 | 0 |
python,django,django-signals
|
Checking for instance.id is a nice way of determining if the instance is "new". This only works if you use ids that are auto-generated by your database.
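A minimal sketch of such a receiver is below; note that instead of checking the id it uses the created argument that Django passes to post_save receivers, which tells you whether the row was just inserted. The Article model, addresses, and plain send_mail are placeholder assumptions:

```python
from django.core.mail import send_mail
from django.db.models.signals import post_save
from django.dispatch import receiver

from myapp.models import Article  # hypothetical model


@receiver(post_save, sender=Article)
def notify_on_new_article(sender, instance, created, **kwargs):
    if not created:
        return  # an existing article was re-saved; stay quiet
    send_mail('New article: %s' % instance.title,
              'A new article was published.',
              'noreply@example.com',
              ['subscribers@example.com'])
```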
| 0 | 0 | 0 | 0 |
2012-06-18T08:58:00.000
| 3 | 0 | false | 11,079,644 | 0 | 0 | 1 | 1 |
I'm using Django's post_save signal to send emails to users whenever a new article is added to the site. However, users still receive new emails whenever I use the save() method on already-created articles. How is it possible to receive emails only when a NEW entry is added?
Thanks in advance
|
Google App Engine: Determine whether Current Request is a Taskqueue
| 11,082,412 | 7 | 4 | 923 | 0 |
python,google-app-engine,task-queue
|
Pick any one of the following HTTP headers:
X-AppEngine-QueueName, the name of the queue (possibly default)
X-AppEngine-TaskName, the name of the task, or a system-generated unique ID if no name was specified
X-AppEngine-TaskRetryCount, the number of times this task has been retried; for the first attempt, this value is 0
X-AppEngine-TaskETA, the target execution time of the task, specified in microseconds since January 1st 1970.
Standard HTTP requests won't have these headers.
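For example, a minimal sketch that picks the urlfetch deadline based on that header (the deadline values are illustrative):

```python
from google.appengine.api import urlfetch

def fetch_with_adaptive_deadline(request, url):
    # Task queue requests carry the X-AppEngine-QueueName header.
    is_task = 'X-AppEngine-QueueName' in request.headers
    deadline = 60 if is_task else 5  # long for tasks, short for user requests
    return urlfetch.fetch(url, deadline=deadline)
```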
| 0 | 1 | 0 | 0 |
2012-06-18T11:23:00.000
| 2 | 1.2 | true | 11,081,767 | 0 | 0 | 1 | 1 |
Is there a way to dynamically determine whether the currently executing task is a standard http request or a TaskQueue?
In some parts of my request handler, I make a few urlfetches. I would like the timeout delay of the url fetch to be short if the request is a standard http request and long if it is a TaskQueue.
|
Generating users accounts inside Google App Engine
| 11,093,808 | 3 | 11 | 4,278 | 0 |
python,google-app-engine,openid
|
If you don't want to require a Google Account or OpenID account you have to roll your own accounts system. This gives you maximum freedom, but it is a lot of work and makes you responsible for password security (ouch). Personally I would advise you to reconsider this requirement -- OpenID especially has a lot going for it (except IIUC it's not so simple to use Facebook).
| 0 | 1 | 0 | 0 |
2012-06-18T13:19:00.000
| 3 | 0.197375 | false | 11,083,776 | 0 | 0 | 1 | 1 |
For a project, I'm going to create an application on Google App Engine where:
Discussion Leaders can register with their e-mail address (or OpenID or Google Account) on the website itself to use it.
In the application admin page they can create a group discussion for which they can add users based on their e-mail address
and these users should then receive generated account details (if they don't have accounts yet) making them able to log in to that group discussion with their newly created account.
I don't want to require discussion leaders to having a Google Account or OpenID account in order to register for the application and all user other accounts must be generated by the discussion leader.
However Google App Engine seems to only support Google Accounts and OpenID accounts. How would I go about this? Is there an existing pattern for creating leader-accounts and generating user-accounts from within the Google App Engine which still support the GAE User API?
|
Django/Python Downloading a file with progressbar
| 11,096,591 | 2 | 4 | 1,334 | 0 |
python,django,multithreading,file,download
|
It's possible, but it takes some jumps through the metaphorical hoops. My answer isn't Django-specific; you'll need to translate it to your framework.
Start a thread that does the actual download. While it downloads, it must update some data structure in the user's session (total size of the download, etc).
In the browser, start a timer which does AJAX requests to a "download status URL"
Create a handler for this URL which takes the status from the session and turns that into JSON or a piece of HTML which you send to the browser.
In the AJAX handler's success method, take the JSON/HTML and put it into the current page. Unless the download is complete (this part is more simple with JSON), restart the timer.
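A rough sketch of the server side of steps 1 and 2, assuming the progress is kept in Django's cache rather than the session (a session object is awkward to update from a background thread); URLs, keys and chunk size are illustrative:

```python
import threading
import urllib2

from django.core.cache import cache


def _download(url, dest_path, progress_key):
    response = urllib2.urlopen(url)
    total = int(response.info().getheader('Content-Length', '0') or 0)
    done = 0
    with open(dest_path, 'wb') as out:
        while True:
            chunk = response.read(64 * 1024)
            if not chunk:
                break
            out.write(chunk)
            done += len(chunk)
            if total:
                # The AJAX status view reads this key and returns it as JSON.
                cache.set(progress_key, int(100.0 * done / total))
    cache.set(progress_key, 100)


def start_download(url, dest_path, progress_key):
    thread = threading.Thread(target=_download,
                              args=(url, dest_path, progress_key))
    thread.daemon = True
    thread.start()
```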
| 0 | 0 | 0 | 0 |
2012-06-19T07:27:00.000
| 1 | 1.2 | true | 11,096,295 | 0 | 0 | 1 | 1 |
OK, I decided to post the question here because I really don't know what to do or even if it's possible. You might tell me it's a repost, but I already read similar posts about it and they didn't help me out.
Here is the deal. I have an admin interface with django and want to download a file from an external site on my server with a progressbar showing the percentage of the download.
I can't do anything while it's downloading. I tried to run a command with call_command within a view but it's the same.
Is it because the Django server is single-threaded? So, is it even possible to achieve what I want to do?
Thanks in advance,
|
how to write python wrapper of a java library
| 11,286,405 | 2 | 6 | 10,944 | 0 |
java,python,binding,word-wrap,cpython
|
I've used JPype in a similar situation with decent results. The main task would be to write wrappers that translate your Java API into a more Pythonic API, since raw JPype usage is hardly any prettier than just writing Java code.
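For illustration, a minimal JPype sketch (the jar path and Java class name are hypothetical):

```python
import jpype

# Start the JVM with the Java library on the classpath.
jpype.startJVM(jpype.getDefaultJVMPath(),
               '-Djava.class.path=/path/to/yourlib.jar')

Widget = jpype.JClass('com.example.Widget')  # hypothetical Java class
widget = Widget()
print(widget.toString())

jpype.shutdownJVM()
```

The Pythonic wrapper would then hide calls like these behind ordinary Python classes and functions.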
| 0 | 0 | 0 | 0 |
2012-06-19T09:26:00.000
| 2 | 0.197375 | false | 11,098,131 | 0 | 0 | 1 | 1 |
How can we write a python (with CPython) binding to a Java library so that the developers that want to use this java library can use it by writing only python code, not worrying about any Java code?
|
Auto Increment Field in Django/Python
| 11,101,114 | 0 | 1 | 2,701 | 1 |
python,django,postgresql
|
Instead of deleting orders, you should create a boolean field (call it whatever you like, for example deleted) and set this field to 1 for "deleted" orders.
Messing with a serial field (which is what your auto-increment field is called in Postgres) will lead to problems later, especially if you have foreign keys and relationships with other tables.
Not only will it impact your database server's performance; it will also impact your business, as eventually you will have two orders floating around that have the same order number. Even though you have "deleted" one from the database, the order number may already be referenced somewhere else, like on a receipt you printed for your customer.
| 0 | 0 | 0 | 0 |
2012-06-19T12:32:00.000
| 7 | 0 | false | 11,100,997 | 0 | 0 | 1 | 4 |
I have a table in a Django app where one of the fields is called Order (as in sort order) and is an integer. Every time a new record is entered, the field auto-increments itself to the next number. My issue is that when a record is deleted, I would like the other records to shift up by one, and I can't find anything that would recalculate all the records in the table and shift them up by one when a record is deleted.
For instance there are 5 records in the table where order numbers are 1, 2, 3, 4, and 5. Someone deleted record number 2 and now I would like numbers 3, 4, and 5 to move up to take the deleted number 2's place so the order numbers would now be 1, 2, 3, and 4. Is it possible with python, postgres and django?
Thanks in Advance!
|
Auto Increment Field in Django/Python
| 11,101,064 | 4 | 1 | 2,701 | 1 |
python,django,postgresql
|
You are going to have to implement that feature yourself; I doubt very much that a relational DB will do that for you, and for good reason: it means updating a potentially large number of rows when one row is deleted.
Are you sure you need this? It could become expensive.
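If you do need it, a hypothetical sketch of doing the shift yourself in Django after a delete (the Item model and its order field are placeholder names):

```python
from django.db.models import F

from myapp.models import Item  # hypothetical model with an integer "order" field


def close_gap(deleted_order):
    # Shift every row that sat below the deleted one up by a single position.
    Item.objects.filter(order__gt=deleted_order).update(order=F('order') - 1)
```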
| 0 | 0 | 0 | 0 |
2012-06-19T12:32:00.000
| 7 | 0.113791 | false | 11,100,997 | 0 | 0 | 1 | 4 |
I have a table in a Django app where one of the fields is called Order (as in sort order) and is an integer. Every time a new record is entered, the field auto-increments itself to the next number. My issue is that when a record is deleted, I would like the other records to shift up by one, and I can't find anything that would recalculate all the records in the table and shift them up by one when a record is deleted.
For instance there are 5 records in the table where order numbers are 1, 2, 3, 4, and 5. Someone deleted record number 2 and now I would like numbers 3, 4, and 5 to move up to take the deleted number 2's place so the order numbers would now be 1, 2, 3, and 4. Is it possible with python, postgres and django?
Thanks in Advance!
|
Auto Increment Field in Django/Python
| 11,101,032 | -1 | 1 | 2,701 | 1 |
python,django,postgresql
|
Try setting the value with a sequence type in Postgres using pgAdmin.
| 0 | 0 | 0 | 0 |
2012-06-19T12:32:00.000
| 7 | -0.028564 | false | 11,100,997 | 0 | 0 | 1 | 4 |
I have a table in a Django app where one of the fields is called Order (as in sort order) and is an integer. Every time a new record is entered, the field auto-increments itself to the next number. My issue is that when a record is deleted, I would like the other records to shift up by one, and I can't find anything that would recalculate all the records in the table and shift them up by one when a record is deleted.
For instance there are 5 records in the table where order numbers are 1, 2, 3, 4, and 5. Someone deleted record number 2 and now I would like numbers 3, 4, and 5 to move up to take the deleted number 2's place so the order numbers would now be 1, 2, 3, and 4. Is it possible with python, postgres and django?
Thanks in Advance!
|
Auto Increment Field in Django/Python
| 15,074,698 | 0 | 1 | 2,701 | 1 |
python,django,postgresql
|
I came across this looking for something else and wanted to point something out:
By storing the order in a field in the same table as your data, you lose data integrity, or if you index it things will get very complicated if you hit a conflict. In other words, it's very easy to have a bug (or something else) give you two 3's, a missing 4, and other weird things can happen. I inherited a project with a manual sort order that was critical to the application (there were other issues as well) and this was constantly an issue, with just 200-300 items.
The right way to handle a manual sort order is to have a separate table to manage it and sort with a join. This way your Order table will have exactly 10 entries with just its PK (the order number) and a foreign key relationship to the ID of the items you want to sort. Deleted items just won't have a reference anymore.
You can continue to sort on delete similar to how you're doing it now, you'll just be updating the Order model's FK to list instead of iterating through and re-writing all your items. Much more efficient.
This will scale up to millions of manually sorted items easily. But rather than using auto-incremented ints, you would want to give each item a random order id in between the two items you want to place it between and keep plenty of space (few hundred thousand should do it) so you can arbitrarily re-sort them.
I see you mentioned that you've only got 10 rows here, but designing your architecture to scale well the first time, as a practice, will save you headaches down the road, and once you're in the habit of it, it won't really take you any more time.
| 0 | 0 | 0 | 0 |
2012-06-19T12:32:00.000
| 7 | 0 | false | 11,100,997 | 0 | 0 | 1 | 4 |
I have a table in a Django app where one of the fields is called Order (as in sort order) and is an integer. Every time a new record is entered, the field auto-increments itself to the next number. My issue is that when a record is deleted, I would like the other records to shift up by one, and I can't find anything that would recalculate all the records in the table and shift them up by one when a record is deleted.
For instance there are 5 records in the table where order numbers are 1, 2, 3, 4, and 5. Someone deleted record number 2 and now I would like numbers 3, 4, and 5 to move up to take the deleted number 2's place so the order numbers would now be 1, 2, 3, and 4. Is it possible with python, postgres and django?
Thanks in Advance!
|
Understanding Python Web Application Deployment
| 24,712,242 | 0 | 6 | 723 | 0 |
python,web-applications,deployment
|
Couldn't you take half your servers offline (say, by pulling them out of the load-balancing pool) and then update those? Then bring them back online while simultaneously pulling down the other half, then update those and bring them back online.
This will ensure that you stay online while also ensuring that you never have the old and new versions of your application online at the same time. Yes, this will mean that your site would run at half its capacity during the time. But that might be ok?
| 0 | 0 | 0 | 0 |
2012-06-19T13:01:00.000
| 2 | 0 | false | 11,101,525 | 0 | 0 | 1 | 1 |
I think I don't completely understand the deployment process. Here is what I know:
when we need to do hot deployment -- meaning that we need to change the code that is live -- we can do it by reloading the modules, but
imp.reload is a bad idea, and we should restart the application instead of reloading the changed modules
ideally the running code should be a clone of your code repository, and any time you need to deploy, you just pull the changes
Now, let's say I have multiple instances of wsgi app running behind a reverse proxy like nginx (on ports like 8011, 8012, ...). And, let's also assume that I get 5 requests per second.
Now in this case, how should I update my code in all the running instances of the application.
If I stop all the instances, then update all of them, then restart them all -- I will certainly lose some requests
If I update each instance one by one -- then the instances will be in inconsistent states (some will be running old code, and some new) until all of them are updated. Now if a request hits an updated instance, and then a subsequent (and related) request hits an older instance (yet to be updated) -- then I will get wrong results.
Can somebody explain thoroughly how busy applications like this are hot-deployed?
|
Pyramid: Preventing being forced to restart the pserve
| 11,159,396 | 8 | 3 | 1,412 | 0 |
python,python-2.7,pyramid
|
Well, I think you can add the --reload flag when starting the web server. This will watch for any changes to files and reload the server automatically, e.g. pserve --reload development.ini.
| 0 | 0 | 0 | 0 |
2012-06-19T14:57:00.000
| 1 | 1.2 | true | 11,103,718 | 0 | 0 | 1 | 1 |
Although I have set pyramid.reload_templates to true e.g. "pyramid.reload_templates = true", each time I modify a view, I have to kill the pserve process and restart it in order to see the changes.
How can I get over this and just refresh the page to get the results?
Thank you
|
Game development: "Play Now" via website vs. download & install
| 11,113,116 | 1 | 0 | 277 | 0 |
java,python,flash
|
Download and install is a harder sell. People are more reluctant to do it, and once they have done it, you own the problem of platform compatibility, and you have installed code to update or avoid as your game evolves.
Java applets eliminate all that mess; presumably Flash or HTML5 do too.
| 1 | 0 | 0 | 0 |
2012-06-19T23:27:00.000
| 1 | 0.197375 | false | 11,110,902 | 0 | 0 | 1 | 1 |
I've spent some time looking over the various threads here on stackoverflow and while I saw a lot of posts and threads regarding various engines that could be used in game development, I haven't seen very much discussion regarding the various platforms that they can be used on.
In particular, I'm talking about browser games vs. desktop games.
I want to develop a simple 3D networked multiplayer game - roughly on the graphics level of Paper Mario and gameplay with roughly the same level of interaction as a hack & slash action/adventure game - and I'm having a hard time deciding what platform I want to target with it. I have some experience with using C++/Ogre3D and Python/Panda3D, but I'm wondering if it's worth it to spend the extra time to learn another language and another engine/toolkit just so that the game can be played in a browser window (I'm looking at jMonkeyEngine right now).
For simple & short games the newgrounds approach (go to the site, click "play now", instant gratification) seems to work well. What about for more complex games? Is there a point where the complexity of a game is enough for people to say "ok, I'm going to download and play that"? Is it worth it to go with engines that are less-mature, have less documentation, have fewer features, and smaller communities* just so that a (possibly?) larger audience can be reached? Does anyone have any experiences with decisions like this?
Thanks!
(* With the exception of flash-based engines it seems like most of the other approaches have these downsides when compared to what is available for desktop-based environments. I'd go with flash, but I'm worried that flash's 3D capabilities aren't mature enough right now to do what I want easily).
|
Disable menu in report_Aeroo on form
| 13,524,775 | 0 | 2 | 90 | 0 |
python,report,openerp
|
I believe the entry is multi=True; alternatively, I think you will find that just going into the report and using the "remove print button" action will do it.
| 0 | 0 | 0 | 0 |
2012-06-20T04:49:00.000
| 1 | 0 | false | 11,112,988 | 0 | 0 | 1 | 1 |
I generate an Excel report using Report_Aeroo, but I have a small problem here: I built a wizard that generates the Excel report, and I don't want the report's menu entry to appear on the form view of the model I pass to the report. What can I do?
With RML reports we have functionality like "Menu = false"; is there anything similar in Report Aeroo?
|
Programming Android apps in jython
| 11,122,066 | 44 | 64 | 34,891 | 0 |
android,python,jython
|
Jython doesn't compile to "pure java", it compiles to java bytecode - ie, to *.class files. To develop for Android, one further compiles java bytecode to Dalvik bytecode. This means that, yes, Jython can let you use Python for developing Android, subject to you getting it to play nice with the Android SDK (I haven't personally tried this, so I don't know how hard it actually is) - you do need to make sure you don't depend on any Java APIs that Android doesn't provide, and might need to have some of the Android API .class files around when you run jython. Aside from these niggles, your core idea should work - Jython does, indeed, let write code in Python that interacts with anything else that runs on the JVM.
| 1 | 0 | 0 | 0 |
2012-06-20T12:58:00.000
| 7 | 1.2 | true | 11,120,130 | 0 | 0 | 1 | 6 |
The other day I came across a Python implementation called Jython.
With Jython you can write Java applications with Python and compile them to pure Java.
I was wondering: Android programming is done with Java.
So, is it possible to make Android apps with Jython?
|
Programming Android apps in jython
| 26,718,708 | -6 | 64 | 34,891 | 0 |
android,python,jython
|
Sadly, no.
Mobile phones only have Java ME (Micro Edition), but Jython requires Java SE (Standard Edition). There is no Jython port to ME, and there is not enough interest to make it worth the effort.
| 1 | 0 | 0 | 0 |
2012-06-20T12:58:00.000
| 7 | -1 | false | 11,120,130 | 0 | 0 | 1 | 6 |
The other day I came across a Python implementation called Jython.
With Jython you can write Java applications with Python and compile them to pure Java.
I was wondering: Android programming is done with Java.
So, is it possible to make Android apps with Jython?
|
Programming Android apps in jython
| 11,120,423 | 5 | 64 | 34,891 | 0 |
android,python,jython
|
As long as it compiles to pure Java (with some constraints, as some APIs are not available), yes; but I doubt that Python will be of much use for developing Android-specific stuff like activities and UI manipulation code.
You also have to take care of application size; that is a serious constraint for mobile development.
| 1 | 0 | 0 | 0 |
2012-06-20T12:58:00.000
| 7 | 0.141893 | false | 11,120,130 | 0 | 0 | 1 | 6 |
The other day I came across a Python implementation called Jython.
With Jython you can write Java applications with Python and compile them to pure Java.
I was wondering: Android programming is done with Java.
So, is it possible to make Android apps with Jython?
|
Programming Android apps in jython
| 23,649,506 | 3 | 64 | 34,891 | 0 |
android,python,jython
|
Yes and no. With Jython you can use Java classes to compile for the JVM. But Android uses the DVM (Dalvik Virtual Machine) and the compiled code is different. You have to use tools to convert from JVM bytecode to DVM bytecode.
| 1 | 0 | 0 | 0 |
2012-06-20T12:58:00.000
| 7 | 0.085505 | false | 11,120,130 | 0 | 0 | 1 | 6 |
The other day I came across a Python implementation called Jython.
With Jython you can write Java applications with Python and compile them to pure Java.
I was wondering: Android programming is done with Java.
So, is it possible to make Android apps with Jython?
|
Programming Android apps in jython
| 53,232,363 | 1 | 64 | 34,891 | 0 |
android,python,jython
|
Yes, you can.
Test your Python code on your computer and, when it is OK, copy it to your Android device.
Install Pydroid from the Google Play Store, compile your code again inside the application, and you will get your app ready and running.
Use pip inside Pydroid to install any dependencies.
PS: You will need to configure your Android device to install APKs from outside the Play Store.
| 1 | 0 | 0 | 0 |
2012-06-20T12:58:00.000
| 7 | 0.028564 | false | 11,120,130 | 0 | 0 | 1 | 6 |
The other day I came across a Python implementation called Jython.
With Jython you can write Java applications with Python and compile them to pure Java.
I was wondering: Android programming is done with Java.
So, is it possible to make Android apps with Jython?
|
Programming Android apps in jython
| 13,966,835 | -2 | 64 | 34,891 | 0 |
android,python,jython
|
It's not possible. You can't use jython with android because the DVM doesn't understand it. DVM is not JVM.
| 1 | 0 | 0 | 0 |
2012-06-20T12:58:00.000
| 7 | -0.057081 | false | 11,120,130 | 0 | 0 | 1 | 6 |
The other day I came across a Python implementation called Jython.
With Jython you can write Java applications with Python and compile them to pure Java.
I was wondering: Android programming is done with Java.
So, is it possible to make Android apps with Jython?
|
top user authentication method for google app engine
| 11,132,393 | 2 | 1 | 637 | 0 |
python,google-app-engine,jinja2,authentication
|
Your choices are Google's own authentication, OpenID, some third party solution or roll your own. Unless you really know what you are doing, do not choose option 4! Authentication is very involved, and if you make a single mistake or omission you're opening yourself up to a lot of pain. Option 3 is not great because you have to ensure the author really knows what they are doing, which either means trusting them or... really knowing what you're doing!
So I'd suggest you choose between Google's authentication and OpenID. Both are well trusted; Google is going to be easier to implement because there are several OpenID account providers you have to test against; but Google authentication may turn away some users who refuse to have Google accounts.
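If you go with Google's own authentication, a minimal handler looks roughly like this (standard App Engine users API; the URL mapping is just an example):

```python
import webapp2
from google.appengine.api import users

class MainPage(webapp2.RequestHandler):
    def get(self):
        user = users.get_current_user()
        if user:
            self.response.write('Hello, %s' % user.nickname())
        else:
            # send the visitor to Google's login page, then back here
            self.redirect(users.create_login_url(self.request.uri))

app = webapp2.WSGIApplication([('/', MainPage)])
```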
| 0 | 1 | 0 | 0 |
2012-06-21T01:32:00.000
| 1 | 1.2 | true | 11,130,434 | 0 | 0 | 1 | 1 |
With ease of implementation being a strong factor, but security also an issue, what would be the best user authentication method for Google App Engine? My goal is to have a small, very specific social network. I know how to make my own, but making it hack-proof is a little out of my league right now. I have looked at OpenID and a few others.
I am using Jinja2 as my template system and writing all of my web app code in python.
Thanks!
|
Is any other open source web server available other than Apache webserver for web application development?
| 11,132,477 | 3 | 0 | 1,701 | 0 |
python,apache,web-applications,webserver
|
"Apart from the Apache web server, are there any open source web servers available for web application development?" Are you looking for an HTTP server or a web framework? The two are quite different.
HTTP servers simply send/receive requests, among other tasks. Yes, you can use PHP and other tools, most commonly through CGI or FastCGI, but fundamentally an HTTP server just accepts HTTP requests; some content may be dynamic if it is coming from an underlying framework.
A web framework is a collection of tools used to generate dynamic content, or web apps. Many frameworks come with a built-in HTTP server so you don't have to configure one on your own, but those servers aren't as powerful or as robust, since the underlying framework tends to concentrate on generating the content.
nginx is one of my favorite HTTP servers, among the many out there, since it tends to be one of the easier ones to configure.
As for web frameworks, there are many out there. In the Python community (given the python tag) Django tends to be quite popular, since it includes virtually all the tools you'd ever need to deploy a web app: URL dispatching, a database layer with an ORM (Object Relational Mapper), and its own templating engine to render dynamic HTML in its own limited language, to keep as much logic as possible out of the rendering phase.
Usually Django apps are deployed behind nginx, to control multiple instances of sites on the server as well as to serve static content, which web frameworks are not great at.
There are also micro-frameworks like Bottle, which is basically a single Python file; it's quite cool. I usually use SQLAlchemy as the ORM when building simple Bottle apps.
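For example, a complete Bottle app is only a few lines (the route syntax may differ slightly between Bottle versions):

```python
from bottle import route, run

@route('/hello/<name>')
def hello(name):
    return 'Hello %s!' % name

run(host='localhost', port=8080)
```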
| 0 | 1 | 0 | 0 |
2012-06-21T05:33:00.000
| 3 | 0.197375 | false | 11,132,059 | 0 | 0 | 1 | 2 |
Apart from the Apache web server, are there any open source web servers available for web application development?
I am looking for a web server for developing Python web applications, deploying them and testing them.
|
Is any other open source web server available other than Apache webserver for web application development?
| 11,132,139 | 0 | 0 | 1,701 | 0 |
python,apache,web-applications,webserver
|
If you simply Google "Open Source Web Server" you'll get a lot of results.
Nginx
Lighttpd
Cherokee
Savant
Tornado
Nginx is probably the best alternative.
| 0 | 1 | 0 | 0 |
2012-06-21T05:33:00.000
| 3 | 0 | false | 11,132,059 | 0 | 0 | 1 | 2 |
Apart from the Apache web server, are there any open source web servers available for web application development?
I am looking for a web server for developing Python web applications, deploying them and testing them.
|
Troubles with http server on linux
| 11,219,337 | 0 | 0 | 93 | 0 |
python,performance,http
|
It looks like you have a problem with DNS. Can you check this idea by running host 192.168.1.100 on the host? Please also check that other DNS queries are being processed quickly.
Check the /etc/hosts file for a quick-and-dirty solution.
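One quick way to see whether the reverse lookup is the bottleneck, assuming 192.168.1.100 is the address the server tries to resolve for logging:

```python
import socket
import time

addr = '192.168.1.100'
start = time.time()
try:
    print(socket.gethostbyaddr(addr))      # the reverse lookup the server may be doing
except socket.herror as e:
    print('no reverse record: %s' % e)
print('lookup took %.1f seconds' % (time.time() - start))
```

If that call hangs for a long time, an /etc/hosts entry for the address should make the page load instantly.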
| 0 | 1 | 0 | 0 |
2012-06-21T16:17:00.000
| 2 | 0 | false | 11,142,427 | 0 | 0 | 1 | 1 |
I have the following problem. I have a local HTTP server (BottlePy or Django), and when I use http://localhost/ or http://127.0.0.1/ it loads immediately. But when I use my local IP (192.168.1.100), it takes a very long time to load (some minutes). What could be the problem?
The server runs on Ubuntu 11.
|
Suppress "None" output as string in Jinja2
| 11,146,693 | 67 | 62 | 38,596 | 0 |
python,jinja2
|
In new versions of Jinja2 (2.9+):
{{ value if value }}
In older versions of Jinja2 (prior to 2.9):
{{ value if value is not none }} works great.
If this raises an error about not having an else, add one:
{{ value if value is not none else '' }}
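A quick way to check the behaviour in an interpreter, using the plain Jinja2 API:

```python
from jinja2 import Template

t = Template(u"{{ value if value is not none else '' }}")
print(t.render(value=None))   # prints an empty string
print(t.render(value=0))      # prints 0, so a numeric zero is not swallowed
```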
| 0 | 0 | 0 | 0 |
2012-06-21T20:57:00.000
| 5 | 1.2 | true | 11,146,619 | 1 | 0 | 1 | 1 |
How do I persuade Jinja2 to not print "None" when the value is None?
I have a number of entries in a dictionary and I would like to output everything in a single loop instead of having special cases for different keywords. If I have a value of None (the NoneType not the string) then the string "None" is inserted into the template rendering results.
Trying to suppress it using
{{ value or '' }} works too well as it will replace the numeric value zero as well.
Do I need to filter the dictionary before passing it to Jinja2 for rendering?
|
Is the Visitor pattern useful for dynamically typed languages?
| 11,155,836 | 0 | 10 | 2,548 | 0 |
php,python,ruby,design-patterns,visitor-pattern
|
I think you are using Visitor Pattern and Double Dispatch interchangeably. When you say,
If I can work with a family of heterogeneous objects and call their public methods without any cooperation from the "visited" class, does this still deserve to be called the "Visitor pattern"?
and
write a new class that manipulates your objects from the outside to carry out an operation"?
you are describing double dispatch. Sure, the Visitor pattern is implemented via double dispatch, but there is something more to the pattern itself.
Each visitor is an algorithm over a group of elements (entities), and new visitors can be plugged in without changing the existing code: the Open/Closed principle.
When new element types are added frequently, the Visitor pattern is best avoided.
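To make that concrete, here is a rough Python sketch of a visitor-style class that dispatches on the element's type without any accept() method (all the names are made up for illustration):

```python
class Serializer(object):
    """One 'algorithm over a group of elements', pluggable without touching them."""

    def visit(self, node):
        handler = getattr(self, 'visit_' + type(node).__name__.lower(),
                          self.generic_visit)
        return handler(node)

    def visit_invoice(self, node):
        return 'invoice:%s' % node.total

    def generic_visit(self, node):
        return repr(node)


class Invoice(object):
    def __init__(self, total):
        self.total = total


print(Serializer().visit(Invoice(42)))   # -> invoice:42
```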
| 0 | 0 | 0 | 0 |
2012-06-22T10:43:00.000
| 6 | 0 | false | 11,154,668 | 0 | 0 | 1 | 2 |
The Visitor pattern allows operations on objects to be written without extending the object class. Sure. But why not just write a global function, or a static class, that manipulates my object collection from the outside? Basically, in a language like java, an accept() method is needed for technical reasons; but in a language where I can implement the same design without an accept() method, does the Visitor pattern become trivial?
Explanation: In the Visitor pattern, visitable classes (entities) have a method .accept() whose job is to call the visitor's .visit() method on themselves. I can see the logic of the java examples: The visitor defines a different .visit(n) method for each visitable type n it supports, and the .accept() trick must be used to choose among them at runtime. But languages like python or php have dynamic typing and no method overloading. If I am a visitor I can call an entity method (e.g., .serialize()) without knowing the entity's type or even the full signature of the method. (That's the "double dispatch" issue, right?)
I know an accept method could pass protected data to the visitor, but what's the point? If the data is exposed to the visitor classes, it is effectively part of the class interface since its details matter outside the class. Exposing private data never struck me as the point of the visitor pattern, anyway.
So it seems that in python, ruby or php I can implement a visitor-like class without an accept method in the visited object (and without reflection), right? If I can work with a family of heterogeneous objects and call their public methods without any cooperation from the "visited" class, does this still deserve to be called the "Visitor pattern"? Is there something to the essence of the pattern that I am missing, or does it just boil down to "write a new class that manipulates your objects from the outside to carry out an operation"?
PS. I've looked at plenty of discussion on SO and elsewhere, but could not find anything that addresses this question. Pointers welcome.
|
Is the Visitor pattern useful for dynamically typed languages?
| 47,449,075 | 0 | 10 | 2,548 | 0 |
php,python,ruby,design-patterns,visitor-pattern
|
The Visitor pattern does two things:
Allows for ad hoc polymorphism (the same function does different things for different "types").
Enables adding a new consuming algorithm without changing the provider of the data.
You can do the second in dynamic languages without Visitor or runtime type information, but the first requires some explicit mechanism, or a design pattern like Visitor.
| 0 | 0 | 0 | 0 |
2012-06-22T10:43:00.000
| 6 | 0 | false | 11,154,668 | 0 | 0 | 1 | 2 |
The Visitor pattern allows operations on objects to be written without extending the object class. Sure. But why not just write a global function, or a static class, that manipulates my object collection from the outside? Basically, in a language like java, an accept() method is needed for technical reasons; but in a language where I can implement the same design without an accept() method, does the Visitor pattern become trivial?
Explanation: In the Visitor pattern, visitable classes (entities) have a method .accept() whose job is to call the visitor's .visit() method on themselves. I can see the logic of the java examples: The visitor defines a different .visit(n) method for each visitable type n it supports, and the .accept() trick must be used to choose among them at runtime. But languages like python or php have dynamic typing and no method overloading. If I am a visitor I can call an entity method (e.g., .serialize()) without knowing the entity's type or even the full signature of the method. (That's the "double dispatch" issue, right?)
I know an accept method could pass protected data to the visitor, but what's the point? If the data is exposed to the visitor classes, it is effectively part of the class interface since its details matter outside the class. Exposing private data never struck me as the point of the visitor pattern, anyway.
So it seems that in python, ruby or php I can implement a visitor-like class without an accept method in the visited object (and without reflection), right? If I can work with a family of heterogeneous objects and call their public methods without any cooperation from the "visited" class, does this still deserve to be called the "Visitor pattern"? Is there something to the essence of the pattern that I am missing, or does it just boil down to "write a new class that manipulates your objects from the outside to carry out an operation"?
PS. I've looked at plenty of discussion on SO and elsewhere, but could not find anything that addresses this question. Pointers welcome.
|
How to tell if you have multiple Django's installed
| 11,166,438 | 5 | 7 | 2,635 | 0 |
python,django,installation,duplicates
|
Check out virtualenv and virtualenvwrapper
| 0 | 0 | 0 | 0 |
2012-06-23T01:15:00.000
| 3 | 0.321513 | false | 11,166,014 | 1 | 0 | 1 | 2 |
In the process of trying to install django, I had a series of failures. I followed many different tutorials online and ended up trying to install it several times. I think I may have installed it twice (which the website said was not a good thing), so how do I tell if I actually have multiple versions installed? I have a Mac running Lion.
|
How to tell if you have multiple Django's installed
| 11,166,539 | 9 | 7 | 2,635 | 0 |
python,django,installation,duplicates
|
Open a terminal and type python, then type import django, then type django; it will tell you the path to the Django you are importing. Go to that folder (it should look something like this: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/) and look for more than one instance of django (if there is more than one, they will be right next to each other). Delete the one(s) you don't want.
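An interactive session makes the same check explicit (the path and version shown here are only illustrative):

```python
>>> import django
>>> django.__file__        # which installation is actually being imported
'/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/__init__.pyc'
>>> django.get_version()
'1.4'
```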
| 0 | 0 | 0 | 0 |
2012-06-23T01:15:00.000
| 3 | 1.2 | true | 11,166,014 | 1 | 0 | 1 | 2 |
In the process of trying to install django, I had a series of failures. I followed many different tutorials online and ended up trying to install it several times. I think I may have installed it twice (which the website said was not a good thing), so how do I tell if I actually have multiple versions installed? I have a Mac running Lion.
|
Setting up Flask-SQLAlchemy
| 11,210,290 | 4 | 1 | 7,080 | 1 |
python,sqlalchemy,flask,flask-sqlalchemy
|
At the time you execute create_all, models.py has never been imported, so no class is declared. Thus, create_all does not create any table.
To solve this problem, import models before running create_all or, even better, don't separate the db object from the model declaration.
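A minimal single-module sketch of the second option; the import name and connection URI depend on your Flask-SQLAlchemy version and setup, and newer versions also want an application context around create_all:

```python
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://user:password@localhost/mydb'
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String(120), unique=True)

# User is declared above in the same module, so its table actually gets created
db.create_all()
```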
| 0 | 0 | 0 | 0 |
2012-06-23T06:47:00.000
| 3 | 0.26052 | false | 11,167,518 | 0 | 0 | 1 | 1 |
Trying to set up Flask and SQLAlchemy on Windows but I've been running into issues.
I've been using Flask-SQLAlchemy along with PostgreSQL 9.1.4 (32 bit) and the Psycopg2 package. Here are the relevant bits of code, I created a basic User model just to test that my DB is connecting, and committing.
The three bits of code would come from the __init__.py file of my application, the models.py file and my settings.py file.
When I try opening up my interactive prompt and try the code in the following link out I get a ProgrammingError exception (details in link).
What could be causing this? I followed the documentation and I'm simply confused as to what I'm doing wrong especially considering that I've also used Django with psycopg2 and PostgreSQL on Windows.
|
update text within html
| 11,167,788 | 1 | 1 | 119 | 0 |
python,google-app-engine,jinja2
|
What you are thinking about, and moving toward (whether you know it or not) is called a content management system.
Most of them store content in a database and provide a user interface to allow editing it, just as you're designing.
Perhaps you could use off-the-shelf parts? I don't know exactly which ones are appengine-based, but this is a very common task and I'm sure you'll save time by using others' work.
| 0 | 0 | 0 | 0 |
2012-06-23T07:40:00.000
| 2 | 1.2 | true | 11,167,771 | 0 | 0 | 1 | 1 |
So I have a python webapp on Google App Engine and am using the jinja2 template engine. I have a lot of text on the site that I want to update regularly, such as news sections and updates about the site.
What is the most efficient way to go about doing this? Clearly the simplest short-term solution and what I am currently doing is just to change the HTML but I would like to give others access to this without giving them access to the server side of things.
Should I just bite the bullet and write an interface on an admin page that allows users to edit it, and then have the server take this and render it in the News section? Any suggestions or tips would be great!
|
python libraries in cpanel
| 11,169,154 | 1 | 0 | 1,172 | 0 |
python,django,web-hosting,cpanel
|
You could try to put them on your PYTHONPATH. Usually, your current working directory is on your PYTHONPATH. If that changes, you might need to add a path to it (maybe in each file, or in one common file which is always included; you should check) and put the libraries there. You can do this with import sys; sys.path.append(the_path)
I'm not sure all of the libraries will work, but those which are pure Python you should be able to copy/paste into a directory, and I think they will work.
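A small sketch, assuming you upload the pure-Python libraries into a libs folder next to your own code (the folder name is just an example):

```python
import os
import sys

# make the bundled libraries importable without shell access to the host
LIBS = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'libs')
if LIBS not in sys.path:
    sys.path.append(LIBS)
```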
| 0 | 1 | 0 | 1 |
2012-06-23T10:23:00.000
| 2 | 1.2 | true | 11,168,747 | 0 | 0 | 1 | 2 |
I am using heliohost's free service to test my django apps. But Heliohost does not provide me shell access. Is there anyway to install python libraries on the host machine?
|
python libraries in cpanel
| 45,138,203 | 0 | 0 | 1,172 | 0 |
python,django,web-hosting,cpanel
|
You should contact HelioHost support. This host has very good support that will help you or install any package you want.
| 0 | 1 | 0 | 1 |
2012-06-23T10:23:00.000
| 2 | 0 | false | 11,168,747 | 0 | 0 | 1 | 2 |
I am using heliohost's free service to test my django apps. But Heliohost does not provide me shell access. Is there anyway to install python libraries on the host machine?
|
List all memcached keys/values
| 11,176,318 | 2 | 2 | 839 | 0 |
python,caching,memcached
|
There's no way to do it that's guaranteed to work. The only way I found is the one you'll find on Google, but there's a restriction: only 1 MB will be returned, so it may not be all keys. And it will probably be quite slow.
If you really, really have to have all those keys, you'd probably have to hack the source code.
I would say: no, you can't.
Why do you need all those keys? I would consider redesigning your application so that your admin panel does not depend on the internals of a caching server.
| 0 | 0 | 0 | 1 |
2012-06-24T09:15:00.000
| 1 | 0.379949 | false | 11,176,273 | 0 | 0 | 1 | 1 |
I know this question has been asked many times and it's also covered in official memcached FAQ. But my case is - I want to use it just for admin panel purposes. I want to see keys with values in my admin page so it doesn't matter if it's slow and against the best practices. Please advise, if it's possible.
|
Jython - is it faster to use Python data structures or Java ones?
| 11,178,285 | 5 | 6 | 418 | 0 |
java,python,jython
|
The point of using Jython is that you can write Python code and have it run on the JVM. Don't ruin that by making your Python into Java.
If -- if -- it turns out that your data structure is too slow, you can drop-in replace it with a Java version. But that's for the optimisation stage of programming, which comes later.
I guess I should try to answer your question. I would guess that using native Java structures will be faster (because the JVM can infer more about them than the Python interpreter can), but that might be counterbalanced by the extra processing needed to interface with Jython. Only tests will tell!
| 0 | 0 | 0 | 1 |
2012-06-24T14:24:00.000
| 2 | 0.462117 | false | 11,178,243 | 1 | 0 | 1 | 2 |
I'm trying to understand whether and under what circs one should use Python classes and/or Java ones.
If making a specialist dictionary/Map kind of class, should one subclass from Python's dict, or from Java's HashMap or TreeMap, etc.?
It is tempting to use the Python ones just because they are simpler and sexier. But one reason that Jython runs relatively slowly (so it appears to me to do) seems to have something to do with the dynamic typing. I'd better say I'm not that clear about all this, and haven't spent nocturnal hours poring over the Python/Jython interpreter code, to my shame.
Anyway it just occurs to me that the Java classes might possibly run faster because the code might have to do less work. OTOH maybe it has to do more. Or maybe there's nothing in it. Anyone know?
|
Jython - is it faster to use Python data structures or Java ones?
| 11,179,152 | 4 | 6 | 418 | 0 |
java,python,jython
|
Generally, the decision shouldn't be one of speed - the Python classes will be implemented in terms of Java classes anyway, even if they don't inherit from them. So, the speed should be roughly comparable, and at most you would save a couple of method calls per operation.
The bigger question is what you plan on doing with your class. If you're using it with Python APIs, you'll want to use the Python types, or something that behaves like them so that you don't have to do the work of implementing the entire Mapping protocol (only the bits your class changes). If you're using Java APIs, you will certainly need to meet the static type checks - which means you'll need to inherit from Java's classes.
If this isn't easy to answer in your situation, start with the Python ones, since you (correctly ;-) find them "simpler and sexier". If your class doesn't pass outside the boundaries of your project, then this should be trivial to change later if the speed really becomes an issue - and at that point, you might also be thinking about questions like "could it help to implement it entirely at the Java level?" which you've hopefully recognised would be premature optimisation to think about now.
| 0 | 0 | 0 | 1 |
2012-06-24T14:24:00.000
| 2 | 1.2 | true | 11,178,243 | 1 | 0 | 1 | 2 |
I'm trying to understand whether and under what circs one should use Python classes and/or Java ones.
If making a specialist dictionary/Map kind of class, should one subclass from Python's dict, or from Java's HashMap or TreeMap, etc.?
It is tempting to use the Python ones just because they are simpler and sexier. But one reason that Jython runs relatively slowly (so it appears to me to do) seems to have something to do with the dynamic typing. I'd better say I'm not that clear about all this, and haven't spent nocturnal hours poring over the Python/Jython interpreter code, to my shame.
Anyway it just occurs to me that the Java classes might possibly run faster because the code might have to do less work. OTOH maybe it has to do more. Or maybe there's nothing in it. Anyone know?
|
How can I resolve this python and django settings import idiosyncrasy?
| 11,202,467 | 2 | 0 | 116 | 0 |
python,django,import,settings
|
No. Use from ... import * or execfile() in settings/__init__.py to load the appropriate files.
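A sketch of what that settings/__init__.py could look like, using the module names from the layout in the question (whichever module is imported last wins the overrides):

```python
# settings/__init__.py  (sketch)
from settings.common import *            # base settings first

# then the overrides you want for this deployment
from settings.configs.constants1 import *
from settings.debug.include1 import *
```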
| 0 | 0 | 0 | 0 |
2012-06-26T01:53:00.000
| 2 | 1.2 | true | 11,199,797 | 1 | 0 | 1 | 1 |
I have a file layout like this:
settings/
----__init__.py
----common.py
----configs/
--------constants1.py
--------constants2.py
----debug/
--------include1&2.py
--------include1.py
--------include2.py
and when I import settings.debug.include1, I would like the settings file to execute/import common.py then override the settings in common.py with the proper constants file. Problem is, this isn't happening. Is there a way to accomplish my goals in this fashion?
|
No module named _imagingcms on OSX
| 11,204,486 | 0 | 1 | 550 | 0 |
python,django,python-imaging-library,importerror
|
Oh, installing lcms from macports and reinstalling PIL helped.
| 0 | 0 | 0 | 0 |
2012-06-26T09:05:00.000
| 1 | 1.2 | true | 11,204,002 | 0 | 0 | 1 | 1 |
Today I tried to test a Django project on my MacBook, but whenever I try to start it, I get the same error: "Error: No module named _imagingcms".
It seems like something is missing from PIL. I've tried to reinstall PIL, but it does not help.
What should I do?
|
Interesting Stock Tick Data Scenario
| 11,211,502 | 2 | 1 | 441 | 0 |
java,php,c++,python,ruby
|
Perhaps I am missing something...
It appears you want to keep the data in 5-minute buckets, but you can't be sure you have all the data for a bucket until up to 10 seconds after it has rolled over.
This means that for each instrument you need to keep the current bucket and the previous bucket. When it is 10 seconds past the 5-minute boundary, you can publish/write out the old bucket.
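A rough Python sketch of that bookkeeping; it keeps every bucket that has not yet expired (which covers the "current + previous" case) and flushes on wall-clock time, 10 seconds after a bucket's end. All names are illustrative:

```python
BUCKET_SECONDS = 5 * 60
GRACE_SECONDS = 10
PER_DAY = 86400 // BUCKET_SECONDS

buckets = {}   # absolute bucket index -> {stock_id: [(ts, price, qty), ...], -1: slot-of-day}

def bucket_index(ts_micro):
    return (ts_micro // 1000000) // BUCKET_SECONDS

def add_tick(stock_id, ts_micro, price, qty):
    idx = bucket_index(ts_micro)
    bucket = buckets.setdefault(idx, {-1: idx % PER_DAY})
    bucket.setdefault(stock_id, []).append((ts_micro, price, qty))

def flush_expired(now_seconds, write_out):
    # a bucket is safe to hand to the data wrapper once the wall clock is
    # 10 seconds past the end of its 5-minute window
    for idx in sorted(buckets):
        if now_seconds >= (idx + 1) * BUCKET_SECONDS + GRACE_SECONDS:
            write_out(buckets.pop(idx))
```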
| 0 | 0 | 0 | 0 |
2012-06-26T15:54:00.000
| 1 | 1.2 | true | 11,211,228 | 1 | 0 | 1 | 1 |
Alright so this problem has been breaking my brain all day today.
The Problem: I am currently receiving stock tick data at an extremely high rate through multicasts. I have already parsed this data and am receiving it in the following form.
-StockID: Int-64
-TimeStamp: Microseconds from Epoch
-Price: Int
-Quantity: Int
Hundreds of these packets of data are parsed every second. I am trying to reduce the computation on my storage end by packaging up this data into dictionaries/hashtables hashed by the stockID (key == stockID)(value == array of [timestamp, price, quantity] elements).
I also want each dictionary to represent timestamps within a 5min interval. When the incoming data's timestamps get past the 5min time interval, I want this new data to go into a new dictionary that represents the next time interval. Also, a special key will be hashed at key -1 telling what 5 particular minute interval per day does this dictionary belong to (so if you receive something at 12:32am, it should hash into the dictionary that has value 7 at key -1, since this represents the time interval of 12:30am to 12:35am for that particular day). Once the time passes, the dict that has its time expired can be sent off to the dataWrapper.
Now, you might be coming up with some ideas right about now. But here's a big constraint. The timestamps that are coming in Are not necessarily strictly increasing; however, if one waits about 10 seconds after an interval has ended then it can be safe to assume that every data coming in belongs to the current interval.
The reason I am doing all this complicated things is to reduce computation on the storage side of my application. With the setup above, my storage side thread can simply iterate over all of the key, value pairs within the dictionary and store them in the same location on the storage system without having to reopen files, reassign groups or change directories.
Good Luck! I will greatly appreciate ANY answers btw. :)
Preferred if you can send me something in python (that's what I'm doing the project in), but I can perfectly understand Java, C++, Ruby or PHP.
Summary
I am trying to put stock data into dictionaries, where each dictionary represents a 5-minute interval. The timestamp that comes with the data determines which particular dictionary it should be put in. This would be relatively easy except that timestamps are not strictly increasing as they come in, so a dictionary cannot be sent off to the dataWrapper immediately once 5 minutes have passed according to the timestamps; more data may still arrive for it for up to 10 seconds, after which it is okay to send it to the wrapper.
I just want any kind of ideas, algorithms, or partial implementations that could help me with the scheduling of this. How can we coordinate the two clocks involved: the timestamps (for the data) and actual time (the 10-second buffer)?
Clarification Edit
The 5 min window should be data driven (based upon timestamps), however the 10 second timeout appears to be clock time.
|
Efficient approach to catching database errors
| 11,215,911 | 1 | 2 | 237 | 1 |
python,database,sqlite,error-handling
|
Your gut feeling is right. There is no way to add robustness to the application without reviewing each database access point separately.
You still have a lot of important choice at how the application should react on errors that depends on factors like,
Is it attended, or sometimes completely unattended?
Is delay OK, or is it important to report database errors promptly?
What are the relative frequencies of the three types of failure that you describe?
Now that you have a single wrapper, you can use it to do some common configuration and error handling, especially:
set reasonable connect timeouts
set reasonable busy timeouts
enforce command timeouts on client side
retry automatically on errors, especially on SQLITE_BUSY (insert large delays between retries, fail after a few retries); see the sketch after this list
use exceptions to reduce the number of application level handlers. You may be able to restart the whole application on database errors. However, do that only if you have confidence as to in which state you are aborting the application; consistent use of transactions may ensure that the restart method does not leave inconsistent data behind.
ask a human for help when you detect a locking error
...but there comes a moment where you need to bite the bullet and let the error out into the application, and see what all the particular callers are likely to do with it.
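A sketch of what the common access function could look like, with a busy timeout and a couple of retries on sqlite3.OperationalError (which covers the "database is locked" case); the delays and retry count are arbitrary:

```python
import sqlite3
import time

def run_query(db_path, sql, params=(), retries=3, delay=0.5):
    """Single database access point: retries transient errors, then re-raises."""
    for attempt in range(retries):
        try:
            con = sqlite3.connect(db_path, timeout=5)   # wait on locks instead of failing at once
            try:
                cur = con.execute(sql, params)
                rows = cur.fetchall()
                con.commit()
                return rows
            finally:
                con.close()
        except sqlite3.OperationalError:
            if attempt == retries - 1:
                raise          # let the caller's error handling tell the user
            time.sleep(delay)
```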
| 0 | 0 | 0 | 0 |
2012-06-26T20:38:00.000
| 1 | 1.2 | true | 11,215,535 | 0 | 0 | 1 | 1 |
I have a desktop app that has 65 modules, about half of which read from or write to an SQLite database. I've found that there are 3 ways that the database can throw an SQliteDatabaseError:
SQL logic error or missing database (happens unpredictably every now and then)
Database is locked (if it's being edited by another program, like SQLite Database Browser)
Disk I/O error (also happens unpredictably)
Although these errors don't happen often, when they do they lock up my application entirely, and so I can't just let them stand.
And so I've started re-writing every single access of the database to be a pointer to a common "database-access function" in its own module. That function then can catch these three errors as exceptions and thereby not crash, and also alert the user accordingly. For example, if it is a "database is locked error", it will announce this and ask the user to close any program that is also using the database and then try again. (If it's the other errors, perhaps it will tell the user to try again later...not sure yet). Updating all the database accesses to do this is mostly a matter of copy/pasting the redirect to the common function--easy work.
The problem is: it is not sufficient to just provide this database-access function and its announcements, because at all of the points of database access in the 65 modules there is code that follows the access that assumes the database will successfully return data or complete a write--and when it doesn't, that code has to have a condition for that. But writing those conditionals requires carefully going into each access point and seeing how best to handle it. This is laborious and difficult for the couple of hundred database accesses I'll need to patch in this way.
I'm willing to do that, but I thought I'd inquire if there were a more efficient/clever way or at least heuristics that would help in finishing this fix efficiently and well.
(I should state that there is no particular "architecture" of this application...it's mostly what could be called "ravioli code", where the GUI and database calls and logic are all together in units that "go together". I am not willing to re-write the architecture of the whole project in MVC or something like this at this point, though I'd consider it for future projects.)
|
django upload file to custom location
| 11,218,464 | 0 | 0 | 303 | 0 |
python,django,django-file-upload
|
try: models.FileField(upload_to = '...')
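Note that upload_to on its own is resolved relative to MEDIA_ROOT, so to point somewhere else entirely you can also hand the field a custom storage; a sketch (the path is just an example):

```python
from django.core.files.storage import FileSystemStorage
from django.db import models

template_storage = FileSystemStorage(location='/path/to/myproject/templates')

class Page(models.Model):
    html_file = models.FileField(storage=template_storage, upload_to='uploads')
```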
| 0 | 0 | 0 | 0 |
2012-06-27T01:59:00.000
| 2 | 0 | false | 11,218,393 | 0 | 0 | 1 | 1 |
Is it possible to upload a file in Django using Django's models.FileField() to a location that's not relative to /media? In my case, upload an .html file to myproject/templates.
|
APPDATA is not returned in Python executed via CGI
| 11,246,816 | 1 | 1 | 496 | 0 |
python,apache,cgi,nltk,appdata
|
%APPDATA% is a special variable that expands to the "Application Data" directory of the user who expands the variable (i.e., who runs a script). Apache is not running as you, so it has no business knowing about your APPDATA directory.
You should either hard-code the relevant path into your script, or replace it with a path relative to the location of the script, e.g., r'..\data\nltk_data'. If you really need to, you can recover the absolute location of your script by looking at __file__.
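For the NLTK case specifically, a sketch of pointing it at a script-relative data directory instead of relying on %APPDATA% (the directory layout is an example):

```python
import os
import nltk.data

# resolve a data directory relative to this CGI script, since Apache's
# environment has no APPDATA for us to lean on
HERE = os.path.dirname(os.path.abspath(__file__))
nltk.data.path.append(os.path.join(HERE, '..', 'data', 'nltk_data'))
```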
| 0 | 1 | 0 | 0 |
2012-06-27T04:16:00.000
| 3 | 0.066568 | false | 11,219,319 | 0 | 0 | 1 | 1 |
I'm using Python with the NLTK toolkit in Apache via CGI. The toolkit needs to know the APPDATA directory, but when executed on the server, os.environ does not list APPDATA.
When I execute a simple print os.environ in the console, APPDATA is present, but not when executed via CGI in the web server.
What is going on? How can I solve this? I'm new to Python and I'm just learning it.
|
how long will django store temporary files?
| 11,221,675 | 5 | 2 | 457 | 0 |
python,django,file-upload
|
how long will this file be retained in memory?
Are you talking about the temporary file on the filesystem? In that case, on a Unix platform, usually until you reboot. If you're talking about uploaded files in RAM, it probably stays in there at least until the request/response cycle is done. But that shouldn't really matter to you, you'll have to handle the uploaded file in the response processing code anyways. Otherwise, you won't have any reference to it anymore.
will each upload have a unique name regardless if the same file is
uploaded twice?
Yes.
| 0 | 0 | 0 | 0 |
2012-06-27T07:43:00.000
| 1 | 1.2 | true | 11,221,544 | 0 | 0 | 1 | 1 |
"if an uploaded file is too large, Django will write the uploaded file to a temporary file stored in your system's temporary directory. On a Unix-like platform this means you can expect Django to generate a file called something like /tmp/tmpzfp6I6.upload. If an upload is large enough, you can watch this file grow in size as Django streams the data onto disk."
This is taken from Django's documentation.
My question is how long will this file be retained in memory? and will each upload have a unique name regardless if the same file is uploaded twice?
|
Take HTML form data to python
| 11,229,283 | 1 | 0 | 600 | 0 |
python,html
|
If you are just trying to execute your application from the web application you would like to create, then you can go for anything from bare CGI scripts (in, say ... Perl) through PHP scripts and even Django (a Python-based web framework). It all depends on what you would like to do :)
If your intention is to integrate your Python app with the web app, you can try doing it in Django web framework.
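If you go the plain CGI route, a minimal Python 2 style handler looks something like this (the field name message is whatever your <input> is called):

```python
#!/usr/bin/env python
import cgi

form = cgi.FieldStorage()                 # parses the submitted form data
message = form.getvalue('message', '')    # matches <input name="message" ...>

print("Content-Type: text/html\n")
print("<p>You submitted: %s</p>" % cgi.escape(message))
```

Your form's action attribute would then point at the URL where this script is served.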
| 0 | 0 | 0 | 0 |
2012-06-27T14:45:00.000
| 2 | 1.2 | true | 11,228,878 | 0 | 0 | 1 | 1 |
I want to take HTML form data and handle the submitted data (a string, for example) with my Python application.
The HTML file with the form will be stored locally and values will be entered from a browser.
I then want to pass the submitted values to my Python application.
What do I put in the form action, and how do I link it to my application? Please point me in the right direction.
BR,
|
How to run a file that uses django models (large block of code) in Pycharm
| 11,231,045 | 1 | 1 | 1,138 | 0 |
python,django,pycharm
|
If your site loads, you should put the import of the models into one of your Django views.
In a view you can do whatever you like with the models.
| 0 | 0 | 0 | 0 |
2012-06-27T16:31:00.000
| 3 | 1.2 | true | 11,230,979 | 1 | 0 | 1 | 1 |
So I have a chunk of code that declares some classes, creates data, uses django to actually save them to the database. My question is how do I actually execute it?
I am using PyCharm and have the file open. But I have no clue how to actually execute it. I can execute line by line in Django Console, but if it's more than that it can't handle the indentation.
The project itself runs fine (127.0.0.1 loads my page). How can I accomplish this?
I am sorry if this a completely obvious answer, I've been struggling with this for a bit.
|
Test iFrame "top === self" in Python Tornado
| 11,232,414 | 4 | 0 | 152 | 0 |
python,iframe,tornado
|
Javascript has access to the browser context but a templating system will only have access to the request object.
If you control the creation of the iframe in question, for instance if that is happening on another part of your site, you might be able to pass get parameters in to the templating system or something... But in general this is something you have to do with javascript. Add javascript directly to your template or (better) include a javascript file. You can expose both the iframed and the non-iframed versions of your page in the template and have javascript select which one to show once it hits the browser.
| 0 | 0 | 0 | 0 |
2012-06-27T18:03:00.000
| 1 | 1.2 | true | 11,232,351 | 0 | 0 | 1 | 1 |
In Tornado, you can do if statements in the HTML such as {% if true %} do stuff {% end %}. I'd like to check if the page is within an iframe.
In Javascript, it would be something like: if (top === self) { not in a frame } else { in a frame }
How can I do this in with Tornado?
|
Preferred (or recommended) way to store large amounts of simulation configurations, runs values and final results
| 11,244,121 | 1 | 7 | 2,845 | 0 |
python,database,design-patterns,simulation
|
It sounds like you need to record more or less the same kinds of information for each case, so a relational database sounds like a good fit-- why do you think it's "not the proper way"?
If your data fits in a collection of CSV files, you're most of the way to a relational database already! Just store in database tables instead, and you have support for foreign keys and queries. If you go on to implement an object-oriented solution, you can initialize your objects from the database.
| 0 | 0 | 0 | 0 |
2012-06-28T10:12:00.000
| 3 | 0.066568 | false | 11,242,387 | 0 | 0 | 1 | 2 |
I am working with some network simulator. After making some extensions to it, I need to make a lot of different simulations and tests. I need to record:
simulation scenario configurations
values of some parameters (e.g. buffer sizes, signal qualities, position) per devices per time unit t
final results computed from those recorded values
Second data is needed to perform some visualization after simulation was performed (simple animation, showing some statistics over time).
I am using Python with matplotlib etc. for post-processing the data and for writing a proper app (now considering pyQt or Django, but this is not the topic of the question). Now I am wondering what would be the best way to store this data?
My first guess was to use XML files, but it can be too much overhead from the XML syntax (I mean, files can grow up to very big sizes, especially for the second part of the data type). So I tried to design a database... But this also seems to me to be not the proper way... Maybe a mix of both?
I have tried to find some clues in Google, but found nothing special. Have you ever had a need for storing such data? How have you done that? Is there any "design pattern" for that?
|
Preferred (or recommended) way to store large amounts of simulation configurations, runs values and final results
| 11,244,431 | 1 | 7 | 2,845 | 0 |
python,database,design-patterns,simulation
|
If your data structures are well-known and stable AND you need some of the SQL querying / computation features then a light-weight relational DB like SQLite might be the way to go (just make sure it can handle your eventual 3+GB data).
Else - ie, each simulation scenario might need a dedicated data structure to store the results -, and you don't need any SQL feature, then you might be better using a more free-form solution (document-oriented database, OO database, filesystem + csv, whatever).
Note that you can still use a SQL db in the second case, but you'll have to dynamically create tables for each resultset, and of course dynamically create the relevant SQL queries too.
| 0 | 0 | 0 | 0 |
2012-06-28T10:12:00.000
| 3 | 0.066568 | false | 11,242,387 | 0 | 0 | 1 | 2 |
I am working with some network simulator. After making some extensions to it, I need to make a lot of different simulations and tests. I need to record:
simulation scenario configurations
values of some parameters (e.g. buffer sizes, signal qualities, position) per devices per time unit t
final results computed from those recorded values
Second data is needed to perform some visualization after simulation was performed (simple animation, showing some statistics over time).
I am using Python with matplotlib etc. for post-processing the data and for writing a proper app (now considering pyQt or Django, but this is not the topic of the question). Now I am wondering what would be the best way to store this data?
My first guess was to use XML files, but it can be too much overhead from the XML syntax (I mean, files can grow up to very big sizes, especially for the second part of the data type). So I tried to design a database... But this also seems to me to be not the proper way... Maybe a mix of both?
I have tried to find some clues in Google, but found nothing special. Have you ever had a need for storing such data? How have you done that? Is there any "design pattern" for that?
|
what's wrong with web.py? SystemError: error return without exception set
| 11,488,215 | 0 | 1 | 363 | 0 |
python,web.py
|
Not sure, but I'd switch to WSGI anyway; it's faster and easy to use. Do you get that error when running the built-in webserver?
| 0 | 0 | 0 | 0 |
2012-06-28T11:04:00.000
| 1 | 0 | false | 11,243,256 | 0 | 0 | 1 | 1 |
When uploading a file with web.py, an exception "SystemError: error return without exception set" is raised.
Here's the traceback:
...
File "../web/template.py", line 882, in __call__
return BaseTemplate.__call__(self, *a, **kw)
File "../web/template.py", line 809, in __call__
return self.t(*a, **kw)
File "", line 193, in __template__
File "../web/webapi.py", line 276, in input
out = rawinput(_method)
File "../web/webapi.py", line 249, in rawinput
a = cgi.FieldStorage(fp=fp, environ=e, keep_blank_values=1)
File "../python2.7/cgi.py", line 508, in __init__
self.read_multi(environ, keep_blank_values, strict_parsing)
File "../python2.7/cgi.py", line 632, in read_multi
environ, keep_blank_values, strict_parsing)
File "../python2.7/cgi.py", line 510, in __init__
self.read_single()
File "../python2.7/cgi.py", line 647, in read_single
self.read_lines()
File "../python2.7/cgi.py", line 669, in read_lines
self.read_lines_to_outerboundary()
File "../python2.7/cgi.py", line 697, in read_lines_to_outerboundary
line = self.fp.readline(1
"""
def POST(self):
x = web.input(myfile= {})
return x.myfile.file.read()
|
Is it possible to invoke a python script from jsp?
| 11,244,145 | 1 | 1 | 8,657 | 0 |
java,python,jsp
|
It would be neater to expose your Python API as RESTful services that JSP can access using Ajax to display data in the page. I'm specifically suggesting this because you said 'JSP', not 'Java'.
| 0 | 0 | 0 | 1 |
2012-06-28T11:53:00.000
| 3 | 0.066568 | false | 11,244,049 | 0 | 0 | 1 | 1 |
I want to create a UI which invokes my Python script. Can I do it using JSP? If so, can you please explain how? Or can I do it using some other language? I have gone through many posts related to it but could not find much. Please help me out; explanations using examples would be most helpful.
Thanks In Advance..
|
store openid user in cookie google appengine
| 11,249,560 | 1 | 0 | 191 | 0 |
python,google-app-engine,cookies,openid
|
users.get_current_user() is actually reading the cookies so you don't need to do anything more to optimize it (you can easily verify it by deleting your cookies and then refreshing the page). Unless you want to store more information and have access to them without accessing the datastore on every request.
| 0 | 1 | 0 | 0 |
2012-06-28T16:47:00.000
| 1 | 1.2 | true | 11,249,313 | 0 | 0 | 1 | 1 |
I am using OpenID as a login system for a Google App Engine website, and right now on every page I am just passing the user info using user = users.get_current_user().
Would using a cookie to do this be more efficient? (I know it would be easier than putting that in every single webpage.) And is there any special way to do it with Google App Engine? I already have a cookie counting visits, but I would imagine it'll be a little different.
Update: Could I do self.user = users.get_current_user() as a global variable and then pass in user=self.user on every page to have access to that variable?
Thanks!
|
Django: Generate models from database vs database from models
| 11,269,451 | 2 | 0 | 362 | 0 |
python,mysql,django,django-models,code-generation
|
If you're starting a new project, always let Django generate the tables. Django provides mechanisms to use a pre-existing database to support legacy data or situations where you aren't in direct control of the database structure. This is a nice feature of Django for those who need it, but it creates more work for the developer and breaks many conventions in Django.
You still can, and in fact are encouraged to, add additional indexes and any other database-specific optimizations available to you after the fact.
| 0 | 0 | 0 | 0 |
2012-06-29T21:35:00.000
| 1 | 1.2 | true | 11,269,277 | 0 | 0 | 1 | 1 |
I'm getting myself acquainted with MVC by making a 'for-fun' site in django. I'm really liking a lot of the features of the api and python itself. But one thing I really didn't like was that django encourages you to let it build your database FOR you. I'm well aware of inspectDB and I'm interested in using it.
So my question is this, is there any solid reason that I should choose one method of model/DB generation over another? I feel far more comfortable defining my database in the traditional SQL way (where I have access to combined keys). But I'm concerned using things that aren't available through the Model api may cause problems for me later on. Such as combined keys, medium_text, etc.
I'm using mysql btw.
|
store a calculated value in the datastore or just calculate it on the fly
| 11,269,923 | 1 | 0 | 303 | 0 |
python,database,performance,google-app-engine,google-cloud-datastore
|
Caching is most useful when the calculation is expensive. If the calculation is simple and cheap, you might as well recalculate as needed.
| 0 | 0 | 0 | 0 |
2012-06-29T22:45:00.000
| 3 | 0.066568 | false | 11,269,862 | 0 | 0 | 1 | 3 |
I'm writing an app in Python for Google App Engine where each user can submit a post, and each post has a ranking which is determined by its votes and comment count. The ranking is just a simple calculation based on these two parameters. I am wondering whether I should store this value in the datastore (and take up space there) or just calculate it every time that I need it. Now, just FYI, the posts will be sorted by ranking, so that needs to be taken into account.
I am mostly thinking for the sake of efficiency and trying to balance if I should try and save the datastore room or save the read/write quota.
I would think it would be better to simply store it but then I need to recalculate and rewrite the ranking value every time anyone votes or comments on the post.
Any input would be great.
|
store a calculated value in the datastore or just calculate it on the fly
| 11,270,514 | 2 | 0 | 303 | 0 |
python,database,performance,google-app-engine,google-cloud-datastore
|
What about storing the ranking as a property on the post? That would make sense for querying/sorting, wouldn't it.
If you store the ranking at the same time (meaning in the same entity) as you store the votes/comment count, then the only increase in write cost would be for the index (OK, the initial write cost too, but that is only about 2 writes, which is very small anyway).
You need to do a database operation every time anyone votes or comments on the post anyway, right?! How else can you track votes/comments?
Actually though, I imagine you will get into using text search to find data in the posts. If so, I would look into maybe storing the ranking as a property in the search index and using it to rank matching results.
Don't we need to consider how you are selecting the posts to display? Is ranking by votes and comments the only criterion?
| 0 | 0 | 0 | 0 |
2012-06-29T22:45:00.000
| 3 | 1.2 | true | 11,269,862 | 0 | 0 | 1 | 3 |
I'm writing an app in Python for Google App Engine where each user can submit a post, and each post has a ranking which is determined by its votes and comment count. The ranking is just a simple calculation based on these two parameters. I am wondering whether I should store this value in the datastore (and take up space there) or just calculate it every time that I need it. Now, just FYI, the posts will be sorted by ranking, so that needs to be taken into account.
I am mostly thinking for the sake of efficiency and trying to balance if I should try and save the datastore room or save the read/write quota.
I would think it would be better to simply store it but then I need to recalculate and rewrite the ranking value every time anyone votes or comments on the post.
Any input would be great.
|
store a calculated value in the datastore or just calculate it on the fly
| 11,271,252 | 1 | 0 | 303 | 0 |
python,database,performance,google-app-engine,google-cloud-datastore
|
If you're depending on keeping a running vote count in an entity, then you either have to be willing to lose an occasional vote, or you have to use transactions. If you use transactions, you're rate limited as to how many transactions you can do per second. (See the doc on transactions and entity groups). If you're liable to have a high volume of votes, rate limiting can be a problem.
For a low rate of votes, keeping a count in an entity might work fine. But if you have any significant peaks in voting rate, storing separate Vote entities that periodically get rolled up into a cached count, perhaps adjusted by (possibly unreliable) incremental counts kept in memcache, might work better for you.
It really depends on what you want to optimize for. If you're trying to minimize disk writes by keeping a vote count cached non-transactionally, you risk losing votes.
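If you do go the transactional route, the write itself stays small; a rough ndb sketch, with a placeholder model and formula, could look like:

from google.appengine.ext import ndb

@ndb.transactional
def add_vote(post_key):
    # Read-modify-write inside a transaction so concurrent votes are not
    # lost, at the cost of the per-entity-group write rate limit above.
    post = post_key.get()
    post.vote_count += 1
    post.ranking = 2 * post.vote_count + post.comment_count  # placeholder
    post.put()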
| 0 | 0 | 0 | 0 |
2012-06-29T22:45:00.000
| 3 | 0.066568 | false | 11,269,862 | 0 | 0 | 1 | 3 |
I am writing an app in Python for Google App Engine where each user can submit a post, and each post has a ranking determined by its votes and comment count. The ranking is just a simple calculation based on these two parameters. I am wondering whether I should store this value in the datastore (and take up space there) or just calculate it every time I need it. Just FYI, the posts will be sorted by ranking, so that needs to be taken into account.
I am mostly thinking about efficiency, trying to balance whether I should save datastore space or save read/write quota.
I would think it would be better to simply store it, but then I need to recalculate and rewrite the ranking value every time anyone votes or comments on the post.
Any input would be great.
|
use standard datastore index or build my own
| 11,270,908 | 0 | 0 | 82 | 1 |
python,google-app-engine,indexing,google-cloud-datastore
|
I'd suggest using pre-existing code and building around that instead of re-inventing the wheel.
| 0 | 1 | 0 | 0 |
2012-06-30T00:18:00.000
| 2 | 0 | false | 11,270,434 | 0 | 0 | 1 | 1 |
I am running a webapp on Google App Engine with Python. My app lets users post topics and respond to them, and the website is basically a collection of these posts categorized onto different pages.
I only have around 200 posts and 30 visitors a day right now, but that is already taking up nearly 20% of my reads and 10% of my writes with the datastore. I am wondering if it is more efficient to use App Engine's built-in get_by_id() function to retrieve posts by their IDs, or if it is better to build my own. For some of the queries I will simply have to use GQL or the built-in query language, because they are retrieved on more than just an ID, but I wanted to see which was better.
Thanks!
|
What kind of authorization I should use for my facebook application
| 11,275,174 | 1 | 1 | 90 | 0 |
python,django,oauth,google-api,gdata-api
|
Django has some packages like django-facebook or django-social-auth which manage the authentication part of facebook login for you. You could either use these in your project, or look at the code there as a good starting point to learn about FB OAuth implementation.
| 0 | 0 | 0 | 0 |
2012-06-30T15:03:00.000
| 1 | 1.2 | true | 11,275,133 | 0 | 0 | 1 | 1 |
I am building a social reader Facebook application using Django where I am using Google Data API (Blogger API). But I am unable to deal with the authorization step to use the Google API (currently using ClientLogin under development).
I tried to read the OAuth documentation but couldn't figure out how to proceed. I don't want my users to have to provide any Google login credentials, which would make the app completely absurd.
So, can anyone help me with my project and tell me what kind of authorization I should actually use for the Google API, and how? (I am using the gdata lib.)
|
Serving files through flask + redis
| 11,277,333 | 1 | 0 | 723 | 0 |
python,ajax,redis,download,flask
|
Normally you would configure your webserver so that URLs that refer to static files are handled directly by the server, rather than going through Flask.
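That said, if the bytes really do have to come out of the application (for example because availability is decided per request, or the data lives in Redis), a rough Flask sketch might look like the following; the key layout, route, and MIME type are assumptions:

import io

import redis
from flask import Flask, jsonify, send_file

app = Flask(__name__)
store = redis.StrictRedis()

@app.route('/files/<name>')
def get_file(name):
    data = store.get('file:' + name)  # assumed key layout
    if data is None:
        # The jQuery caller can check for this and show an
        # "unavailable" message or swap a div instead.
        response = jsonify(available=False)
        response.status_code = 404
        return response
    return send_file(io.BytesIO(data),
                     mimetype='application/octet-stream',
                     as_attachment=True,
                     attachment_filename=name)

On the client side, a jQuery $.ajax call (or a plain link) can treat the 404/JSON case as "unavailable" and swap a div, and simply follow the URL when the file exists.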
| 0 | 0 | 0 | 0 |
2012-06-30T20:13:00.000
| 1 | 1.2 | true | 11,277,323 | 0 | 0 | 1 | 1 |
This might be a bit of a noob question, so I apologize in advance.
How do I make a web server running flask+redis serve binary files as a response to a link/query?
I want the response to the link to be either some AJAX action such as changing a div, or popping up an "unavailable" response, or to serve back some binary file.
I would like help both with the client side (jQuery / other Javascript) and the server side.
Thanks!
Side question: Would you choose redis for this task? Or maybe something else such as MongoDB, or a regular RDBMS? And why?
|
What goes into an Eventlet+Gunicorn worker thread?
| 11,281,661 | 3 | 1 | 2,064 | 0 |
python,django,multithreading,gunicorn,eventlet
|
If you set workers = 1 in your gunicorn configuration, two processes will be created: 1 master process and 1 worker process.
When you use worker_class = eventlet, the simultaneous connections are handled by green threads. Green threads are not like real threads. In simple terms, green threads are functions (coroutines) that yield whenever the function encounters an I/O operation.
So nothing is copied. You just need to worry about making every I/O operation 'green'.
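A minimal gunicorn config for that setup might look like the sketch below; the module path and the numbers are placeholders to tune for your app:

# gunicorn_conf.py -- numbers are starting points, not recommendations
workers = 1                 # one master plus one worker process
worker_class = 'eventlet'   # handle connections with green threads
worker_connections = 1000   # max simultaneous green threads per worker

# run with something like:
#   gunicorn -c gunicorn_conf.py myproject.wsgi:application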
| 0 | 0 | 0 | 0 |
2012-07-01T03:59:00.000
| 1 | 1.2 | true | 11,279,467 | 0 | 0 | 1 | 1 |
If I deploy Django using Gunicorn with the Eventlet worker type, and only use one process, what happens under the hood to service the 1000 (by default) worker connections? What parts of Django are copied into each thread? Are any parts copied?
|
Is it feasible to run multiple processes on a Heroku dyno?
| 11,282,872 | 4 | 10 | 2,842 | 0 |
python,heroku,subprocess,worker
|
On the newer Cedar stack, there are no issues with spawning multiple processes. Each dyno is a virtual machine and has no particular limitations except in memory and CPU usage (about 512 MB of memory, I think, and 1 CPU core). Following the newer installation instructions for some stacks such as Python will result in a configuration with multiple (web server) processes out of the box.
Software installed on web dynos may vary depending on what buildpack you are using; if your subprocesses need special software then you may have to either bundle it with your application or (better) roll your own buildpack.
At this point I would normally remind you that running asynchronous tasks on worker dynos instead of web dynos, with a proper task queue system, is strongly encouraged, but it sounds like you know that already. Do keep in mind that accounts with only one web dyno (typically this means "free" accounts) will have that dyno spun down after an hour or so of not receiving any web requests, and that any background processes running on the dyno at that time will necessarily be killed. Accounts with multiple web dynos are not subject to this restriction.
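For prototyping on a single dyno, spawning a child process from the web process can be as simple as the sketch below (the Flask wrapper and the script name are just placeholders); remember it shares the dyno's memory allowance and dies whenever the dyno restarts or idles out:

import subprocess

from flask import Flask

app = Flask(__name__)

@app.route('/kickoff')
def kickoff():
    # Fire-and-forget child process in the same dyno, best-effort only.
    subprocess.Popen(['python', 'long_task.py'])
    return 'started'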
| 0 | 1 | 0 | 0 |
2012-07-01T04:15:00.000
| 1 | 1.2 | true | 11,279,527 | 0 | 0 | 1 | 1 |
I am aware of the memory limitations of the Heroku platform, and I know that it is far more scalable to separate an app into web and worker dynos. However, I still would like to run asynchronous tasks alongside the web process for testing purposes. Dynos are costly and I would like to prototype on the free instance that Heroku provides.
Are there any issues with spawning a new job as a process or subprocess in the same dyno as a web process?
|
Java PlayFramework & Python Django GAE
| 11,283,888 | 3 | 2 | 411 | 0 |
java,python,django,playframework
|
PyCharm is an IDE created by JetBrains. Originally, JetBrains only had one product, IntelliJ IDEA (a Java IDE), and PyCharm and all the other products were spawned from that one highly successful product.
As for which language, I would suggest trying to do something small (but feature-rich enough to be a holistic test) with all 3 and see which one works best for you. Language choice is a massive question and depends on personal factors, project factors, and many others besides. Therefore I won't even begin to tell you which one is best (because it would be what is best for me, in my situation).
| 0 | 0 | 0 | 0 |
2012-07-01T10:30:00.000
| 3 | 0.197375 | false | 11,281,233 | 0 | 0 | 1 | 3 |
I already know Java, C# and C++. Now I want to start with web development, and I saw that some really big sites are built with Python/C++. I like the coding style of Python, it looks really clean, but some other things, like getting no errors before runtime, are really strange to me.
However, I don't know what I should learn now. I started with Python, but then I saw that Google App Engine also supports Java, and the Play Framework looks amazing too.
Now I am really confused. Should I go with Python or Java? I found the Python IDE PyCharm really amazing for web development. Does Java have something similar, Eclipse maybe?
I know that this question isn't constructive, but it will help me with my decision. What are the pros and cons of both languages?
|
Java PlayFramework & Python Django GAE
| 11,285,063 | 4 | 2 | 411 | 0 |
java,python,django,playframework
|
I just want to add that if compatibility with GAE is a requirement for you, then I think Django is the best choice. The Play Framework, as of version 2.0, is no longer compatible with GAE.
| 0 | 0 | 0 | 0 |
2012-07-01T10:30:00.000
| 3 | 1.2 | true | 11,281,233 | 0 | 0 | 1 | 3 |
I already know Java, C# and C++. Now I want to start with web development, and I saw that some really big sites are built with Python/C++. I like the coding style of Python, it looks really clean, but some other things, like getting no errors before runtime, are really strange to me.
However, I don't know what I should learn now. I started with Python, but then I saw that Google App Engine also supports Java, and the Play Framework looks amazing too.
Now I am really confused. Should I go with Python or Java? I found the Python IDE PyCharm really amazing for web development. Does Java have something similar, Eclipse maybe?
I know that this question isn't constructive, but it will help me with my decision. What are the pros and cons of both languages?
|
Java PlayFramework & Python Django GAE
| 15,122,029 | 0 | 2 | 411 | 0 |
java,python,django,playframework
|
It depends on you. What do you want more: to learn a new programming language, or to learn how to make web apps?
I just started a few Play tutorials and it's really great. Play 2 is even more amazing than the previous one. I'd like to learn Scala, so it's perfect for me, though because of that it's not GAE compatible anymore. Still, there are other ways to deploy apps; I'd like to try OpenShift (I don't know if it's possible, I'll try it soon).
I'm also a big fan of Python, so naturally I'm also looking for frameworks to build apps with it. I would say that Django isn't the only choice. I had a few tries with Django; right now I'm trying web2py. As many have stated, Django has quite a hard learning curve. web2py should be better, but I don't like the 'wizard' way of scaffolding apps.
I've used Bottle (Flask is similar) and it's great for small apps. RESTful apps are super easy with them, so maybe that should be your starting point.
From what I've read about Python's frameworks:
Django --- quite good for typical websites/CMS-like apps, but hard to learn
web2py --- very interesting; I'm in the middle of testing it
web.py --- minimalistic, lightweight framework (the one Reddit was built on); you have to build the web app almost from scratch
Tornado/Twisted --- fast, async frameworks
Flask/Bottle --- very nice microframeworks, great for REST services
I've not tried them all, but that's what I've picked up from reading the web/blogs etc.
I'm looking for something like Play Framework 2.x but in Python (ideally 3) :)
| 0 | 0 | 0 | 0 |
2012-07-01T10:30:00.000
| 3 | 0 | false | 11,281,233 | 0 | 0 | 1 | 3 |
I already know Java, C# and C++. Now I want to start with web development, and I saw that some really big sites are built with Python/C++. I like the coding style of Python, it looks really clean, but some other things, like getting no errors before runtime, are really strange to me.
However, I don't know what I should learn now. I started with Python, but then I saw that Google App Engine also supports Java, and the Play Framework looks amazing too.
Now I am really confused. Should I go with Python or Java? I found the Python IDE PyCharm really amazing for web development. Does Java have something similar, Eclipse maybe?
I know that this question isn't constructive, but it will help me with my decision. What are the pros and cons of both languages?
|
Test for existence of field in django class
| 11,284,649 | 1 | 0 | 63 | 0 |
python,django
|
For example:
field_name_exists = field_name in ModelName._meta.get_all_field_names()
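If you would rather not pull the whole field-name list, catching the lookup error works too; a small sketch for a Django of roughly this vintage, where FieldDoesNotExist is importable from django.db.models.fields:

from django.db.models.fields import FieldDoesNotExist

def has_field(model_cls, field_name):
    # True if the model defines a field with this name.
    try:
        model_cls._meta.get_field(field_name)
        return True
    except FieldDoesNotExist:
        return False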
| 0 | 0 | 0 | 0 |
2012-07-01T18:29:00.000
| 1 | 0.197375 | false | 11,284,600 | 0 | 0 | 1 | 1 |
Given a Django model class and a field name, how can you test whether the class has a field with the given name?
The field name is a string in this case.
|
how to make youtube videos embed on your webpage when a link is posted
| 64,008,461 | 0 | 4 | 4,555 | 0 |
python,google-app-engine,youtube,youtube-api,jinja2
|
If you are getting the embed link out of a list of results, use the following inside the iframe in your template:
src="{{ results[0].video_link }}"
Here "video_link" is the field name on the result object.
| 0 | 0 | 0 | 0 |
2012-07-02T00:56:00.000
| 3 | 0 | false | 11,286,809 | 0 | 0 | 1 | 1 |
I have a website that gets a lot of links to YouTube and similar sites, and I wanted to know if there is any way I can make a link automatically appear as a video, like what happens when you post a link to a video on Facebook: you can play it right on the page. Is there a way to do this without users actually posting the entire embed-video HTML code?
By the way, I am using Google App Engine with Python and Jinja2 templating.
|
New URL on django admin independent of the apps
| 11,288,438 | 3 | 7 | 10,538 | 0 |
django,django-admin,python-2.7,django-urls,django-1.4
|
Just put your desired URL mapping before the admin mapping in your root urls.py. The first pattern that matches the request wins, because Django works through the URL mappings from top to bottom. Just remember not to use a URL that the admin itself needs or provides, because with a custom mapping in front of it the admin view would never be reached. HTH!
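A minimal sketch for Django 1.4, with a placeholder view path and URL name:

# root urls.py
from django.conf.urls import patterns, include, url
from django.contrib import admin

admin.autodiscover()

urlpatterns = patterns('',
    # Must come before the admin include so it wins the match.
    url(r'^admin/my_url/$', 'myapp.views.my_view', name='admin_my_url'),
    url(r'^admin/', include(admin.site.urls)),
)

Because the pattern is named, reverse('admin_my_url') still works; if the URL really has to live inside the admin's own namespace, overriding AdminSite.get_urls() is the heavier alternative.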
| 0 | 0 | 0 | 0 |
2012-07-02T05:46:00.000
| 2 | 0.291313 | false | 11,288,320 | 0 | 0 | 1 | 1 |
I am using django 1.4 and Python 2.7.
I just have a simple requirement: I need to add a new URL to the Django admin app. I know how to add URLs for my custom apps, but I am unable to figure out how to add URLs that belong to the admin app. Please guide me through this.
Basically the full URL should be something like admin/my_url.
UPDATE
I also want to be able to reverse-map the URL through the admin.
|
Django filesystem storage for large number of files
| 22,918,886 | 0 | 4 | 1,387 | 0 |
python,django,filesystems,storage
|
You can also use a custom storage backend that saves small files in a database model with a binary field, and only writes files larger than something like 16 MB to the filesystem, so you don't need to add another database.
| 0 | 0 | 0 | 0 |
2012-07-02T10:36:00.000
| 3 | 0 | false | 11,291,975 | 0 | 0 | 1 | 1 |
Scenario:
A Django app generates a lot of small files related to objects in different models.
I've done a lot of searching on how to avoid generating a large number of files in a single directory when using the default file storage.
Is django-fsfield the only open source solution for this? Anything else you would recommend for fixing the large number of inodes in a dir?
Thank you!
|
How to manage user-specific database connections in a Pyramid Web Application?
| 11,300,227 | 0 | 0 | 343 | 1 |
python,sqlalchemy,pyramid
|
The best way to do this that I know of is to use the same database with multiple schemas. Unfortunately I don't think this works with MySQL. The idea is that you keep a pool of engine connections to the same database, and once you know which user is associated with the request you switch schemas for that connection.
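If you stick with one engine per user instead, a rough sketch of caching engines by credentials could look like this (the URL, dialect, and pool numbers are assumptions, and the total connection count grows with the number of distinct users):

from sqlalchemy import create_engine

_engines = {}

def engine_for(username, password):
    # One pooled engine per user, created lazily at login time and
    # reused across subsequent requests by the same user.
    if username not in _engines:
        _engines[username] = create_engine(
            'mysql://%s:%s@localhost/mydb' % (username, password),
            pool_size=5, pool_recycle=3600)
    return _engines[username]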
| 0 | 0 | 0 | 0 |
2012-07-02T18:29:00.000
| 1 | 0 | false | 11,299,182 | 0 | 0 | 1 | 1 |
We are using Python Pyramid with SQLAlchemy and MySQL to build a web application. We would like to have user-specific database connections, so every web application user has their own database credentials. This is primarily for security reasons, so each user only has privileges for their own database content. We would also like to maintain the performance advantage of connection pooling. Is there a way we can setup a new engine at login time based on the users credentials, and reuse that engine for requests made by the same user?
|
Need to get IP address to add to GAE blacklist
| 11,304,280 | 1 | 0 | 283 | 0 |
python,google-app-engine,ip,blacklist,denial-of-service
|
You could see the IP on the Logs page in the admin panel. Click the 'plus' icon next to a log item in order to expand it and view request data.
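If you also want the address in your own log lines (so you can grep for repeat offenders), a tiny webapp2 sketch like this logs it per request; the handler itself is just a placeholder:

import logging

import webapp2

class MainHandler(webapp2.RequestHandler):
    def get(self):
        # Both values end up in the request's log entry on the Logs page.
        logging.info('remote_addr=%s x-forwarded-for=%s',
                     self.request.remote_addr,
                     self.request.headers.get('X-Forwarded-For'))
        self.response.write('ok')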
| 0 | 1 | 0 | 0 |
2012-07-03T03:56:00.000
| 1 | 0.197375 | false | 11,304,235 | 0 | 0 | 1 | 1 |
I'm getting a lot of requests to my App Engine app from a malicious user and I suspect it might be an attempt at a DoS attack. I need to add their IP address to blacklists on GAE. However, when I look at
self.request.remote_addr
all I get is my own IP address. How can I get the remote IP of the client that is actually sending me these requests?
|