Column schema (name, type, observed range):

Title                               string   (lengths 11 to 150)
A_Id                                int64    (518 to 72.5M)
Users Score                         int64    (-42 to 283)
Q_Score                             int64    (0 to 1.39k)
ViewCount                           int64    (17 to 1.71M)
Database and SQL                    int64    (0 to 1)
Tags                                string   (lengths 6 to 105)
Answer                              string   (lengths 14 to 4.78k)
GUI and Desktop Applications        int64    (0 to 1)
System Administration and DevOps    int64    (0 to 1)
Networking and APIs                 int64    (0 to 1)
Other                               int64    (0 to 1)
CreationDate                        string   (lengths 23 to 23)
AnswerCount                         int64    (1 to 55)
Score                               float64  (-1 to 1.2)
is_accepted                         bool     (2 classes)
Q_Id                                int64    (469 to 42.4M)
Python Basics and Environment       int64    (0 to 1)
Data Science and Machine Learning   int64    (0 to 1)
Web Development                     int64    (1 to 1)
Available Count                     int64    (1 to 15)
Question                            string   (lengths 17 to 21k)
How to use sorl-thumbnail without django?
10,002,629
0
0
350
0
python,django,sorl-thumbnail
I'd recommend you look at the source for sorl-thumbnail. Really, all sorl-thumbnail is, is a wrapper around PIL (the Python Imaging Library). Although I'm sure you could figure out a way to decouple sorl-thumbnail from Django, it's going to be nontrivial. That said, if you set it up as, say, a management command (with the full Django environment as a result), you'd be able to use the low-level API as documented in the sorl-thumbnail docs. Even so, you'll probably be better off just figuring out how sorl-thumbnail interfaces with PIL and reproducing that part of the code in a decoupled manner, since you will have to do additional post-processing (likely with PIL again) that sorl-thumbnail can't do anyway. Also bear in mind on all of this, I'm not sure what your intention is... none of this is something you could set up to be run off of a webserver. You can't run a Python script on the computer of a client who is connecting to a Django server; that's just impossible. However, producing a Python program that generates watermarked thumbnails is completely possible if you're just handing it out to some coworkers or something. If you indeed are trying to have arbitrary people visit your site and upload watermarked/thumbnailed files to you via a web interface... well, then you'll want to start some serious JavaScript studying.
0
0
0
0
2012-04-03T20:03:00.000
2
0
false
10,000,504
0
0
1
1
In my case I have the opportunity to generate image thumbnails and do some post-processing on an administrator's computer before they are uploaded to a server (Amazon S3). I know that sorl checks whether a thumbnail exists before generating it, but it uses a rather complicated naming scheme, so I hope there is a way to access sorl directly from my script. The official documentation says nothing about using sorl-thumbnail standalone; any suggestions?
Generating a unique data store key from a federated identity
10,023,490
2
2
194
1
python,google-app-engine,authentication
User.federated_identity() "Returns the user's OpenID identifier.", which is unique by definition (it's a URL that uniquely identifies the user).
0
1
0
0
2012-04-03T22:13:00.000
1
1.2
true
10,002,209
0
0
1
1
I need a unique datastore key for users authenticated via openid with the python 2.7 runtime for the google apps engine. Should I use User.federated_identity() or User.federated_provider() + User.federated_identity()? In other words is User.federated_identity() unique for ALL providers or just one specific provider?
need to ssh to remote machine from web page with python/django
10,003,264
2
0
2,644
0
python,django,ssh
You're close. The problem here is probably that your web server runs as a non-privileged user (NOT root), like www or www-data or nobody (depending on your operating system). While that user can probably run the SSH binary, when doing so as nobody, it probably doesn't have a home directory, can't find your .ssh directory, and can't find the key file (.ssh/id_rsa for example) that it needs to use for authentication. You have a number of options. Make your private key available to the web server software, then launch ssh with the -i option to select an identity file. Or do this in an SSH config file that you specify with the -F option. Or launch ssh using sudo, and give your web server software the ability to run ssh as some other (shell) user. I can't provide a more specific answer because you haven't provided specifics in your question. Operating system, sample code, etc. Hope this helps. Oh, and you should also consider NOT doing this, and finding some other solution. A web application, even an internal one, that has SSH access to your firewall? Sounds like a recipe for eventual disaster to me. :-)
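A minimal sketch of the first suggestion (pointing ssh at an explicit identity file from the web process); the key path, host-key option, host and command below are hypothetical:

```python
import subprocess

def run_remote(host, command):
    # Identity file readable by the web server user (hypothetical path).
    ssh_cmd = [
        "ssh",
        "-i", "/var/www/.ssh/id_rsa",
        "-o", "StrictHostKeyChecking=no",  # avoid interactive host-key prompts
        host,
        command,
    ]
    proc = subprocess.Popen(ssh_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    return proc.returncode, out, err
```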
0
0
0
1
2012-04-03T23:09:00.000
1
0.379949
false
10,002,771
0
0
1
1
I acquired a python script that will either telnet to some equipment, or if the equipment is in a lab, ssh to a firewall machine and then it will telnet to the equipment, and run a command, returning the output for more processing. I took this script and tied it into a Django web app so that I could, from a browser, fill out a form with the target system info and have it display the results. If I start up this web app from the command line, and then access it from the browser (python manage.py app), everything works fine. However, if I set this up to run in "production" mode, using a virtual host with Apache, the SSH fails. I suspect that this is running under root or some web account and cannot SSH to the firewall. Can someone suggest how I get this to work? I don't have any privileges on the firewall machine, so I can't setup SSH to run under some web account. Would I need to collect username and password from the user, in the case where SSH is used, and then pass it to ssh, or are there other ways to get the telnet info and command through to the equipment?
Maximizing apache server instances with large mod_wsgi application
10,020,054
1
2
214
1
python,database,django,apache,mod-wsgi
First up, look at the daemon mode of mod_wsgi and don't use embedded mode, as daemon mode lets you control the number of Python WSGI application processes separately from the Apache child processes. Secondly, you would be better off putting the memory-hungry bits in a separate backend process. You might use XML-RPC or some other messaging system to communicate with the backend processes, or perhaps even see if you can use Celery in some way.
0
0
0
1
2012-04-04T19:06:00.000
1
1.2
true
10,017,645
0
0
1
1
I'm writing a Oracle of Bacon type website that involves a breadth first search on a very large directed graph (>5 million nodes with an average of perhaps 30 outbound edges each). This is also essentially all the site will do, aside from display a few mostly text pages (how it works, contact info, etc.). I currently have a test implementation running in Python, but even using Python arrays to efficiently represent the data, it takes >1.5gb of RAM to hold the whole thing. Clearly Python is the wrong language for a low-level algorithmic problem like this, so I plan to rewrite most of it in C using the Python/C bindings. I estimate that this'll take about 300 mb of RAM. Based on my current configuration, this will run through mod_wsgi in apache 2.2.14, which is set to use mpm_worker_module. Each child apache server will then load up the whole python setup (which loads the C extension) thus using 300 mb, and I only have 4gb of RAM. This'll take time to load and it seems like it'd potentially keep the number of server instances lower than it could otherwise be. If I understand correctly, data-heavy (and not client-interaction-heavy) tasks like this would typically get divorced from the server by setting up an SQL database or something of the sort that all the server processes could then query. But I don't know of a database framework that'd fit my needs. So, how to proceed? Is it worth trying to set up a database divorced from the webserver, or in some other way move the application a step farther out than mod_wsgi, in order to maybe get a few more server instances running? If so, how could this be done? My first impression is that the database, and not the server, is always going to be the limiting factor. It looks like the typical Apache mpm_worker_module configuration has ServerLimit 16 anyways, so I'd probably only get a few more servers. And if I did divorce the database from the server I'd have to have some way to run multiple instances of the database as well (I already know that just one probably won't cut it for the traffic levels I want to support) and make them play nice with the server. So I've perhaps mostly answered my own question, but this is a kind of odd situation so I figured it'd be worth seeing if anyone's got a firmer handle on it. Anything I'm missing? Does this implementation make sense? Thanks in advance! Technical details: it's a Django website that I'm going to serve using Apache 2.2.14 on Ubuntu 10.4.
How can I detect total MySQL server death from Python?
10,192,810
0
2
181
1
python,django,mysql-python
I would think this would be more in line with setting a read_timeout on your front-facing webserver. Any number of reasons could exist to hold up your django app indefinitely. While you have found one specific case, there could be many more (code errors, cache difficulties, etc).
0
0
0
0
2012-04-04T19:35:00.000
1
0
false
10,018,055
0
0
1
1
I've been doing some HA testing of our database and in my simulation of server death I've found an issue. My test uses Django and does this: Connect to the database Do a query Pull out the network cord of the server Do another query At this point everything hangs indefinitely within the mysql_ping function. As far as my app is concerned it is connected to the database (because of the previous query), it's just that the server is taking a long time to respond... Does anyone know of any ways to handle this kind of situation? connect_timeout doesn't work as I'm already connected. read_timeout seems like a somewhat too blunt instrument (and I can't even get that working with Django anyway). Setting the default socket timeout also doesn't work (and would be vastly too blunt as this would affect all socket operations and not just MySQL). I'm seriously considering doing my queries within threads and using Thread.join(timeout) to perform the timeout. In theory, if I can do this timeout then reconnect logic should kick in and our automatic failover of the database should work perfectly (kill -9 on affected processes currently does the trick but is a bit manual!).
User login overwrites user fields
10,024,636
0
1
80
0
django,ldap,django-authentication,python-ldap
For those that are interested, the solution was that there is a setting in the LDAP module called AUTH_LDAP_ALWAYS_UPDATE_USER which you need to set to False to make sure it doesn't update every time.
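For reference, a sketch of the settings change described above, assuming the django-auth-ldap backend is in use:

```python
# settings.py
# Stop the LDAP backend from overwriting locally edited user fields on every login.
AUTH_LDAP_ALWAYS_UPDATE_USER = False
```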
0
0
0
0
2012-04-04T22:18:00.000
1
1.2
true
10,020,073
0
0
1
1
I have a Django app which uses LDAP as the authentication backend. I'm not sure whether it's the LDAP module or just Django itself but, if a user changes their email address, first name or last name and then logs out and back in again, the values revert to their original values (ie. the ones obtained from the LDAP record). Has anyone seen this kind of behaviour before and is there any way to prevent it? The problem I have is that the email addresses in the LDAP records are incorrect and need updating but I have no control over them. The only thing I can control is the Django user database.
Integrating scientific python into an existing Rails webapp
10,021,813
0
4
917
0
python,ruby-on-rails,numpy
I assume that the Python part is a stack of back-end libraries/modules of scientific functions while the Ruby-on-Rails part is mostly the front-end. A relatively simple solution would then be to wrap those Python modules/libraries as services and let the Ruby-on-Rails front-end functions call those services. To build the Python services, you can use xmlrpc for a simple implementation, or a more sophisticated framework such as Twisted for a larger-scale implementation. You can also create just one portal service that hosts all your Python modules/libraries; that would basically give you an application server that handles all the requests from the Ruby-on-Rails front-end. Which strategy to use depends on the scale and complexity of your Python libraries/modules.
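A minimal sketch of that xmlrpc approach using the Python 2.7 standard library; the function name, port and NumPy routine are illustrative only:

```python
from SimpleXMLRPCServer import SimpleXMLRPCServer  # Python 2.x module name
import numpy as np

def column_means(rows):
    # Hypothetical scientific routine exposed to the Rails front-end.
    return [float(x) for x in np.mean(np.array(rows), axis=0)]

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(column_means, "column_means")
server.serve_forever()
```

The Rails side can then call the exposed function with any standard XML-RPC client.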
0
0
0
0
2012-04-04T23:55:00.000
2
0
false
10,020,908
0
0
1
1
I'm a fan of using the right tool for the job. At my company, the data analysts (incl myself) primarily use Python because of the powerful scientific libraries; the web people downstairs use Ruby on Rails for building our own HR management webapp as well as maintaining our online presence. We would like to have the two teams working a little closer together allowing the development of scientific webapps but are unsure about how to proceed. We have significant investment in both technologies with a substantial codebase that we would need to continue to use. Are there any suggestions about the best way to integrate the two domains of scientific programming and web apps using the two separate languages?
Interpreting session activity
10,022,609
0
0
56
0
python,django
If you put some data in request.session it automatically creates a record in django_session on process_response of the session middleware. So if you put nothing in the session and it stores only login data, then it will be the number of logins (including repeat logins by the same user - logout clears the session data, so it is recreated whenever anyone logs in).
0
0
0
0
2012-04-05T02:21:00.000
2
0
false
10,021,846
0
0
1
2
I am looking at my table for django_session and would like to know what this essentially means. If I have 100 session entries in the past hour, does that mean 100 people have logged in in the past hour? Or does it mean something else?
Interpreting session activity
10,030,017
0
0
56
0
python,django
It means there were 100 unique visits during which request.session got modified. A unique visit is counted per user and browser: a user who uses Firefox and then Chrome to access your site AND has request.session modified during the process will be counted as two or more visits. Normally there is no modification of request.session, so no session entry is generated. The default login page sets a test flag in request.session, so a session entry will be generated for the visitor no matter whether he can log in successfully. Also, if a visitor denies cookies, every access he makes to the login page will generate a session entry.
0
0
0
0
2012-04-05T02:21:00.000
2
0
false
10,021,846
0
0
1
2
I am looking at my table for django_session and would like to know what this essentially means. If I have 100 session entries in the past hour, does that mean 100 people have logged in in the past hour? Or does it mean something else?
Python File Download
10,023,435
0
2
1,137
1
python
Examine the Content-Disposition header of the response to discover what the server wants you to call the file.
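A sketch of reading that header with the Python 2.7 standard library; the URL and fallback file name are placeholders:

```python
import re
import urllib2

response = urllib2.urlopen("http://example.com/report/download")  # hypothetical URL
disposition = response.info().getheader("Content-Disposition") or ""
match = re.search(r'filename="?([^";]+)"?', disposition)
filename = match.group(1) if match else "download.xls"   # fallback name if no header

with open(filename, "wb") as out:
    out.write(response.read())
```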
0
0
0
0
2012-04-05T06:05:00.000
3
0
false
10,023,418
0
0
1
1
I am having a problem and I am not sure if this is possible at all, so maybe someone could point me in the right direction. I need to open a file from a webpage, open it in Excel and save the file. The problem I am running into: the website shows a file name (not an active link) and then it has a "download" button that is not specific to the file I need to download. So instead of the download button being "file1todaysdate", there is nothing that I could use from day to day. Is there a way I could locate the file name, then grab the file from the download icon and save it in Excel? If not, sorry for wasting your time.
Getting error No module named modules.facebook
42,826,179
0
0
9,533
0
python,web2py,web2py-modules
You should mention your directory name in the environment variable PYTHONPATH so that it will locate your directory. Hope this helps you :)
0
0
0
0
2012-04-05T06:57:00.000
3
0
false
10,023,941
0
0
1
2
I have to develop a web2py app and have to use the Facebook SDK in it. I downloaded the Facebook SDK zip folder to my Windows PC, extracted it and uploaded the facebook.py module to my web2py application, and I am using the statement "from applications.modules.facebook import *" in controller/default.py. When I run the application it gives the error below: No module named modules.facebook. When I browse to the application directory, I find facebook.py inside the modules folder, so I am not able to find what the problem is. Please guide me to resolve it. Regards, Piks
Getting error No module named modules.facebook
10,024,541
0
0
9,533
0
python,web2py,web2py-modules
What is the folder structure of the facebook sdk? If it is facebook/ and facebook/__init__.py is a file within it, then you can import it using import facebook or from facebook import * from the directory where you have the facebook folder. If you are importing it from within modules, then make sure that you have placed the facebook package in the modules directory of the application.
0
0
0
0
2012-04-05T06:57:00.000
3
0
false
10,023,941
0
0
1
2
I have to develop a web2py app and have to use the Facebook SDK in it. I downloaded the Facebook SDK zip folder to my Windows PC, extracted it and uploaded the facebook.py module to my web2py application, and I am using the statement "from applications.modules.facebook import *" in controller/default.py. When I run the application it gives the error below: No module named modules.facebook. When I browse to the application directory, I find facebook.py inside the modules folder, so I am not able to find what the problem is. Please guide me to resolve it. Regards, Piks
django multiple sites wsgi is enough?
10,029,391
6
2
191
0
python,django,amazon-ec2,wsgi
Yes you can have as many different django sites as your server can handle! You can set up separate virtual hosts pointing to the appropriate wsgi for each site you want. Remember just because it is possible doesn't mean it's a good idea, keep in mind the resources each site is consuming.
0
0
0
0
2012-04-05T13:06:00.000
1
1.2
true
10,029,190
0
0
1
1
I intend to have many python with django applications on my server, is it ok to have django with wsgi and many sites on the one server? thanks!
reproduce unicode error in django
10,033,919
2
2
309
0
python,django
Have you left any debug prints in your view? That will cause a conversion to the console's encoding, which may be ascii causing this error.
0
0
0
0
2012-04-05T16:07:00.000
2
0.197375
false
10,032,077
0
0
1
2
Sometimes I m getting unicode errors like below in my django site when user submit form data. "'ascii' codec can't encode character u'\u2014' in position 109: ordinal not in range(128)" How can i reproduce the unicode error in my system Many Thanks.
reproduce unicode error in django
10,032,150
5
2
309
0
python,django
Submit the form causing the errors with unicode characters (e.g.é) in it! This is a very common error in Django projects and it almost always means you are calling str() somewhere. Django uses unicode strings internally but when you call str() you are asking Python to give you an ascii string back, which fails with this message. If you give us more info, we'll be able to help you further. Good luck!
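A minimal snippet (Python 2) that reproduces exactly that message locally, useful for writing a failing test:

```python
# -*- coding: utf-8 -*-
text = u"an em dash \u2014 typed by a user"

# Either of these raises:
# UnicodeEncodeError: 'ascii' codec can't encode character u'\u2014' ...
str(text)
text.encode("ascii")
```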
0
0
0
0
2012-04-05T16:07:00.000
2
1.2
true
10,032,077
0
0
1
2
Sometimes I m getting unicode errors like below in my django site when user submit form data. "'ascii' codec can't encode character u'\u2014' in position 109: ordinal not in range(128)" How can i reproduce the unicode error in my system Many Thanks.
Android Back End Technology - Language (Java, Python) & IDE (CoderBuddy, exo Cloud, Cloud 9)
10,033,297
1
5
2,053
0
java,android,python,ide,cloud
My guess is that using Java you will have lots of frameworks to find solutions and I really don't think Python will offer you that. About IDE, I don't think you should worry about it with Python, you can use SublimeText 2 or Eclipse(have to install python editor). Both work great and Python is easy to deploy. With Java I use Eclipse but a friend is using NetBeans and it has some "shortcuts" to create things like services, for instance. Also with Java, you'll be more familiarized because of Android so I think it is a plus, makes more sense. You need to at least start so you can have a better idea of what is best for you. And get ready, it will be a LOT different from Delphi ;)
0
1
0
0
2012-04-05T16:09:00.000
1
1.2
true
10,032,093
0
0
1
1
I've done my research and narrowed this down. OK, so I am deciding on the language and tool to use for the backend (server side) of developing cloud-based Android applications. I've decided on Google App Engine as my framework. As I am going to be developing on my Android tablet, I want a cloud-based IDE. (I am going to use a native Android IDE app for the client side.) App Engine supports the Go programming language, Java and Python. As there doesn't appear to be a stable cloud IDE that supports Go, I am left with Java & Python. I've narrowed my vast list of IDEs down to: Coderbuddy - (designed for App Engine but Python only), exo Cloud - (Java & Python supported), Cloud 9 - (Java & Python supported). I know neither language. I have to learn Java in any case for Android client-side development. I understand that Python is faster to code in and so that's definitely a factor, but I absolutely don't want to sacrifice performance or scalability. I will be doing lots of SQL database stuff. Finally, if you think I am way off and should look in another direction please let me know. Thanks! Edit: My background language is Delphi (Object Pascal)
Django changing the default object manager
10,057,890
0
2
627
0
python,django
If you really need to do that modify the django code itself. Monkey patching is an option also, there are a lot of techniques for that out there.
0
0
0
0
2012-04-07T15:03:00.000
2
0
false
10,055,537
0
0
1
2
Is there a way to change the default object manager for all Models? (which would include the object managers on third party apps)
Django changing the default object manager
10,055,699
2
2
627
0
python,django
The default manager is attached in the function ensure_default_manager in django.db.models.manager. It attaches by default a manager of class Manager. You could monkeypatch this function to attach a different (subclass of) Manager. But you have to consider whether this is the most ideal solution to the problem you're trying to solve.
0
0
0
0
2012-04-07T15:03:00.000
2
0.197375
false
10,055,537
0
0
1
2
Is there a way to change the default object manager for all Models? (which would include the object managers on third party apps)
Java's Application Context in Django
10,516,164
0
1
650
0
java,python,django,servlets
There seems to be nothing like this in Django, there are thread locals which can be used, but they are not the exact same as Application context.
0
0
0
0
2012-04-08T13:27:00.000
2
1.2
true
10,063,145
0
0
1
1
For a Servlets based web application there exists something called an Application Context. It is an Object accessible from anywhere in the application and can be used to store data which is relevant in the context of the application. Is there something like this in Django? If not, what are the alternates that are available in Django for use cases of an Application Context.
Adding field to SQL table from Django Application
10,066,588
5
0
72
1
python,sql,django
You really don't want to implement each question/answer as a separate DB field. Instead, make a table of questions and a table of answers, and have a field in the answers table (in general, a ForeignKey) to indicate which question a given answer is associated with.
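A sketch of that schema as Django models (Django 1.x style; the model and field names are hypothetical):

```python
from django.db import models

class Question(models.Model):
    text = models.CharField(max_length=255)

class Answer(models.Model):
    question = models.ForeignKey(Question, related_name="answers")  # which question this answer belongs to
    text = models.CharField(max_length=100)
    votes = models.PositiveIntegerField(default=0)
```

The secretary can then add or delete Question and Answer rows (for example through the Django admin) instead of needing schema changes each year.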
0
0
0
0
2012-04-08T21:14:00.000
1
1.2
true
10,066,573
0
0
1
1
I am developing an application designed for a secretary to use. She has a stack of hundreds of ballot forms which have a number of questions on them, and wishes to input this data into a program to show the total votes for each answer. Each question has a number of answers. For example: Q: "Re-elect current president of the board" A: Choice between "Yes" or "No" or "Neutral" Year on year the questions can change, as well as the answers, but the current application used in the company is hard coded with the questions and answers of last year. My aim is to create an app (in Django/Python) which allows the secretary to add/delete questions and answers as she wishes. I am relatively new to Django... I have created an app in University and know how to create basic models and implement the Twitter bootstrap for the GUI. But I'm a little confused about how to enable the secretary to add custom fields in (which are obviously defined in SQL). Does anyone have any small tips on how to get started? By the way, I recognize that this could be achievable using the admin part of website and would welcome any suggestions about that. Thank you.
sel.click("xpath=//*[@id='seriesNwsHldr']/div[2]/p[1]/a") is not working
10,233,957
0
0
146
0
python,xpath,selenium-rc
It looks like a problem of timing. Maybe you can intentionally add a wait until the element appears on the page. Another possibility is that the element you are trying to interact with is hidden. It would be great if you could post the errors you are getting when your test fails.
0
0
1
0
2012-04-09T04:24:00.000
2
0
false
10,068,871
0
0
1
2
Exception: ERROR: Element xpath=//*[@id='seriesNwsHldr']/div[2]/p[1]/a not found. I checked in Firebug. The path is correct, but I don't know why this test case fails.
sel.click("xpath=//*[@id='seriesNwsHldr']/div[2]/p[1]/a") is not working
10,234,023
0
0
146
0
python,xpath,selenium-rc
Can I have the site for checking? BTW, sometimes you have to wait for the page to load, so before this action you need an instruction like clickAndWait(30000); in my case it solves a lot of problems :)
0
0
1
0
2012-04-09T04:24:00.000
2
0
false
10,068,871
0
0
1
2
Exception: ERROR: Element xpath=//*[@id='seriesNwsHldr']/div[2]/p[1]/a not found. I checked in Firebug. The path is correct, but I don't know why this test case fails.
Passwords being stored as sha1 and pbkdf2_sha256
10,069,190
3
3
1,609
0
python,django
Did you perhaps upgrade from Django 1.3 to Django 1.4 while retaining user data? Django 1.4 introduced the newer, more secure hash for password storage, but should still be backwards-compatible with the old hashes as far as I am aware.
0
0
0
0
2012-04-09T05:12:00.000
1
1.2
true
10,069,167
0
0
1
1
I have a django auth_user table, and for some reason, some of the passcodes are stored as sha1$... and others as pbkdf2_sha256$.... I don't see any rhyme or reason to it -- what is the difference between these two and why would some be stored as one version, but others as another?
Session Handling in Chrome and Firefox
10,085,230
1
1
308
0
python,google-app-engine,firefox,google-chrome,tipfy
Check the Tipfy session configuration attributes; check the path attribute '/' - you need to do some modification there.
0
0
1
0
2012-04-09T06:10:00.000
1
1.2
true
10,069,594
0
0
1
1
I have a problem regarding session handling in Chrome and Firefox. When I authenticate to a website and then close it and reopen the home page in Firefox, it shows my name. But when I do the same thing in Chrome, it doesn't show my name; it shows me as a guest. The session for this site is implemented with tipfy. Do I have to configure the session management?
can we use java code in web2py application code?
10,090,081
2
1
632
0
python,python-2.7,web2py,web2py-modules
I would consider looking into web services. You could expose a URL from Java that routes to a Java method/function where the logic is performed and which returns a JSON object, while in web2py you can use urllib2 to make the request and decode that JSON into a native Python dictionary. The catch is that you would have to expose all the methods of your objects and pass the objects back and forth as JSON. Do not be scared: in most programming languages objects are just hash arrays/dictionaries with some special qualities, so if you can serialize and deserialize the object and expose appropriate URLs you will be fine. There is also an implementation of web2py on Jython, but then the entire stack will be in the JVM and it may be more complex to work with.
0
0
0
0
2012-04-09T08:26:00.000
2
1.2
true
10,070,703
0
0
1
1
I have to implement a web2py application which has to access Java code (which connects to a remote machine), but I am not sure whether we can do it in web2py or not. My PC has Java 1.6, Python 2.7, web2py and Eclipse installed. The use case is: I have created a button in the web2py application and, upon clicking the button, it should instantiate a Java object and invoke a particular method of that object which will then connect to the remote machine. My doubts are: Can we deploy that particular Java class to the web2py server so the web2py application can easily access it? Is it possible to import that class from Python code? How do I instantiate a Java object from Python code? And how do I invoke a Java method from Python code? Regards, Piks
scipy: significance of the return values of spearmanr (correlation)
10,076,295
4
2
1,006
0
python,statistics,scipy,correlation
It's up to you to choose the level of significance (alpha). To be coherent you should choose it before running the test. The function returns the p-value, which is the lowest alpha for which you would reject the null hypothesis (H0) [you reject H0 when p-value < alpha]. You therefore know that the lowest value for which you reject the null hypothesis (H0) is the p-value (2.3569040685361066e-65). Since the p-value is incredibly small, the null hypothesis is rejected for any relevant level of alpha (usually alpha = 0.05).
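A short illustration of that decision rule with scipy; the data here are synthetic:

```python
import numpy as np
from scipy import stats

x = np.random.rand(800)
y = 0.5 * x + 0.5 * np.random.rand(800)   # hypothetical correlated data

rho, p_value = stats.spearmanr(x, y)
alpha = 0.05                               # chosen before running the test
if p_value < alpha:
    print("reject H0: monotonic correlation is significant, rho=%.3f" % rho)
else:
    print("fail to reject H0 at alpha=%.2f" % alpha)
```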
0
0
0
0
2012-04-09T16:20:00.000
1
1.2
true
10,076,222
0
1
1
1
The output of spearmanr (Spearman correlation) of X,Y gives me the following: Correlation: 0.54542821980327882 P-Value: 2.3569040685361066e-65 where len(X)=len(Y)=800. My questions are as follows: 0) What is the confidence (alpha?) here ? 1) If correlation coefficient > alpha, the hypothesis of the correlation being a coincidence is rejected, thus there is correlation. Is this true ? Thanks in advance..
Django templates: accessing the previous and the following element of the list
10,078,813
1
3
4,660
0
python,django,django-templates
You can create an external tag which does that, but the Django templating system, which was built to be lightweight, has no such feature in for loops.
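Since the built-in for loop cannot look sideways, one common workaround is to precompute the neighbours in the view and pass a wrapped list to the template; a minimal sketch (names are hypothetical):

```python
def with_neighbours(items):
    """Wrap each element with its previous and next neighbour for the template."""
    wrapped = []
    for i, item in enumerate(items):
        wrapped.append({
            "item": item,
            "prev": items[i - 1] if i > 0 else None,
            "next": items[i + 1] if i < len(items) - 1 else None,
        })
    return wrapped

# In the template you can then test e.g. {{ entry.prev.hidden }} or {{ entry.next.hidden }}.
```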
0
0
0
0
2012-04-09T19:32:00.000
4
0.049958
false
10,078,683
0
0
1
1
I am rather new to django templates and have an impression that I have not understood some basics. I have a list of elements and I need to render an element of the list based on conditions against the the previous and the next elements (in case the following or the previous elements are hidden, I need to mark the current element as border element). How can I reference the previous and the following elements within a for loop in Django templates?
Getting and serializing the state of dynamically created python instances to a relational model
10,094,298
1
2
136
0
python,metaprogramming,pickle
Just use an ORM. This is what they are for. What you are proposing to do is create your own half-assed ORM on your own time. Save your time for your own code that does things, and use the effort other people put for free into solving this problem for you. Note that all class creation in python is "dynamic" - this is not an issue for, well, anything at all. In fact, if you are assembling classes programmatically, it is probably slightly easier with an ORM, because they provide reifications of fields. In the worst case, if you really do need to store your objects in a fake nosql-type schema, you will still only have to write your own backend driver if you use an existing ORM, rather than coding the whole stack yourself. (As it happens, you're not the first person to face this - solutions exist. Google "python orm store dynamically created models" and "sqlalchemy store dynamically created models".) Candidates include: Django ORM, SQLAlchemy, and some others you can find by googling "Python ORM".
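A rough sketch of how classes assembled at runtime can still be ordinary Django models; this only illustrates the type() technique, and the module path and fields are made up:

```python
from django.db import models

def make_model(name, fields):
    # __module__ determines which app the model is attached to (hypothetical path).
    attrs = {"__module__": "myapp.models"}
    attrs.update(fields)
    return type(name, (models.Model,), attrs)

# A model assembled at runtime from developer-supplied attributes.
Measurement = make_model("Measurement", {
    "label": models.CharField(max_length=100),
    "value": models.FloatField(),
})
```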
0
0
0
1
2012-04-10T18:18:00.000
1
0.197375
false
10,094,217
0
0
1
1
I'm developing a framework of sorts. I'm providing a base class, that will be subclassed by other developers to add behavior to the system. The instances of those classes will have attributes that my framework doesn't necessarily expect, except by inspecting those instances' __dict__. To make things even more interesting, some of those classes can be created dynamically, at any time. I'd like some things to be handled by the framework, namely, I will need to persist those instances, display their attribute values to the user, and let her search/filter instances using those values. I have to use a relational database. I know there are some decent python OO database out there, but unfortunately they're not an option in this case. I'm not looking for a full-blown ORM too... and it may not even be an option, given that some of the classes can be created dynamically. So, my question is, what state of a python instance do I need to serialize to ensure that I can deserialize it later on? Is it enough to look at __dict__, or are there other private attributes that I should be using? Pickling the instances is not enough, because I'll need to unpickle them to search/filter the attribute values, and I'm afraid it's too much data to do it in-memory (instead of letting the database do it).
Securing data in the google app engine datastore
10,097,467
3
17
3,792
0
python,security,google-app-engine,rsa,sha
You can increase your hashing algorithm security by using HMAC, a secret key, and a unique salt per entry (I know people will disagree with me on this but it's my belief from my research that it helps avoid certain attacks). You can also use bcrypt or scrypt to hash which will make reversing the hash an extremely time consuming process (but you'll also have to factor this in as time it takes your app to compute the hash). By disabling code downloads and keeping your secret key protected, I can't imagine how someone can get a hold of it. Just make sure your code is kept protected under similar safe guards or that you remove the secret key from your code during development and only pull it out to deploy. I assume you will keep your secret key in your code (I've heard many people say to keep it in memory to be ultra secure but given the nature of AppEngine and instances, this isn't feasible). Update: Be sure to enable 2-factor authentication for all Google accounts that have admin rights to your app. Google offers this so not sure if your restriction for enabling this was imposed by an outside force or not.
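A minimal sketch of the HMAC-plus-per-entry-salt idea using the Python 2 standard library; the key handling is purely illustrative, and as discussed the real key must live outside source control:

```python
import hashlib
import hmac
import os

SECRET_KEY = "loaded-from-a-file-outside-source-control"   # placeholder value

def hash_pii(value):
    """Return (hex salt, hex digest) for one piece of PII (Python 2 hex encoding)."""
    salt = os.urandom(16)
    digest = hmac.new(SECRET_KEY, salt + value.encode("utf-8"), hashlib.sha256).hexdigest()
    return salt.encode("hex"), digest   # store both alongside the record
```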
0
1
0
0
2012-04-10T20:50:00.000
3
0.197375
false
10,096,268
0
0
1
2
Our google app engine app stores a fair amount of personally identifying information (email, ssn, etc) to identify users. I'm looking for advice as to how to secure that data. My current strategy Store the sensitive data in two forms: Hashed - using SHA-2 and a salt Encrypted - using public/private key RSA When we need to do look ups: Do look-ups on the hashed data (hash the PII in a query, compare it to the hashed PII in the datastore). If we ever need to re-hash the data or otherwise deal with it in a raw form: Decrypt the encrypted version with our private key. Never store it in raw form, just process it then re-hash & re-encrypt it. My concerns Keeping our hash salt secret If an attacker gets ahold of the data in the datastore, as well as our hash salt, I'm worried they could brute force the sensitive data. Some of it (like SSN, a 9-digit number) does not have a big key space, so even with a modern hash algorithm I believe it could be done if the attacker knew the salt. My current idea is to keep the salt out of source control and in it's own file. That file gets loaded on to GAE during deployment and the app reads the file when it needs to hash incoming data. In between deployments the salt file lives on a USB key protected by an angry bear (or a safe deposit box). With the salt only living in two places The USB key Deployed to google apps and with code download permanently disabled, I can't think of a way for someone to get ahold of the salt without stealing that USB key. Am I missing something? Keeping our private RSA key secret Less worried about this. It will be rare that we'll need to decrypt the encrypted version (only if we change the hash algorithm or data format). The private key never has to touch the GAE server, we can pull down the encrypted data, decrypt it locally, process it, and re-upload the encrypted / hashed versions. We can keep our RSA private key on a USB stick guarded by a bear AND a tiger, and only bring it out when we need it. I realize this question isn't exactly google apps specific, but I think GAE makes the situation somewhat unique. If I had total control, I'd do things like lock down deployment access and access to the datastore viewer with two-factor authentication, but those options aren't available at the moment (Having a GAE specific password is good, but I like having RSA tokens involved). I'm also neither a GAE expert nor a security expert, so if there's a hole I'm missing or something I'm not thinking of specific to the platform, I would love to hear it.
Securing data in the google app engine datastore
10,098,781
11
17
3,792
0
python,security,google-app-engine,rsa,sha
When deciding on a security architecture, the first thing in your mind should always be threat models. Who are your potential attackers, what are their capabilities, and how can you defend against them? Without a clear idea of your threat model, you've got no way to assess whether or not your proposed security measures are sufficient, or even if they're necessary. From your text, I'm guessing you're seeking to protect against some subset of the following: An attacker who compromises your datastore data, but not your application code. An attacker who obtains access to credentials to access the admin console of your app and can deploy new code. For the former, encrypting or hashing your datastore data is likely sufficient (but see the caveats later in this answer). Protecting against the latter is tougher, but as long as your admin users can't execute arbitrary code without deploying a new app version, storing your keys in a module that's not checked in to source control, as you suggest, ought to work just fine, since even with admin access, they can't recover the keys, nor can they deploy a new version that reveals the keys to them. Make sure to disable downloading of source, obviously. You rightly note some concerns about hashing of data with a limited amount of entropy - and you're right to be concerned. To some degree, salts can help with this by preventing precomputation attacks, and key stretching, such as that employed in PBKDF2, scrypt, and bcrypt, can make your attacker's life harder by increasing the amount of work they have to do. However, with something like SSN, your keyspace is simply so small that no amount of key stretching is going to help - if you hash the data, and the attacker gets the hash, they will be able to determine the original SSN. In such situations, your only viable approach is to encrypt the data with a secret key. Now your attacker is forced to brute-force the key in order to get the data, a challenge that is orders of magnitude harder. In short, my recommendation would be to encrypt your data using a standard (private key) cipher, with the key stored in a module not in source control. Using hashing instead will only weaken your data, while using public key cryptography doesn't provide appreciable security against any plausible threat model that you don't already have by using a standard cipher. Of course, the number one way to protect your users' data is to not store it in the first place, if you can. :)
0
1
0
0
2012-04-10T20:50:00.000
3
1.2
true
10,096,268
0
0
1
2
Our google app engine app stores a fair amount of personally identifying information (email, ssn, etc) to identify users. I'm looking for advice as to how to secure that data. My current strategy Store the sensitive data in two forms: Hashed - using SHA-2 and a salt Encrypted - using public/private key RSA When we need to do look ups: Do look-ups on the hashed data (hash the PII in a query, compare it to the hashed PII in the datastore). If we ever need to re-hash the data or otherwise deal with it in a raw form: Decrypt the encrypted version with our private key. Never store it in raw form, just process it then re-hash & re-encrypt it. My concerns Keeping our hash salt secret If an attacker gets ahold of the data in the datastore, as well as our hash salt, I'm worried they could brute force the sensitive data. Some of it (like SSN, a 9-digit number) does not have a big key space, so even with a modern hash algorithm I believe it could be done if the attacker knew the salt. My current idea is to keep the salt out of source control and in it's own file. That file gets loaded on to GAE during deployment and the app reads the file when it needs to hash incoming data. In between deployments the salt file lives on a USB key protected by an angry bear (or a safe deposit box). With the salt only living in two places The USB key Deployed to google apps and with code download permanently disabled, I can't think of a way for someone to get ahold of the salt without stealing that USB key. Am I missing something? Keeping our private RSA key secret Less worried about this. It will be rare that we'll need to decrypt the encrypted version (only if we change the hash algorithm or data format). The private key never has to touch the GAE server, we can pull down the encrypted data, decrypt it locally, process it, and re-upload the encrypted / hashed versions. We can keep our RSA private key on a USB stick guarded by a bear AND a tiger, and only bring it out when we need it. I realize this question isn't exactly google apps specific, but I think GAE makes the situation somewhat unique. If I had total control, I'd do things like lock down deployment access and access to the datastore viewer with two-factor authentication, but those options aren't available at the moment (Having a GAE specific password is good, but I like having RSA tokens involved). I'm also neither a GAE expert nor a security expert, so if there's a hole I'm missing or something I'm not thinking of specific to the platform, I would love to hear it.
Call and receive output from Python script in Java?
10,097,783
4
64
184,949
0
java,python
I've looked for different libraries like Jepp or Jython, but most seem to be very out of date. Jython is not "a library"; it's an implementation of the Python language on top of the Java Virtual Machine. It is definitely not out of date; the most recent release was Feb. 24 of this year. It implements Python 2.5, which means you will be missing a couple of more recent features, but it is honestly not much different from 2.7. Note: the actual communication between Java and Python is not a requirement of the aforementioned assignment, so this isn't doing my homework for me. This is, however, the only way I can think of to easily perform what needs to be done. This seems extremely unlikely for a school assignment. Please tell us more about what you're really trying to do. Usually, school assignments specify exactly what languages you'll be using for what, and I've never heard of one that involved more than one language at all. If it did, they'd tell you if you needed to set up this kind of communication, and how they intended you to do it.
0
0
0
0
2012-04-10T22:40:00.000
10
0.07983
false
10,097,491
0
0
1
1
What's the easiest way to execute a Python script from Java, and receive the output of that script? I've looked for different libraries like Jepp or Jython, but most appear out of date. Another problem with the libraries is that I need to be able to easily include a library with the source code (though I don't need to source for the library itself) if I use a library. Because of this, would the easiest/most effective way be to simply do something like call the script with runtime.exec, and then somehow capture printed output? Or, even though it would be very painful for me, I could also just have the Python script output to a temporary text file, then read the file in Java. Note: the actual communication between Java and Python is not a requirement of the problem I am trying to solve. This is, however, the only way I can think of to easily perform what needs to be done.
How do I navigate a website through software?
10,099,151
2
2
343
0
java,javascript,python,html,navigation
You can use a tool like Selenium to emulate a user clicking things in a web browser (I believe it actually "drives" a real instance of whatever browser you choose.) Selenium has a domain-specific language for specifying what actions you want to perform, and Python bindings for controlling it programmatically. I haven't actually used it, so I can't say much more about it, but you should go check it out.
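A small example of driving a JavaScript-heavy page with the Selenium Python bindings (selenium 2.x era API; the URL, script call and link text are placeholders):

```python
from selenium import webdriver

driver = webdriver.Firefox()                       # starts a real browser instance
driver.get("http://example.com/app")               # hypothetical page
driver.execute_script("showNextPage();")           # hypothetical JS function on the page
link = driver.find_element_by_link_text("Details")
link.click()
print(driver.page_source[:200])                    # inspect the resulting DOM
driver.quit()
```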
0
0
1
0
2012-04-11T02:09:00.000
2
0.197375
false
10,098,963
0
0
1
1
I need to navigate though a website that is written mostly in Javascript. There are no hard links at all, as the page is simply modified through the script. I can do what I need to using Javascript injections one after another, but chrome starts searching for my input instead of injecting it after a certain string length. I've tried to use frames to do this in HTML, but chrome won't let me use Javascript inside the frame since the source is from a different domain. Is there a good way that I can do this? I've looked into using Java or Python, but I don't see anything that lets you work with Javascript. EDIT: Thanks for telling me about different software, but I don't want to use other third-party software. I would really like to know how to execute Javascript injections in a systematic manner from a HTML page. I can do it from the browser, so why can't I do it from an HTML document?
simple slider captcha implementation for google app engine-python
10,108,518
0
0
826
0
python,google-app-engine,captcha
Qaptcha seems to use cookies. To check the cookie on App Engine, use pseudo code like this: if request.cookies.get('iQaptcha') is None:
0
0
0
0
2012-04-11T09:14:00.000
2
0
false
10,103,053
0
0
1
1
i have created a simple web app which contains recaptcha authentication. however, i feel that it is too difficult for my userbase. i'd like a simple and decently secure solution that i can use with GAE-py without much effort, which uses something like a slider/drag-drop approach. something like Qaptcha. Any recommendations?
Apply decorator to context processor
10,105,994
3
0
280
0
python,django,facebook,fandjango
There's no need to wrap the context processor. If you've wrapped the view in the first place, then the request will already be annotated with the facebook.user attribute. Since the request is passed to the context processor anyway, you have access to that attribute. You should probably do a quick check - if hasattr(request, 'facebook') - within the context processor, just to make sure it's being called from a wrapped view.
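A minimal context processor sketch along those lines; the processor name and settings path are assumptions:

```python
def facebook_user(request):
    """Expose request.facebook.user to templates as fb_user, if the view was wrapped."""
    if hasattr(request, "facebook") and request.facebook:
        return {"fb_user": request.facebook.user}
    return {}

# Then add "myapp.context_processors.facebook_user" (hypothetical path)
# to TEMPLATE_CONTEXT_PROCESSORS in settings.py.
```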
0
0
0
0
2012-04-11T12:09:00.000
1
1.2
true
10,105,610
0
0
1
1
I'm using Fandjango for a Django Facebook Canvas app. To use fandjango, you need to wrap all view functions with @facebook_authorization_required, which makes sure you're authorized, then gives you the variable request.facebook.user. What I want is to make a context processor which defines a few more variables based on this, i.e., I want all my templates to be able to use fb_user as a shortcut for request.facebook.user. The problem is, I don't know how to use the decorator on the context processor. Any ideas how I can do this?
Google App Engine & Google Storage
10,115,635
2
2
644
0
python,google-app-engine,google-cloud-storage
Since you created the API console project with an Apps account (one @yourdomain.com), the project is automatically treated as an Apps project, and only users from your domain can be added to it. To avoid this, create a new project using a @gmail.com account, and then add all the developers you want to have access to it. You can then remove the @gmail.com account.
0
1
0
0
2012-04-11T13:41:00.000
2
1.2
true
10,107,136
0
0
1
1
I try to enable Cloud Storage for my GAE app. I read in the docs that: Add the service account as a project editor to the Google APIs Console project that the bucket belongs to. For information about permissions in Cloud Storage, see Scopes and Permissions on the Cloud Storage documentation. However when I try to the service account to Team Members at the API Console I get the following message: User *@*.gserviceaccount.com may not be added to Project "**". Only members from domain *.com may be added. Any ideas?
Django-piston takes huge response time in read
10,225,848
1
1
149
0
python,django,python-2.7,django-piston
Maybe you have defined the fields parameter to fetch many related objects (objects with foreign keys, one-to-one keys and many-to-many relations to the object being fetched). This will slow down your response. Can you post your code?
0
0
0
0
2012-04-12T11:50:00.000
1
0.197375
false
10,122,992
0
0
1
1
When returning result from the read method, it takes a huge amount of time to generate/send response (for like 30,000 records with 6 columns, it takes around 14 seconds). Is this fine and it normally takes this much time? If this ins't fine, what can I do to reduce the time? What/Where could I refer to? Any help?
Debugger in python for Google App Engine and Django
10,139,804
1
2
337
0
django,google-app-engine,python-2.7
After a week of racking my brain, I finally figured out the problem. The gaesessions code was the culprit. We put DEFAULT_LIFETIME = datetime.timedelta(hours=1) and originally it was DEFAULT_LIFETIME = datetime.timedelta(days=7). Not sure why running it through any debugger such as wing or pycharm would prevent the browser from getting a session. The interesting thing is the code change with hours=1 works fine on linux with wing debugger. Very Strange!
0
1
0
0
2012-04-12T12:53:00.000
2
0.099668
false
10,123,958
0
0
1
1
I am having a problem that has baffled me for over a week. I have a project that is written in python with Django on Google App Engine. The project has a login page and when I run the application in Google App Engine or from the command line using dev_server.py c:\project, it works fine. When I try to run the application through a debugger like Wing or Pycharm, I cannot get past the login page. After trying to login, it takes me back to the login screen again. When I look at the logs, it shows a 302 (redirect) in the debugger but normally it shows a 200 (OK). Could someone explain why this would be happening? Thanks -Dimitry
How to set the default libraries when doing unit tests under Python 2.7
21,678,252
0
1
111
0
unit-testing,google-app-engine,python-2.7
app.yaml configuration is not applied when doing unit tests with a webtest app and NoseGAE. use_library does not work either. The right solution for this case is to provide the proper Python path to the preferred lib version, e.g. PYTHONPATH=../google_appengine/lib/django-1.5 when running nosetests.
0
1
0
1
2012-04-12T14:38:00.000
2
0
false
10,125,860
0
0
1
1
I'm in the process of migrating my Google App Engine solution from Python 2.5 to 2.7. The application migration was relatively easy, but I'm struggling with the unit tests. In the 2.5 version I was using the use_library function to set the Django version to 1.2, but this isn't supported anymore on 2.7. Now I set the default version in the app.yaml. When I now run my unit tests the default Django version becomes 0.96 and I can't manage to set 1.2 as the default version. Does anyone know how I can set the default libraries for the unit tests, so they match the settings in the app.yaml?
Crawler processes dying unexpectedly
10,136,943
0
2
493
0
python,scrapy,scrapyd
I think I had a similar situation. The reason the processes were dying was that the spiders were generating an exception, making the process stop. To find out the exception, look at the log files somewhere in the .scrapy folder. For each started crawler process scrapy creates a log file with the job id in its name.
0
0
0
0
2012-04-12T19:25:00.000
1
0
false
10,130,367
0
0
1
1
I am facing a problem with crawler processes dying unexpectedly. I am using scrapy 0.14, and the problem existed in 0.12 as well. The scrapyd log shows entries like: Process died: exitstatus=None. The spider logs don't show spider closed information, as also reflected by my database status. Has anybody else faced a similar situation? How can I trace the reason for these processes vanishing? Any ideas or suggestions?
Is it bad practice to have independent Django applications to handle simple one-file components?
10,133,338
0
0
105
0
python,django
I've been doing Django projects for 4 years, and all I've got out of all those projects is a few context processors. "If it doesn't do any project-specific operations, I don't see it fitting on any of my project apps, but I also find it to be too small to have an application on its own. Is it?" Just look at it as always project-specific until you need it in another project. So my answer is: write what you want and how you want. If you need the stuff somewhere else, then you'll separate it from the project. Don't optimize prematurely.
0
0
0
0
2012-04-12T21:39:00.000
1
0
false
10,132,266
0
0
1
1
I've been reading about good practices regarding Django project management. As I understand, it is good to: Split the project into multiple small applications with specific responsibilities. Always code thinking in redistributable components. The second point has become quite important to me since I usually work on more than one project. So whenever I can, I modularize my components into installable packages which I can later reuse. The question is... to what extent is this a good practice? How should I handle very simple components which are also highly reusable by other applications? An example would be a simple reusable templatetag, which may be 40~60 lines of code + tests. If it doesn't do any project-specific operations, I don't see it fitting on any of my project apps, but I also find it to be too small to have an application on its own. Is it?
ZeroMQ selective pub/sub pattern?
14,329,435
1
4
956
0
python,zeromq
What I see as the only possibility is to use the DEALER-ROUTER combination. DEALER at the frontend, ROUTER at the backend. Every frontend server shall contain a DEALER socket for every backend server (for broadcast) and one DEALER socket on top connected to all the backend servers at once for the round-robin thing. Now let me explain why. You can't really use PUB-SUB in such a critical case, because that pattern can very easily drop messages silently, it does not queue. So in fact the message posted to PUB can arrive to any subset of SUB since it's (dis)connecting in the background. For this reason you need to simulate broadcast by looping over DEALER sockets assigned to all the background servers. It will queue messages if the backend part is not connected, but beware of the HWM. The only final solution is to use heartbeat to know when a backend is dead and destroy the socket assigned to it. A ROUTER socket at the background is a logical solution since you can asynchronously accept any number of requests and since it's a ROUTER socket it is super easy to send the response back to the frontend that requested the task. By having a single ROUTER in the background servers you can make it in a way that they are not even aware of the fact that there is a broadcast happening, they see everything as a direct request to them. Broadcasting is purely a frontend thing. The only issue with this solution might be that if your backend server is not fast enough, all the frontend servers may fill it up so that it reaches the HWM and starts dropping the packages. You can prevent this by having more threads/processes processing the messages from the ROUTER socket. zmq_proxy() is a useful function for this stuff. Hope this helps ;-)
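A rough pyzmq sketch of the frontend side of that DEALER/ROUTER layout; the endpoints are placeholders, messages are assumed to be byte strings, and heartbeating/HWM handling is omitted:

```python
import zmq

ctx = zmq.Context()
backends = ["tcp://backend-1:5555", "tcp://backend-2:5555"]   # hypothetical endpoints

# One DEALER per backend, used to emulate broadcast by looping over them.
per_backend = []
for endpoint in backends:
    s = ctx.socket(zmq.DEALER)
    s.connect(endpoint)
    per_backend.append(s)

# One DEALER connected to every backend: its outgoing messages are round-robined.
round_robin = ctx.socket(zmq.DEALER)
for endpoint in backends:
    round_robin.connect(endpoint)

def send_one(msg):
    round_robin.send(msg)          # goes to a single backend

def send_all(msg):
    for s in per_backend:          # explicit loop = broadcast
        s.send(msg)
```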
0
1
0
0
2012-04-13T15:51:00.000
2
0.099668
false
10,144,158
0
0
1
1
I'm trying to design a ZeroMQ architecture for N front-end servers and M back-end workers, where front-end servers send tasks to back-end ones. Front-end servers do have information about back-end ones, but back-end ones do not know about the front-end. I have two types of tasks: one type should use round robin and go to just one back-end server, while the other type should be broadcast to all back-end servers. I don't want to have a central broker, as it would be a single point of failure. For the first type of task the request/response pattern seems to be the right one, while for the second it would be the publisher/subscriber pattern. But what about a pattern combining the two? Is there any pattern that would allow me to select at send time whether I want to send a message to all back-end servers or just one random one? The solution I've come up with is to just use publisher/subscriber and prepend messages with a back-end server ID, or some magic value if the message is addressed to all. However, this would create a lot of unnecessary traffic. Is there a cleaner and more efficient way to do it?
Trouble posting un-checked checkbox value in django
10,147,278
0
0
1,133
0
python,django,checkbox,web
Your browser is what posts the value as 'on'. This is the normal behavior for checkbox inputs without a value attribute set. If it is always posting 'on' even when the checkbox isn't checked, then perhaps something on the browser side is setting it.
0
0
0
0
2012-04-13T18:11:00.000
1
0
false
10,146,086
0
0
1
1
I am having trouble posting the state of a checkbox in a Django form (Django v1.2). Here's the field in my model: subscribe = models.BooleanField(default=False, verbose_name="Subscribe") In the relevant template file: {{ form.subscribe }} This renders the checkbox as un-checked initially. But when I post the form (without touching anything else), Django sends u'subscribe': [u'on'] in request.POST. That is, the response always contains u'subscribe': [u'on'] irrespective of whether the checkbox is checked or not. When the checkbox is not checked, the <input> tag in the template is rendered as <input type="checkbox" name="subscribe" id="id_subscribe" /> And, when the checkbox is checked, it is rendered as <input type="checkbox" name="subscribe" id="id_subscribe" checked="checked" /> Am I missing anything here?
Getting a piece of information from development GAE server to local filesystem
10,152,181
3
0
66
0
python,google-app-engine
How about writing the XML data to the blobstore and then write a handler that uses send_blob to download to your local file system? You can use the files API to write to the blobstore from you application.
0
1
1
0
2012-04-14T08:01:00.000
1
1.2
true
10,152,055
0
0
1
1
I have an application I am developing on top of GAE, using Python APIs. I am using the local development server right now. The application involves parsing large block of XML data received from outside service. So the question is - is there an easy way to get this XML data exported out of the GAE application - e.g., in regular app I would just write it to a temp file, but in GAE app I can not do that. So what could I do instead? I can not easily run all the code that produces the service call outside of GAE since it uses some GAE functions to create the call, but it would be much easier if I could take the XML result out and develop/test the parser part outside and then put it back to GAE app. I tried to log it using logging and then extract it from the console, but when XML is getting big it doesn't work well. I know there's bulk data import/export APIs but seems to be an overkill for extracting just this one piece of information to write it to data store and then export the whole store. So how to do it in the best way?
Flask framework for a small and medium size projects
10,152,732
3
3
1,818
0
python,flask
Flask is great for all kind of projects. As long as you don't need Django's ORM (and all batteries like admin's pages), Flask is the right choice.
0
0
0
0
2012-04-14T10:03:00.000
3
0.197375
false
10,152,721
0
0
1
2
I'm about to try the Flask framework and see if it fits my needs. I worked with Django, it is cool, but I want to try Flask. I have one small and maybe one medium-sized project and wanted to ask if Flask is the right framework to use for those. Do you guys have experience running medium-sized (or even large-scale) projects with Flask? It would be nice to hear facts and not just things like "I like Django because it is cool" or "I like Flask just because it is small" :) Anyway, I will try to play with it, just out of curiosity.
Flask framework for a small and medium size projects
70,235,186
0
3
1,818
0
python,flask
Normally I use Flask, but for rapid development I once wanted to build a project with Django; I didn't realize what a burden it would be. When I don't want to use its default authentication mechanism and instead build things the way I have in mind, it is hard to get rid of the built-in structure. On the other hand, using Flask is far more flexible and maintainable. As I said, I wanted Django for faster development, but it took a while to accept its stiffness. So it is not my personal favorite. I just wanted to add this even though it is an old post.
0
0
0
0
2012-04-14T10:03:00.000
3
0
false
10,152,721
0
0
1
2
I'm about to try the Flask framework and see if it fits my needs. I worked with Django, it is cool, but I want to try Flask. I have one small and maybe one medium-sized project and wanted to ask if Flask is the right framework to use for those. Do you guys have experience running medium-sized (or even large-scale) projects with Flask? It would be nice to hear facts and not just things like "I like Django because it is cool" or "I like Flask just because it is small" :) Anyway, I will try to play with it, just out of curiosity.
Nginx: Speeding up Image Upload?
10,165,928
1
1
673
0
python,image,nginx
Yes, set the proxy_max_temp_file_size to zero, or some other reasonably small value. Another option (which might be a better choice) is to set the proxy_temp_path to faster storage so that nginx can do a slightly better job of insulating the application from buggy or malicious hosts.
0
1
0
1
2012-04-14T23:05:00.000
1
0.197375
false
10,158,096
0
0
1
1
My python application sits behind an Nginx instance. When I upload an image, which is one of the purpose of my app, I notice that nginx first saves the image in filesystem (used 'watch ls -l /tmp') and then hands it over to the app. Can I configure Nginx to work in-memory with image POST? My intent is to avoid touching the slow filesystem (the server runs on an embedded device).
django: add form field to generated form from another table
10,165,663
0
2
1,987
0
python,django,forms,django-forms
The short answer is yes. You would have to be careful with your template and views. Can you please share your code... view, django models and template. Are you using model forms? Why are you keeping them as separate models (tables)? My suggestion is if you don't need to keep the models separate, edit the Product model to include Pictures. Then your form will suit your needs nicely. Hope this helps. If not, share code.
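A rough sketch of one way to do what the question asks while keeping the tables separate: a ModelForm for Product with a hand-added ImageField, saved into the Pictures table in the view. Model, field and URL names here are assumptions based on the question, not the original poster's code.

```python
# Rough sketch assuming models named Product and Picture, where Picture has a
# ForeignKey 'product' and an ImageField 'picture'; adjust to your schema.
from django import forms
from django.shortcuts import render, redirect
from myapp.models import Product, Picture  # hypothetical app/model names

class ProductForm(forms.ModelForm):
    # Extra field that does not exist on the Product model itself.
    picture = forms.ImageField(required=False)

    class Meta:
        model = Product

def create_product(request):
    form = ProductForm(request.POST or None, request.FILES or None)
    if form.is_valid():
        product = form.save()
        if form.cleaned_data["picture"]:
            Picture.objects.create(product=product,
                                   picture=form.cleaned_data["picture"])
        return redirect("product_list")  # hypothetical URL name
    return render(request, "product_form.html", {"form": form})
```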
0
0
0
0
2012-04-15T18:56:00.000
2
0
false
10,165,046
0
0
1
1
I have a Products table (size, color, etc.) and another table, Pictures (product_id, picture). I have a form generated from the Products table, but I also need a field there for adding a picture to that product. Is it possible to add a field for a picture to the generated product form? Thanks in advance.
Static folders structure in Django 1.4?
10,169,482
1
7
3,453
0
python,django,convention,directory-structure,django-1.4
STATIC_ROOT is just a file path where the staticfiles contrib app will collect and deposit all static files. It is a location to collect items, that's all. The key thing is that this location is temporary storage and is used mainly when packaging your app for deployment. The staticfiles app searches for items to collect from any directory called static in any apps that are listed in INSTALLED_APPS and in addition any extra file path locations listed in STATICFILES_DIRS. For my projects I create a deploy directory in which I create a www folder that I use for static files, and various other files used only when deploying. This directory is at the top level of the project. You can point the variable to any location to which your user has write permissions, it doesn't need to be in the project directory.
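A minimal settings.py sketch of the layout described above; the paths (a top-level deploy/www directory and a project-wide static directory) are examples following the answer, not required values.

```python
# settings.py sketch; paths are examples, adjust to your project layout.
import os

PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# Where `manage.py collectstatic` will collect everything for deployment.
STATIC_ROOT = os.path.join(PROJECT_ROOT, 'deploy', 'www', 'static')

# Extra project-wide locations searched in addition to each app's static/ dir.
STATICFILES_DIRS = (
    os.path.join(PROJECT_ROOT, 'static'),
)

STATIC_URL = '/static/'
```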
0
0
0
0
2012-04-16T04:34:00.000
3
0.066568
false
10,168,761
0
0
1
1
This is the new project structure (from the Django 1.4 release notes):

myproject
|-- manage.py
|-- myproject
|   |-- __init__.py
|   |-- settings.py
|   |-- urls.py
|   `-- wsgi.py
`-- polls
    |-- __init__.py
    |-- models.py
    |-- tests.py
    `-- views.py

What I am not sure about is whether I should point STATIC_ROOT to myproject/myproject/static/ (together with settings.py, urls.py...) OR the top-level directory myproject/static (next to myproject, myapp1, myapp2)?
Full proto too large to save, cleared variables
10,192,419
9
9
1,244
0
python,django,google-app-engine
Are you using appstats? It looks like this can happen when appstats is recording state about your app, especially if you're storing lots of data on the stack. It isn't harmful, but you won't be able to see everything when inspecting calls in appstats.
0
1
0
0
2012-04-16T06:26:00.000
1
1
false
10,169,574
0
0
1
1
I got this error while rendering Google App Engine code. Does anybody have knowledge about this error?
Django not using correct Python instance from within virtualenv
10,178,380
1
0
332
0
python,django,virtualenv
Are you running ./manage.py shell or python manage.py shell? It can make a difference. Using the ./ version uses the shebang line for the interpreter and normally results in using the system-level interpreter. As you've seen yourself, running python uses the virtualenv's version, so python manage.py shell should as well.
0
0
0
0
2012-04-16T16:52:00.000
1
1.2
true
10,178,313
0
0
1
1
I have created a virtualenv for developing in Django, but Django is not using the correct instance of Python. Here's what I've found out: C:\Python27 is not in my path; if I run python from a command prompt it says it's not recognized. When I start up the virtualenv, run python and check sys.executable, it does point to the virtualenv's instance of Python, and sys.path is also pointing to the correct place. When I run manage.py shell from within the virtualenv and check sys.executable and sys.path, they are both pointing to the C:\python27 installation. Any ideas as to what's going on?
Is it possible to enable debug for specific hosts in django?
10,182,151
1
1
401
0
python,django,apache,debugging
I find it easiest to set up a whole extra subdomain for testing different versions of a Django site. It is probably bad form that Django doesn't give plain examples of how to do this, as it leads to people doing odd things. My setup is nginx/uWSGI emperor, so posting config file examples probably won't help you much.
0
0
0
0
2012-04-16T20:57:00.000
1
1.2
true
10,181,609
0
0
1
1
I have a production server which uses Apache / FastCGI / DJango to serve up my website. This works well and I have some cunning settings which mean that if a maintenance file exists, the world sees the maintenance message but my IP address can still work on the site. The detection of the maintenance file is done at the apache level, but is there a way I can set the DEBUG setting (normally configured through settings.py) so that debug is enabled for my IP address?
Django: setup.py versus copying directory
10,183,287
0
0
560
0
python,django,installation,setup.py,django-1.4
Try adding "C:\Python26\;C:\Python26\Scripts;" to your PATHenvironmental variable and then running django-admin.py startproject mysite.
0
0
0
0
2012-04-16T22:52:00.000
1
0
false
10,182,828
0
0
1
1
On Windows 7 I have Python 2.6.6 installed at C:\Python26. I wanted to install Django, so I: downloaded and untarred the files into C:\Python26\Django-1.4; ran python setup.py install; and verified it was installed by opening IDLE and typing import django. The next part is the problem... in the tutorial, it says to now run django-admin.py startproject mysite, but django-admin.py wasn't found, and while looking for it I discovered that there is a duplication in the directories C:\Python26\Django-1.4\build\lib\django and C:\Python26\Django-1.4\django. I didn't see anything in setup.cfg that would let me make sure that didn't happen or pick a different setup folder, etc., but in the file C:\Python26\Django-1.4\INSTALL it is stated that "AS AN ALTERNATIVE, you can just copy the entire "django" directory to python's site-packages directory". So for my question: besides avoiding this duplication of code in the Django directories, what else is the difference between using the setup.py install command and copying the directory? Are there other pros/cons?
Synchronize Memcache and Datastore on Google App Engine
10,188,925
0
3
1,816
0
python,google-app-engine,memcached,google-cloud-datastore
I think you could create tasks which persist the data. This has the advantage that, unlike memcache, tasks are persisted, so no chats would be lost. When a new chat comes in, create a task to save the chat data, and do the persist in the task handler. You could either configure the task queue to execute at 1 task per second (or slightly slower) and save each bit of chat data held in its task, or persist the incoming chats in a temporary table (in different entity groups) and periodically have tasks pull all unsaved chats from the temporary table, persist them to the chat entity, and then remove them from the temporary table.
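A sketch of the first option (one task per incoming chat message) using the App Engine task queue API. The handler URL, queue name and ChatMessage model are assumptions made for illustration.

```python
# Sketch of the "one task per incoming chat message" idea.
# URL, queue name and the ChatMessage model are illustrative assumptions.
from google.appengine.api import taskqueue
from google.appengine.ext import db

class ChatMessage(db.Model):
    room = db.StringProperty()
    text = db.TextProperty()
    created = db.DateTimeProperty(auto_now_add=True)

def on_incoming_chat(room, text):
    # Enqueue instead of writing directly; the queue's rate limits the writes.
    taskqueue.add(url='/tasks/persist_chat',
                  queue_name='chat-writes',
                  params={'room': room, 'text': text})

# Body of the /tasks/persist_chat handler (webapp handler plumbing omitted):
def persist_chat(request):
    ChatMessage(room=request.get('room'), text=request.get('text')).put()
```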
0
1
0
0
2012-04-17T03:20:00.000
5
0
false
10,184,591
0
0
1
2
I'm writing a chat application using Google App Engine. I would like chats to be logged. Unfortunately, the Google App Engine datastore only lets you write to it once per second. To get around this limitation, I was thinking of using a memcache to buffer writes. In order to ensure that no data is lost, I need to periodically push the data from the memcache into the data store. Is there any way to schedule jobs like this on Google App. Engine? Or am I going about this in entirely the wrong way? I'm using the Python version of the API, so a Python solution would be preferred, but I know Java well enough that I could translate a Java solution into Python.
Synchronize Memcache and Datastore on Google App Engine
10,191,468
0
3
1,816
0
python,google-app-engine,memcached,google-cloud-datastore
I think you would be fine using the chat session as the entity group and saving the chat messages into it. The once-per-second limit is not the reality: you can update/save at a higher rate; I do it all the time and I don't have any problem with it. Memcache is volatile and is the wrong choice for what you want to do. If you start encountering issues with the write rate, you can start setting up tasks to save the data.
0
1
0
0
2012-04-17T03:20:00.000
5
0
false
10,184,591
0
0
1
2
I'm writing a chat application using Google App Engine. I would like chats to be logged. Unfortunately, the Google App Engine datastore only lets you write to it once per second. To get around this limitation, I was thinking of using a memcache to buffer writes. In order to ensure that no data is lost, I need to periodically push the data from the memcache into the data store. Is there any way to schedule jobs like this on Google App. Engine? Or am I going about this in entirely the wrong way? I'm using the Python version of the API, so a Python solution would be preferred, but I know Java well enough that I could translate a Java solution into Python.
It's possible to validate/lint/bleach a piece of code given by analytics/tracking sites like Google Analytics or Piwik
10,196,121
1
0
69
0
python,django,validation,analytics,tracking
The easy way would be to grab the code from the tracking site and hard-code everything but the unique portion (usually a user ID number), offer the user a choice of approved trackers (with a radio button) and have them paste in their ID, then insert that value when you render the page. If I remember correctly, Blogger worked like this before they tied it in directly with Analytics via the Google account.
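A sketch of that approach in Python: accept only the tracker ID from the webmaster, validate it, and interpolate it into a hard-coded snippet at render time. The regex and the (abridged) Google Analytics snippet text are assumptions for illustration.

```python
# Sketch: accept only the tracker ID and interpolate it into a hard-coded
# snippet. The regex and the abridged GA snippet are illustrative assumptions.
import re

GA_ID_RE = re.compile(r'^UA-\d{4,10}-\d{1,4}$')

GA_SNIPPET = (
    "<script>var _gaq=_gaq||[];"
    "_gaq.push(['_setAccount','%(ga_id)s']);"
    "_gaq.push(['_trackPageview']);</script>"
)

def render_tracking(ga_id):
    if not GA_ID_RE.match(ga_id):
        raise ValueError("Not a valid Google Analytics ID")
    return GA_SNIPPET % {'ga_id': ga_id}
```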
0
0
0
0
2012-04-17T17:20:00.000
1
0.197375
false
10,195,955
0
0
1
1
I provide, in the administration interface of the site I'm building, an area for the webmaster to place tracking codes from external analytics tools. Essentially these codes must be included 'as-is', but my concern is that any typo could render the page useless, mess up the HTML, etc. Is it possible (to some extent) to clean up/validate these codes so that at least the HTML won't be corrupted? I'm using Python/Django, but I guess the Django part is somewhat irrelevant for this topic. Regards
Is apache2 reload for .conf changes only or is it allowable to be used when application code changes?
10,201,507
0
0
67
0
python,http,apache2,webserver,wsgi
The 'reload' and 'graceful' would have the same effect as far as reloading your web application. If you are seeing issues with imports like you describe, it is likely to be an issue in your application code with you having import order dependencies or import cycles. One sees this a lot with people using Django. Suggest you actually post an example of the error you are getting.
0
0
0
0
2012-04-17T20:08:00.000
2
0
false
10,198,359
0
0
1
1
When the code for my Python WSGI application changes, should I use apache2's reload or graceful restart feature? Currently we use reload, but have noticed that sometimes the application does not load properly and errors pertaining to missing modules are logged to the error files even though the modules have existed for a long time.
RESTful Web service or API for a Python program in WebFaction
10,332,534
1
1
597
0
python,web-services,api
Well, your question is a little bit generic, but here are a few pointers/tips: Webfaction allows you to install pretty much anything you want (you need to compile it / or ask the admins to install some CentOS package for you). They provide some default Apache server with mod_wsgi, so you can run web2py, Django or any other wsgi frameworks. Most popular Python web frameworks have available installers in Webfaction (web2py, django...), so I would recommend you to go with one of them. I would also install supervisord to keep your service running after some reboot/crash/problem. I would be glad to help you if you have any specific question...
0
0
1
0
2012-04-17T21:49:00.000
1
1.2
true
10,199,697
0
0
1
1
I have developed a few Python programs that I want to make available online. I am new to web services, and I am not sure what I need to do in order to create a service where somebody makes a request to a URL (for example), and the URL triggers a Python program that displays something in the user's browser, or a set of inputs is given to the program via the browser, and then Python does whatever it is supposed to do. I was playing with the Google App Engine, which runs fine with the tutorial, and was planning to use it because it looks easy, but the problem with GAE is that it does not work well (or does not work at all) with some libraries that I plan to use. I guess what I am trying to do is some sort of API using my WebFaction account. Can anybody point me in the right direction? What choices do I have in WebFaction? What are the easiest tools available? Thank you very much for your help in advance. Cheers
Restricting the number of task requests per App Engine instance
10,200,635
7
1
228
0
python,google-app-engine,memory-management,python-2.7,task-queue
There's not currently any way to advise the App Engine infrastructure about this. You could have your tasks return a non-200 status code if they shouldn't run now, in which case they'll be automatically retried (possibly on another instance), but that could lead to a lot of churn. Backends are probably your best option. If you set up dynamic backends, they'll only be spun up as required for task queue traffic. You can send tasks to a backend by specifying the URL of the backend as the 'target' argument. You can gain even more control over task execution by using pull queues. Then, you can spin up backends as you choose (or use push queue tasks, for that matter), and have the instances pull tasks off the pull queue in whatever manner suits.
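As a rough illustration of the 'target' argument and the pull-queue option mentioned above, here is a sketch using the App Engine taskqueue API. The backend name, queue names, URL and payload are assumptions for illustration.

```python
# Sketch: route heavy tasks to a dynamic backend so frontend instances
# don't fill up. Backend name, queue names and URL are assumptions.
from google.appengine.api import taskqueue

# Push-queue variant: send the task to a named backend (from backends.yaml).
taskqueue.add(url='/work/heavy_task',
              target='worker-backend',
              params={'item_id': '123'})

# Pull-queue variant: workers lease tasks at whatever pace suits them.
q = taskqueue.Queue('heavy-pull-queue')
q.add(taskqueue.Task(payload='item-123', method='PULL'))
tasks = q.lease_tasks(lease_seconds=600, max_tasks=5)
```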
0
1
0
0
2012-04-17T22:12:00.000
1
1
false
10,199,963
0
0
1
1
I have a Google App Engine app that periodically processes bursts of memory-intensive long-running tasks. I'm using the taskqueue API on the Python 2.7 runtime in thread-safe mode, so each of my instances handles multiple tasks concurrently. As a result, I frequently get these errors: Exceeded soft private memory limit with 137.496 MB after servicing 8 requests total. After handling this request, the process that handled this request was found to be using too much memory and was terminated. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may have a memory leak in your application. As far as I can tell, each instance takes on 8 tasks and eventually hits the soft memory limit. The tasks start off using very low amounts of memory but eventually grow to about 15-20MB. Is there any way to tell App Engine to assign no more than 5 requests to an instance? Or to tell App Engine that the task is expected to use 20MB of memory over 10 minutes and to adjust accordingly? I'd prefer not to use the backends APIs since I want the number of instances handling tasks to scale automatically, but if that's the only way, I'd like to know how to structure that.
using django based mod_wsgi and raw python based mod_python together on same apache
10,205,585
0
1
209
0
python,django,apache,mod-wsgi,mod-python
There is no preferred version of mod_python. It's deprecated. Don't use it.
0
0
0
0
2012-04-18T06:17:00.000
3
0
false
10,203,839
0
0
1
3
I am building a web application in Django 1.4 which I have to deploy on Apache using mod_wsgi. The problem is that there is already a raw Python web application running on it using mod_python. On studying through the internet, I found that it's possible to use both applications. My question is: what combination of versions (of course more recent versions are preferred) of Python, mod_python, mod_wsgi, Apache and Django are compatible? Thanks in advance
using django based mod_wsgi and raw python based mod_python together on same apache
10,204,865
1
1
209
0
python,django,apache,mod-wsgi,mod-python
Believe it or not, I have the exact same setup. The simplest way to handle it is to partition the applications under VirtualHosts. If you can do this, then it's all super easy. You just have a VirtualHost entry for each application. If you need to run them under HTTP/S, then you may run into problems. Apache can only have one VirtualHost for all HTTP/S sites on the same server. We are running the following versions on our main production machine: Apache/2.2.14 (Ubuntu) mod_python/3.3.1 Python/2.6.5 mod_ssl/2.2.14 OpenSSL/0.9.8k mod_wsgi/2.8
0
0
0
0
2012-04-18T06:17:00.000
3
1.2
true
10,203,839
0
0
1
3
I am building a web application in Django 1.4 which I have to deploy on Apache using mod_wsgi. The problem is that there is already a raw Python web application running on it using mod_python. On studying through the internet, I found that it's possible to use both applications. My question is: what combination of versions (of course more recent versions are preferred) of Python, mod_python, mod_wsgi, Apache and Django are compatible? Thanks in advance
using django based mod_wsgi and raw python based mod_python together on same apache
10,204,877
0
1
209
0
python,django,apache,mod-wsgi,mod-python
Django runs best, and is recommended to run, in production using mod_wsgi IF you are using Apache. uWSGI is better if you are using nginx (I find nginx far better than Apache personally). You can run it whatever way you want, but that's the best way. You can run mod_python, FastCGI or CGI processes at the same time as other mod_wsgi apps and use Apache as a reverse proxy (or sit nginx in front of Apache as a reverse proxy). You can divert traffic to the relevant apps this way.
0
0
0
0
2012-04-18T06:17:00.000
3
0
false
10,203,839
0
0
1
3
I am building a web application in Django 1.4 which I have to deploy on Apache using mod_wsgi. The problem is that there is already a raw Python web application running on it using mod_python. On studying through the internet, I found that it's possible to use both applications. My question is: what combination of versions (of course more recent versions are preferred) of Python, mod_python, mod_wsgi, Apache and Django are compatible? Thanks in advance
Google app engine database access on localhost
10,212,501
1
1
577
0
python,google-app-engine
Since you are just getting started I assume you don't care much about what is in your local datastore. Therefore, when starting your app, pass the --clear_datastore flag to dev_appserver.py. What is happening? As daemonfire300 said, you have conflicting application IDs here. The app you are trying to run has the ID "sample-app". Your datastore holds data for "template-builder". The easiest way to deal with it is clearing the datastore (as described above). If you indeed want to keep both sets of data, pass --default_partition=dev~sample-app to dev_appserver.py (or the other way around, depending on which app ID you want to use).
0
1
0
0
2012-04-18T07:05:00.000
1
1.2
true
10,204,521
0
0
1
1
I am new to Python and Google App Engine. I have installed an existing Python application on localhost, and it is running fine for the static pages. But, when I try to open a page which is fetching data, it is showing an error message: BadRequestError: app "dev~sample-app" cannot access app "dev~template-builder"'s data template-builder is the name of my online application. I think there is some problem with accessing the Google App Engine data on localhost. What should I do to get this to work?
maintaining user authentication if i have some web pages on mod_python and some on mod_wsgi
10,222,042
0
0
332
0
python,django,apache,mod-wsgi,mod-python
Conceptually: set a cookie from your raw Python web page, process it in a "welcome" view or custom middleware class in Django, and insert the corresponding record into the sessions DB. This is basically what hungnv suggests. The most ridiculous way to do this would be to figure out how Django deals with sessions and session cookies, insert the correct row into Django's session database from your raw Python app, and then custom-set the session cookie using Django's auth functions.
0
0
0
0
2012-04-18T10:00:00.000
3
0
false
10,207,087
0
0
1
1
I have a web application written in raw python and hosted on apache using mod_python. I am building another web application which is django based and will be hosted on same server using mod_wsgi. Now, the scenerio is such that user will login from the web page which is using mod_python and a link will send him to my application which will be using mod_wsgi. My question is how can I maintain session? I need the same authentication to work for my application. Thanks in advance.
OpenERP: insert Data code
10,208,766
0
2
794
1
python,postgresql,openerp
OpenERP uses PostgreSQL as its back-end. PostgreSQL can be managed with pgAdmin3 (a Postgres GUI); you can write SQL queries there and add/delete records from there. However, it is not advisable to insert/remove data directly in the database!
0
0
0
0
2012-04-18T11:10:00.000
3
0
false
10,208,147
0
0
1
2
I am new to OpenERP and I have installed OpenERP v6. I want to know how I can insert data into the database. Which files do I have to modify to do the job (the files for the SQL code)?
OpenERP: insert Data code
10,225,346
0
2
794
1
python,postgresql,openerp
Adding columns in the .py files of the modules you want to change will add columns to the database (visible in pgAdmin3), and defining classes will create tables. When the fields are displayed in the XML view and values are entered through the interface, those values get stored in the corresponding table in the database.
0
0
0
0
2012-04-18T11:10:00.000
3
0
false
10,208,147
0
0
1
2
I am new to OpenERP and I have installed OpenERP v6. I want to know how I can insert data into the database. Which files do I have to modify to do the job (the files for the SQL code)?
How to determine status code of request
10,212,363
1
0
111
0
python,google-app-engine,http-status-codes
I might be wrong, but there is no such thing as an incoming request status code. An application making requests gets, in the case of a redirect, a 302 from the initial request, checks the Location header and makes another request. A history of the incoming request, in the shape of something like "traceroute", just doesn't exist in HTTP.
0
0
1
0
2012-04-18T15:06:00.000
2
0.099668
false
10,212,293
0
0
1
1
How can I check the status code of a request? I want to check what kind of redirects were used to access a page. For response objects I would use *response.status_code*
django-social-auth profile builder
10,213,618
1
0
210
0
python,django,facebook,celery,django-signals
This really comes down to synchronous vs asynchronous. Django signals are synchronous: they block the response until they are completed. Celery tasks are asynchronous. Which is better will depend on whether the benefits of handling the profile building asynchronously outweigh the cost of maintaining the extra infrastructure necessary for Celery. It's basically impossible to answer this without a lot more specific information regarding your situation.
0
0
0
0
2012-04-18T15:32:00.000
1
0.197375
false
10,212,791
0
0
1
1
I recently started playing around with django-social-auth and am looking for some help from the community to figure out the best way to move forward with an idea. Once a user has registered you have access to his oauth token which allows you to pull certain data. In my case I want to build a nice little profile based on the users avatar, location and maybe some other information if it's available. Would the best way be to: build a custom task for celery and pull the information and build the profile? or, make use of signals to build the profile?
How does xpathselector affect the speed of scrapy crawl running?
10,217,884
1
0
410
0
python,web-crawler,scrapy
This has nothing to do with download speed. XPath //* selects the entire page. XPath //script/text() selects only text inside script elements. So of course the second one is faster, because there is less text to search with the re() call!
0
0
1
0
2012-04-18T20:47:00.000
3
0.066568
false
10,217,708
0
0
1
3
I am using the Scrapy crawler to crawl a website with over 100k pages. Speed is the big concern in this case. Today I noticed that hxs.select('//*').re('something') is way slower than hxs.select('//script/text()').re('something'). Can any expert explain to me why? As I understand it, the crawler has to download the entire page no matter what XPath selector I use, so the XPath should not affect the speed much at all. Thanks a lot for any tips.
How does xpathselector affect the speed of scrapy crawl running?
10,248,266
0
0
410
0
python,web-crawler,scrapy
XPath definitely has a role in crawler speed. The crawler downloads the page, but XPath then processes the HTML the crawler downloaded, so if the page is big, the XPath expression will take time to process the whole HTML.
0
0
1
0
2012-04-18T20:47:00.000
3
0
false
10,217,708
0
0
1
3
I am using the Scrapy crawler to crawl a website with over 100k pages. Speed is the big concern in this case. Today I noticed that hxs.select('//*').re('something') is way slower than hxs.select('//script/text()').re('something'). Can any expert explain to me why? As I understand it, the crawler has to download the entire page no matter what XPath selector I use, so the XPath should not affect the speed much at all. Thanks a lot for any tips.
How does xpathselector affect the speed of scrapy crawl running?
10,217,990
1
0
410
0
python,web-crawler,scrapy
I am afraid that you might need to look for 'something' in the entire document, so you probably should still use hxs.select('//*').re('something'). And about the speed question: the answer is that if you look for the word 'something' in a document which is 4k large, of course it will take longer than filtering the document for text() first and then looking for that word within that text.
0
0
1
0
2012-04-18T20:47:00.000
3
0.066568
false
10,217,708
0
0
1
3
I am using the Scrapy crawler to crawl a website with over 100k pages. Speed is the big concern in this case. Today I noticed that hxs.select('//*').re('something') is way slower than hxs.select('//script/text()').re('something'). Can any expert explain to me why? As I understand it, the crawler has to download the entire page no matter what XPath selector I use, so the XPath should not affect the speed much at all. Thanks a lot for any tips.
How to get number of visitors of a page on GAE?
10,221,347
1
2
1,919
0
python,google-app-engine,analytics,visitors
There is no way to tell when someone stops viewing a page unless you use Javascript to inform the server when that happens. Forums etc typically assume that someone has stopped viewing a page after n minutes of inactivity, and base their figures on that. For minimal resource use, I would suggest using memcache exclusively here. If the value gets evicted, the count will be incorrect, but the consequences of that are minimal, and other solutions will use a lot more resources.
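A memcache-only sketch of the approach described above: remember which viewers looked at an article in the last 5 minutes and report the count. The key naming and the choice of "viewer id" (user, session or IP) are assumptions; as the answer notes, the count is approximate and may be lost on eviction.

```python
# Memcache-only sketch: who viewed an article in the last 5 minutes.
# Key naming and the viewer id you use are assumptions; not atomic, so the
# count is approximate, which is fine for a "n people viewing" display.
import time
from google.appengine.api import memcache

WINDOW = 5 * 60  # seconds

def record_view(article_id, viewer_id):
    key = 'viewers:%s' % article_id
    viewers = memcache.get(key) or {}
    now = time.time()
    viewers[viewer_id] = now
    # Drop entries older than the window.
    viewers = {v: t for v, t in viewers.items() if now - t < WINDOW}
    memcache.set(key, viewers, time=WINDOW)
    return len(viewers)   # number of people viewing this article
```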
0
1
0
0
2012-04-18T21:02:00.000
3
0.066568
false
10,217,948
0
0
1
1
I need to get the number of unique visitors(say, for the last 5 minutes) that are currently looking at an article, so I can display that number, and sort the articles by most popular. ex. Similar to how most forums display 'There are n people viewing this thread' How can I achieve this on Google App Engine? I am using Python 2.7. Please try to explain in a simple way because I recently started learning programming and I am working on my first project. I don't have lots of experience. Thank you!
Selenium+python Reporting
10,218,792
1
7
24,481
0
python,selenium
My experience has been that any sufficiently useful test framework will end up needing a customized logging solution. You are going to end up wanting domain specific and context relevant information, and the pre-baked solutions never really fit the bill by virtue of being specifically designed to be generic and broadly applicable. If you are already using Python, I'd suggest looking in to the logging module and learning how to write Handlers and Formatters. It's actually pretty straight forward, and you will end up getting better results than trying to shoehorn the logging you need in to some selenium-centric module.
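A minimal example of the stdlib logging wiring the answer suggests learning (a Formatter attached to a FileHandler on a named logger). The log file name and the messages are illustrative assumptions.

```python
# Minimal example of wiring up the stdlib logging module for test runs.
# File name and messages are illustrative assumptions.
import logging

formatter = logging.Formatter(
    '%(asctime)s %(levelname)-8s [%(name)s] %(message)s')

handler = logging.FileHandler('selenium_run.log')
handler.setFormatter(formatter)

log = logging.getLogger('selenium.tests')
log.setLevel(logging.INFO)
log.addHandler(handler)

log.info('opening login page')
log.error('login button not found')
```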
0
0
1
1
2012-04-18T21:59:00.000
6
0.033321
false
10,218,679
0
0
1
1
I am doing some R&D on selenium+python. I wrote some test cases in python using selenium webdriver and unittest module. I want to know how can I create report of the test cases. Is there inbuilt solution available in selenium or I need to code to generate file. Or is there any other web testing framework with javascript support available in python which have reporting functionality. I am basically new to python as well as selenium. Just trying to explore.
djutils autodiscover importing command without top level parent, queue_consumer doesn't like it
10,219,986
0
0
35
0
python,django,queue
Looks like duplicate imports in my project were causing the problems.
0
1
0
0
2012-04-19T00:13:00.000
1
1.2
true
10,219,934
0
0
1
1
When djutils goes through autodiscover in the init, it imports my task "app.commands.task1" and states this as part of the output of the log. But when I run webserver and try to queue the command, the queue_consumer log indicates that it cannot find the key "project.app.commands.queuecmd_task1" QueueException: project.app.commands.queuecmd_task1 not found in CommandRegistry I assume that the fact that the string it is trying to find has "project" prepended is the reason it cannot find the task. Why would that be happening?
onetomany relation field in Openerp
10,224,838
2
0
759
0
python,xml,openerp
No, you cannot do that: the field names are keys in a Python dictionary, so in what you write the second invoice_line would simply overwrite the first one. This would mess up OpenERP's ORM anyway, as it does not handle a single relation pointing to different tables. So you need two different columns, one relative to account.invoice.line and the other to account.service.line. If you really need a merged view, you can add a function field which returns the union of the invoice and service lines found by the two previous fields. But I'm not sure the forms will be able to handle this.
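A rough OpenERP 6.0-style sketch of the idea above: two separate one2many columns plus a function field returning their union. The model name, the 'document_id' foreign-key fields (assumed to exist on the line models) and the exact function-field signature are assumptions and may differ between OpenERP versions.

```python
# Rough OpenERP 6.0-style sketch (osv/fields API). Model and column names are
# illustrative; the 'document_id' FK on each line model is assumed to exist,
# and the function-field signature may differ between versions.
from osv import osv, fields

class my_document(osv.osv):
    _name = 'my.document'

    def _all_lines(self, cr, uid, ids, field_name, arg, context=None):
        # Union of both one2many columns, for a merged (read-only) view.
        res = {}
        for doc in self.browse(cr, uid, ids, context=context):
            res[doc.id] = [l.id for l in doc.invoice_line] + \
                          [l.id for l in doc.service_line]
        return res

    _columns = {
        'name': fields.char('Name', size=64),
        # Two separate columns, one per target model:
        'invoice_line': fields.one2many('account.invoice.line', 'document_id',
                                        'Invoice Lines'),
        'service_line': fields.one2many('account.service.line', 'document_id',
                                        'Service Lines'),
        'all_lines': fields.function(_all_lines, method=True, type='one2many',
                                     obj='account.invoice.line',
                                     string='All Lines'),
    }

my_document()
```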
0
0
1
0
2012-04-19T06:05:00.000
1
1.2
true
10,222,493
0
0
1
1
I am trying to create a related field on OpenERP 6.0.1. Is it possible to define two different one2many relations for the same field name? What changes must I make (in the .py file and the XML files)?
how to trigger a content rule if PloneFormGen's form data is submitted in Plone 4.1
10,232,165
0
1
173
0
python,plone,ploneformgen
As I recall, PFG submissions don't generate Plone events. While you could potentially modify PFG (or better yet, contribute enhancements) to add events on form submission, I think the use-case you are describing is probably better realized by building a custom content type to represent your Leave Application Form. While this is slightly more work than building a PFG form, you can then easily take advantage of workflow, events, content rules, etc.
0
0
0
0
2012-04-19T12:57:00.000
2
0
false
10,228,596
0
0
1
2
How do I trigger a content rule when PloneFormGen form data is submitted in Plone 4.1? E.g., I have created a leave application form for employees. Once the employee submits data, the content rule should send the leave data to his manager. If he approves, final approval should be taken from the General Manager (GM). If the intermediate manager rejects, mail is sent to the employee directly. If approved by the GM, mail is sent to the employee directly. I want reviewers at 2 or 3 levels with different states. I am unable to define the states and transitions correctly. Can anybody guide me?
how to trigger a content rule if PloneFormGen's form data is submitted in Plone 4.1
10,234,004
2
1
173
0
python,plone,ploneformgen
Use uwosh.pdf.d2c to store the content submissions as actual plone content. Then you can use content rules on those objects.
0
0
0
0
2012-04-19T12:57:00.000
2
1.2
true
10,228,596
0
0
1
2
How do I trigger a content rule when PloneFormGen form data is submitted in Plone 4.1? E.g., I have created a leave application form for employees. Once the employee submits data, the content rule should send the leave data to his manager. If he approves, final approval should be taken from the General Manager (GM). If the intermediate manager rejects, mail is sent to the employee directly. If approved by the GM, mail is sent to the employee directly. I want reviewers at 2 or 3 levels with different states. I am unable to define the states and transitions correctly. Can anybody guide me?
Automatically deleting archive on server in Django when client side download completes
10,248,076
0
1
113
0
python,django,download
In such cases, I usually return the archived file to the user with an HttpResponse and set the Content-Disposition header to "attachment". This way the download starts automatically and I don't have to save the archived file on the server at all. Does this approach help you or not?
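A sketch of that approach: build the archive in memory and stream it back with a Content-Disposition attachment header, so nothing needs to be saved or cleaned up on disk. File names and contents are placeholders.

```python
# Sketch: build the archive in memory and return it as an attachment,
# so nothing is written to (or left on) the server's filesystem.
import zipfile
from io import BytesIO
from django.http import HttpResponse

def download_archive(request):
    buf = BytesIO()
    archive = zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED)
    archive.writestr('readme.txt', 'example content')  # add the real files here
    archive.close()

    response = HttpResponse(buf.getvalue(), content_type='application/zip')
    response['Content-Disposition'] = 'attachment; filename="archive.zip"'
    return response
```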
0
0
0
0
2012-04-19T15:15:00.000
1
0
false
10,231,221
0
0
1
1
In my server side code in Django, i download a set of files on the server and create an archive which is then sent to the client side for download. Is there a way i can automatically delete this archive once the download on the client side is complete or aborted ? Thank You
Hosting my Django site
25,833,953
14
15
29,244
0
python,django,web-services
AWS: free tier available; great support (but for technical help you have to pay); can use the PaaS platform Elastic Beanstalk; can customize the architecture in case you get a dedicated instance; great community support; custom domain; great documentation; can SSH; most popular.
Heroku (Django): free to some extent; can only use PostgreSQL in the free plan; git is a must; good support; easy to start; custom domain; can use bash in production (not SSH), but you cannot make direct changes in production, which is what keeps your app stable since every change/update goes through git; code maintenance is good (deployment through git/heroku commands only); use AWS S3 to store static files; temporary files are removed periodically; once you scale up they start to bill, and it is really costly; since this is a PaaS, you have got what you have got, and it takes a lot of effort to customize (to some extent) the architecture of the app.
Google App Engine (Flask/Django project): free to some extent; very easy to start (hello world app); custom domain; code maintenance is good (automatic deployment); support is not available.
PythonAnywhere: free to some extent; no custom domain in the free plan; easy to use; good support.
Webfaction (Django): not free (I think the minimal plan costs $10 per month on shared hosting); SSH available; custom domain; architecture customization; good support.
0
0
0
0
2012-04-19T16:36:00.000
5
1
false
10,232,673
0
0
1
1
Hi, I'm looking for some advice. I currently own a reseller package with Heart Internet, as I host a few personal websites. However, I'm currently learning Django (the Python framework) and want to be able to host my own server, and I have been setting up virtual servers to play around with. Anyway, to have SSH access you have to write in and ask them to open it for you. While asking them whether it was possible to install Django / set up SSH access, I was advised that I can't use Django unless I purchase a virtual machine, even though Python is installed on the server. Surely I can install Django onto my server if I have SSH access? Has anyone else had a similar issue? Or can anyone advise me on what to do? The last thing I want to do is spend more money with them. Thanks.
sphinx js:function directive doesn't recognise params
10,245,168
4
3
423
0
javascript,parameters,python-sphinx
You need to add a blank line between the directive and its options.
0
0
0
0
2012-04-19T17:01:00.000
1
0.664037
false
10,233,088
1
0
1
1
I get the following error using the js:function directive. Why doesn't :param recognise multiple values between ::? " invalid option data: extension option field name may not contain multiple words. .. js:function:: f(test,test2) :param test: :param test2: "
Same MySql DB working with a php and a python framework
10,233,231
0
0
73
1
php,python,mysql,django,cakephp
This is one of the reasons to use an RDBMS: to provide access to the same data for different users and applications. There should be absolutely no problem with this.
0
0
0
0
2012-04-19T17:08:00.000
2
0
false
10,233,187
0
0
1
1
I have a web application that has been built using CakePHP with MySQL as the DB. The webapp also exposes a set of web services that get and update data in the MySQL DB. I would like to extend the app to provide a fresh set of web services, but I would like to use a Python-based framework like web2py/Django etc. Since both will be working off the same DB, will it cause any problems? The reason I want to do it is that the initial app/web services were done by somebody else, and now I want to extend them and am more comfortable using Python/web2py than PHP/CakePHP.
502 Bad Gateway using Beautiful Soup, Python/Django
10,235,640
0
1
1,024
0
python,django,beautifulsoup
Try copying and pasting that URL into your browser. I get an access key error; fix that and your problem is solved.
0
0
0
0
2012-04-19T19:42:00.000
2
0
false
10,235,579
0
0
1
1
Beautiful Soup works in the Python shell using Django. I can also successfully import from bs4 import BeautifulSoup into views.py, but when I call something like soup = BeautifulSoup(xml), I get a 502 Bad Gateway error. I talked to my host, and they could not find the problem. Any ideas? Note the xml is xml = urllib2.urlopen("http://isbndb.com/api/books.xml?access_key=000000&results=details&index1=isbn&value1=0000").read(), but it works in the Python shell (within myproject folder), so I wouldn't think that's the problem.
Creating a web based point of sale system
10,236,867
4
3
9,646
0
python,point-of-sale,prototyping
Python is a very quick and productive language to develop in, so that would be a good choice, IMO. Personally I find it the most pleasant language to develop in. But I think a POS system is a terrible first programming project. A proper POS system covers too many aspects like security, authentication, data storage, client-server. Each of those has its own gotcha's and significant learning curve. If you want to go through with it nonetheless, chop the project up into manageable pieces that can be built and tested separately. You could start by writing a simple program that accepts text commands from the console and stores the transactions in e.g. a text file or in a pickled Python dictionary. This would be the start of the server. Later you can add a web or GUI front-end, or have the server store transactions in a database.
0
0
0
1
2012-04-19T20:35:00.000
2
0.379949
false
10,236,321
0
0
1
1
I am considering prototyping a web-based point-of-sale system. I don't have programming skills, but I'm thinking of using this project in order to learn. I would like to ask you the following two questions: Do you think the above task is achievable within a period of 6 months (for building a rough prototype of the basic functions of a POS)? If yes, which programming language would you recommend and why? (I was thinking of Python.) Your advice is greatly appreciated!
Django models: when should I use the @property decorator for attributes?
10,239,569
4
6
1,584
0
python,django
It's not a problem unless you find yourself writing exactly the same code on model after model. At that point you should consider writing a template tag that takes the model as a parameter instead.
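A sketch of the template-tag alternative suggested above: one tag that takes the model instance as a parameter, instead of repeating the same @property on every model. The module name, tag name and price formatting are assumptions for illustration.

```python
# Sketch of a reusable template tag that takes the model instance as a
# parameter. File: myapp/templatetags/display_tags.py (name is an assumption).
from django import template

register = template.Library()

@register.simple_tag
def pretty_price(item):
    # Works for any object exposing a `price` attribute; formatting is an example.
    return u"$%.2f" % item.price

# In the template:
#   {% load display_tags %}
#   {% pretty_price product %}
```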
0
0
0
0
2012-04-20T01:32:00.000
1
1.2
true
10,239,073
1
0
1
1
I would like to know what would be the best practice. I have been using the @property decorator a lot because it allows me to avoid creating custom context variables when I want to display something related to a model instance on a template. I feel like my models are too big. Is this ok?
Store file for duration of time on webpage
10,266,536
0
3
136
0
php,python,temporary-files
I don't know why I didn't think of it, but I was on an IRC for Python, discussing a completely unrelated issue, and someone asked why I didn't just serve the file. Exactly! I never needed to save the file, just to send it back to the user with the correct header. Problem solved!
0
0
1
0
2012-04-20T07:23:00.000
3
1.2
true
10,241,950
0
0
1
1
I need to set up a page that will let the user upload a file, modify the file with a script, and then serve the file back to the user. I have the uploading and modifying parts down, but I don't know where to put the file. They are going to be in the area of 1 or 2 MB, and I have very little space on my webhosting plan, so I want to get rid of the files as soon as possible. There's no reason for the files to exist any longer than after the users are given the option to download by their browser upon being redirected. Is the only way to do this with a cron job that checks the creation time of the files and deletes them if they're a certain age? I'm working with Python and PHP. edit: First the file is uploaded. Then the location of the file is sent back to the user. The JavaScript on the page redirects to the path of the file. The browser opens a save file dialog, and they choose to save the file or cancel. If they cancel, I want to delete the file immediately. If they choose to save the file, I want to delete the file once their download has completed.
PHP together with django in heroku instance
10,243,597
1
1
252
0
php,python,django,wordpress,heroku
No, the application type is determined at slug compilation time, when the application is pushed to Heroku. I'm not sure whether you could do anything with a custom buildpack, but I would have thought so.
0
0
0
0
2012-04-20T09:10:00.000
1
1.2
true
10,243,310
0
0
1
1
Is it possible to run some php app (e.g. wordpress) together with django within one heroku instance, so that the part of app's urls would be served by php and the rest by django?
Python: possible to send text to a java app's stdin that is already running?
10,251,723
1
0
159
0
java,python,windows,stdin
Have the java app read from a named pipe. A named pipe allows multiple clients to write to it, and are language-agnostic.
0
1
0
0
2012-04-20T16:55:00.000
1
1.2
true
10,250,353
0
0
1
1
I need to send text to a java app's stdin that is started independently from Python. I have been using pywin32 sendkeys up to this point, but there are some inconsistencies with the output that are making me look for other solutions. I am aware of subprocess, but it looks like that can only be used to interact with a child process that was started by Python, not one that is started independently. Socket is not an option for me because Windows does not allow multiple connections to the same port.
Creating interface with the software without API-Possibilities
10,251,394
1
0
266
0
java,python
Ask the company that makes the software if they have an SDK or documentation on their API. Even if they have one, if you already don't like the application, this may not be much use to you. If the main purpose of the application is to report on the contents of a database, there are plenty of libraries in python for reading/writing to databases. SQLAlchemy/Storm and PyQt could probably do what you want.
0
0
0
0
2012-04-20T17:00:00.000
1
1.2
true
10,250,437
0
0
1
1
I have a proprietary piece of software my office uses for database access and reporting which nobody in my office likes. I am thinking of building a Python/Java application with a simple interface which handles communicating with the proprietary software. My problem is: since the software is proprietary, there is no known API of any sort I am aware of that I can interface with. Is there a way around this, or is it mandatory to have an API to access the software? I am doing this on the Windows XP platform.
Django template check for empty when I have an if inside a for
10,255,173
6
1
3,317
0
python,django,django-templates
Even if this might be possible to achieve in the template, I (and probably many other people) would advise against it. To achieve this, you basically need to find out whether there are any objects in the database matching some criteria. That is certainly not something that belongs into a template. Templates are intended to be used to define how stuff is displayed. The task you're solving is determining what stuff to display. This definitely belongs in a view and not a template. If you want to avoid placing it in a view just because you want the information to appear on each page, regardless of the view, consider using a context processor which would add the required information to your template context automatically, or writing a template tag that would solve this for you.
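A sketch of the context-processor option mentioned above, so every template knows whether the user has any unpublished requests. The module path is an assumption; the model/field names follow the question.

```python
# Sketch of a context processor (e.g. myapp/context_processors.py; the path
# is an assumption). Model/field names follow the question.
def unpublished_requests(request):
    if not request.user.is_authenticated():
        return {}
    qs = request.user.requests_made_set.filter(is_published=False)
    return {'unpublished_requests': qs, 'has_unpublished': qs.exists()}

# settings.py: add 'myapp.context_processors.unpublished_requests'
# to TEMPLATE_CONTEXT_PROCESSORS so it runs for every RequestContext render.
```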
0
0
0
0
2012-04-20T22:42:00.000
1
1.2
true
10,254,466
0
0
1
1
I have the following code in my template: {% for req in user.requests_made_set.all %} {% if not req.is_published %} {{ req }} {% endif %} {% empty %} No requests {% endfor %} If there are some requests but none has the is_published = True then how could I output a message (like "No requests") ?? I'd only like to use Django templates and not do it in my view! Thanks
Running a Python app on real Android phone
10,258,955
2
6
1,128
0
android,python
Your users have to install SL4A and the language plugin (python?) on their phones, this is not a built-in functionality. Since the source code is available, it's possible to create combined application, which includes your scripts and SL4A/python code, but in my opinion this defies the purpose of scripting in the first place.
1
0
0
0
2012-04-21T12:01:00.000
2
1.2
true
10,258,703
1
0
1
1
In order to develop an Android app using Python, I need to install Python for Android and SL4A on my computer to be used with the Android emulator. My question is, when I distribute this app to actual users/phones, do the phones need to get Python for Android and SL4A explicitly? Or is the supporting infrastructure built into Android devices? Or is there a way to package the Python application where the users do not have to get SL4A and Python for Android in order to run the application?
How to set up celery workers on separate machines?
43,633,216
2
58
26,305
0
python,celery
The way I deployed it is like this: clone your Django project onto a Heroku instance (this will run the frontend); add RabbitMQ as an add-on and configure it; then clone your Django project onto another Heroku instance (call it 'worker') where you will run the Celery tasks.
0
1
0
0
2012-04-21T16:36:00.000
2
0.197375
false
10,260,925
0
0
1
2
I am new to Celery. I know how to install and run one server, but I need to distribute the tasks to multiple machines. My project uses Celery to assign user requests passed to a web framework to different machines and then returns the result. I read the documentation, but it doesn't mention how to set up multiple machines. What am I missing?
How to set up celery workers on separate machines?
10,261,277
60
58
26,305
0
python,celery
My understanding is that your app will push requests into a queueing system (e.g. rabbitMQ) and then you can start any number of workers on different machines (with access to the same code as the app which submitted the task). They will pick out tasks from the message queue and then get to work on them. Once they're done, they will update the tombstone database. The upshot of this is that you don't have to do anything special to start multiple workers. Just start them on separate identical (same source tree) machines. The server which has the message queue need not be the same as the one with the workers and needn't be the same as the machines which submit jobs. You just need to put the location of the message queue in your celeryconfig.py and all the workers on all the machines can pick up jobs from the queue to perform tasks.
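A minimal celeryconfig.py sketch of "put the location of the message queue in your celeryconfig.py": every machine (the web app and each worker box, all with the same source tree) points at the same broker. Host names, credentials and module names are assumptions.

```python
# celeryconfig.py sketch: every machine points at the same broker.
# Host names, credentials and module names are assumptions.
BROKER_URL = 'amqp://myuser:mypassword@rabbit-host.example.com:5672/myvhost'

# Optional: where task results (the "tombstones") are stored.
CELERY_RESULT_BACKEND = 'amqp'

CELERY_IMPORTS = ('myproject.tasks',)  # module(s) containing your @task functions

# On each worker machine (same source tree checked out), start a worker from
# the project directory so this config is picked up, e.g.:  celeryd -l info
```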
0
1
0
0
2012-04-21T16:36:00.000
2
1.2
true
10,260,925
0
0
1
2
I am new to Celery. I know how to install and run one server, but I need to distribute the tasks to multiple machines. My project uses Celery to assign user requests passed to a web framework to different machines and then returns the result. I read the documentation, but it doesn't mention how to set up multiple machines. What am I missing?
Capturing errors in Flask but continuing with the request as normal
10,269,311
2
2
327
0
python,flask
I figured it's probably not possible. It's not possible for the server to receive the entire request as it has to terminate the connection once the max_content_length threshold has been passed, discarding any other form data that would have been sent after the file upload. The server resets the connection with a HTTP 413 status code. While it appears it's possible to register a function to handle HTTP 413 errors (and presumably to return a custom error page), this doesn't work in Flask. I assume this is a bug.
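For reference, a sketch of the handler registration the answer refers to: cap uploads with MAX_CONTENT_LENGTH and register a 413 error handler. Per the answer, this handler did not fire in the Flask version current at the time; it is shown only to illustrate the intended pattern. The upload field name and limit are assumptions.

```python
# Sketch of the approach discussed above: cap uploads and register a 413
# handler. Per the answer, the handler did not fire in the Flask version
# current at the time; shown only to illustrate the intended pattern.
from flask import Flask, request

app = Flask(__name__)
app.config['MAX_CONTENT_LENGTH'] = 2 * 1024 * 1024  # 2 MB limit (assumption)

@app.errorhandler(413)
def upload_too_large(error):
    # Ideally: re-render the original form with an inline error message.
    return "The uploaded file exceeded the 2 MB limit.", 413

@app.route('/upload', methods=['POST'])
def upload():
    f = request.files.get('document')   # field name is an assumption
    return "received %s" % (f.filename if f else "nothing")
```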
0
0
0
0
2012-04-22T03:10:00.000
1
0.379949
false
10,264,905
0
0
1
1
I've read through the docs and Googled my problem, but I don't seem to be able to figure out a way to handle errors in Flask without terminating the request and displaying an error page. The error I want to handle is werkzeug.exceptions.RequestEntityTooLarge which is raised when a file upload exceeds the specified limit. Ideally I want to be able to add an element to the flask.request.files dictionary indicating that the uploaded file exceeded the maximum upload size. The error could then be presented inline with the original form so the user can try again. Is this even possible in Flask?
Python Desktop Applications
10,285,038
1
5
3,254
0
python,django,desktop-application,web-frameworks
This might not fit with how your users use your application but one option would be to make a Linux virtual machine (Virtualbox supports most common operating systems as hosts) and distribute that instead. This would give you a single target to develop against and, as a bonus if you looked into the update mechanism of your chosen distribution (Apt, Yum etc.) you should be able to add your own server as a source and have the VM keep itself updated without your users needing to do anything.
0
0
0
0
2012-04-23T14:17:00.000
3
0.066568
false
10,282,347
0
0
1
1
I have been using wxPython for about 2 years for several small scientific programs which I distribute to many colleagues. I like wxPython and I'm already very familiar with it, but there are a few things which drive me crazy (not because of wxPython; actually I would like to continue using it): 1) I have many users on different operating systems. I know wxPython is cross-platform, but I no longer have the nerves and time to port all my small programs (and more will come) to different operating systems every time. Especially since I don't use some of them myself (Windows 7, Mac), it's hard for me to solve problems and user requests. 2) We update our programs quite a lot (because new ideas keep coming from users and ourselves), which means I have to generate all the standalones again and upload them, and the users have to uninstall and reinstall. Nasty... I was already thinking of switching to web frameworks, but there are some problems. First, many users like to use my programs offline, e.g. when they travel or have no internet. Second, we have some data in databases which should NEVER go on a server. It's all about patents and will always be a discussion, so I prefer to keep some of my programs as standalone desktop applications to simplify things. Others can be online, no problem. So, in general I would love a browser-based solution, since everybody has a browser. I saw that some people ported Django projects as standalone desktop applications, which I found not a bad idea. I also read about Camelot, but I think that is rather for databases. Camelot would be useful only for some of my tools, which are database searching and extraction programs; others don't use databases at all. Can anyone suggest what would be a good solution for my tools?
Updating app engine project code
10,296,008
0
0
270
0
python,google-app-engine
Try calling python appcfg.py update myapp/ explicitly, so that the Python interpreter runs the script instead of the operating system simply opening the .py file with its associated program. (A short example of the invocation follows this entry.)
0
1
0
0
2012-04-24T10:16:00.000
1
1.2
true
10,295,954
0
0
1
1
I am new to Google App Engine and Python. I have created an application in Python with the help of Google App Engine. I am using the command 'appcfg.py update myapp/' from the command prompt to update the live code. This command was working perfectly, but it suddenly stopped working. Now, every time I run this command, it just opens up the appcfg.py file. Please help me understand what is happening with the command.
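A short example of the invocation suggested in the answer above, with the Python interpreter named explicitly so the script is executed rather than opened; the SDK path and the myapp/ directory are assumptions, so adjust them to your installation.

    # If the SDK directory is on your PATH:
    python appcfg.py update myapp/

    # Or with the full (assumed) SDK path on Windows:
    python "C:\Program Files (x86)\Google\google_appengine\appcfg.py" update myapp/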
What measures can I take to safeguard the source code of my django site from others?
10,304,687
1
3
2,223
0
python,django,web-deployment,source-code-protection
There is almost no scenario in which your hosting provider would be interested in your source code; the source code of most websites just isn't worth very much. If you really feel it is necessary to protect your source code, the best thing to do is serve it from a system that you own, control physically, and have exclusive access to. Failing that, there are a few techniques for obfuscating Python, the most straightforward of which is to push only .pyc files, and not .py files, to your production server (a sketch of this follows this entry). However, this is not standard practice with Django, because theft of website source code by hosting providers is not really an extant problem, and I do not know whether this technique would work with Django specifically.
0
0
0
0
2012-04-24T19:01:00.000
4
0.049958
false
10,304,363
0
0
1
4
I picked up Python/Django barely a year ago. Deployment of a Django site is still a subject I have many questions about, though I have successfully deployed my site manually. One of my biggest questions around deployment is what measures I can take to safeguard the source code of my apps, including the passwords in Django's settings.py, from others, especially when my site runs on virtual hosting provided by some third party. Call me paranoid, but the fact that my source code is running on a third-party server, where someone has the privileges to access anything anywhere on the server, makes me feel uneasy.
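A minimal sketch of the .pyc-only idea mentioned in the answer above, using the standard library's compileall module; the project directory name is a placeholder, and whether shipping only byte-compiled files is workable for a given deployment is exactly the open question that answer raises.

    import compileall

    # Byte-compile every module under the (hypothetical) project directory.
    # The generated .pyc files could then be deployed without the .py sources.
    compileall.compile_dir('myproject', force=True)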
What measures can I take to safeguard the source code of my django site from others?
10,304,555
1
3
2,223
0
python,django,web-deployment,source-code-protection
If someone has the privileges to access anything, anywhere on the server, you can't do much, because whatever you can do, others can do too; you can try some kind of obfuscation, but that will not work. The only real solution is NOT to use such shared hosting. Edit, some options: keep working with shared hosting if your data is not very sensitive; use dedicated hosting from companies like Rackspace; use AWS to run your own instance; use Google App Engine, though that may require a DB change; or run your own server (the most secure option).
0
0
0
0
2012-04-24T19:01:00.000
4
0.049958
false
10,304,363
0
0
1
4
I picked up Python/Django barely a year ago. Deployment of a Django site is still a subject I have many questions about, though I have successfully deployed my site manually. One of my biggest questions around deployment is what measures I can take to safeguard the source code of my apps, including the passwords in Django's settings.py, from others, especially when my site runs on virtual hosting provided by some third party. Call me paranoid, but the fact that my source code is running on a third-party server, where someone has the privileges to access anything anywhere on the server, makes me feel uneasy.
What measures can I take to safeguard the source code of my django site from others?
10,311,535
0
3
2,223
0
python,django,web-deployment,source-code-protection
Protecting source code is not that important, IMHO. I would just deploy compiled files and not worry too much about it. Protecting your config (especially passwords) is indeed important; Temia's point is a good one.
0
0
0
0
2012-04-24T19:01:00.000
4
0
false
10,304,363
0
0
1
4
I picked up Python/Django barely a year ago. Deployment of a Django site is still a subject I have many questions about, though I have successfully deployed my site manually. One of my biggest questions around deployment is what measures I can take to safeguard the source code of my apps, including the passwords in Django's settings.py, from others, especially when my site runs on virtual hosting provided by some third party. Call me paranoid, but the fact that my source code is running on a third-party server, where someone has the privileges to access anything anywhere on the server, makes me feel uneasy.
What measures can I take to safeguard the source code of my django site from others?
10,305,580
1
3
2,223
0
python,django,web-deployment,source-code-protection
While your source code is probably fine where it is, I'd recommend not storing your configuration passwords in plaintext, whether the code file is compiled or not. Rather, keep a hash of the appropriate password on the server, have the server generate a hash of the password submitted during login, and compare those instead; standard security practice. (A sketch of this pattern follows this entry.) Then again, I could just be talking out my rear end, since I haven't fussed about with Django yet.
0
0
0
0
2012-04-24T19:01:00.000
4
0.049958
false
10,304,363
0
0
1
4
I picked up Python/Django barely a year ago. Deployment of a Django site is still a subject I have many questions about, though I have successfully deployed my site manually. One of my biggest questions around deployment is what measures I can take to safeguard the source code of my apps, including the passwords in Django's settings.py, from others, especially when my site runs on virtual hosting provided by some third party. Call me paranoid, but the fact that my source code is running on a third-party server, where someone has the privileges to access anything anywhere on the server, makes me feel uneasy.
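A minimal sketch of the hash-and-compare pattern described in the answer above, using only the standard library (it assumes a Python version that provides hashlib.pbkdf2_hmac and hmac.compare_digest); the salt size and iteration count are illustrative choices, not a vetted scheme.

    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None, iterations=100000):
        # Derive a key from the password; the salt and key are what you store.
        if salt is None:
            salt = os.urandom(16)
        key = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'), salt, iterations)
        return salt, key

    def check_password(password, salt, stored_key, iterations=100000):
        # Re-derive the key for the submitted password and compare in constant time.
        _, candidate = hash_password(password, salt, iterations)
        return hmac.compare_digest(candidate, stored_key)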
Creating web interface to a controller process in python
10,327,650
0
0
182
0
python,scheduling,pyramid
I would avoid running your Controller in the same process as the web application: it is common practice to run web applications with lowered permissions, for example, or in a multi-threaded/multi-process environment which may spawn multiple workers and then kill or recycle them whenever it feels like doing so. So having your Controller run in a separate process, with some kind of RPC mechanism between the two, seems like a much better idea. Regarding code duplication, there are two options: you can extract the common code (the models) into a separate module/egg that is used by both applications; or, if you find you need to share a lot of code, remember that nothing forces you to have separate projects at all: you can have a single code base with two or more "entry points", one of which starts the Pyramid WSGI application and another of which starts your Controller process (a sketch of this follows this entry).
0
0
0
0
2012-04-25T05:44:00.000
1
1.2
true
10,309,956
1
0
1
1
I have a multi-stage process that needs to be run at some intervals. I also have a Controller program which starts the process at the right times, chains together the stages of the process, and checks that each stage has executed correctly. The Controller accesses a database which stores information about past runs of the process, parameters for future executions of the process, etc. Now, I want to use Pyramid to build a web interface to the Controller, so that I can view information about the process and affect the operation of the Controller. This will mean that actions in the web interface must effect changes in the controller database. Naturally, the web interface will use the exact same data models as the Controller. What's the best way for the Controller and Web Server to interact? I've considered two possibilities: 1) combine the controller and web server by calling sched in Pyramid's initialisation routine; 2) have the web server make RPCs to the controller, e.g. using Pyro. How should I proceed here? And how can I avoid code duplication (of the data models) when using the second option?
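A minimal sketch of the single-code-base, two-entry-points option from the answer above, using setuptools; the project, package, and function names are hypothetical, and the paste.app_factory group is the usual Pyramid/PasteDeploy convention for the WSGI side.

    # setup.py (sketch)
    from setuptools import setup, find_packages

    setup(
        name='mycontrolledapp',  # hypothetical project name
        packages=find_packages(),
        entry_points={
            'paste.app_factory': [
                # WSGI entry point for the Pyramid web application.
                'main = mycontrolledapp.web:main',
            ],
            'console_scripts': [
                # Command-line entry point that starts the Controller process;
                # both sides import the same mycontrolledapp.models module.
                'run_controller = mycontrolledapp.controller:main',
            ],
        },
    )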
GAE - Deployment Error: "AttributeError: can't set attribute"
12,338,986
1
11
4,377
0
python,google-app-engine,deployment
Add the --oauth2 flag to appcfg.py update for an easier fix (an example invocation follows this entry).
0
1
0
0
2012-04-25T11:55:00.000
6
0.033321
false
10,315,069
0
0
1
4
When I try to deploy my app I get the following error: Starting update of app: flyingbat123, version: 0-1 Getting current resource limits. Password for avigmati: Traceback (most recent call last): File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 125, in run_file(__file__, globals()) File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 121, in run_file execfile(script_path, globals_) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4062, in main(sys.argv) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4053, in main result = AppCfgApp(argv).Run() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2543, in Run self.action(self) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3810, in __call__ return method() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3006, in Update self.UpdateVersion(rpcserver, self.basepath, appyaml) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2995, in UpdateVersion self.options.max_size) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2122, in DoUpload resource_limits = GetResourceLimits(self.rpcserver, self.config) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 355, in GetResourceLimits resource_limits.update(GetRemoteResourceLimits(rpcserver, config)) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 326, in GetRemoteResourceLimits version=config.version) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 379, in Send self._Authenticate() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 437, in _Authenticate super(HttpRpcServer, self)._Authenticate() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 281, in _Authenticate auth_token = self._GetAuthToken(credentials[0], credentials[1]) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 233, in _GetAuthToken e.headers, response_dict) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 94, in __init__ self.reason = args["Error"] AttributeError: can't set attribute 2012-04-25 19:30:15 (Process exited with code 1) The following is my app.yaml: application: flyingbat123 version: 0-1 runtime: python api_version: 1 threadsafe: no It seems like an authentication error, but I'm entering a valid email and password. What am I doing wrong?
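An example of the invocation suggested in the answer above, with the --oauth2 flag added; the application directory name is an assumption.

    appcfg.py --oauth2 update myapp/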