Title: string (length 11 to 150)
A_Id: int64 (518 to 72.5M)
Users Score: int64 (-42 to 283)
Q_Score: int64 (0 to 1.39k)
ViewCount: int64 (17 to 1.71M)
Database and SQL: int64 (0 to 1)
Tags: string (length 6 to 105)
Answer: string (length 14 to 4.78k)
GUI and Desktop Applications: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
Other: int64 (0 to 1)
CreationDate: string (length 23 to 23)
AnswerCount: int64 (1 to 55)
Score: float64 (-1 to 1.2)
is_accepted: bool (2 classes)
Q_Id: int64 (469 to 42.4M)
Python Basics and Environment: int64 (0 to 1)
Data Science and Machine Learning: int64 (0 to 1)
Web Development: int64 (1 to 1)
Available Count: int64 (1 to 15)
Question: string (length 17 to 21k)
Server doesn't follow TEMPLATE_DIRS path
21,827,881
0
0
57
0
python,django,templates
The static URL should point to the staticfiles directory. And why do you put templates under staticfiles? You could keep them as a separate folder in the main project folder (alongside manage.py).
0
0
0
0
2014-02-17T11:09:00.000
2
0
false
21,827,432
0
0
1
1
The path to my templates folder in TEMPLATE_DIRS looks like this: os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir)) + '/static/templates/' When I run my server locally and open the page, everything works fine and it finds the templates at ~/Documents/projects/clupus/static/templates. Whenever I pull everything onto my server and access the URL, it gives me this error: Django tried loading these templates, in this order: Using loader django.template.loaders.filesystem.Loader: /home/ubuntu/public_html/clupus.com/clupus/templates/clupus/index.html (File does not exist). It's not following TEMPLATE_DIRS and is looking in the wrong directory. I've checked the TEMPLATE_DIRS value on the server and it matches what I have locally. What's the issue? EDIT: Rather embarrassingly, there was nothing wrong with my code; I simply forgot to restart Apache with sudo service apache2 restart. As to why my templates folder was inside static: this was at the request of the front-end developer. When I asked him why, he said the reason they are inside it is that he's trying to reference the templates in JavaScript as well, because we are using shared templates between server and client.
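A minimal sketch of the kind of path construction the question describes, for a pre-1.8-style Django settings.py (the folder layout is an assumption based on the question, not the asker's actual file):

    # settings.py -- a sketch, not the asker's actual configuration
    import os

    # BASE_DIR: the directory containing manage.py (hypothetical layout)
    BASE_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir))

    TEMPLATE_DIRS = (
        # resolved absolutely with os.path.join, avoiding trailing-slash string math
        os.path.join(BASE_DIR, 'static', 'templates'),
    )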
How to select a span element which is a list box, when multiple span elements with the same class ID are present on the same page?
22,005,969
0
0
313
0
python,selenium,automated-tests,robotframework
Could you please provide the part of your code that you use to get the span element, and the part of your GUI application you are trying to get the element from (HTML or something similar)?
0
0
1
0
2014-02-18T07:32:00.000
2
0
false
21,846,978
0
0
1
2
I am using Robot Framework to test a GUI application. I need to select a span element which is a list box, but I have multiple span elements with the same class ID on the same page. How can I select each span element (list box)? Thanks in advance.
How to select a span element which is a list box, when multiple span elements with the same class ID are present on the same page?
22,013,855
0
0
313
0
python,selenium,automated-tests,robotframework
Selenium provides various ways to locate elements on the page. If you can't use id, consider using CSS selectors or XPath.
0
0
1
0
2014-02-18T07:32:00.000
2
0
false
21,846,978
0
0
1
2
I am using Robot Framework to test a GUI application. I need to select a span element which is a list box, but I have multiple span elements with the same class ID on the same page. How can I select each span element (list box)? Thanks in advance.
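A short sketch of what this answer suggests, using Selenium's Python bindings directly (the class name, URL, and index are hypothetical; Robot Framework's Selenium library exposes the same locator strategies):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    driver.get("http://example.com")  # hypothetical page under test

    # All spans sharing the same class; pick one by position
    boxes = driver.find_elements(By.CSS_SELECTOR, "span.listbox")
    boxes[1].click()  # the second list box

    # Equivalent XPath, selecting by index
    second = driver.find_element(By.XPATH, "(//span[@class='listbox'])[2]")
    second.click()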
Call a method in the Django admin site
21,852,651
0
1
2,361
0
python,django,django-admin,django-sites
Way 1: you can do this in your app's models.py by using a Django signal:

    from django.db.models.signals import post_save

    class Test(models.Model):
        pass  # ... fields here

    # method run on each save
    def update_on_test(sender, instance, **kwargs):
        pass  # whatever custom operation you want to perform

    # register the signal
    post_save.connect(update_on_test, sender=Test)

Way 2: you can override the save_model() method of the ModelAdmin class if you are entering the data through the Django admin:

    class TestAdmin(admin.ModelAdmin):
        fields = ['title', 'body']
        form = TestForm

        def save_model(self, request, obj, form, change):
            # your logic, if you want to perform some computation on save;
            # this variant helps if you need the request in your work
            obj.save()
0
0
0
0
2014-02-18T11:34:00.000
1
0
false
21,852,518
0
0
1
1
I have a Django project and right now everything works fine. I have a Django admin site, and now I want a function to be called automatically whenever I add a new record to my model, so that a process starts. How can I do this? What is this kind of action called?
How to add Indian Standard Time (IST) in Django?
61,074,643
0
56
54,914
0
python,django
Adding to Jon's answer: if timezone.now() is still not working after changing TIME_ZONE to 'Asia/Kolkata', use timezone.localtime() instead of timezone.now(). Hope that solves it. :)
0
0
0
0
2014-02-18T13:29:00.000
14
0
false
21,855,357
1
0
1
6
We want to get the current time in India in our Django project. In settings.py, UTC is there by default. How do we change it to IST?
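A short sketch combining the settings change and the localtime() call suggested in this and the following answers. Note that with USE_TZ = True, timezone.now() always returns an aware datetime in UTC; localtime() converts it to the configured time zone:

    # settings.py
    TIME_ZONE = 'Asia/Kolkata'
    USE_TZ = True

    # anywhere in application code
    from django.utils import timezone

    now_utc = timezone.now()               # aware datetime in UTC
    now_ist = timezone.localtime(now_utc)  # converted to Asia/Kolkata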
How to add Indian Standard Time (IST) in Django?
56,727,629
0
56
54,914
0
python,django
Simply change TIME_ZONE from 'UTC' to 'Asia/Kolkata'. Remember that the K and A are capital letters here.
0
0
0
0
2014-02-18T13:29:00.000
14
0
false
21,855,357
1
0
1
6
We want to get the current time in India in our Django project. In settings.py, UTC is there by default. How do we change it to IST?
How to add Indian Standard Time (IST) in Django?
63,849,327
0
56
54,914
0
python,django
LANGUAGE_CODE = 'en-us', TIME_ZONE = 'Asia/Calcutta', USE_I18N = True, USE_L10N = True, USE_TZ = True. This should work.
0
0
0
0
2014-02-18T13:29:00.000
14
0
false
21,855,357
1
0
1
6
We want to get the current time in India in our Django project. In settings.py, UTC is there by default. How do we change it to IST?
How to add Indian Standard Time (IST) in Django?
46,900,867
0
56
54,914
0
python,django
Modify settings.py and change the time zone to TIME_ZONE = 'Asia/Kolkata'.
0
0
0
0
2014-02-18T13:29:00.000
14
0
false
21,855,357
1
0
1
6
We want to get the current time in India in our Django project. In settings.py, UTC is there by default. How do we change it to IST?
How to add Indian Standard Time (IST) in Django?
48,295,467
6
56
54,914
0
python,django
The settings below worked for me: TIME_ZONE = 'Asia/Kolkata', USE_I18N = True, USE_L10N = True, USE_TZ = False.
0
0
0
0
2014-02-18T13:29:00.000
14
1
false
21,855,357
1
0
1
6
We want to get the current time in India in our Django project. In settings.py, UTC is there by default. How do we change it to IST?
How to add Indian Standard Time (IST) in Django?
54,306,812
0
56
54,914
0
python,django
Keep TIME_ZONE = 'Asia/Kolkata' in the settings.py file and restart the service from which you are accessing the time zone (server or shell). In my case, I restarted the Python shell I was working in and it worked fine.
0
0
0
0
2014-02-18T13:29:00.000
14
0
false
21,855,357
1
0
1
6
We want to get the current time in India in our Django project. In settings.py, UTC is there by default. How do we change it to IST?
How to hide url after domain in web2py?
21,868,611
1
0
493
0
python,url,web,web2py
Well, I guess you're going to have to build the world's most remarkable single-page application :) Security through obscurity is never a good design pattern. There is absolutely no security "reason" for hiding a URL if your system is designed in such a way that the use of the URLs is meaningless unless the access control layer defines permissions for such use (usually through an authentication and role/object-based permission architecture). Keep in mind that anyone these days can use the Chrome inspector to see whatever you are trying to hide in the address bar. For example, say you want to load domain.com/adduser. Sure, you can make an AJAX call to that URL, and the browser address bar would never change from domain.com/ - but a quick look at the source will uncover /adduser pretty quickly. Sounds like you need to have a think about what these addresses really expose and start locking them down.
0
0
0
0
2014-02-18T23:43:00.000
1
0.197375
false
21,867,972
0
0
1
1
I am building a website using web2py. For security reasons I would like to hide the URL after the domain from visitors. For example, when a person clicks a link to "domain.com/abc", it goes to that page but the address bar shows only "domain.com". I have played with routes_in and routes_out, but they only seem to map a typed URL to a destination, not hide the URL. How can I do that? Thanks!
Flask request waiting for asynchronous background job
25,578,832
1
0
2,430
0
python,api,flask
Tornado would do the trick. Flask is not designed for asynchronous operation: a Flask instance processes one request at a time in one thread. Therefore, while you hold the connection open, it will not proceed to the next request.
0
0
1
0
2014-02-19T00:47:00.000
1
0.197375
false
21,868,709
0
0
1
1
I have an HTTP API using Flask, and in one particular operation clients use it to retrieve information obtained from a 3rd-party API. The retrieval is done with a Celery task. Usually, my approach would be to accept the client request for that information and return a 303 See Other response with a URI that can be polled for the result once the background job is finished. However, some clients require the operation to be done in a single request. They don't want to poll or follow redirects, which means I have to run the background job synchronously, hold on to the connection until it's finished, and return the result in the same response. I'm aware of Flask streaming, but how do I do such long-polling with Flask?
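A minimal sketch of the blocking variant the asker describes: enqueue the Celery task, then block the request on its result. The task module, task name, and timeout are hypothetical:

    from flask import Flask, jsonify
    from tasks import fetch_info  # hypothetical module holding the celery task

    app = Flask(__name__)

    @app.route('/info/<item_id>')
    def info(item_id):
        result = fetch_info.delay(item_id)  # enqueue the background job
        payload = result.get(timeout=30)    # hold this request until it finishes
        return jsonify(payload)

As the answer notes, every worker thread held this way is unavailable for other requests, so this only scales with a concurrent server in front of Flask.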
Find all possible query string values with python
21,872,039
0
0
299
0
python,parsing,query-string
It depends on the website itself. If there are other values of field1 or field2, you can only know that by looking into the code or documentation (if available). That's the only accurate way of knowing. Otherwise, you can try brute forcing (trying every possible alphanumeric value), but that doesn't guarantee anything, and you would need a way to know which values are valid and which are not. Hardly efficient.
0
0
1
0
2014-02-19T05:13:00.000
1
0
false
21,871,636
1
0
1
1
I'm trying to figure out how to parse a website that doesn't have documentation explaining its query string. I am wondering if there is a way to get all possible valid values for the different fields in a query string using Python. For example, say I have the current URL that I wish to parse: http://www.website.com/stat?field1=a&field2=b Is there a way to find all of the possible values for field1 that return information? Say field1 can take either value "a" or "z" and I do not know it can take value "z". Is there a way to figure out that "z" is the only other possible value for that field without any prior knowledge?
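A rough sketch of the brute-force probing the answer mentions, assuming a valid value can be distinguished from an invalid one by the response (here naively by status and non-empty body; the URL and candidate set are hypothetical):

    import string
    import requests

    valid = []
    for value in string.ascii_lowercase:  # candidate values to probe
        r = requests.get('http://www.website.com/stat',
                         params={'field1': value, 'field2': 'b'})
        # crude validity check: adjust to whatever distinguishes a real result
        if r.ok and r.text.strip():
            valid.append(value)
    print(valid)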
OpenERP trunk server gives ImportError: psutil
21,872,185
5
2
3,312
0
python,openerp,importerror,openerp-7
You are getting this error because psutil is not installed. Install it with sudo apt-get install python-psutil in a terminal, then restart the server. This will solve your error.
0
0
0
0
2014-02-19T05:38:00.000
1
1.2
true
21,871,997
0
0
1
1
I downloaded the trunk version of OpenERP from Launchpad. When I start the server it gives the following error:

    Traceback (most recent call last):
      File "./openerp-server", line 2, in <module>
        import openerp
      File "/home/jack/trunk/trunk-server/openerp/__init__.py", line 72, in <module>
        import http
      File "/home/jack/trunk/trunk-server/openerp/http.py", line 37, in <module>
        from openerp.service import security, model as service_model
      File "/home/jack/trunk/trunk-server/openerp/service/__init__.py", line 28, in <module>
        import server
      File "/home/jack/trunk/trunk-server/openerp/service/server.py", line 10, in <module>
        import psutil
    ImportError: No module named psutil
Setting/Configuring the output file after running scrapy spider
21,873,566
2
1
487
0
python,json,scrapy
I think you should use scrapy crawl yourspider -o output.json -t json, where -o sets the output filename and -t the output format.
0
0
0
0
2014-02-19T05:50:00.000
2
0.197375
false
21,872,179
1
0
1
1
I'm able to run a Scrapy spider from a script, but I want to store the output in a specific file (say output.json) in JSON format. I did a lot of research and also tried to override FEED_URI and FEED_FORMAT from the settings. I also tried to use the JsonItemExporter class, but all in vain. Any help will be appreciated. Thanks!
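Since the asker runs the spider from a script rather than the CLI, here is a sketch of the programmatic equivalent using the CrawlerProcess API of later Scrapy releases (the asker's 0.20 differs; MySpider and its import path are hypothetical, and newer versions replace FEED_URI/FEED_FORMAT with a FEEDS dict):

    from scrapy.crawler import CrawlerProcess
    from myproject.spiders.my_spider import MySpider  # hypothetical import path

    process = CrawlerProcess(settings={
        'FEED_FORMAT': 'json',      # serialize items as JSON
        'FEED_URI': 'output.json',  # write them to this file
    })
    process.crawl(MySpider)
    process.start()  # blocks until crawling is finished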
Flask python blueprint logic code separation
21,885,753
0
1
344
0
python,flask,blueprint
Am I right that your logic should be in models and service classes, with blueprints (a.k.a. views) acting only as a thin layer between the templates and those modules?
0
0
0
0
2014-02-19T15:42:00.000
2
0
false
21,885,365
0
0
1
1
I'm a bit confused about separation for my Flask app. Users can log in and post adverts, and these are available to the public. The URL structure would be something like this:
User home - www.domain.com/user
User login - www.domain.com/user/login
User advert list - www.domain.com/user/advert
User advert add - www.domain.com/user/vacancy/add
Public advert - www.domain.com/advert/1
The issue comes from the fact that advert forms and logic are required both inside and outside of the user control panel. Which of these is the more correct way of laying out my application?
Option 1: a User blueprint (no URL prefix) containing all user-related logic, and an Advert blueprint (no URL prefix) containing all advert-related logic, including users posting adverts and displaying them to the public.
Option 2: a User blueprint (/user/ prefix) containing user logic and advert logic (adding adverts from the user control panel), and an Advert blueprint (/advert/ prefix) containing advert logic relating only to advert tasks outside of the user control panel.
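A small sketch of how the two blueprints in Option 2 would be declared and registered; the blueprint names come from the question, everything else is schematic:

    from flask import Flask, Blueprint

    user_bp = Blueprint('user', __name__, url_prefix='/user')
    advert_bp = Blueprint('advert', __name__, url_prefix='/advert')

    @user_bp.route('/advert/add')
    def add_advert():
        return 'advert form inside the control panel'

    @advert_bp.route('/<int:advert_id>')
    def show_advert(advert_id):
        return 'public advert %d' % advert_id

    app = Flask(__name__)
    app.register_blueprint(user_bp)
    app.register_blueprint(advert_bp)

Either option works mechanically; the shared advert forms and model logic can live in a plain module that both blueprints import, which keeps the choice of prefix a pure URL-design question.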
Custom jinja2 tag in sphinx template
21,909,382
1
0
737
0
python,jinja2,python-sphinx
I've found a good way to do this. Sphinx's configuration parameter template_bridge gives control over the TemplateBridge object, which is responsible for rendering themes. The standard sphinx.jinja2glue.TemplateBridge constructs its environment attribute in the init method (it's not a constructor; an unfortunate name for the method) - this is the Jinja2 environment used for template rendering. So just subclass TemplateBridge and override the init method.
0
0
0
0
2014-02-19T16:03:00.000
2
1.2
true
21,885,856
0
0
1
1
I'd like to implement custom navigation in my Sphinx docs. I use my own custom theme based on the basic Sphinx theme, but I don't know how to create a new tag for the template system, or how to use my custom Sphinx plugin's directive in HTML templates. Any ideas where I can plug in? Update: As far as I can see in the Sphinx sources, the Jinja2 environment is constructed in the websupport jinja2glue module, though I can't see how it can be reconfigured other than by monkey-patching.
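A sketch of the approach from the accepted answer. The exact class to subclass may differ between Sphinx versions; here I assume sphinx.jinja2glue.BuiltinTemplateLoader (the default bridge implementation), and the added filter is hypothetical:

    # mytemplates.py
    from sphinx.jinja2glue import BuiltinTemplateLoader

    class MyTemplateBridge(BuiltinTemplateLoader):
        def init(self, builder, theme=None, dirs=None):
            super(MyTemplateBridge, self).init(builder, theme, dirs)
            # self.environment is the jinja2.Environment used for rendering
            self.environment.filters['shout'] = lambda s: s.upper()  # hypothetical filter

    # conf.py
    template_bridge = 'mytemplates.MyTemplateBridge'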
BOTO distribute scraping tasks among AWS
21,899,718
1
0
262
0
python,amazon-web-services,cron,queue,boto
(1) How to use boto to pass the variable/argument to another AWS instance, start a script to work on those variables, and retrieve the result back to the master box: use a shared data source, such as DynamoDB, or a messaging framework such as SQS - in both directions. (2) What is the best way to schedule a job to run only during a specific time period (say 6:00 pm to 6:00 am every day) inside Python code: contrary to your assumption, I think crontab fits well here.
0
0
1
0
2014-02-19T20:58:00.000
1
0.197375
false
21,892,302
0
0
1
1
I have 200,000 URLs that I need to scrape from a website. This website has a very strict scraping policy and you will get blocked if the scraping frequency is 10+/min, so I need to control my pace. I am thinking about starting a few AWS instances (say 3) to run in parallel. This way, the estimated time to collect all the data will be 200,000 URLs / (10 URLs/min) = 20,000 min with one instance, or about 4.6 days with three instances, which is a legitimate amount of time to get my work done. However, I am thinking about building a framework using boto, where I have a piece of code and a queue of input (a list of URLs). Meanwhile, I also don't want to do any damage to their website, so I only want to scrape during the night and on weekends. So I am thinking all of this should be controlled from one box, and the code should look similar to this:

    class worker(job, queue):
        url = queue.pop()
        aws = new AWSInstance()
        result = aws.scrape(url)
        return result

    worker1 = new worker()
    worker2 = new worker()
    worker3 = new worker()
    worker1.start()
    worker2.start()
    worker3.start()

The code above is totally pseudo, and the idea is to pass the work to AWS. Questions: (1) How can I use boto to pass the variable/argument to another AWS instance, start a script to work on those variables, and use boto to retrieve the result back to the master box? (2) What is the best way to schedule a job to run only during a specific time period inside Python code, say only from 6:00 pm to 6:00 am every day? I don't think the Linux crontab will fit my need in this situation. Sorry if my question is more verbally descriptive and philosophical. Even if you can only offer a hint or the name of a package/library that meets my need, I will be gratefully appreciative!
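A small sketch of the SQS pattern the answer suggests, using the boto 2 API that was current at the time (queue name and region are hypothetical):

    import boto.sqs
    from boto.sqs.message import Message

    conn = boto.sqs.connect_to_region('us-east-1')
    queue = conn.get_queue('urls-to-scrape')  # hypothetical queue name

    # master box: enqueue work
    m = Message()
    m.set_body('http://example.com/page/1')
    queue.write(m)

    # worker instance: pull work, process, acknowledge
    msg = queue.read(wait_time_seconds=20)
    if msg is not None:
        url = msg.get_body()
        # ... scrape url, write the result to a shared store (e.g. DynamoDB) ...
        queue.delete_message(msg)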
Using memcached to host images
21,925,544
1
0
112
0
python,image,flask,memcached
Yes, you can do it. Create a controller or servlet at, for example, www.yoursite.com/getImage/ID. When this URL is requested, your program should connect to memcached and return the image object that you previously stored in it. Then, when your HTML contains src="www.yoursite.com/getImage/ID", the browser will request this URL, but instead of a file being read from disk, memcached will be asked for the specific ID. Be sure to set the correct Content-Type on the server's response so that the browser understands you are sending image content.
0
0
0
0
2014-02-20T01:19:00.000
1
1.2
true
21,896,157
0
0
1
1
I'm writing a simple blogging platform with the Flask microframework, and I'd like to allow users to change the image on the front page, but without actually writing it to the filesystem. Is it possible to point the src attribute of an img tag to an object stored in memory?
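A minimal Flask sketch of the answer's idea, assuming a memcached client such as python-memcached and that the image bytes were stored earlier under a known key (the key scheme and MIME type are hypothetical):

    import memcache
    from flask import Flask, Response, abort

    app = Flask(__name__)
    mc = memcache.Client(['127.0.0.1:11211'])

    @app.route('/getImage/<image_id>')
    def get_image(image_id):
        data = mc.get('image:%s' % image_id)  # bytes stored earlier
        if data is None:
            abort(404)
        # correct Content-Type so the browser renders it as an image
        return Response(data, mimetype='image/png')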
Record rules on same object with different CRUD options?
21,903,368
0
0
78
0
python,openerp,record,crud,access-rights
It seems this works. I needed to create two rules. The first (applies to r, w, c): ['&', ('user_id','=',user.id), ('state','=','stage1')]. And the second rule (applies to r): [('stage','=','stage2')].
0
0
0
0
2014-02-20T09:02:00.000
1
1.2
true
21,902,861
0
0
1
1
I need to apply different record rules to the same object to give different access rights depending on the state the record is in. For example, there are three stages: stage1, stage2, stage3. In the first stage, a user in a specific access rights group can Read, Write and Create his own records. When he presses the button to go to stage2, he can only Read that record (if the record were to go back to stage1 - not by that user - he could again do the previous things). And in stage3 that user does not see any records, neither his own nor anyone else's. I tried something like this. First rule (applies to r,w,c): [('user_id','=',user.id)]. This works, but I get problems when going to other stages. I tried to create a second rule (applies to r): [('stage','=','stage2')], but it does not work; that user can still do anything he can do in stage1. If I make a rule like this (applies to r,w,c): ['|', ('user_id','=',user.id), ('stage','=','stage1')], then it gives an access rights error saying you can't go to the next stage because you don't have read access rights in that stage. How can this be solved?
Connect to flask over public connection
21,905,637
0
0
115
0
python,flask
You need to set your router to forward the relevant port to your laptop.
0
0
1
0
2014-02-20T10:54:00.000
1
1.2
true
21,905,560
0
0
1
1
I have Flask running on my MacBook (10.9.1 if it makes a difference). I have no problem accessing what I host there over my local network, but I'm trying to see if I can access it publicly - for example, loading a webpage on my iPhone over its 3G connection. It doesn't appear to be as simple as /index. With my limited knowledge, my public IP seems to be the one for our internet connection rather than for my own laptop. Is that what is causing the issue? I appreciate any help!
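Besides the router port-forwarding the answer mentions, note that Flask's development server listens only on localhost by default; a sketch of making it reachable from other machines (5000 is Flask's default port):

    from flask import Flask

    app = Flask(__name__)

    if __name__ == '__main__':
        # 0.0.0.0 binds to all interfaces, so the forwarded port can reach it
        app.run(host='0.0.0.0', port=5000)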
Parallel South migration in Django causes errors
21,927,635
0
0
87
0
python,django,amazon-web-services,django-south
It turns out the problem was that the git fetch on one of the front servers didn't take; it had nothing to do with running the migrations in parallel (though I shouldn't have done that anyway).
0
0
0
0
2014-02-20T12:07:00.000
2
0
false
21,907,349
0
0
1
1
I ran a code update that points at two front-end servers (Amazon Web Services instances). A South migration was included as part of the update. Since the migration, the live site appears to flit between the current code revision and the previous revision at will. Since discovering this, a previous developer (who left the company before I turned up) said, and I quote: "never run migrations in parallel. Running migrations twice causes duplication of new objects and other errors!" My code changes did not involve any models.py changes; the migrate commands were just part of the Fabric update script. Also, no errors were thrown during the migrations; they seemingly ran as normal. I have database backups, so I can roll back the database as a last resort. Is there any other way to sort out the issue without doing this? Thanks for reading. Edit: I should add that I pushed the same code to a staging server and it worked fine, so the issue isn't the code.
Django & postgres - drawbacks of storing data as json in model fields
21,909,779
3
1
1,302
1
python,json,django,postgresql
Storing data as JSON (whether in text-typed fields or PostgreSQL's native json type) is a form of denormalization. Like most denormalization, it can be an appropriate choice when working with data that is very difficult to model, or where there are serious performance challenges with storing the data fully normalized into entities. PostgreSQL reduces the impact of some of the problems caused by data denormalization by supporting some operations on json values in the database - you can iterate over json arrays or key/value pairs, join on the results of json field extraction, etc. Most of the useful features were added in 9.3; in 9.2, json support is just a validating data type. In 9.4, much more powerful json features will be added, including some support for indexing into json values. There's no simple one-size-fits-all answer to your question, and you haven't really characterized your data or your workload. Like most database challenges, "it depends" on what you're doing with the data. In general, I would tend to say it's best to model the data relationally if it is structured and uniform. If it's unstructured and non-uniform, storage with something like json may be more appropriate.
0
0
0
0
2014-02-20T12:38:00.000
1
0.53705
false
21,908,068
1
0
1
1
I sometimes use a TextField to store data with a structure that may change often (or very complex data) in model instances, instead of modelling everything with the relational paradigm. I could mostly achieve the same thing using more models, foreign keys and such, but it sometimes feels more straightforward to store JSON directly. I haven't yet delved into the postgres JSON type (which can be good for read queries in particular, if I understand correctly), and for the moment I perform json.dumps and json.loads each time I want to access this kind of data. I would like to know what (theoretically) the performance and caching drawbacks of doing so are (with the JSON type and without), compared to using models for everything. Having more knowledge about that could help me later to perform some clever comparison and profiling to enhance overall performance.
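A sketch of the dumps/loads pattern the asker describes, wrapped in a property so that calling code never touches the raw text (the model and field names are hypothetical):

    import json
    from django.db import models

    class Document(models.Model):
        payload_raw = models.TextField(default='{}')

        @property
        def payload(self):
            # deserialized on every access; caching the parsed value on the
            # instance would avoid repeated json.loads calls
            return json.loads(self.payload_raw)

        @payload.setter
        def payload(self, value):
            self.payload_raw = json.dumps(value)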
SOA versus Django ORM with multiple processes
21,914,906
0
0
138
1
python,django,sqlite,rest,orm
It depends on what your application is doing. If your REST application reads a piece of data from SQLite using the Django ORM while the other app does a write, you can run into some interesting race conditions. To prevent that, it might make sense to have both applications as Django apps in a single Django project.
0
0
0
0
2014-02-20T15:57:00.000
1
0
false
21,912,993
0
0
1
1
I have a Django app which provides a REST API using Django REST framework. The API is used by clients as expected, but I also have another process (on the same node) that uses the Django ORM to read the app's database, which is SQLite3. Is it better architecture for that process to use the REST API to interact (reads only) with the app's database? Or is there a better, perhaps more efficient, way than making a ton of HTTP requests from the same node? The problem with the ORM approach (besides its hacky nature) is that reads occasionally fail and must be retried. Also, I want to write to the app's db, which would probably cause more SQLite concurrency issues.
clean up .pyc files in virtualenv stored in source repository after the fact?
21,936,238
4
2
1,403
0
python,version-control,virtualenv,ignore,pyc
That is fine, just remove them! Python auto-generates them from the corresponding .py file any time it wants to, so you needn't worry about simply deleting them all from your repository. A couple of related tips - if you don't want them generated at all on your local dev machine, set the environment variable PYTHONDONTWRITEBYTECODE=1. Python 3.2 fixed the annoyance of source folders cluttered with .pyc files with a new __pycache__ subfolder
0
0
0
0
2014-02-21T13:45:00.000
1
1.2
true
21,936,158
1
0
1
1
I've created a virtualenv for my project and checked it into source control. I've installed a few packages into the virtualenv with pip: django, south, and pymysql. After the fact I realized that I had not set up source control to ignore .pyc files. Could there be any subtle problems in simply removing all .pyc files from my project's repository and then putting the appropriate ignore rules in place? Or is removing a .pyc file always a safe thing to do?
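A small sketch of the one-off cleanup in Python itself, run from the repository root (with a modern Python 3; a recursive shell delete works just as well):

    import pathlib

    # remove every compiled bytecode file under the current directory
    for pyc in pathlib.Path('.').rglob('*.pyc'):
        pyc.unlink()

After that, adding *.pyc (and __pycache__/) to the ignore rules keeps them out for good.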
What's the smartest way to password protect an entire Django site for testing purposes
21,937,650
0
3
4,949
0
python,html,django
Why not just show a simple login form on the index page when the user is not authenticated?
0
0
0
1
2014-02-21T14:25:00.000
3
0
false
21,937,072
0
0
1
1
Here is the deal: how do I put the simplest possible password protection on an entire site? I simply want to open the site to beta testing, but I don't really care about elegance - just a quick and dirty way of giving test users a username and password, without recourse to anything complex. Ideally I'd like not to have to install any code or third-party solutions. I'm trying to keep this simple.
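One quick-and-dirty possibility, sketched as a tiny site-wide HTTP Basic auth middleware; the shared credentials and the new-style middleware signature are assumptions, not anything from the thread:

    import base64
    from django.http import HttpResponse

    class SiteWideBasicAuth(object):
        def __init__(self, get_response):
            self.get_response = get_response

        def __call__(self, request):
            auth = request.META.get('HTTP_AUTHORIZATION', '')
            if auth.startswith('Basic '):
                userpass = base64.b64decode(auth.split(' ', 1)[1]).decode()
                if userpass == 'beta:sekrit':  # hypothetical shared credentials
                    return self.get_response(request)
            resp = HttpResponse('Authentication required', status=401)
            resp['WWW-Authenticate'] = 'Basic realm="beta"'
            return resp

Added to MIDDLEWARE, this challenges every request before any view runs, which matches the "protect the whole site, nothing fancy" goal.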
AppEngine real time querying - cost, performance, latency balancing act and quotas
21,962,823
0
1
173
1
python,google-app-engine,mapreduce,task-queue
First, writes to the datastore take milliseconds. By the time your user hits the refresh button (or whatever you offer), the data will be as "real-time" as it gets. Typically, developers become concerned with real-time when there is a synchronization/congestion issue, i.e. each user can update something (e.g. bid on an item), and all users have to get the same data (the highest bid) in real time. In your case, what's the harm if a user gets the number of check-ins which is 1 second old? Second, data in Memcache can be lost at any moment. In your proposed solution (update the datastore every 5 minutes), you risk losing all data for the 5 min period. I would rather use Memcache in the opposite direction: read data from datastore, put it in Memcache with 60 seconds (or more) expiration, serve all users from Memcache, then refresh it. This will minimize your reads. I would do it, of course, unless your users absolutely must know how many checkins happened in the last 60 seconds. The real question for you is how to model your data to optimize writes. If you don't want to lose data, you will have to record every checkin in datastore. You can save by making sure you don't have unnecessary indexed fields, separate out frequently updated fields from the rest, etc.
0
1
0
0
2014-02-21T17:23:00.000
2
0
false
21,941,030
0
0
1
1
I am trying to design an app that uses Google App Engine to store/process/query data that is then served up to mobile devices via a Cloud Endpoints API in as close to real time as possible. It is a straightforward enough solution; however, I am struggling to get the right balance between performance, cost and latency on App Engine. The scenario (an analogy): a user checks in (many times per day, from different locations, cities, countries), and we would like to allow the user to query all the data via their device, providing information as up to date as possible, such as:
The number of check-ins over the last 24 hours, 1 week, 1 month, and all time.
The most checked-in place/city/country over the same time periods.
The least checked-in place over the same time periods.
Other similar reports.
We could use Memcache to store the most recent check-ins, pushing to the Datastore every 5 minutes, but this may not scale very well and is not robust! We could use a cron job to run the task queue / MapReduce to compute the aggregates and averages for each location every 30 minutes and update the Datastore. The challenge is to use as few reads/writes on the Datastore as possible, because the last "24 hours" of data changes every 5 minutes, and hence so does the last week's data, the last month's data, and so on. The data has to be dynamic to some degree, so it is not fixed points in time; it is always changing - herein lies the issue! It is not a problem to set this up, but setting it up efficiently, balancing performance/latency for the user against cost/quotas for us, is not so easy! The simple solution would be to use SQL and run date-range queries, but this will not scale very well. We could eventually use BigTable and BigQuery for the "all time" period, but serving users data as close to real time as possible via the API for the other time periods is proving quite the challenge! Any suggestions of App Engine architectures/approaches would be seriously welcomed. Many thanks.
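A sketch of the read-through Memcache pattern the answer recommends (read from the datastore, cache for 60 seconds, serve everyone from cache), using the GAE Python memcache API; CheckIn is a hypothetical ndb model:

    from google.appengine.api import memcache

    def recent_checkin_count(location_key):
        cache_key = 'checkins:%s' % location_key
        count = memcache.get(cache_key)
        if count is None:
            # fall back to the datastore; CheckIn is a hypothetical ndb model
            count = CheckIn.query(CheckIn.location == location_key).count()
            memcache.set(cache_key, count, time=60)  # expire after 60 seconds
        return count

Because the cache is only a 60-second-old view of durable datastore rows, losing Memcache here costs a few extra reads, not data - the opposite of the risky "buffer writes in Memcache" direction the answer warns against.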
Django delete unused media files
21,943,591
2
37
19,573
0
python,django,django-admin
You can use signals, then use os.remove() to clean up the related files on delete. This way your file system always reflects your db, with no need to hit a button.
0
0
0
0
2014-02-21T17:45:00.000
5
0.07983
false
21,941,503
0
0
1
1
I have a Django project in which the admins are able to upload media. As items sell, they are deleted from the site, thus removing their entry in the MySQL database. The images associated with an item, however, remain on the file system. This isn't necessarily bad behavior - I don't mind keeping files around in case a deletion was an accident. The problem I foresee is two years from now, when storage space is limited because of a media folder bloated with old product images. Does anyone know of a systematic/programmatic way to sort through ALL the images, compare them to the relevant MySQL fields, and delete from the filesystem any image that DOESN'T have a match? In a perfect world I'm imagining a button in the django-admin like "Clean up unused media" which executes a Python script capable of this behavior. I'll share whatever my eventual solution is here, but what I'm looking for right now is anyone who has ideas, knows resources, or has done this themselves at some point.
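A sketch of the signal-based cleanup from the answer, assuming an Item model with an ImageField named image (the app and model names are hypothetical):

    import os
    from django.db.models.signals import post_delete
    from django.dispatch import receiver
    from myapp.models import Item  # hypothetical app and model

    @receiver(post_delete, sender=Item)
    def delete_item_image(sender, instance, **kwargs):
        # remove the file from disk once the database row is gone
        if instance.image and os.path.isfile(instance.image.path):
            os.remove(instance.image.path)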
Heroku: Putting a file in the bin folder
21,942,195
1
0
301
0
python,linux,django,heroku
You create it in the project root.
0
0
0
0
2014-02-21T18:15:00.000
1
1.2
true
21,942,110
0
0
1
1
I am sorry if this is too basic a question, but I spent the whole morning unsuccessfully trying to figure it out. I want to use the Heroku Scheduler for a Django app, and as per their documentation I am supposed to put the Python file I want executed by the Scheduler in the bin/ folder on Heroku. Now, on my local copy of the project, where do I create the bin folder with respect to the project root?
Possible to make flask login system which doesn't use client-side session/cookie?
21,947,739
-1
4
1,749
0
python,session,flask,flask-login
Nope, that's literally impossible over pure HTTP. HTTP is a stateless protocol, which means that in order to preserve state, the client has to be able to identify itself on every request. What you might be able to do is HTTP Basic Authentication over HTTPS, and then access those credentials on the server side.
0
0
0
0
2014-02-22T00:11:00.000
4
-0.049958
false
21,947,723
0
0
1
2
I'm working on a Flask app to be used by a medical client. Their IT dept is so uptight about security that they disable cookies and scripting network-wide. Luckily, WTForms was able to address one of these issues with server-side validation of form input. However, I'm getting hung up on the login system. I've implemented Flask-Login, but this apparently requires client-side data, as I'm unable to log in when testing in a browser with these features disabled. Is there any way to create a login with zero client-side data? Thanks for the help.
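A sketch of the Basic-auth route the first answer mentions, using Flask's built-in parsing of the Authorization header; the credential check is schematic, and this only makes sense over HTTPS:

    from flask import Flask, request, Response

    app = Flask(__name__)

    @app.route('/secret')
    def secret():
        auth = request.authorization  # parsed HTTP Basic credentials, or None
        if auth and auth.username == 'doctor' and auth.password == 'sekrit':
            return 'hello, %s' % auth.username
        # challenge: the browser prompts and resends credentials on each request,
        # so no cookie or script is needed on the client
        return Response('Login required', 401,
                        {'WWW-Authenticate': 'Basic realm="app"'})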
Possible to make flask login system which doesn't use client-side session/cookie?
21,947,780
1
4
1,749
0
python,session,flask,flask-login
Given the restriction of zero client-side data, you could pass a session token in the GET parameters of every link rendered in the HTML page. Or you could create POST-only views with a hidden token input (which may indeed be more secure).
0
0
0
0
2014-02-22T00:11:00.000
4
0.049958
false
21,947,723
0
0
1
2
I'm working on a Flask app to be used by a medical client. Their IT dept is so uptight about security that they disable cookies and scripting network-wide. Luckily, WTForms was able to address one of these issues with server-side validation of form input. However, I'm getting hung up on the login system. I've implemented Flask-Login, but this apparently requires client-side data, as I'm unable to log in when testing in a browser with these features disabled. Is there any way to create a login with zero client-side data? Thanks for the help.
Error 404 when trying to access a Django app installed in a subdomain
21,969,799
0
0
718
0
python,django,.htaccess,subdomain,virtualenv
The issue was solved by contacting the support service and asking them to open port 8000 for me.
0
1
0
0
2014-02-23T00:15:00.000
2
1.2
true
21,962,475
0
0
1
1
I just installed Django and created a project and an app following the basic tutorial part 1. I created a virtualenv, since CentOS's default Python version is 2.4.3, and I also created a subdomain to work on this during the development phase. When I try to access dev.domain.com/admin/ or dev.domain.com/, I get a 404 error; it's like Django is not even there. When I run the server I get a normal response:

    (python2.7env)-bash-3.2# python manage.py runserver
    Validating models...
    0 errors found
    February 22, 2014 - 23:54:07
    Django version 1.6.2, using settings 'ct_project.settings'
    Starting development server at http://127.0.0.1:8000/

Any ideas what I'm missing? EDIT: after starting the server correctly (with the right IP) I tried again, and the browser hung. I then ran an online port scanner and found out that port 8000 is not responding. Any ideas what I can try next? Thanks
Python-based asynchronous workflow modules: what is the difference between Celery workflows and Luigi workflows?
25,704,688
27
37
8,450
0
python,celery,luigi
Update: as Erik pointed out, Celery is the better choice for this case.
Celery: Celery is a simple, flexible and reliable distributed system to process vast amounts of messages, while providing operations with the tools required to maintain such a system. Why use Celery? It is simple to use and has lots of features; django-celery provides good integration with Django; flower is a real-time monitor and web admin for the Celery distributed task queue; and it has an active, large community (based on Stack Overflow activity, PyVideos, tutorials, and blog posts).
Luigi: Luigi (Spotify's recently open-sourced Python framework) is a Python package that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization, handling failures, command-line integration, and much more. Why use Luigi? Built-in support for Hadoop; generic enough to be used for everything from simple task execution and monitoring on a local workstation to launching huge chains of processing tasks that run in synchronization between many machines over the span of several days; and Luigi's visualiser gives a nice visual overview of the workflow's dependency graph.
Conclusion: if you need a tool simply to schedule tasks and run them, you can use Celery. If you are dealing with big data and heavy processing, go for Luigi.
0
1
0
0
2014-02-23T11:08:00.000
2
1.2
true
21,967,398
0
0
1
2
I am using Django as a web framework and I need a workflow engine that can run synchronous as well as asynchronous (batch) chains of tasks. I found Celery and Luigi as batch-processing workflow tools. My first question is: what is the difference between these two modules? Luigi allows us to rerun a failed chain of tasks, and only the failed sub-tasks get re-executed. What about Celery: if we rerun the chain (after fixing the failed sub-task's code), will it rerun the already-succeeded sub-tasks? Suppose I have two sub-tasks: the first one creates some files and the second one reads those files. When I put these into a chain in Celery, the whole chain fails due to buggy code in the second task. What happens when I rerun the chain after fixing the code in the second task? Will the first task try to recreate those files?
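For reference, a minimal sketch of what "chaining" means in Celery terms; rerunning this chain re-executes every task in it, since Celery does not track per-task outputs the way Luigi targets do (the task bodies and broker URL are hypothetical):

    from celery import Celery, chain

    app = Celery('tasks', broker='redis://localhost:6379/0')

    @app.task
    def create_files(batch_id):
        # pretend we wrote files and return their location
        return 'files-%s' % batch_id

    @app.task
    def read_files(location):
        return 'processed %s' % location

    # create_files' return value is passed on to read_files
    result = chain(create_files.s(42), read_files.s())()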
Python-based asynchronous workflow modules: what is the difference between Celery workflows and Luigi workflows?
34,112,320
45
37
8,450
0
python,celery,luigi
(I'm the author of Luigi.) Luigi is not meant to be a synchronous low-latency framework; it's meant for large batch processes that run for hours or days. So I think for your use case, Celery might actually be slightly better.
0
1
0
0
2014-02-23T11:08:00.000
2
1
false
21,967,398
0
0
1
2
I am using Django as a web framework and I need a workflow engine that can run synchronous as well as asynchronous (batch) chains of tasks. I found Celery and Luigi as batch-processing workflow tools. My first question is: what is the difference between these two modules? Luigi allows us to rerun a failed chain of tasks, and only the failed sub-tasks get re-executed. What about Celery: if we rerun the chain (after fixing the failed sub-task's code), will it rerun the already-succeeded sub-tasks? Suppose I have two sub-tasks: the first one creates some files and the second one reads those files. When I put these into a chain in Celery, the whole chain fails due to buggy code in the second task. What happens when I rerun the chain after fixing the code in the second task? Will the first task try to recreate those files?
PyCharm include and modify External library in project
27,070,676
2
13
15,029
0
python,django,python-2.7,ide,pycharm
Another option would be to place the libraries in a separate project (or go even further and place each library in its own project) and then open this project, or these projects, side by side with the main project. This way you have a clear separation between the main project and the libraries it uses. It also comes in handy when you work on another project that uses some of the same libraries: you then only need to open the already-existing project containing the libraries and you are done.
0
0
0
0
2014-02-23T16:12:00.000
2
0.197375
false
21,970,771
1
0
1
1
I have an issue where I am developing a Django project which includes other libraries we are also developing. My current structure is as follows:
Main Project
  App1
  App2
Libraries
  Library 1
  Library 2
All libraries have their own setup scripts and live in separate git repositories. We add them to the PYTHONPATH in PyCharm and reference them simply by their names. This works well, but they are not part of my current project, which means no refactoring (renaming, moving, etc.), and I have to use external search to find classes from the libraries. How do I mark some libraries as project-related, to make them viewable and refactorable like the currently open project?
Django Error: OperationalError: no such table: polls_poll
23,184,956
11
5
13,012
1
python,django,shell,sqlite
I met the same problem today and fixed it. I think you missed some commands from tutorial 1. Just run the following:

    python manage.py makemigrations polls
    python manage.py sql polls
    python manage.py syncdb

That fixes it and creates the polls table, and you can see the table created. You should read up on the manage.py makemigrations command.
0
0
0
0
2014-02-23T23:49:00.000
4
1.2
true
21,976,383
0
0
1
1
Going through Django tutorial 1 using Python 2.7 and can't seem to resolve this error: OperationalError: no such table: polls_poll. This happens the moment I enter Poll.objects.all() into the shell. Things I've already tried based on research through the net:
1) Ensured that 'polls' is listed under INSTALLED_APPS in settings.py. Note: I've seen lots of suggestions to insert 'mysite.polls' instead of 'polls' into INSTALLED_APPS, but this gives the following error: ImportError: cannot import name 'polls' from 'mysite'.
2) Ran python manage.py syncdb. This creates my db.sqlite3 file successfully and seemingly without issue in my mysite folder.
3) Finally, when I run python manage.py shell, the shell runs smoothly, but I do get a weird RuntimeWarning when it starts, and I wonder if the polls_poll error is connected: \django\db\backends\sqlite3\base.py:63: RuntimeWarning: SQLite received a naive datetime (2014-02-03 17:32:24.392000) while time zone support is active.
Any help would be appreciated.
Global leaderboard in Google App Engine
21,979,231
0
3
930
0
python,google-app-engine,cron,leaderboard
Whether this is simpler or not is debatable. I have assumed that ranking is not just a matter of ordering an accumulation of points - in that case it's just a simple query - and that ranking involves other factors beyond the current score. I would consider writing out an Event record for each update of a user's points (effectively a queue). Tasks run that collect all the current Event records; in addition, you maintain a set of records representing the top of the leaderboard, adjust this set based on the incoming Event records, and discard Event records once processed. This limits your reads and writes to only the active events in a small time window. The leaderboard could probably be a single entity, fetched by key and cached. I assume you may have different ranking schemes, like a currently-active rank (for the current 7 days) versus an all-time rank (players not playing for a while won't have a good current rank). As players view their rank, you can serve that with two simple queries: Players.query(Players.score > somescore).fetch(5) and Players.query(Players.score < somescore).fetch(5). This shouldn't cost too much, and you could cache the results.
0
1
0
0
2014-02-24T04:43:00.000
3
0
false
21,979,038
0
0
1
2
I want to build a backend for a mobile game that includes a "real-time" global leaderboard for all players, for events that last a certain number of days, using Google App Engine (Python). Typical usage would be as follows:
- A user starts and finishes a combat, acquiring points (2-5 minutes per combat).
- Points are accumulated in the player's account for the duration of the event.
- The player can check the leaderboard at any time.
- The leaderboard returns the top 10 players, along with the 5 players just above and below the player's score.
Now, there is no hard constraint on the real-time aspect; the board could be updated anywhere from every 30 seconds to every hour. I would like it to be as "fast" as possible without costing too much. Since I'm not very familiar with GAE, this is the solution I've thought of: each Player entity has an event_points attribute; using a cron job, at a regular interval, a sorted query is made to the datastore for all players whose score is not zero; the cron job then iterates through the query results, writing back the rank to each Player entity. This solution feels very "brute force". The problem lies in the cost of the reads and writes for all entities: if we end up with 50K active users, this means a sorted query of 50K+1 reads and 50K+1 writes at regular intervals, which could be very expensive (depending on the interval). I know that memcache can be a way to prevent some reads and some writes, but if some entities are not in memcache, does it make sense to query it at all? Also, I've read that memcache can be flushed at any time anyway, so unless there is a way to "back it up" cheaply, using it seems dangerous, since the data is relatively important. Is there a simpler way to solve this problem?
Global leaderboard in Google App Engine
21,980,623
2
3
930
0
python,google-app-engine,cron,leaderboard
You don't need 50,000 reads or 50,000 writes. The solution is to set a sort order on your points property. Every time you update it, the datastore updates its order automatically, which means you don't need a rank property in addition to the points property - and accordingly you don't need a cron job. Then, when you need to retrieve a leaderboard, you run two queries: one for 6 entities with a number of points greater than or equal to your user's, and one for 6 entities with a number less than or equal. Merge the results, and this is what you show to your user. As for your top-10 query, you may want to put its results in Memcache with an expiration time of, say, 5 minutes. When you need it, first check Memcache; if it's not found, run the query and update Memcache.
EDIT: to clarify the query part, you need the right combination of a sort order and an inequality filter to get the results you want. According to the App Engine documentation, a query is performed in the following order: (1) it identifies the index corresponding to the query's kind, filter properties, filter operators, and sort orders; (2) it scans from the beginning of that index to the first entity that meets all of the query's filter conditions; (3) it continues scanning the index, returning each entity in turn, until it encounters an entity that does not meet the filter conditions, reaches the end of the index, or has collected the maximum number of results requested. Therefore, you need to combine an ASCENDING order with a GREATER_THAN_OR_EQUAL filter for one query, and a DESCENDING order with a LESS_THAN_OR_EQUAL filter for the other. In both cases you set the limit on the results at 6. One more note: you set the limit at 6 entities because both queries will return the user himself. You could add another filter (userId NOT_EQUAL your user's id), but I would not recommend it - the cost is not worth the savings. Obviously, you cannot use plain GREATER_THAN/LESS_THAN filters for points, because many users may have the same number of points.
0
1
0
0
2014-02-24T04:43:00.000
3
1.2
true
21,979,038
0
0
1
2
I want to build a backend for a mobile game that includes a "real-time" global leaderboard for all players, for events that last a certain number of days, using Google App Engine (Python). Typical usage would be as follows:
- A user starts and finishes a combat, acquiring points (2-5 minutes per combat).
- Points are accumulated in the player's account for the duration of the event.
- The player can check the leaderboard at any time.
- The leaderboard returns the top 10 players, along with the 5 players just above and below the player's score.
Now, there is no hard constraint on the real-time aspect; the board could be updated anywhere from every 30 seconds to every hour. I would like it to be as "fast" as possible without costing too much. Since I'm not very familiar with GAE, this is the solution I've thought of: each Player entity has an event_points attribute; using a cron job, at a regular interval, a sorted query is made to the datastore for all players whose score is not zero; the cron job then iterates through the query results, writing back the rank to each Player entity. This solution feels very "brute force". The problem lies in the cost of the reads and writes for all entities: if we end up with 50K active users, this means a sorted query of 50K+1 reads and 50K+1 writes at regular intervals, which could be very expensive (depending on the interval). I know that memcache can be a way to prevent some reads and some writes, but if some entities are not in memcache, does it make sense to query it at all? Also, I've read that memcache can be flushed at any time anyway, so unless there is a way to "back it up" cheaply, using it seems dangerous, since the data is relatively important. Is there a simpler way to solve this problem?
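An ndb sketch of the two "neighbors" queries plus the cached top-10 from this answer; the Player model definition is assumed:

    from google.appengine.api import memcache
    from google.appengine.ext import ndb

    class Player(ndb.Model):
        points = ndb.IntegerProperty()

    def leaderboard_around(score):
        # 6 entities at or above, 6 at or below (each set includes the user)
        above = Player.query(Player.points >= score).order(Player.points).fetch(6)
        below = Player.query(Player.points <= score).order(-Player.points).fetch(6)
        return above, below

    def top10():
        top = memcache.get('top10')
        if top is None:
            top = Player.query().order(-Player.points).fetch(10)
            memcache.set('top10', top, time=300)  # cache for 5 minutes
        return top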
How to make a task start right after another one finishes in Google App Engine?
21,981,710
0
1
159
0
python,google-app-engine,task
It seems impossible to guarantee that B will be next.
0
1
0
0
2014-02-24T07:25:00.000
2
0
false
21,981,387
0
0
1
2
This is about task queues in GAE. For example, I have tasks A and B. How do I ensure that task B starts right after task A finishes? There could be other tasks, like C, in between. Also, 'right after' could be loosened to 'after'. How about a dedicated queue with max_concurrent_requests set to 1?
How to make a task start right after another one finishes in Google App Engine?
21,982,975
2
1
159
0
python,google-app-engine,task
If you only have two tasks, you can enqueue task B at the end of task A. For example, a task that updates user scores can, when finished, enqueue a task to send emails. In this case you are guaranteed that task B executes after task A, but there is no guarantee that no task C runs in between them - unless, of course, you don't have a task C (or any other tasks) at all.
0
1
0
0
2014-02-24T07:25:00.000
2
0.197375
false
21,981,387
0
0
1
2
This is about task queues in GAE. For example, I have tasks A and B. How do I ensure that task B starts right after task A finishes? There could be other tasks, like C, in between. Also, 'right after' could be loosened to 'after'. How about a dedicated queue with max_concurrent_requests set to 1?
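A sketch of the "enqueue B at the end of A" pattern from the answer, using the GAE task queue API; the handler URLs and queue name are hypothetical:

    from google.appengine.api import taskqueue
    import webapp2

    class TaskA(webapp2.RequestHandler):
        def post(self):
            # ... do task A's work ...
            # enqueue B only once A's work has finished
            taskqueue.add(url='/tasks/b', queue_name='serial')

    class TaskB(webapp2.RequestHandler):
        def post(self):
            pass  # ... task B's work ...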
How to create a filter which compares only month in date type in OpenERP?
21,987,900
3
0
670
0
python,openerp,openerp-7
You can write a stored functional integer field on hr.employee with a function returning the month as an integer; you can then use this field in filters.
0
0
0
0
2014-02-24T11:17:00.000
1
1.2
true
21,986,203
0
0
1
1
In the HR module, in the Employee form, I want to create a filter which gives me a list of all employees whose birthdays fall in the current month. Currently I am trying with a static month, as below, but it gives me an error: [('birthday.month','=','02')] Error: File "/usr/lib/pymodules/python2.7/openerp/osv/expression.py", line 1079, in __leaf_to_sql or left in MAGIC_COLUMNS, "Invalid field %r in domain term %r" % (left, leaf) AssertionError: Invalid field 'birthday.month' in domain term ('birthday.month', '=', '02'). Is there any way to accomplish this?
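A rough OpenERP 7 sketch of the stored functional field the answer proposes, using the osv-style API of that era; the field and method names are hypothetical:

    from openerp.osv import fields, osv

    class hr_employee(osv.osv):
        _inherit = 'hr.employee'

        def _get_birthday_month(self, cr, uid, ids, field_name, arg, context=None):
            res = {}
            for emp in self.browse(cr, uid, ids, context=context):
                # birthday is stored as 'YYYY-MM-DD'; extract the month part
                res[emp.id] = int(emp.birthday[5:7]) if emp.birthday else 0
            return res

        _columns = {
            'birthday_month': fields.function(_get_birthday_month, type='integer',
                                              string='Birthday Month', store=True),
        }

A filter domain can then compare against it, e.g. [('birthday_month','=',2)].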
OAuth fails after deploying google glass application
22,016,838
0
0
48
0
python,google-glass
A couple of standard debugging practices (you may want to update the original question with the answers): Did OAuth actually fail, and what information do you have that says it failed? Can you verify from the web server logs that the callback URL was hit and that it contained non-error return values? Can you check your web server and app server logs for error messages or exceptions?
0
0
1
0
2014-02-24T13:02:00.000
1
0
false
21,988,618
0
0
1
1
I went through the instructions for the Google Glass Python Quick Start. I deployed the app and it supposedly finished deploying successfully. I then went to the main URL for the app and attempted to open the page. The page asked me which Google account I wanted to use to access the app, and I chose one. It went through some kind of redirect and then came back to my app and tried to open the oauth2callback page, at which point nothing else happened; it just stopped on the oauth2callback page and sat there with a white screen. I assume the app is supposed to look like the posted sample app, where I should see timeline cards and be able to send messages, but I don't see any of that. I checked my OAuth callbacks and they look exactly like the quick start instructions said to make them. What am I missing?
Running the same spider simultaneously
22,020,164
0
1
79
0
python,python-2.7,scrapy
The only use I see for running multiple instances of the same spider at the same time is when each instance has its own portion of start_urls. But then each instance should run on a different network interface; otherwise you cannot effectively control the crawling intensity for the same domain.
0
0
0
0
2014-02-24T20:37:00.000
1
0
false
21,998,474
1
0
1
1
I am using Scrapy 0.20 with Python 2.7. I want to ask: what are the pros and cons of running the same spider twice at the same time? Please note that I am using a pipeline to write the results to a JSON file. Thanks
A few questions on writing a RESTful web service
22,004,122
1
0
81
0
python,web-services,api,rest,amazon-web-services
This is a bit broad really, especially the Python part. (1) Yes, this can be considered an API; think of SOAP and REST services as an API made available over the network. (2) That part is opinion-based and not well suited for discussion here; a guideline is that if it works for you, it is good. (3) Yes, the website should use the same REST services, otherwise you will duplicate work.
0
0
1
0
2014-02-24T22:55:00.000
2
0.099668
false
22,000,918
0
0
1
1
I just decided to start working on a mobile application for fun, but it will require a back end. So I created an EC2 instance on Amazon Web Services with an Amazon Linux AMI installed. I have also set up a database instance and inserted some dummy data. Now, the next step I want to take is to write a RESTful web service that will run on my server and interface with my database (which is independent from my server). First question: would this be considered an API? Second: I am doing research into implementing this web service in Python; in your opinion, are there better choices? Third: if I make a website, would/should it also use this RESTful web service to query data from the database?
backing up data from app engine datastore as spreadsheet or csv
22,031,727
0
0
163
0
python,google-app-engine,google-cloud-datastore,app-engine-ndb
You need to take a look at the Google Spreadsheets API: google it, try it, and come back when something specific doesn't work. Also consider using Google Forms instead, which already does what you want (saving responses to a spreadsheet).
0
1
0
0
2014-02-24T23:42:00.000
1
0
false
22,001,578
0
0
1
1
I am making a survey website on Google App Engine using Python. For saving the survey form data I am using the NDB datastore. After the survey I have to export the data as a spreadsheet or CSV. How can I do that? Thanks.
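For the CSV half, a sketch of a GAE handler that streams NDB entities out as CSV; the model and its fields are hypothetical:

    import csv
    import StringIO  # Python 2, as used on GAE at the time
    import webapp2
    from google.appengine.ext import ndb

    class Response(ndb.Model):
        name = ndb.StringProperty()
        answer = ndb.StringProperty()

    class ExportCsv(webapp2.RequestHandler):
        def get(self):
            buf = StringIO.StringIO()
            writer = csv.writer(buf)
            writer.writerow(['name', 'answer'])
            for r in Response.query():
                writer.writerow([r.name, r.answer])
            self.response.headers['Content-Type'] = 'text/csv'
            self.response.headers['Content-Disposition'] = 'attachment; filename=survey.csv'
            self.response.out.write(buf.getvalue())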
how to do authentication of rest api from javascript, if javascript is on third party site?
25,426,295
0
1
117
0
javascript,jquery,python,api
It all depends on what you're authenticating. If you're authenticating each user that uses your API, you have to do something like the following: your site has to somehow drop a cookie in that user's browser; your API needs to support CORS (we use easyXDM.js); and upon logging in to their site, their site needs to send the user to your site so a token can be passed that authenticates the user against your API (or vice versa, depending on the relationship). If you're just authenticating that a certain site is authorized to use your API, you can issue that site an API key and check for it whenever your API is called. The problem with this approach is that JavaScript is visible to the end user: anyone who really wants to use your API could simply reuse the same API key. It's not really authentication without some sort of server-to-server call; at best, you're offering a very weak line of defense against the most obvious attacks.
0
0
1
0
2014-02-25T11:59:00.000
1
0
false
22,013,532
0
0
1
1
I have a piece of JavaScript placed on a third-party site, and this JS makes API calls to my server. The JS is publicly available, and the third party cannot store credentials in it. I want to authenticate API calls before returning JSON, and I also want to rate-limit them. Does anyone have ideas on how I can authenticate the API?
Django - listening to RabbitMQ in a synchronized way, without Celery, in the same process as the web-bound Django
32,414,239
2
2
1,008
0
python,django,rabbitmq
If anyone else bumps into this problem: the solution is running a RabbitMQ consumer in a different process (but in the same Django codebase) than the web-bound Django process (not through WSGI, etc.; you have to start it by itself). The consumer connects to the appropriate RabbitMQ queues and writes the data into the Django models. The usual Django process(es) then act as a "read model" of the data inserted/updated/created/deleted as delivered by the message queue (RabbitMQ or other) from a remote process.
0
1
0
0
2014-02-25T15:30:00.000
1
0.379949
false
22,018,798
0
0
1
1
I need to implement a quite simple Django server that serves some HTTP requests and listens to a RabbitMQ message queue that streams information into the Django app (which should be written to the db). The data must be written to the db in a synchronized order, so I can't use the obvious celery/rabbit configuration. I was told that there is no way to do this in the same Django project, since Django listens for HTTP requests in its own process and can't spawn another process to listen for Rabbit, forcing me to add another Python/Django project for the rabbit/db-write part, working with the same models the HTTP-bound Django project works with. You can smell the trouble with this config from here. Any ideas how to solve this? Thanks!
Best way to build a custom Django CMS
22,035,806
0
1
2,589
0
python,django,content-management-system,django-cms
django CMS is a CMS built on top of django. It supports multiple languages really well and plays nicely together with your own django apps. The basic idea is that you define placeholders in your template and are then able to fill those placeholders with content plugins. A content plugin can be anything from text, pictures, or a twitter stream to a multi-column layout, etc.
0
0
0
0
2014-02-26T02:47:00.000
2
0
false
22,030,696
0
0
1
1
I am new to Django, but heard it was promising when attempting to create a custom CMS. I am looking to get started, but there seems to be a lack of documentation, tutorials, etc. on how to actually get something like this going. I am curious if there are any books/tutorials/guides that can help me get started with CMS building in Django. PS: I have heard of django-cms, but am unsure what exactly it is and how it is different from django.
Achieving django url-like functionality using resource_name in tastypie
22,069,707
1
0
158
0
python,django,tastypie
Although I'm not sure that the approach of using a resource_name with slashes will always work for you, to resolve your issue you can simply change the order of the URL registration: when registering the URLs, register the resource with the name "library/books" last. The reason you have the issue is that "library/books/shelf" is caught as the book with the pk of "shelf". If the URL patterns of the resource "library/books/shelf" come first, they will be caught by Django before it tries to resolve library/books/pk.
0
0
0
0
2014-02-26T15:38:00.000
1
0.197375
false
22,046,149
0
0
1
1
Is there a way to create a hierarchy of resources in tastypie using resource_name that will behave like regular django urls? I'm aiming to have tastypie urls that look like this: <app_name>/<module_name>/<functionality>, but I'm having trouble. I've created resources with the following resource_name: library/books library/books/shelf library/books/circulation (Note that the parent resource library/books has no trailing slash.) In this case, I can access the parent resource just fine. However, when trying to access one of the children resources (e.g. /api/v1/library/books/circulation) I receive the following error: Invalid resource lookup data provided (mismatched type). On the other hand, when I define the parent's resource_name as library/books/ (with a trailing slash), the children resources come back fine, but the parent resource itself returns a 404 error. All is well if I format the resource_names with underscores (library_books, library_books_circulation), but then they're really ugly... I'm running Python 2.7.3, using Django 1.6 with Tastypie 0.10.0.
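A minimal sketch of the registration-order fix described in the answer above. The resource classes, import path and Api instance are hypothetical stand-ins for the asker's actual code; the point is only that the more specific resource_names are registered before the plain library/books one, so their URL patterns are matched first.

```python
from tastypie.api import Api

# Hypothetical resources matching the resource_names from the question.
from library.api.resources import BookResource, ShelfResource, CirculationResource

v1_api = Api(api_name='v1')

# Register the more specific paths first so their URL patterns are matched
# before library/books/<pk> swallows "shelf" or "circulation" as a pk.
v1_api.register(ShelfResource())        # resource_name = 'library/books/shelf'
v1_api.register(CirculationResource())  # resource_name = 'library/books/circulation'
v1_api.register(BookResource())         # resource_name = 'library/books', registered last

# Then include v1_api.urls in urlpatterns as usual.
```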
Using Django without templates?
22,055,844
2
0
1,483
0
python,django,django-templates
If you are making a web app, I'd say you need templates; any other solution would be a mess. However, django templates have been known to not scale well, because rendering them is relatively slow compared to other solutions like jinja2. There are several apps that integrate jinja2 into django, and there's also been a lot of discussion about integrating jinja2 into django core itself someday. So if you are scaling up big time, you may want to investigate performance and optimize template rendering. There are some big sites using django, like Pinterest, Instagram, and Bitbucket, so they must have figured out a way. But for the most part, django template performance is just fine.
0
0
0
0
2014-02-26T23:03:00.000
1
0.379949
false
22,055,432
0
0
1
1
Forgive my limited knowledge of django. I was briefly talking with a developer from Google whom I had met, and he stated something confusing to me, something I hadn't really gotten a chance to ask him more about. He told me to be careful with django templates because, in terms of scale, they can cause problems and almost always need to be rewritten. Instead he mentioned something like using a 'full stack' with django. Thinking back, I don't exactly follow what he means by that. Is there a way to use Django without templates? Is it better? Why or why not?
How to re-architect a portal for creating mobile app
22,103,027
0
0
183
0
python,django,angularjs,heroku,architecture
What do you guys think about the architecture? This is a common service-oriented architecture with decoupled clients: you just have REST endpoints on your backend, and any client can consume those endpoints. You should also think about: Do you need a RESTful service? (RESTful == stateless; will you store any state on the server?) How will you scale the service in the future? (This is a legitimate concern, as you are already aware of a huge traffic increase and assume 2 servers.) How can it be improved? Use Scala instead of Python :) Will the performance of the portal go down after adding the above layers to the architecture? It depends. It will pay some performance penalty (any additional abstraction layer has its tax), but most probably you won't even notice it. Still, you should measure it with some stress tests. In the above architecture, should 2 servers be used to run this (like one for the client and the other for serving the APIs), or will one server be enough? Currently Heroku is used for deployment. Well, as usual, it depends: on the usage profile you have right now and on the resources available. If you are interested in whether the new design will perform better than the old one, there are a number of parameters to measure. To sum up: this is a good overall approach for a system with different clients. It will allow you to: totally decouple mobile app and frontend development from backend development (these could be different independent teams, and outsourceable); standardize your API layer (as all clients will consume the same endpoints); and make your service easier to scale (this includes a separate webserver for static assets and much more).
0
0
0
0
2014-02-27T11:54:00.000
2
0
false
22,067,766
0
0
1
1
Currently I am working on a portal which is exposed to end users. This portal is developed using Python 2.7, Django 1.6 and MySQL. Now we want to expose this portal as a mobile app, but the current design does not support that, as templates, views and database are tightly coupled with each other. So we decided to re-architect the whole portal. After some research I found the following: Client side: AngularJS for all client-side operations, like showing data and getting data using ajax. Server side: a REST API exposed to AngularJS. This REST API can be developed using either Tastypie or Django Rest Framework (still not decided). The REST API will be exposed over Django. I have a few questions: What do you guys think about this architecture? Is this a good or bad design? How can it be improved? Will the performance of the portal go down after adding the above layers to the architecture? In the above architecture, should 2 servers be used to run this (like one for the client and the other for serving the APIs), or will one server be enough? Currently Heroku is used for deployment. Currently the portal is getting 10K hits a day, and this is expected to go to 100K a day in 6 months. I will be happy to provide more information if needed.
Multiple storage engines for django media: prefer local, fallback to CDN
22,083,369
1
3
675
0
python,django,rackspace,mezzanine
The best way to get this working is to have a different web server serving all of your media (I used nginx). Then you set up a load balancer to detect failure and redirect all requests to the CDN in case of a failure. One thing that you might have to figure out is the image path (use HAProxy to rewrite the request URL, if you need to).
0
1
0
0
2014-02-27T22:31:00.000
2
0.099668
false
22,082,005
0
0
1
1
I have a django/mezzanine/django-cumulus project that uses the Rackspace Cloud Files CDN for media storage. I would like to automatically serve all static files from the local MEDIA_ROOT, if they exist, and only fall back to the CDN URL if they do not. One possible approach is to manage the fallback at the template level, using tags. I would prefer not to have to override all the admin templates (e.g.) just for this, however. Is there a way to modify the handling of all media to use one storage engine first, and switch to a second on error?
Retrieving product list form openerp 7
22,103,548
0
0
273
0
python,openerp
Working through XML-RPC is pretty much like working directly on the server, only slower. To get the product list you'll need to interact with product.product, and to narrow the list (and the data) you'll need to specify a domain such as domain=[('color','=','red'),('style','=','sport')] and fields=['id','name','...']. Hopefully that's enough to get you going.
0
0
0
0
2014-02-28T17:26:00.000
1
1.2
true
22,101,857
0
0
1
1
I'm currently working on a mobile app that connects with an OpenERP 7 instance through XML-RPC. Although the XML-RPC communication between iOS and OpenERP 7 works perfectly, I'm puzzled about which objects I need to interact with on the OpenERP side in order to get the product list with only the items I want, and to post a sale. Anyone? Thanx, M
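A rough Python 2 sketch of the domain/fields approach from the answer (the asker is on iOS, but the XML-RPC calls are the same). The server URL, database name, credentials and the sale_ok filter are placeholders; the /xmlrpc/common login and /xmlrpc/object execute endpoints are OpenERP 7's documented interface.

```python
import xmlrpclib  # Python 2 standard library

url, db, user, pwd = 'http://localhost:8069', 'mydb', 'admin', 'admin'  # placeholders

common = xmlrpclib.ServerProxy('%s/xmlrpc/common' % url)
uid = common.login(db, user, pwd)

models = xmlrpclib.ServerProxy('%s/xmlrpc/object' % url)

# Narrow the list with a domain, then read only the fields you need.
ids = models.execute(db, uid, pwd, 'product.product', 'search',
                     [('sale_ok', '=', True)])
products = models.execute(db, uid, pwd, 'product.product', 'read',
                          ids, ['id', 'name', 'list_price'])
for p in products:
    print p['id'], p['name'], p['list_price']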
Selenium phantomjs (python) not redirecting to welcome page after login, page is load dynamically using dojo
22,103,574
0
2
633
0
python,selenium,phantomjs
I just discovered that my problem was with an elem.send_keys(Keys.ENTER) line. PhantomJS seems to be very fast, so I had to put a time.sleep of 2 seconds before that line, and now the script works fine. What happened is that the Enter button for login wasn't clicked properly. Of course time.sleep(2) isn't the best way to solve it; I will change the ENTER statement into a click located via XPath.
0
0
1
0
2014-02-28T17:51:00.000
1
1.2
true
22,102,352
0
0
1
1
I was trying to log into a website that is loaded fully dynamically using dojo.js scripts. For my tests I am using: Selenium 2.40, PhantomJS 1.9.7 (downloaded via npm), Ubuntu 12.04. When I try my script with driver = webdriver.Firefox() everything works fine: Firefox logs in through the login page /login.do, gets through the authentication page and arrives at the landing page, and everything works perfectly. But I have to make this code work on an Ubuntu server, so I can't use a GUI. When I change to driver = webdriver.PhantomJS() I end up at /login.do again (print driver.current_url). I have tried to use WebDriverWait and nothing happens. Does PhantomJS for Python have an issue with dynamically loaded pages? If not, can I use another tool, or better yet, does someone know a book or tutorial for understanding XHR requests and doing this job with requests and urllib2?
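The explicit-wait replacement for the time.sleep(2) workaround mentioned in the answer, sketched with standard Selenium calls. The URL and the login-button locator are hypothetical; the WebDriverWait/expected_conditions API is Selenium's documented one.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.PhantomJS()
driver.get('http://example.com/login.do')  # placeholder URL

# Wait until the login button is actually clickable instead of sleeping.
login_button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.XPATH, '//button[@type="submit"]'))  # hypothetical locator
)
login_button.click()

# Wait for the post-login redirect before asserting on driver.current_url.
WebDriverWait(driver, 10).until(lambda d: d.current_url != 'http://example.com/login.do')
print(driver.current_url)
```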
How to debug a django application in a comfortable way?
22,116,486
0
0
91
0
python,django,debugging
To make your life easier, try an IDE like PyCharm. I use pdb or ipdb to debug simple Python files, but they wouldn't be so useful for debugging complex Python scripts. Also, django-debug-toolbar is a good tool to debug and optimize a Django application.
0
0
0
0
2014-03-01T13:26:00.000
2
0
false
22,114,984
0
0
1
1
I'm currently debugging a django application by inserting import pdb; pdb.set_trace() in the code and using the debugger commands to navigate through running application. The debugger shows the current line, but most of the time it is helpful to have a bit more context. Therefore I open the current file in an editor in another window. Now whenever the flow changes to another class I need to manually open the new file in the editor. - This feels like there is an easier way to do this. Is there any kind of IDE integration able to debug a running django application? Is there some other way I am not yet aware of?
Using RabbitMQ with Django to get information from internal servers
22,125,431
0
4
3,991
0
python,linux,django,sockets
You need the following two programs running at all times: The producer, which will populate the queue. This is the program that will collect the various messages and then post them on the queue. The consumer, which will process messages from the queue. This consumer's job is to read the message and do something with it, so that it is processed and removed from the queue. The function this consumer performs is entirely up to you, but what you want to do in this scenario is write information from the message to a database model, in the same database that is part of your django app. As the producer pushes messages and the consumer removes them from the queue, your database will get updated. On the django side, the process is simply to filter this database and display records for a particular machine. As such, django does not need to be aware of how the records are being populated in the database; all django is doing is fetching, filtering, sending to the template and rendering the views. The question then becomes how best (well, actually, most easily) to populate the database. You can do it the traditional way, by using Python's well-documented DB-API and writing your own SQL statements; but since celery is so well integrated with django, you can use django's ORM to do this work for you as well. I hope this gets you going in the right direction.
0
0
0
0
2014-03-01T22:42:00.000
3
0
false
22,121,368
0
0
1
1
I've been trying to make a decision about my student project before going further. The main idea is to get disk usage data, active Linux user data, and so on from multiple internal servers and publish them with Django. Before I came to RabbitMQ I was thinking about developing a client application for each Linux server and getting this data through a socket. But I want to keep this student project simple, and I also don't know how difficult it is to make a socket connection via Django. So I thought I could solve my problem with RabbitMQ without socket programming: basically, I send a message to a rabbit queue, then get whatever I want from the consumer server. On the Django side, the client will select one of the internal servers and click the "details" button, and I want to show this information on a web page. I have already read almost all the documentation about rabbitmq, celery and pika. Sending messages to all internal servers (clients) and calculating the information that I want to get is okay, but I can't figure out how I can put this data on a webpage with Django. How would you approach this problem if you were me? Thank you.
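A minimal sketch of the consumer half of the pattern the answer describes: a standalone process that drains the queue and writes rows via Django's ORM, so the web views only ever read from the database. The project settings module, queue name and ServerStat model are hypothetical, and the basic_consume signature shown matches the pika 0.9/0.10-era API current when this thread was written.

```python
# Standalone consumer process (run separately from the Django web process).
import json
import os

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')  # hypothetical project

import pika
from monitoring.models import ServerStat  # hypothetical Django model

def on_message(channel, method, properties, body):
    data = json.loads(body)
    # Persist the stats so Django views can simply filter this table.
    ServerStat.objects.create(
        host=data['host'],
        disk_usage=data['disk_usage'],
        active_users=data['active_users'],
    )
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='server_stats', durable=True)
channel.basic_consume(on_message, queue='server_stats')
channel.start_consuming()
```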
Should I use an ORM like SQLAlchemy for a lightweight Flask web service?
22,128,680
1
0
941
1
python,sqlalchemy,flask,flask-sqlalchemy
SQLAlchemy is generally not faster (especially as it uses those drivers to connect). However, SQLAlchemy will help you structure your data in a sensible way and help keep the data consistent. It will also make it easier for you to migrate to a different db if needed.
0
0
0
0
2014-03-02T13:49:00.000
2
0.099668
false
22,128,419
0
0
1
2
I'd rather just use raw MySQL, but I'm wondering if I'm missing something besides security concerns. Does SQLAlchemy or another ORM handle scaling any better than just using pymysql or MySQLdb?
Should I use an ORM like SQLAlchemy for a lightweight Flask web service?
22,134,840
1
0
941
1
python,sqlalchemy,flask,flask-sqlalchemy
Your question is too open for anyone to guarantee that SQLAlchemy is a good fit, but SQLAlchemy will probably never be your scalability problem: you'll have to handle almost the same problems with or without it. Of course SQLAlchemy has some performance impact, since it is a layer above the database driver, but it will also help you a lot. That said, if you want to use SQLAlchemy to help with your security (SQL escaping), you can use it just to execute your raw SQL queries; but I recommend raw SQL only to fix specific bottlenecks, never to avoid the ORM.
0
0
0
0
2014-03-02T13:49:00.000
2
1.2
true
22,128,419
0
0
1
2
I'd rather just use raw MySQL, but I'm wondering if I'm missing something besides security concerns. Does SQLAlchemy or another ORM handle scaling any better than just using pymysql or MySQLdb?
user upload to my S3 bucket
22,162,436
0
0
141
1
python,file-upload,amazon-web-services,amazon-s3
This answer is relevant to .NET as the language. We had such a requirement, where we created an executable. The executable internally called a web method, which validated whether the app was authenticated to upload files to AWS S3 or not. You can do this using a web browser too, but I would not suggest that if you are targeting big files.
0
0
0
0
2014-03-04T00:59:00.000
2
0
false
22,160,820
0
0
1
1
I would like for a user, without having to have an Amazon account, to be able to upload multi-gigabyte files to an S3 bucket of mine. How can I go about this? I want to enable a user to do this by giving them a key, or perhaps through an upload form, rather than making a bucket world-writeable, obviously. I'd prefer to use Python on my server side, but the idea is that a user would need nothing more than their web browser, or perhaps opening up their terminal and using built-in executables. Any thoughts?
Permission to get the source code using spider
22,165,257
0
1
43
0
python,web-scraping
The robots.txt file does have limits. It's better to inform the owner of the site if you are crawling too often, and to read the reserved rights at the bottom of the site. It is also a good idea to provide a link to the source of your content.
0
0
1
0
2014-03-04T07:01:00.000
1
1.2
true
22,165,086
0
0
1
1
I am working on creating a web spider in Python. Do I have to worry about permissions from any sites for scanning their content? If so, how do I get those? Thanks in advance
Saving user's social auth name using python social auth in django
22,521,700
2
3
1,254
0
django,python-social-auth
Remove SOCIAL_AUTH_USER_MODEL, because you are using Django's default User model.
0
0
0
0
2014-03-04T07:41:00.000
1
0.379949
false
22,165,792
0
0
1
1
I work on a django project that migrated from django-social-auth to python-social-auth. Previously, a new social auth user's first name/last name would be saved automatically on first login. Now, after moving to python-social-auth, it's not. It seems I have to use this setting: SOCIAL_AUTH_USER_MODEL, but SOCIAL_AUTH_USER_MODEL = 'django.contrib.auth.models.User' generates an error when invoking runserver: django.core.management.base.CommandError: One or more models did not validate: default.usersocialauth: 'user' has a relation with model web.models.User, which has either not been installed or is abstract. I wanted to try subclassing the User model in the project (from django.contrib.auth.models import User; class User(User)), but that is not feasible right now. Manually saving the name from the response data in a custom pipeline is not an option either. I really want to know if there is any other solution. Thanks.
Setting an attribute on object's __class__ attribute
22,169,632
3
1
83
0
python,django
Yes, that is setting the attribute on the class. But no, that would not necessarily make it available between requests, although it might. Your question shows a misunderstanding of how Django requests work. Django is not necessarily served using multiple threads: in fact, in most server configurations, it is hosted by multiple independent processes that are managed by the server. Again, depending on the configuration, each of those processes may or may not have multiple threads. But whether or not threads are involved, processes are started and killed by the server all the time. If you set an attribute on a class or module in Django during one request, any subsequent requests served by that same process will see that attribute. But there's no way to guarantee which process your next request will be served by, and there's certainly no way to know whether the same user's next request will land on that same process. Setting things at class or module level can be the source of some very nasty thread-safety bugs. My advice is generally not to do it. If you need to keep things across requests, store them in the database, the cache, or (especially if they're specific to a particular user) in the session.
0
0
0
0
2014-03-04T10:30:00.000
2
1.2
true
22,169,372
1
0
1
1
I am a bit confused over the difference between setting an object's attribute and setting an attribute on an object's __class__ attribute. Roughly, obj.attr vs obj.__class__.attr. What's the difference between them? Is the former setting an attribute on an instance and the latter setting an attribute on an instance's class (making that attribute available to all instances)? If this is the case, then how are these new class attributes available in Django requests, since the framework works with multiple threads? Does setting a class variable make it persist between requests?
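To make the distinction in the answer concrete, here is a small runnable Python demonstration (not from the original thread) of the two assignments and their differing visibility:

```python
class Config(object):
    flag = False

a, b = Config(), Config()

a.flag = True             # instance attribute: shadows the class attribute on `a` only
print(a.flag)             # True
print(b.flag)             # False (class attribute unchanged)

b.__class__.flag = True   # class attribute: visible to every instance in this process
print(Config().flag)      # True

# In Django, "this process" means whichever worker process served the request,
# so there is no guarantee another request will ever see the change.
```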
Traversing back to parent with lxml.html.xpath
22,177,986
2
2
405
0
python,lxml,lxml.html
This will select the parent element of the XPath expression you gave: //*[@id="titleStoryLine"]/div/h4[text()="Genres:"]/..
0
0
1
0
2014-03-04T16:45:00.000
2
1.2
true
22,177,872
0
0
1
1
How can we traverse back to the parent in XPath? I am crawling IMDb to obtain the genre of films, and I am using elem = hxs.xpath('//*[@id="titleStoryLine"]/div/h4[text()="Genres:"]'). Now, the genres are listed as anchor links, which are siblings of this tag. How can this be achieved?
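Putting the answer's /.. step into a runnable lxml snippet; the HTML here is a tiny hypothetical stand-in for the IMDb markup the question describes.

```python
import lxml.html

# Simplified stand-in for the IMDb storyline block.
html = '''
<div id="titleStoryLine">
  <div><h4>Genres:</h4> <a href="#">Drama</a> <a href="#">Thriller</a></div>
</div>
'''
doc = lxml.html.fromstring(html)

# /.. climbs from the matched <h4> back up to its parent <div>,
# where the sibling <a> genre links live.
parent = doc.xpath('//*[@id="titleStoryLine"]/div/h4[text()="Genres:"]/..')[0]
genres = [a.text for a in parent.xpath('.//a')]
print(genres)  # ['Drama', 'Thriller']
```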
Check time since last request
22,181,923
0
0
76
1
python
I asked about a soft button earlier. If your computer program is password/access protected, you could just store it all in a pickle/config file somewhere; I am unsure what the value of the SQL table is. Use last_push = time.time() and check the difference at the current push: if the difference in seconds is less than x, do not proceed; if it is bigger than x, reset last_push and proceed. Or am I missing something?
0
0
0
1
2014-03-04T17:13:00.000
2
0
false
22,178,513
0
0
1
2
I've a bit of code that involves sending an e-mail out to my housemates when it's time to top up the gas meter. This is done by pressing a button, which picks whoever's next from the database and sends an email. This is open to a lot of abuse, as you can just press the button 40 times and send 40 emails. My plan is to add the time the e-mail was sent to my Postgres database, and any time the button is pressed afterwards, check whether the last press was more than a day ago. Is this the most efficient way to do this? (I realise an obvious answer would be to password protect the site so no outside users can access it and mess with the gas rota, but unfortunately one of my housemates is the type of gas-hole who'd do that.)
Check time since last request
22,179,026
0
0
76
1
python
If this is the easiest solution for you to implement, go right ahead. Worst case scenario, it's too slow to be practical and you'll need to find a better way. Any other scenario, it's good enough and you can forget about it. Honestly, it'll almost certainly be efficient enough to serve your purposes. The number of users at any one time will very rarely exceed one. An SQL query to determine if the timestamp is over a day before the current time will be quick, enough so that even the most determined gas-hole(!) wouldn't be able to cause any damage by spam-clicking the button. I would be very surprised if you ran into any problems.
0
0
0
1
2014-03-04T17:13:00.000
2
0
false
22,178,513
0
0
1
2
I've a bit of code that involves sending an e-mail out to my housemates when it's time to top up the gas meter. This is done by pressing a button, which picks whoever's next from the database and sends an email. This is open to a lot of abuse, as you can just press the button 40 times and send 40 emails. My plan is to add the time the e-mail was sent to my Postgres database, and any time the button is pressed afterwards, check whether the last press was more than a day ago. Is this the most efficient way to do this? (I realise an obvious answer would be to password protect the site so no outside users can access it and mess with the gas rota, but unfortunately one of my housemates is the type of gas-hole who'd do that.)
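A minimal sketch of the once-a-day timestamp check that the asker proposes and this answer endorses, assuming a Postgres table named gas_emails with a sent_at timestamp column (the table, column and connection settings are all hypothetical):

```python
import datetime
import psycopg2

conn = psycopg2.connect(dbname='gasrota')  # placeholder connection settings

def may_send_email():
    """Allow at most one top-up email per day."""
    with conn.cursor() as cur:
        cur.execute("SELECT max(sent_at) FROM gas_emails")  # hypothetical table
        (last_sent,) = cur.fetchone()
    if last_sent is None:
        return True  # never sent before
    return datetime.datetime.now() - last_sent > datetime.timedelta(days=1)

def record_email():
    """Call this right after an email is actually sent."""
    with conn.cursor() as cur:
        cur.execute("INSERT INTO gas_emails (sent_at) VALUES (now())")
    conn.commit()
```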
Using php inside python code ,google app engine
22,183,262
2
1
361
0
php,google-app-engine,python-2.7
Those runtimes (Python, PHP, Java, etc.) are isolated from each other and are tightly sandboxed. So when you deploy a Python app, for example, it doesn't have access to the PHP or Java runtime. It's therefore not possible to run PHP inside a Python sandbox, at least not on the App Engine platform.
0
1
0
1
2014-03-04T20:05:00.000
2
0.197375
false
22,181,860
0
0
1
1
I have a project that is already written in PHP, and now I am using Python on Google App Engine. I want to use the APIs that Google supports for Python, for example: datastore, blobstore..., and also to save time by not rewriting the code in Python! So, is it possible to run a PHP script from Python code?
iPython notebook Websocket connection cannot be established
26,615,734
0
6
832
0
websocket,ipython,ipython-notebook
Try reinstalling your IPython server, or creating a new profile for the server.
0
0
1
0
2014-03-05T00:24:00.000
1
0
false
22,186,057
0
0
1
1
iPython was working fine until a few hours ago when I had to do a hard shutdown because I was not able to interrupt my kernel. Now opening any notebook gives me the following error: "WebSocket connection to could not be established. You will NOT be able to run code. Check your network connection or notebook server configuration." I have the latest version of Chrome and I am only trying to access local notebooks. The Javascript console gives me this: Starting WebSockets: [link not allowed by StackOverflow]/737c7279-7fab-467c-9e0f-cba16233e4b5 kernel.js:143 WebSocket connection failed: [link not allowed by StackOverflow]/737c7279-7fab-467c-9e0f-cba16233e4b5 notificationarea.js:129 Resource interpreted as Image but transferred with MIME type image/x-png: "[link not allowed by StackOverflow]/static/components/jquery-ui/themes/smoothness/images/ui-bg_glass_75_dadada_1x400.png". (anonymous function)
Celery worker-offline event not generated
22,226,475
0
1
150
0
python,celery
After a bit of research, I ended up answering my own question: it was a bug which has been fixed in the later versions of celery.
0
1
0
0
2014-03-05T22:55:00.000
1
0
false
22,211,193
0
0
1
1
I am trying to capture the worker-related events, but there is something weird going on in my application: all the task events are being generated and captured, and the worker events as well, except for the worker-offline event. Is there any specific setting that I need to make for this event to be generated? I am using Celery 3.0.23
Django CMS migrate pages
29,132,323
0
1
589
0
python,django,django-cms
This is what works for me: ./manage.py dumpdata cms.page cms.title > pages.json
0
0
0
0
2014-03-06T00:01:00.000
2
0
false
22,212,114
0
0
1
1
On my local machine I'm building a Django CMS site. I have about 50 CMS pages with page titles, custom slugs, and data. How do I dump just the CMS pages data on my local machine and load it into my staging environment? I've tried using a fixture with python manage.py dumpdata cms.page --indent=2 > cmspages.json; however, the page title, slug, and data are not in the JSON, so when I load cmspages.json the pages are created but no data is loaded. How do I migrate my CMS pages to my staging environment?
UI Designed in Photoshop for Software
22,215,461
2
0
1,055
0
java,python,user-interface,photoshop
but how do I take what I made in Photoshop add some java or python code to it to make certain things happen: No, you cannot expect things to happen magically. For that you need to learn front-end technologies like HTML, CSS, JavaScript, etc. and manually convert the Photoshop UI into the corresponding code. This applies to web applications. If you want to build a desktop application, you need to use Swing, SWT, etc. to achieve the same. I have zero experience in this: if this is the case, I recommend reading some basic tutorials; then you will get an idea of what to do.
1
0
0
0
2014-03-06T04:57:00.000
2
0.197375
false
22,215,361
0
0
1
1
So I am actually trying to get into software development, and I have just spent a few days making a GUI in Photoshop. Now, I know how to code in Java and Python, but I have never implemented a GUI before. I am stuck on this because I know I can write the code and everything, but how do I take what I made in Photoshop and add some Java or Python code to it to make certain things happen? I have zero experience in this; I have only written code to accomplish tasks without the need for a GUI.
Using Sphinx within a project using several programming languages
22,318,733
0
2
1,026
0
documentation,python-sphinx,documentation-generation,autodoc
The best way to combine different languages in one Sphinx project is to write the docs without autodoc or other means of automatic generation. For the most part those are available only for Python, and even if some extension out there does support other languages, you will be buried under different workflows before you even notice. Salvage your docs from the code and write them in a concise manner in a separate docs folder of your project, or even a separate repository. You can use the generic Sphinx directives like class or method, with no attachment to the code, for virtually any major programming language. I myself did a project like that, where I needed to combine C, C++ and Python code in one API, and it was done manually. If you create this kind of detached project, the maintenance shouldn't be much of an issue; it's not much harder than the autodoc workflow. As for PDF and HTML: any Sphinx project allows that. See their docs for details on the different builders like latexpdf or html.
0
0
0
0
2014-03-06T21:05:00.000
1
0
false
22,235,952
1
0
1
1
The project I am working on ships a package that contains APIs for different languages: Java, Python, C#, and others. All these APIs share mostly the same documentation. The documentation should be available in PDF and HTML separately on our website; the user usually downloads/browses the one they are interested in. Currently we use sdocml, but we are not that satisfied, so we want to move to a more up-to-date tool and are considering Sphinx. Looking at the Sphinx documentation I cannot clearly figure out: 1- how to tell it to generate the docs for a certain API (for instance the Java one); 2- whether autodoc works for any domain; 3- whether there is a C# extension. Any help is most welcome!
Django Deployment on Linux Ubuntu
22,241,285
0
0
117
0
python,linux,django,ubuntu,django-deployment
Here is my stack: Nginx + Gunicorn, Supervisor. For deployment, if you are planning frequent releases you should be looking at something like Fabric. Even if releases are not frequent, Fabric is a very good tool to be aware of. People have preferences in terms of stack, but this one has been working great for me.
0
1
0
0
2014-03-07T03:46:00.000
1
1.2
true
22,241,028
0
0
1
1
I am going to deploy my first Django application to a cloud server like Amazon EC2, and the system is Linux Ubuntu. But I cannot find a very good step-by-step tutorial for the deployment. Could you recommend one? I also have the following questions: What is the most recommended environment: Gunicorn, Apache+mod_python, or others? How should I deploy my code? I am using a Mac; should I use FTP or check out from my GitHub repository? Thank you!
Pyramid self.request.POST is empty - no post data available
22,251,970
2
1
1,795
0
python,forms,http-post,pyramid
I've managed to get it working. Silly me: coming from an ASP.NET background, I forgot the basics of POST form submissions, namely that each form field needs a name= attribute. As soon as I put them in, everything started working.
0
0
0
0
2014-03-07T06:32:00.000
3
0.132549
false
22,243,208
0
0
1
1
I'm currently working on a Pyramid project; however, I can't seem to submit POST data to the app from a form. I've created a basic form such as: <form method="post" role="form" action="/account/register"> <div class="form-group"> <label for="email">Email address:</label> <input type="email" class="form-control" id="email" placeholder="[email protected]"> <p class="help-block">Your email address will be used as your username</p> </div> <!-- Other items removed --> </form> and I have the following route config defined: # projects __init__.py file config.add_route('account', '/account/{action}', request_method='GET') config.add_route('account_post', '/account/{action}', request_method='POST') # inside my views file @view_config(route_name='account', match_param="action=register", request_method='GET', renderer='templates/account/register.jinja2') def register(self): return {'project': 'Register'} @view_config(route_name='account_post', match_param="action=register", request_method='POST', renderer='templates/account/register.jinja2') def register_POST(self): return {'project': 'Register_POST'} Now, using the debugger in PyCharm as well as the debug button in pyramid, I've confirmed that the initial GET request to view the form is being processed by the register method, and when I hit the submit button the POST request is processed by the *register_POST* method. However, my problem is that, debugging from within the *register_POST* method, the self.request.POST dict is empty. Also, when I check the debug button on the page, the POST request is registered in the list, but the POST data is empty. Am I missing something, or is there some other way of accessing POST data? Cheers, Justin
ZeroMQ is too fast for database transaction
22,247,025
0
1
2,062
1
python,postgresql,sqlalchemy,zeromq
This comes close to your second solution: create a buffer, drop the ids from your ZeroMQ messages in there, and let your worker poll this id-pool regularly. If it fails to retrieve an object for an id from the database, let the id sit in the pool until the next poll; otherwise, remove the id from the pool. You have to deal somehow with the asynchronous behaviour of your system. When ids constantly arrive before the object is persisted in the database, it doesn't matter that pooling the ids (and re-polling the same id) reduces throughput, because the bottleneck is earlier. An upside is that you could run multiple frontends in front of this.
0
0
0
1
2014-03-07T08:48:00.000
2
0
false
22,245,407
0
0
1
1
Inside a web application (Pyramid) I create certain objects on POST which need some work done on them (mainly fetching something from the web). These objects are persisted to a PostgreSQL database with the help of SQLAlchemy. Since these tasks can take a while, they are not done inside the request handler but rather offloaded to a daemon process on a different host. When the object is created I take its ID (which is a client-side generated UUID) and send it via ZeroMQ to the daemon process. The daemon receives the ID, fetches the object from the database, does its work and writes the result to the database. Problem: the daemon can receive the ID before its creating transaction is committed. Since we are using pyramid_tm, all database transactions are committed when the request handler returns without an error, and I would rather leave it this way. On my dev system everything runs on the same box, so ZeroMQ is lightning fast. On the production system this is most likely not an issue, since the web application and daemon run on different hosts, but I don't want to count on this. This problem only recently manifested itself, since we previously used MongoDB with a write_concern of 2: having only two database servers, the write on the entity always blocked the web request until the entity was persisted (which is obviously not the greatest idea). Has anyone run into a similar problem? How did you solve it? I see multiple possible solutions, but most of them don't satisfy me: Flushing the transaction manually before triggering the ZMQ message. However, I currently use SQLAlchemy's after_created event to trigger it, and this is really nice since it decouples this process completely, eliminating the risk of "forgetting" to tell the daemon to work. I also think I would still need a READ UNCOMMITTED isolation level on the daemon side; is this correct? Adding a timestamp to the ZMQ message, causing the worker thread that received the message to wait before processing the object. This obviously limits the throughput. Ditching ZMQ completely and simply polling the database. Noooo!
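A rough sketch of the id-pool idea from the answer above: ids that cannot yet be resolved (because the creating transaction has not committed) simply stay in the pool and are retried on the next pass. The endpoint, poll interval, and the load_from_database/process helpers are all hypothetical; the pyzmq calls (PULL socket, non-blocking recv_string, zmq.Again) are the library's standard API.

```python
import time
import zmq

context = zmq.Context()
sock = context.socket(zmq.PULL)
sock.bind('tcp://*:5557')  # placeholder endpoint

pending_ids = set()

while True:
    # Drain whatever ids have arrived without blocking.
    while True:
        try:
            pending_ids.add(sock.recv_string(flags=zmq.NOBLOCK))
        except zmq.Again:
            break

    for entity_id in list(pending_ids):
        entity = load_from_database(entity_id)  # hypothetical SQLAlchemy lookup
        if entity is None:
            continue  # creating transaction not committed yet; retry next pass
        process(entity)                         # hypothetical worker logic
        pending_ids.discard(entity_id)

    time.sleep(0.5)
```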
Is there a way to use Mako templates for plain-text files where newlines matter?
27,262,018
4
2
855
0
python,template-engine,mako,plaintext
If you add a backslash at the end of the line like this: "<% %>\", you can suppress the newline.
0
0
0
1
2014-03-07T15:49:00.000
1
0.664037
false
22,254,612
0
0
1
1
In an existing application we are using Mako templates (unfortunately...). That works OK for HTML output, since newlines do not matter. However, we now need to generate a text/plain email using a template, so any newlines introduced by control statements are not acceptable. Does Mako provide any options to make statement lines (i.e. those starting with %) not cause a newline in the output? I checked the docs but couldn't find anything so far...
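A small runnable check of the trailing-backslash trick from the answer: in Mako, a backslash at the end of a template line consumes the following newline, which is handy for plain-text output. The template text here is a made-up example.

```python
from mako.template import Template

# The "\\" at the end of the first template line becomes a literal backslash
# in the template source, which Mako treats as a line continuation, so no
# newline appears after ${name}, in the rendered output. Lines starting with
# "%" emit no output (or trailing newline) of their own.
tmpl = Template("""Hello ${name},\\
 your order has shipped.
% if express:
It should arrive tomorrow.
% endif
""")
print(tmpl.render(name="Ada", express=True))
# Hello Ada, your order has shipped.
# It should arrive tomorrow.
```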
Right way to manage a high traffic connection application
22,409,012
1
0
270
1
python,database,postgresql,gps,twisted
Databases do not just lose data willy-nilly. Not losing data is pretty much number one in their job description. If it seems to be losing data, you must be misusing transactions in your application. Figure out what you are doing wrong and fix it. Making and breaking a connection between your app and pgbouncer for each transaction is not good for performance, but is not terrible either; and if that is what helps you fix your transaction boundaries then do that.
0
0
0
0
2014-03-07T17:28:00.000
1
1.2
true
22,256,760
0
0
1
1
Introduction: I am working on a GPS listener. This is a service built on Twisted Python; the app receives at least 100 connections from GPS devices, and it is working without issues. Each GPS device sends data containing positions every 5 seconds. (Next week there must be at least 200 GPS devices connected.) Database: I am using a single PostgreSQL connection, shared between all connected GPS devices for saving and storing information; PostgreSQL is using pgbouncer as a pooler. Server: I am using a small PC as a server, and I need to find a way to have a high-availability application without losing data. Problem: Under the high traffic on my app, I am having issues where, after 30 minutes, data starts to appear as not saved; however, queries are being executed on Postgres (I have checked that in the last activity). Fake solution: I have made a script that restarts my app, Postgres and pgbouncer. However, this is a wrong solution, because each time I restart my app, the GPS devices get disconnected and must reconnect. Possible solution: I am thinking of a high-availability solution based on a data layer, where each time the database has to be restarted or something goes wrong, a txt file stores the data from the GPS devices. To get this, I am thinking of dropping the single shared connection: instead, a simple connection each time data must be saved, then test the database like a pooler would, and if the database connection is broken, the txt file stores the data until the database is OK again, while another process reads the txt file and sends the info to the database. Question: Since I am thinking of an app-level data pooler with a single connection each time data must be saved, in order to try not to lose data, I want to know: Is it OK to make a single connection each time data is saved for this kind of app, knowing that connections will be made more than 100 times every 5 seconds? As I said, my question is simple: which is the right way to work with db connections in a high-traffic app, single connections per query or a shared single connection for the whole app? The reason for asking this single question is to find the right way to work with db connections considering memory resources. I am not looking to solve PostgreSQL issues or performance; I just want to know the right way to work with this kind of application, and that is the reason for giving as much as possible about my application. Note: One more thing. I have seen one vote to close this question as unclear, even though the question is titled with the word "question" and was marked in italics; now I have marked it in gray to alert people who don't read the word "question". Thanks a lot
Duplicate App to an already existing ID on Google app engine
22,264,642
0
0
104
0
python,google-app-engine,blogs
You say you already used that ID before. If you haven't deleted that app, just use that one to load your new code there. You will need to delete the existing datastore data, etc.
0
1
0
0
2014-03-07T23:04:00.000
2
0
false
22,262,420
0
0
1
1
I made a blog in Python and I am running it off of Google App Engine. When I started, I put in a random ID, just because I was experimenting. Lately, my blog got a bit popular and I wanted to change the ID. I wanted to duplicate my app, but the problem is that I already registered that ID a while ago with Google. How can I duplicate it even though the name already exists? Thanks, Liam
Real time output of script on web page using mod_wsgi
22,933,666
1
1
406
0
python,web,mod-wsgi
You'll need JavaScript to do this. Possibility 1, data generated by the server: make a static HTML page with an empty div; place a piece of JavaScript code on it that runs after the page is loaded; the JavaScript will contain a timer that downloads the output of your script, say every 5 seconds, using AJAX, and sets your div's HTML to the result. The easiest way to get this working is probably to use the AJAX facilities in jQuery. Possibility 2, data generated by the client: if it is possible to have your dynamic output generated on the client by a piece of JavaScript code, this will scale better (since it takes the burden off the server). You may still load the input data needed to compute the dynamic output, formatted as JSON, by means of AJAX.
0
0
0
1
2014-03-08T11:53:00.000
1
0.197375
false
22,268,968
0
0
1
1
I have a Python application that I launch from a form using mod_wsgi. I would like to display in real time the output of the script, while it is running, to a web page. Does anybody know how I can do this?
Add field to existing django app in production environment
22,275,172
3
2
215
0
python,django,django-south,database-migration
First, fake the initial migration: python manage.py migrate [yourapp] --fake 0001. Then you can apply the migration to the db: python manage.py migrate [yourapp]. I'm assuming you ran convert_to_south on development, in which case production still wouldn't be aware of the migrations yet; convert_to_south automatically fakes the initial migration for you! If you were to just run migrate on production without faking, it should error.
0
0
0
0
2014-03-08T21:05:00.000
1
1.2
true
22,275,083
0
0
1
1
I have an existing django app and would like to add a field to a model. But because the website is already in production, just deleting the database is not an option any more. These are the steps I took: pip install south added 'south' to INSTALLED_APPS python manage.py syncdb python manage.py convert_to_south [myapp] So now I have the initial migration and south will recognize the changes. Then I added the field to my model and ran: python manage.py schemamigration [myapp] --auto python manage.py migrate [myapp] Now I have the following migrations: 0001_initial.py 0002_auto__add_field_myapp_fieldname.py Which commands should I run on my production server now to migrate? Also should I install south first and then pull the code changes and migrations?
User session object is not available on POST in django?
22,282,298
0
0
123
0
python,django,forms,session
The user is never in request.session. It's directly on the request object as request.user.
0
0
0
0
2014-03-09T12:39:00.000
2
1.2
true
22,282,264
0
0
1
1
I have a form where users are submitting data. One of the fields is "author", which I automatically fill in by using the {{ user }} variable in the template; it will show the username if the user is logged in and AnonymousUser if not. This {{ user }} is not part of the form, just text. When a user submits the form I need to see which user, or whether it was an anonymous user, that submitted the data, so I thought I would use request.session['user'], but this doesn't work since the user key is not available. I tried setting the request.session['user'] value to the user object, but the session dictionary doesn't accept objects; it says they are not JSON serializable. I thought the context processors would add this user variable so it was also available to the view, but it isn't. I need a user object, not just the username, to save to the database along with the form. Is there any way to extract the user object when it's not part of the form and the user is logged in? I need to associate the submitted data with a user or an anonymous user, and the foreign key requires an object, which I also think is most convenient to work with when extracting the data from the DB again. I don't see it being helpful to post any code here, since this is a question about how to extract a user object after a POST and not specifically a problem with the code.
Restrict employees access to suppliers in OpenERP 7
22,312,919
0
0
296
0
python,openerp,openerp-7
You can't restrict all access to the partners table (contains suppliers and customers) as the system will probably not work at all. As of OpenERP 7, res.partners also contains contacts and each user has a contact so if you block all access you will probably break a lot of things (YMMV). You may be able to get away with allowing read access only. The easiest would be to alter the views of customers and suppliers to add a security group that most users don't belong to so they can't see the view at all. You will have to track down the form views but you can do this pretty easily through: Settings -> Technical -> User Interface -> Views and search for the object res.partner.
0
0
0
0
2014-03-09T15:19:00.000
2
0
false
22,284,018
0
0
1
2
Can anyone help me restrict employees from accessing suppliers, and also restrict them from the notes of customers, in OpenERP 7? I am trying to set up a Contact Centre platform using OpenERP 7, where I can have Service Requests. Thanks in advance
Restrict employees access to suppliers in OpenERP 7
26,007,374
0
0
296
0
python,openerp,openerp-7
You could create a rule on the Partner object for your employee group, [('customer','=',True)], so that only customers are shown, i.e. only those suppliers will be shown who are also customers. You could then also take away the Suppliers menu for cosmetic purposes.
0
0
0
0
2014-03-09T15:19:00.000
2
0
false
22,284,018
0
0
1
2
Can anyone help me restrict employees from accessing suppliers, and also restrict them from the notes of customers, in OpenERP 7? I am trying to set up a Contact Centre platform using OpenERP 7, where I can have Service Requests. Thanks in advance
NDB validator argument vs extending base property classes
22,288,117
-1
0
138
0
python,google-app-engine,python-2.7,app-engine-ndb
It depends. Are the restrictions one-off or is any particular restriction going to be reused in many different fields/models? For one-off restrictions, the validator argument is simpler and involves less boilerplate. For reuse, subclassing lets you avoid having to repeatedly specify the validator argument.
0
1
0
0
2014-03-09T20:55:00.000
1
1.2
true
22,288,044
0
0
1
1
I'm using AppEngine NDB properties and I wonder what would be the best approach to: limit StringProperty to be not longer than 100 characters apply regexp validation to StringProperty prohibit IntegerProperty to be less than 0 Would it be best to use the validator argument or to subclass base ndb properties?
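A sketch of both approaches the accepted answer compares, using NDB's documented validator argument and a StringProperty subclass. The model, regex and error messages are hypothetical examples mirroring the question's three restrictions.

```python
import re
from google.appengine.ext import ndb

NAME_RE = re.compile(r'^[A-Za-z0-9_ ]+$')  # hypothetical validation pattern

# One-off restrictions: validator functions take (prop, value) and
# return the (possibly coerced) value or raise.
def validate_name(prop, value):
    if len(value) > 100:
        raise ValueError('name longer than 100 characters')
    if not NAME_RE.match(value):
        raise ValueError('name contains invalid characters')
    return value

def validate_count(prop, value):
    if value < 0:
        raise ValueError('count must not be negative')
    return value

class Account(ndb.Model):
    name = ndb.StringProperty(validator=validate_name)
    count = ndb.IntegerProperty(validator=validate_count)

# Reusable restriction: subclass and override the _validate hook,
# so every field of this type gets the check without repeating it.
class BoundedStringProperty(ndb.StringProperty):
    def _validate(self, value):
        if len(value) > 100:
            raise ValueError('%s longer than 100 characters' % self._name)
```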
Why is "django.core.context_processors.request" not enabled by default?
22,292,916
1
5
160
0
python,django,django-1.6
This is a good question. The docs say Note that this processor is not enabled by default; you’ll have to activate it. but no explanation. My take on it is due to django's intense desire to separate view logic from the template. The request object is the gateway to all data that view logic is built from (given what the browser sent us, do X, Y, Z) - therefore allowing it in the templates is akin to giving the template huge amounts of control which should be placed in the view under normal circumstances. The idea is to populate the template context with specifics, not everything. Removing them is just some more encouragement that "most things should be done in the view". The common django.contrib apps mostly don't rely on it, if it's not required by default. And of course, that's further proof the request object isn't necessary in the template except for special use cases. That's my take, anyways.
0
0
0
0
2014-03-10T05:03:00.000
1
1.2
true
22,292,424
0
0
1
1
I was troubleshooting a problem with obtaining the request object in a new project and realized "django.core.context_processors.request" was commented out in vanilla installs of Django. Like the title suggests, why would this seemingly helpful context processor be turned off by default? Is it an issue with performance? Is it an issue with security? Is it somehow redundant? Some mild searching has not turned up anything for me, but I thought I'd ask here.
How can I upload a static HTML to templates folder of GAE app?
22,304,153
0
0
60
0
java,python,google-app-engine
No, you can't, if you want to store them in static storage. You can store them somewhere non-static, but you will lose the many advantages of having it as static content.
0
1
0
0
2014-03-10T14:07:00.000
1
0
false
22,302,439
0
0
1
1
Can I upload a static HTML file to the templates folder without re-deploying the app? Offline I create an HTML file which I want to upload to my Google App Engine app, which displays the HTML as per the URLs. But I don't want to deploy my site every time I upload a new file. Any suggestion would be helpful.
Encoding user input to be stored in MongoDB
22,315,740
4
7
2,232
1
javascript,python,ajax,mongodb,unicode
Does this need to be escaped to "&lt;script&gt;alert(&#x27;hi&#x27;);&lt;&#x2F;script&gt;" before sending it to the server? No, it has to be escaped like that just before it ends up in an HTML page - step (5) above. The right type of escaping has to be applied when text is injected into a new surrounding context. That means you HTML-encode data at the moment you include it in an HTML page. Ideally you are using a modern templating system that will do that escaping for you automatically. (Similarly, if you include data in a JavaScript string literal in a <script> block, you have to JS-encode it; if you include data in a stylesheet rule, you have to CSS-encode it, and so on. If we were using SQL queries with data injected into their strings, then we would need to do SQL-escaping; but luckily Mongo queries are typically done with JavaScript objects rather than a string language, so there is no escaping to worry about.) The database is not an HTML context, so HTML-encoding input data on the way to the database is not the right thing to do. (There are also other sources of XSS than injections, most commonly unsafe URL schemes.)
0
0
0
0
2014-03-10T22:12:00.000
2
1.2
true
22,312,452
1
0
1
1
I'm trying to determine the best practices for storing and displaying user input in MongoDB. Obviously, in SQL databases, all user input needs to be encoded to prevent injection attacks. However, my understanding is that with MongoDB we need to be more worried about XSS attacks, so does user input need to be encoded on the server before being stored in mongo? Or is it enough to simply encode the string immediately before it is displayed on the client side, using a template library like handlebars? Here's the flow I'm talking about: On the client side, the user updates their name to "<script>alert('hi');</script>". Does this need to be escaped to "&lt;script&gt;alert(&#x27;hi&#x27;);&lt;&#x2F;script&gt;" before sending it to the server? The updated string is passed to the server in a JSON document via an ajax request. The server stores the string in mongodb under "user.name". Does the server need to escape the string in the same way just to be safe? Would it have to first un-escape the string before fully escaping, so as to not double up on the '&'? Later, user info is requested by the client, and the name string is sent in a JSON ajax response. Immediately before display, the user name is encoded using something like _.escape(name). Would this flow display the correct information and be safe from XSS attacks? What about unicode characters like Chinese characters? This also could change how text search would need to be done, as the search term may need to be encoded before starting the search if all user text is encoded. Thanks a lot!
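A minimal Python illustration of the answer's point: store the raw string in MongoDB unchanged and escape only at the output boundary, when injecting into HTML. This uses only the standard library (cgi.escape on Python 2; html.escape is the Python 3 equivalent and additionally escapes single quotes).

```python
import cgi  # Python 2 stdlib; on Python 3 use html.escape instead

raw = "<script>alert('hi');</script>"

# `raw` goes into MongoDB as-is; encode only at HTML render time.
safe_for_html = cgi.escape(raw, quote=True)  # escapes &, <, > and "
print(safe_for_html)  # &lt;script&gt;alert('hi');&lt;/script&gt;
```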
Djangorestframework: is it possible to deserialize only specific fields of incoming JSON?
22,322,831
1
1
61
0
python,json,django,django-rest-framework
You need to look into using the required=False flag on the device field in your serializer class.
0
0
0
0
2014-03-11T02:32:00.000
1
1.2
true
22,315,311
0
0
1
1
I need to deserialize incoming JSON. The incoming JSON will be transformed to a Django model object called AdvancedUser. An AdvancedUser has a one to one with a Device model. When I POST my incoming JSON, I'm getting errors that say "Device field is required". The Device field is optional in my AdvancedUser model declaration code. How do I get rid of this error? It's OK if no Device field is passed in.
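A sketch of the required=False fix from the answer, with hypothetical serializer classes for the AdvancedUser/Device models described in the question (exact import paths and fields depend on the asker's project; the Meta style shown matches DRF 2.x-era ModelSerializer usage):

```python
from rest_framework import serializers
from myapp.models import AdvancedUser, Device  # hypothetical import path

class DeviceSerializer(serializers.ModelSerializer):
    class Meta:
        model = Device

class AdvancedUserSerializer(serializers.ModelSerializer):
    # required=False lets a POST omit the device field entirely,
    # avoiding the "Device field is required" validation error.
    device = DeviceSerializer(required=False)

    class Meta:
        model = AdvancedUser
```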
Django app - importing module issues
22,335,116
1
0
62
0
python,django,module
Your "Mod" folder appears to be missing a "init.py" which it will need if you want to import from it. I also don't recommend a capitalized folder in Python, kind of confusing. I'd also recommend you add "mod" to your python path so you're not having to do Mod.mod.module you can just do mod.module. I assume you have "mod" (lowercase) as an "INSTALLED_APP" in settings.py? Or is "module" the app? Either way you may want to check out Django's documentation on how to organize a project, especially if this is your first time.
0
0
0
0
2014-03-11T18:44:00.000
2
0.099668
false
22,333,873
0
0
1
1
I am on Django 1.6 with Python 2.7, getting an issue with importing some custom modules. In my views.py file, I have import Mod.mod.module.file, where the Mod folder is stored in the project directory, outside the folders with settings.py and views.py. The traceback gives me ImportError: No module named Mod.mod.module.file. Thanks for any help! EDIT: directory structure-- projectFolder project (includes settings.py, urls.py, wsgi.py) Mod mod module file appFolder views.py
Storing song, artist and album data on App Engine
22,552,471
0
0
235
0
python,json,google-app-engine,app-engine-ndb
I would process the data from JSON and place it in a Model. As far as the schema goes, you really need not worry about having redundancies, as you cannot really think of ndb as a relational database. So do not bother yourself too much about normalising the schema. But don't process on the client side; it is really not a good way to design it.
0
1
0
0
2014-03-11T22:27:00.000
2
0
false
22,338,014
0
0
1
1
I need to store information on artists, albums and songs in Google App Engine for a project I'm working on. The information is metadata taken from a directory of MP3s (using Python) that needs to be sent to App Engine for display to users. Along with the metadata, the song's path will need to be stored. Currently, while scanning, I'm storing the data in a list of dictionaries named Artists: each artist dictionary has a name and a list of Album dictionaries, each Album dictionary has a name and a list of song dictionaries, and each song dictionary contains some metadata and the path to the MP3. I've been thinking of ways to store this data and have tried sending the data in JSON format to App Engine, then processing it into three models: Artist, containing the name and a repeated KeyProperty for each Album; Album, which has a name and a repeated KeyProperty for each song; and Song, which contains the rest of the metadata. Each of these will also contain a KeyProperty relating to the Group that they belong to. The problems with this are: lots of repeated data (Group keys), and processing the data not only often exceeds the request deadline but also uses an obscene amount of datastore writes. The only way I could think of to get around these problems would be to store the JSON provided after the scan as a JsonProperty and then pass it directly to the user for processing on the client side using JavaScript. The only issue I can see with that is that I don't particularly want to provide the path to the user (as it will need to be passed back and acted on). Does anyone have experience using or storing this kind of data, or can anyone provide any outside-the-box solutions?
Get redirected to a div automatically without user's intervention
22,348,762
0
1
68
0
javascript,python,html
I am not going to search out the code for you, but most sites tell you that you need an onclick event because that is what opens a link, and example.com#idOfDiv is the kind of link you would open. However, there is another possibility: find some JavaScript code that determines the position of an element in x and y coordinates. Once you have it, make JavaScript scroll to it ;).
0
0
1
0
2014-03-12T10:25:00.000
2
0
false
22,348,501
0
0
1
1
I am new to JavaScript. The problem is as follows: I have 10 divs in my HTML file, and I use links to go from one to another. Now I want to test a condition which, if satisfied (I am using Python for this), should redirect me to another div within the same HTML page, and I want that to happen automatically. For example, if I am in <div id="table1"> and a condition checked inside it is true, I should be redirected automatically to <div id="table3">. Can anyone please help me find a way out? When I search on Google, I only find results where I have to click a button for the redirection (which invokes a JS function), but I don't want that; I want it to happen automatically. <div id="table5"> <div style="position:fixed;right:40px;top:65px;"> <a name="t10" href="#t10" onclick='show(10);'><button> GO TO MAIN PAGE </button></a> </div> % if not obj.i: <h2>REDIRECT ME TO id="table3"</h2> % else: <table border=1 cellpadding=7> <tr> <th>Row No. in Sheet</th> <th>Parameters</th> </tr> <tr> <th colspan=2>Total : ${value}</th> </tr> </table> % endif </div>
How can I share my in-progress website with partners?
22,357,361
1
2
122
0
python,twitter-bootstrap,hosting
Well, then use your Ubuntu system: forward the right ports on your router and give your reviewers a link to your IP address. I assume you already use your Ubuntu system as a web server for testing your site?
0
0
0
0
2014-03-12T16:07:00.000
2
0.099668
false
22,357,298
0
0
1
1
I'm developing a website in Python (with Django and Git) for an association, and I am at the point where I need to share my work for approval from the team. I have around 50 people who need to be able to access my "website" 24/7. Apparently, free hosting is not the best way to do it (see answers to my original question). I've never done such a thing, so I'm a bit lost. It looks like I can use my Ubuntu computer without too much investment in effort, and apparently there are other tools for this. I'm looking for advice and an explanation of how to implement a working solution. EDIT: The 50 people are not on my local network. [ORIGINAL POST BELOW] What is the best way to share my website with partners? I'm developing a website for an association, and I want to know if there is a way to let them access my work in progress. I was thinking of free hosting solutions? I'm not looking for a professional host, just a way to share my work with at most 50 people. Is there another solution? I have an Ubuntu PC that I could use as a server (I have a high-speed connection). (I don't know if this is relevant, but I'm using Python/Django and Bootstrap for the design.)
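For what it's worth, a quick sketch of the self-hosting route the answer suggests, assuming the Django development server is acceptable for a review copy (it is explicitly not meant for production traffic):

```
# Bind the dev server to all interfaces instead of just localhost:
python manage.py runserver 0.0.0.0:8000
# Then forward port 8000 on the router to this machine and share
# http://<your-public-ip>:8000/ with the reviewers.
```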
Can I serve PHP and Python in a single project on App Engine?
22,365,130
0
0
63
0
php,python,google-app-engine
You can use (and many do) a front end like nginx or Apache that routes different paths to different back ends. I do not see why you would want your App Engine application to be "bilingual", though.
0
1
0
1
2014-03-12T22:12:00.000
1
0
false
22,365,018
0
0
1
1
Can I serve PHP and Python in a single project on App Engine? For example, /php/* would run PHP code, but the root / would run Python code.
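A hedged sketch of path-based routing within one App Engine project, using the Modules API dispatch file of that era; the module names below are made up, and each module would need its own .yaml declaring the right runtime:

```yaml
# dispatch.yaml -- hypothetical sketch assuming App Engine Modules,
# with a PHP module and a Python module deployed side by side.
dispatch:
  - url: "*/php/*"
    module: php-backend
  - url: "*/*"
    module: default
```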
How to edit a record in the Google datastore on App Engine?
22,370,376
1
0
255
0
python,django,google-app-engine,gql
Fetch the record you want to edit (by key, ID, or any filter), modify the fields you want to change, and then put() it.
0
1
0
0
2014-03-13T04:56:00.000
1
1.2
true
22,369,384
0
0
1
1
I created an app with a model, and the model was created successfully, but I am facing a problem: how do I edit data in the Google datastore using Python or Django? Please help me.
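A minimal sketch of the accepted answer's fetch-modify-put cycle, assuming the ndb API; the model and field names are placeholders:

```python
# Fetch, modify, put -- the standard datastore edit cycle with ndb.
from google.appengine.ext import ndb


class Person(ndb.Model):
    name = ndb.StringProperty()


def rename(person_id, new_name):
    person = Person.get_by_id(person_id)  # fetch the existing entity
    person.name = new_name               # modify the field in memory
    person.put()                          # write it back to the datastore
```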
What is a preferred way for installing Karma in Angular/Django project?
22,385,076
2
2
634
0
javascript,python,django,angularjs,karma-runner
I have found that you need to install a given node module in a folder that encompasses all the files that will use it. This is most easily accomplished by putting all node modules in the root folder of your website. That is by design on the part of Node's creator, and there is no way around it. As Karma is a node module, it needs to live in a folder that includes all the files that will use it; if your entire website uses it, you're better off putting it in the website's root folder. Of course, as Node is open source, you could change this requirement so modules can be installed anywhere, perhaps with a pointer from each file to the module it uses. Only you and your team (and your users) can decide whether to commit or ignore the modules, but in general, if your users need them, ship them; if only your developers need them, either install them individually on each developer's machine or keep a separate development branch. Node also has a way to separate development modules from release modules (devDependencies in package.json), so you could look into that.
0
0
0
0
2014-03-13T15:31:00.000
1
1.2
true
22,383,464
1
0
1
1
I am just starting to integrate AngularJS into my Django project. After I installed Karma for testing, following the tutorial, I got a bunch of Node.js modules installed in my root project folder. Should I check all of these files in the node_modules folder into my repo, or should I ignore them with .gitignore? Are there alternatives to installing Karma in the root, or is that required?
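If you decide to ignore the modules, a one-line sketch of the usual .gitignore entry (this assumes package.json is committed, so npm install can recreate the folder on each developer's machine):

```
# .gitignore -- keep installed node modules out of the repo;
# "npm install" rebuilds them from package.json
node_modules/
```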
Where do I put the specific translation functions in Django App?
22,384,701
0
0
90
0
python,django
Try moving your app above the admin app in the INSTALLED_APPS tuple in settings.py, so that your template override is found first.
0
0
0
0
2014-03-13T16:11:00.000
1
0
false
22,384,473
0
0
1
1
I have a Django project in which I have changed the default 'Django Administration' text in the header. I have implemented translation of the strings Django knows about, but I cannot figure out how to translate this title. I put the translation function in models.py, but the title doesn't change when I change language. I've edited the base.html template like so: {% trans 'My Console' %}, added the msgstr to my .po files, and run makemessages and compilemessages. I am running out of things to try. Can anyone shed some light on how to do this? I can supply code if it will help. Thanks for reading.
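One hedged possibility: makemessages and the translation loader only look at app locale folders and the directories in LOCALE_PATHS, so if the overridden base.html lives in a project-level templates directory, a project-level locale directory may need to be declared explicitly. The paths below are assumptions:

```python
# settings.py -- sketch of pointing Django at a project-level locale dir
import os

BASE_DIR = os.path.dirname(os.path.dirname(__file__))

LOCALE_PATHS = (
    os.path.join(BASE_DIR, 'locale'),  # compilemessages output must live here
)
```

With that in place, run makemessages from the project root so it picks up the {% trans %} tag in the overridden template.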
can I run django test but with out model define?
22,408,947
0
2
113
0
python,django,django-models,django-fixtures
The answer to your top question is yes, you should be able to use the Django test framework pieces that don't depend on models. The answer to your inner question about using fixtures (which sounds like it may be the real question) is: not without writing some additional test code. Django uses fixtures to populate Django models, and you don't have any Django models. So write your tests using django.utils.unittest and handle the fixture loading there.
0
0
0
0
2014-03-14T10:01:00.000
1
0
false
22,401,722
0
0
1
1
My problem is this: it's an old Django project that I need to work on. For unknown reasons, the project doesn't use Django models; instead, it defines some classes that CRUD the database with raw SQL, and the project has no tests at all. Now I want to add unit tests for the project (views, models, and so on), but I wonder: can such tests use fixtures without model definitions? I don't have much time to test this by hand, so does anyone know about this?
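A sketch of the extra test code the answer mentions, assuming the legacy classes use Django's default database connection; the fixture path, table, and columns are all hypothetical:

```python
# Load a JSON "fixture" by hand, since there are no models for Django's
# fixture machinery to deserialize into.
import json

from django.db import connection
from django.test import TestCase


class LegacyTableTest(TestCase):
    def setUp(self):
        with open('tests/fixtures/rows.json') as f:  # hypothetical path
            rows = json.load(f)
        cursor = connection.cursor()
        for row in rows:
            cursor.execute(
                "INSERT INTO legacy_table (id, name) VALUES (%s, %s)",
                [row['id'], row['name']],
            )

    def test_crud_class_reads_rows(self):
        pass  # call the project's raw-SQL class here and assert on the result
```

TestCase wraps each test in a transaction where the backend supports it, so the hand-inserted rows are rolled back between tests.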
How to transfer HTML view data or (Python) server data to Angular or JavaScript?
22,408,064
2
0
586
0
javascript,python,angularjs,google-app-engine,jinja2
Since you are already going to build an Angular app for the front end, why not make the whole architecture RESTful? That way the front-end Angular app is in charge of presentation and the server serves just the data. You can pass data between the server and the front end as JSON, which has the benefit that the back end never needs to deal with HTML or templates. Angular already has services and $http that can abstract away the two-way data binding, and using webapp2's RESTful nature you can make this happen fairly painlessly.
0
0
0
0
2014-03-14T11:58:00.000
2
0.197375
false
22,404,358
0
0
1
1
I am writing a small web app on GAE using the Python webapp2 framework. What I want to achieve is displaying server data in the HTML view through JavaScript or AngularJS; the app draws a graph using d3.js based on the server data. I know I can use $http.get to retrieve the data from the server, but that way I need to create another page or handler. I am wondering if there is a way to do the following: 1) in the Python handler on the server, retrieve the stored data and pass it to the Jinja2 template values; 2) render the HTML; 3) display some of the data in the HTML view via the Jinja2 template values; 4) (the missing part) pass the data to JS from the Python handler, or from the HTML view. I know two ways from the HTML view: one is embedded JavaScript code, var data = {{serverData}};, and the other is a hidden input form with Angular data binding; neither is very nice. 5) Compute the data and draw it back into the view using d3.js or another JS library. Any ideas? I reckon there might be an Angular way to do this beautifully, but I haven't figured it out.
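A minimal sketch of the RESTful route the answer proposes, using webapp2; the URL and payload are placeholders:

```python
# Expose the same data the Jinja2 view uses as JSON, so Angular's
# $http.get (and d3) can fetch it directly.
import json

import webapp2


class GraphDataHandler(webapp2.RequestHandler):
    def get(self):
        data = {'points': [1, 2, 3]}  # placeholder for the stored data
        self.response.headers['Content-Type'] = 'application/json'
        self.response.write(json.dumps(data))


app = webapp2.WSGIApplication([('/api/graph-data', GraphDataHandler)])
```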
How do I fill in a form on a website that calls JavaScript code, and retrieve the results from a different frame, using Python?
22,408,435
0
0
41
0
javascript,python,forms,web
Yes. Reverse-engineer the JavaScript using the Chrome/Firefox developer console, see what requests it makes, and mimic them in Python using urllib2 or the requests library.
0
0
1
0
2014-03-14T14:20:00.000
1
1.2
true
22,407,628
0
0
1
1
I have to write a program in Python that retrieves results after filling in a web form (which in turn calls various JavaScript functions); the results appear in a different frame of the website. I considered using Selenium, but I was wondering if anyone has a better idea? Thank you, Daniel
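A hedged sketch of the mimic-the-request approach using the requests library; the endpoint URL and form field names are placeholders to be replaced with whatever the developer console's network tab shows:

```python
# POST the same form data the page's JavaScript would send.
import requests

payload = {'query': 'something', 'submit': 'Search'}  # hypothetical fields
resp = requests.post('http://example.com/form-endpoint', data=payload)
resp.raise_for_status()
print(resp.text)  # the frame's HTML; parse it with e.g. BeautifulSoup
```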
How to install a package outside the virtualenv's site-packages, with a requirements.txt line that installs it from a local directory?
22,425,918
1
0
275
0
python,django,pip,easy-install,django-oscar
This isn't really what pip was designed to do. You should push your version of django-oscar to GitHub, then reference that in your pip requirements.txt. Or, if you don't want to host it remotely, you might as well just include it in your project directory as you would a Django app you are writing.
0
0
0
0
2014-03-14T17:37:00.000
1
1.2
true
22,412,053
1
0
1
1
I have some Django packages, such as django-oscar, that I need to install with pip and then edit and revise. I tried installing via setup.py develop and by making an .egg-info, then realised that pip has no feature for installing packages through .egg-info. I also tried installing the package from a local directory using -e /path/to/package, but pip wouldn't install from the directory; it told me: --editable=src/django-oscar-master/oscar/ should be formatted with svn+URL, git+URL, hg+URL or bzr+URL. Then I tried pip install django-oscar --no-index --find-links=file://src/django-oscar-master/ and similar commands, which always respond: Could not find any downloads that satisfy the requirement django-oscar. How do I install a package outside the virtualenv's site-packages and put a line in requirements.txt that installs it from a local directory?
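For completeness, a sketch of the requirements.txt line that usually resolves the quoted error: pip only treats an -e target as a local path when it looks like one, so a relative path needs a leading ./ and should point at the directory containing setup.py (the path below assumes the layout in the question):

```
# requirements.txt -- editable install from a local checkout; the
# leading ./ tells pip this is a path rather than a VCS URL
-e ./src/django-oscar-master
```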
How to manipulate a local text file from an HTML page
22,413,574
0
0
994
0
javascript,python,html,flask
There are many ways to do this. Here are the three easiest: 1) use JavaScript; 2) install WampServer or similar and use PHP to modify the file; 3) don't use the browser to do the deleting at all; instead, use a .bat file that opens the browser and removes the link from the text file.
0
0
1
0
2014-03-14T18:57:00.000
2
0
false
22,413,513
0
0
1
1
I've generated an HTML file that sits on my local disk, and which I can access through my browser. The HTML file is basically a list of links to external websites. The HTML file is generated from a local text file, which is itself a list of links to the remote sites. When I click on one of the links in the HTML document, as well as the browser loading the relevant site (in a new tab), I want to remove the site from the list of sites in the local text file. I've looked at JavaScript, Flask (Python), and CherryPy (Python), but I'm not sure these are valid solutions. Could someone advise on where I should look next? I'd prefer to do this with Python somehow, because it's what I'm familiar with, but I'm open to anything. Note that I'm running on a Linux box.
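A minimal sketch of how this could look with Flask, one of the options the asker mentions; the file path and route are assumptions:

```python
# Serve the page locally and let each link hit a tiny endpoint that
# rewrites the text file before redirecting to the real site.
from flask import Flask, redirect, request

app = Flask(__name__)
LINKS_FILE = 'links.txt'  # hypothetical path to the local list


@app.route('/visit')
def visit():
    url = request.args.get('url', '')
    with open(LINKS_FILE) as f:
        links = [line.strip() for line in f if line.strip()]
    # drop the visited link, then rewrite the file
    links = [l for l in links if l != url]
    with open(LINKS_FILE, 'w') as f:
        f.write('\n'.join(links) + '\n')
    return redirect(url)


if __name__ == '__main__':
    app.run()
```

The generated HTML would then link to /visit?url=<url-encoded-link> instead of linking to the remote sites directly.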