Column | Type | Min | Max
---|---|---|---
Title | string (length) | 11 | 150
A_Id | int64 | 518 | 72.5M
Users Score | int64 | -42 | 283
Q_Score | int64 | 0 | 1.39k
ViewCount | int64 | 17 | 1.71M
Database and SQL | int64 | 0 | 1
Tags | string (length) | 6 | 105
Answer | string (length) | 14 | 4.78k
GUI and Desktop Applications | int64 | 0 | 1
System Administration and DevOps | int64 | 0 | 1
Networking and APIs | int64 | 0 | 1
Other | int64 | 0 | 1
CreationDate | string (length) | 23 | 23
AnswerCount | int64 | 1 | 55
Score | float64 | -1 | 1.2
is_accepted | bool (2 classes) | |
Q_Id | int64 | 469 | 42.4M
Python Basics and Environment | int64 | 0 | 1
Data Science and Machine Learning | int64 | 0 | 1
Web Development | int64 | 1 | 1
Available Count | int64 | 1 | 15
Question | string (length) | 17 | 21k
Returning the outputs from a CloudFormation template with Boto?
| 14,165,269 | 7 | 6 | 3,837 | 0 |
python,amazon-web-services,boto,amazon-cloudformation
|
If you do a describe_stacks call, it will return a list of Stack objects and each of those will have an outputs attribute which is a list of Output objects.
Is that what you are looking for?
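As an illustration, a minimal sketch of that call with boto (the region and stack name are placeholders, not values from the question):

    # List the outputs of a CloudFormation stack via boto's describe_stacks
    import boto.cloudformation

    conn = boto.cloudformation.connect_to_region('us-east-1')
    for stack in conn.describe_stacks('my-stack'):
        for output in stack.outputs:
            print('%s = %s' % (output.key, output.value))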
| 0 | 0 | 0 | 0 |
2013-01-04T18:53:00.000
| 1 | 1.2 | true | 14,163,114 | 0 | 0 | 1 | 1 |
I'm trying to retrieve the list of outputs from a CloudFormation template using Boto. I see in the docs there's an object named boto.cloudformation.stack.Output. But I think this is unimplemented functionality. Is this currently possible?
|
Random test failures when code pushed to Jenkins
| 14,205,396 | 1 | 2 | 1,467 | 0 |
python,django,jenkins,django-testing,django-jenkins
|
This is a really tough question to answer.
It is possible there are some common pitfalls Django developers fall into, but I don't know those.
Outside of that, this is just normal debugging:
Find a way to reproduce the failure. If you can make the test fail on your own laptop, great. If you cannot, you have to debug it on the machine where it fails.
Get more information. Asserts can be made to print a custom message when they fail (see the sketch after this list). Print the values of relevant variables. Add debug printouts to your code and tests. See where things are not the way they are supposed to be. Google how to use the Python debugger.
Keep an open mind. The bug can be anywhere: in the hardware, the software environment, your code or in the test code. But unless you are god, Linus Torvalds or Brian Kernighan, it is a safe first hypothesis that the bug originates somewhere between your keyboard and the back of your seat. (And all three of the hackers above have written bad bugs too.)
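On the custom assert messages mentioned in the list above, a minimal unittest sketch (the response variable is made up):

    # Inside a unittest.TestCase method; the third argument is printed on failure
    self.assertEqual(response.status_code, 200,
                     'unexpected status; response body was: %r' % response.content)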
| 0 | 0 | 0 | 0 |
2013-01-04T21:51:00.000
| 2 | 0.099668 | false | 14,165,573 | 0 | 0 | 1 | 1 |
I have asked this one earlier too but am not satisfied with the answer.
What I use:
Working on Django/python website.
Development done on python virtual envs locally.
using GIT as my SCM
have separate virtual servers deployed for Developer and Production branches for GIT
Using Jenkins CI for continuous Integration. Separate Virtual server deployed for Jenkins
Working:
I have Unit tests, smoke tests and Integration tests for the website. Jenkins has been setup so that whenever code is pushed from my local git branch to Developer and Production branch on git repo, a build is triggered in Jenkins.
Issue:
My tests are passing locally when I do a 'python manage.py test'
Random tests (mostly unit tests) FAIL in Jenkins when code is pushed to other branches (Developer and Production).
After a test failure, if I do a build manually by pressing the 'Build Now' button in Jenkins, the tests usually pass and the build is successful.
Sometimes, when no changes are made to the code and code is still pushed to these branches, the tests are randomly failing in Jenkins.
Some common Errors:
AssertionError: 302 != 200
TypeError: 'NoneType' object is not subscriptable
IndexError: list index out of range
AssertionError: datetime.datetime(2012, 12, 5, 0, 0, 27, 218397) != datetime.datetime(2012, 12, 5, 0, 0, 27, 239884)
AssertionError: Response redirected to 'x' expected 'y'
Troubleshooting till date:
Ran all the tests locally on my machine and also on the virtual server. They are running fine.
Ran the individual failing tests locally and also on the virtual server. They are running fine.
Tried to recreate the failing conditions but as of now, the tests are passing.
The only problem I see is that whenever the code is pushed to the developer and production branches, the random test failures kick in. Some tests fail repeatedly.
Can anyone tell me what more I can do to troubleshoot this problem? I tried googling the issue, but in vain. I know the xunitpatterns website has some good insights on erratic test behaviour, but it hasn't helped since I've tried most of the stuff there.
|
Explain search (Sphinx/Haystack) in simple context?
| 14,166,303 | 1 | 0 | 135 | 0 |
python,django,search,nosql,full-text-search
|
Firstly, Haystack isn't a search engine, it's a library that provides a Django API to existing search engines like Solr and Whoosh.
That said, your example isn't really a very good one. You wouldn't use a separate search engine to search by ISBN, because your database would already have an index on the Book table which would efficiently do that search. Where a search engine would come in could be in two places. Firstly, you could index some or all of the book's contents to search on: databases are not very good at full-text search, but this is an area where search engines shine. Secondly, you could provide a search against multiple fields - say, author, title, publisher and description - in one go.
Also, search engines provide useful functionality like suggestions, faceting and so on that you won't get from a database.
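For illustration, a minimal django-haystack index sketch (the Book model and its author field are assumptions taken from the question):

    # search_indexes.py - a Haystack index over the hypothetical Book model
    from haystack import indexes
    from myapp.models import Book

    class BookIndex(indexes.SearchIndex, indexes.Indexable):
        text = indexes.CharField(document=True, use_template=True)  # full-text blob
        author = indexes.CharField(model_attr='author')             # field search

        def get_model(self):
            return Book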
| 0 | 0 | 0 | 0 |
2013-01-04T22:39:00.000
| 1 | 1.2 | true | 14,166,161 | 0 | 0 | 1 | 1 |
Could you explain how search engines like Sphinx, Haystack, etc. fit into a web framework? If you could explain in a way that someone new to web development could understand, that would help.
One example use case I made up for this question is a book search feature. Let's say I have a NoSQL database that contains book objects, each containing author, title, ISBN, etc.; how does something like Sphinx/Haystack/another search engine fit in with my database to search for a book with a given ISBN?
|
Gunicorn logging from multiple workers
| 14,305,123 | 1 | 8 | 3,257 | 0 |
python,flask,gunicorn
|
We ended up changing our application to send logs to stdout and now rely on supervisord to aggregate the logs and write them to a file. We also considered sending logs directly to rsyslog but for now this is working well for us.
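A minimal sketch of the supervisord side of that setup (program name, command and log path are placeholders):

    ; supervisord captures the app's stdout and writes it to one file
    [program:myapp]
    command=gunicorn app:app
    redirect_stderr=true
    stdout_logfile=/var/log/myapp/app.log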
| 0 | 1 | 0 | 0 |
2013-01-05T14:00:00.000
| 2 | 0.099668 | false | 14,172,470 | 0 | 0 | 1 | 1 |
I have a flask app that runs in multiple gunicorn sync processes on a server and uses TimedRotatingFileHandler to log to a file from within the flask application in each worker. In retrospect this seems unsafe. Is there a standard way to accomplish this in python (at high volume) without writing my own socket based logging server or similar? How do other people accomplish this? We do use syslog to aggregate across servers to a logging server already but I'd ideally like to persist the log on the app node first.
Thanks for your insights
|
Get publicly accessible contents of S3 bucket without AWS credentials
| 70,791,596 | 0 | 3 | 1,214 | 0 |
python-3.x,amazon-web-services,amazon-s3
|
Using AWS CLI,
aws s3 ls s3://*bucketname* --region *bucket-region* --no-sign-request
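For the Python 3 side of the question, the same unsigned trick works in boto3 (a sketch; the bucket name is a placeholder, and only the first 1000 keys are shown without pagination):

    # Anonymous listing with boto3 using an unsigned request
    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    s3 = boto3.client('s3', config=Config(signature_version=UNSIGNED))
    for obj in s3.list_objects_v2(Bucket='bucketname').get('Contents', []):
        print(obj['Key'])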
| 0 | 0 | 1 | 0 |
2013-01-05T23:11:00.000
| 2 | 0 | false | 14,177,436 | 0 | 0 | 1 | 2 |
Given a bucket with publicly accessible contents, how can I get a listing of all those publicly accessible contents? I know boto can do this, but boto requires AWS credentials. Also, boto doesn't work in Python3, which is what I'm working with.
|
Get publicly accessible contents of S3 bucket without AWS credentials
| 14,199,730 | 4 | 3 | 1,214 | 0 |
python-3.x,amazon-web-services,amazon-s3
|
If the bucket's permissions allow Everyone to list it, you can just do a simple HTTP GET request to http://s3.amazonaws.com/bucketname with no credentials. The response will be XML with everything in it, whether those objects are accessible by Everyone or not. I don't know if boto has an option to make this request without credentials. If not, you'll have to use lower-level HTTP and XML libraries.
If the bucket itself does not allow Everyone to list it, there is no way to get a list of its contents, even if some of the objects in it are publicly accessible.
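In Python 3 that plain GET needs only the standard library - a sketch, assuming the default us-east-1 endpoint and a placeholder bucket name:

    # Anonymous bucket listing over plain HTTP, parsed with ElementTree
    import urllib.request
    import xml.etree.ElementTree as ET

    xml_bytes = urllib.request.urlopen('http://s3.amazonaws.com/bucketname').read()
    ns = '{http://s3.amazonaws.com/doc/2006-03-01/}'  # S3's XML namespace
    for key in ET.fromstring(xml_bytes).iter(ns + 'Key'):
        print(key.text)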
| 0 | 0 | 1 | 0 |
2013-01-05T23:11:00.000
| 2 | 0.379949 | false | 14,177,436 | 0 | 0 | 1 | 2 |
Given a bucket with publicly accessible contents, how can I get a listing of all those publicly accessible contents? I know boto can do this, but boto requires AWS credentials. Also, boto doesn't work in Python3, which is what I'm working with.
|
What are the other alternatives to using django-kombu?
| 14,181,856 | 2 | 3 | 590 | 0 |
python,django,celery
|
The stable versions of kombu and celery are production ready.
kombu takes care of the messaging between consumers, producers and the message broker - which, in order, are the celery workers, the web workers (or, more generally, the scripts that put tasks in the queue) and the message broker you are using.
You need kombu to run celery (it is actually in the requirements if you look at celery's setup).
With kombu you can use different message brokers (rabbitmq, redis, ...), so the choice is not between kombu and rabbitmq - they do different things - but between kombu with redis, kombu with rabbitmq, etc.
If you are OK with redis as the message broker, you just have to install the
celery-with-redis and django-celery packages.
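A minimal sketch of the resulting Django settings (the broker URL is a placeholder):

    # settings.py - celery 3.x with django-celery and a redis broker
    import djcelery
    djcelery.setup_loader()

    BROKER_URL = 'redis://localhost:6379/0'
    CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'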
| 0 | 1 | 0 | 0 |
2013-01-06T09:44:00.000
| 1 | 0.379949 | false | 14,180,944 | 0 | 0 | 1 | 1 |
I am using django-kombu with Celery but have read in quite a few places that it isn't production ready.
Basically, I want to create a multiple-master/multiple-slave architecture using Celery, pass messages between the nodes, and pass results back to the main program that made the call.
I am not able to understand where Kombu fits in. Why not RabbitMQ? The tutorials are all very messy, with one person suggesting one thing and another something else.
Can someone give me a clearer picture of what a production stack looks like when dealing with Celery + Django?
Also, do I have to use Dj-Celery?
|
python project settings for production and development
| 14,191,085 | 1 | 0 | 150 | 0 |
python,django
|
What we usually do in our Django projects is create versions of all configuration files for each platform (dev, prod, etc...) and use symlinks to select the correct one. Now that Windows supports links properly, this solution fits everybody.
If you insist on another configuration file, try making it a Python file that just imports the proper configuration file, so instead of name="development" you'll have something like execfile('development_settings.py')
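A sketch of that import-based idea (the DJANGO_ENV variable name is an assumption):

    # settings.py - pick the real settings module at import time
    import os

    if os.environ.get('DJANGO_ENV') == 'production':
        from production_settings import *
    else:
        from development_settings import *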
| 0 | 0 | 0 | 0 |
2013-01-07T06:34:00.000
| 1 | 1.2 | true | 14,191,034 | 0 | 0 | 1 | 1 |
I'm building a project with Django. I can create two settings files for Django: production_settings.py and development_settings.py. However, I also need a configuration file for my project, which I parse with ConfigParser, e.g.
[Section]
name = "development"
version = "1.0"
How do I split this configuration file into production and development versions?
|
Outlining a Solution Stack
| 14,192,830 | 1 | 2 | 98 | 0 |
python,ruby-on-rails,ruby,node.js,webserver
|
It all depends on what you are actually trying to do and what your requirements are.
There is no real "right" language for things like these, it's mostly determined by the Frameworks you'll be using on those language (since all are general-purpose programming languages) and your personal preference/experience.
I can't comment too much on Python as I never really tried it, but from what I've heard and seen it can be used for all the things Ruby is used for, although the community around Python is a bit smaller, with Python being used a lot more in the scientific community (which may be good if your app will be doing any crazy calculations).
That leads us to Ruby. Ruby and the Ruby on Rails framework is mostly used to write Web-Applications and Services.
Ruby is a very elegant language to program in and the tools are very mature and easy to work with.
Rails is a framework on Ruby that makes Web-Development very simple in providing you with a very good set of tools especially suited to write data-driven web-apps.
Very flexible and a joy to work with.
There are however some drawbacks to Ruby at the moment, mostly related to poor threading.
Node.js is a newer platform that is focused on parallelism and supports most things Ruby and Python can do, although its documentation is lacking compared to what Ruby will give you. It's also not the most beginner-friendly choice, as JavaScript with all its quirks and the callback-oriented async model is not the simplest thing around.
That said, Node is very bare-metal and makes it very easy to write arbitrary TCP/UDP servers that don't necessarily work over HTTP. Custom streaming protocols - any custom protocol, in fact - are almost trivial to do in Node. (I don't advise you do that, but maybe that's important to your task.)
To be fair, there are frameworks that facilitate writing web apps for Node, but the choices are a) not as mature as Rails or Django, and b) yours to make.
This means: Where Rails does come with a lot of defaults that guide you, (Rails for example has a default Database stack it's optimized around), Node with Frameworks like Express only provide you with a bare-bones HTTP server where you have to bring in the Database of your choice etc...
In closing: All languages and frameworks you asked about are mostly used for writing Web-Applications. They all can however be used to write a client that consumes the service too - it mostly comes down to general preference.
| 0 | 0 | 0 | 0 |
2013-01-07T07:08:00.000
| 2 | 1.2 | true | 14,191,410 | 0 | 0 | 1 | 1 |
Please excuse my ignorance, as I'm an aerospace engineer going headfirst into the software world.
I'm building a web solution that allows small computers (think beagleboard) to connect to a server that sends and receives data to these clients. The connection will be over many types including GPRS/3G/4G.
The user will interact with clients in real time through webpages served by this central server. The solution must scale well.
I've been using Python for the client side and some simple Ruby code for the servers with Heroku. I have also tried a bit of NodeJS and Ruby on Rails. With so many options I'm struggling to see the forest for the trees and wondering where these languages will fit into my stack.
Your help is appreciated; I'm happy to give more details.
|
Does a Flask Request Run to Completion before another Request Starts?
| 14,202,887 | 2 | 3 | 1,304 | 0 |
python,flask
|
I'm fairly certain that there is no guarantee of that. However, it depends on how you're running the application. If you're using Heroku+gunicorn, for example, files on Heroku that are changed during a request are not kept, i.e., the filesystem is ephemeral. So if you were to change the text file, the changes would not persist to the next request. Another provider, PythonAnywhere, is not so strict about its filesystem, but again, there is no guarantee of one request finishing before the next starts. Moreover, a server that did guarantee that would be useless for any modern web application.
Also, if you want a small database, just use SQLite. Python ships with the sqlite3 module in its standard library for interacting with it.
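For reference, a minimal sqlite3 sketch (the file and table names are made up):

    import sqlite3

    conn = sqlite3.connect('app.db')  # creates the file on first use
    conn.execute('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)')
    conn.execute('INSERT INTO notes (body) VALUES (?)', ('hello',))
    conn.commit()
    conn.close()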
| 0 | 0 | 0 | 0 |
2013-01-07T19:38:00.000
| 2 | 0.197375 | false | 14,202,844 | 0 | 0 | 1 | 1 |
I ask because I'm wondering if I can get away with using a text file as a data store for a simple app. If each handler runs to completion, then it seems like I should be able to modify the text file during that request without worrying about conflicts, assuming I close the file at the end of each request.
Is this feasible? Is there anything special I need to do in order to use a text file as a data store in a Flask app?
|
CSS won't work with Django powered site
| 14,210,824 | 0 | 0 | 126 | 0 |
python,css,django,web
|
Usually the easiest way to do this is to add a folder named static in the root of the project, then set the following settings attributes:
STATICFILES_DIRS = ('static',) - if you want, you can make this an absolute path (recommended by Django)
STATIC_URL = '/static/'
Since you're using the dev server, this should work. To link to the files in your templates, do:
<link href="{{ STATIC_URL }}css/css.css"> for the CSS, assuming there is a file project_root/static/css/css.css. You do basically the same for JavaScript.
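Collected into one place, a sketch of the settings involved (paths are assumptions):

    # settings.py - static files for the Django dev server
    import os
    PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))

    STATIC_URL = '/static/'
    STATICFILES_DIRS = (os.path.join(PROJECT_ROOT, 'static'),)

    # in the template:
    # <link rel="stylesheet" href="{{ STATIC_URL }}css/css.css">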
| 0 | 0 | 0 | 0 |
2013-01-08T06:17:00.000
| 2 | 0 | false | 14,209,258 | 0 | 0 | 1 | 2 |
My CSS will not work when I run my site. Only the HTML displays. I have the right link. I'm confused as to what to put in MEDIA_ROOT, MEDIA_URL, STATIC_ROOT, and STATIC_URL. Every site tells me something different. I'm not using the (file) directory. I know the above-mentioned settings refer to where the files are placed and where they are hosted. I'm not hosting my files anywhere as of right now; I'm in dev mode. I know Django has something to serve static files in dev mode, but it won't work! My questions: 1. Should I host my files? 2. What should I put in the above-mentioned settings? Keep in mind I'm in dev mode!
Thanks
|
CSS won't work with Django powered site
| 14,209,324 | 0 | 0 | 126 | 0 |
python,css,django,web
|
Put your CSS under the directory given in STATIC_ROOT.
In the HTML, reference it with {{ STATIC_URL }}/css in the link tag.
| 0 | 0 | 0 | 0 |
2013-01-08T06:17:00.000
| 2 | 0 | false | 14,209,258 | 0 | 0 | 1 | 2 |
My CSS will not work when I run my site. Only the HTML displays. I have the right link. I'm confused as to what to put in MEDIA_ROOT, MEDIA_URL, STATIC_ROOT, and STATIC_URL. Every site tells me something different. I'm not using the (file) directory. I know the above-mentioned settings refer to where the files are placed and where they are hosted. I'm not hosting my files anywhere as of right now; I'm in dev mode. I know Django has something to serve static files in dev mode, but it won't work! My questions: 1. Should I host my files? 2. What should I put in the above-mentioned settings? Keep in mind I'm in dev mode!
Thanks
|
Can I use open cv with python on Google app engine?
| 43,981,159 | 3 | 4 | 2,599 | 0 |
python,google-app-engine,opencv,python-2.7
|
Now it is possible. The app should be deployed using a custom runtime in the GAE flexible environment. OpenCV library can be installed by adding the instruction RUN apt-get update && apt-get install -y python-opencv in the Dockerfile.
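A sketch of such a Dockerfile (the base image name and entry point are assumptions; check the current GAE flexible environment docs):

    # Dockerfile for a GAE flexible custom runtime (names assumed)
    FROM gcr.io/google-appengine/python
    RUN apt-get update && apt-get install -y python-opencv
    COPY . /app
    CMD gunicorn -b :$PORT main:app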
| 0 | 1 | 0 | 0 |
2013-01-08T15:03:00.000
| 3 | 0.197375 | false | 14,217,858 | 0 | 0 | 1 | 1 |
Hi, I was working on a project which I intended to deploy on Google App Engine.
I found that Google App Engine supports Python. Can I run OpenCV with Python scripts on Google App Engine?
|
Celery 3.0.12 countdown not working
| 14,246,789 | 1 | 0 | 497 | 0 |
python,celery
|
This is a bug in celery 3.0.12; reverting to celery 3.0.11 did the job.
Hope this helps someone.
| 0 | 1 | 0 | 0 |
2013-01-08T23:30:00.000
| 1 | 0.197375 | false | 14,225,865 | 0 | 0 | 1 | 1 |
When I run my task with my_task.apply_async([283], countdown=5), it runs immediately, when it should run 5 seconds later as the ETA says:
[2013-01-08 15:15:21,600: INFO/MainProcess] Got task from broker: web.my_task[4635f997-6232-4722-9a99-d1b42ccd5ab6] eta:[2013-01-08 15:20:51.580994]
[2013-01-08 15:15:22,095: INFO/MainProcess] Task web.my_task[4635f997-6232-4722-9a99-d1b42ccd5ab6] succeeded in 0.494245052338s: None
here is my installation:
software -> celery:3.0.12 (Chiastic Slide) kombu:2.5.4 py:2.7.3
billiard:2.7.3.19 py-amqp: N/A
platform -> system:Darwin arch:64bit imp:CPython
loader -> djcelery.loaders.DjangoLoader
settings -> transport:amqp results:mongodb
Is this a celery bug, or am I missing something?
|
Pass Python scripts for mapreduce to HBase
| 14,675,698 | -2 | 3 | 3,776 | 0 |
python,hadoop,mapreduce,hbase
|
You can easily do MapReduce programming in Python that interacts with the Thrift server; an HBase client in Python is a Thrift client.
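One such Thrift-based client is the third-party happybase library - a sketch, with the host and table names as placeholders:

    # Reading HBase rows over Thrift with happybase (a third-party client)
    import happybase

    connection = happybase.Connection('thrift-server-host')
    table = connection.table('mytable')
    for row_key, data in table.scan():
        print('%s -> %s' % (row_key, data))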
| 0 | 1 | 0 | 0 |
2013-01-09T16:27:00.000
| 2 | -0.197375 | false | 14,241,729 | 0 | 0 | 1 | 1 |
We have an HBase implementation over Hadoop. As of now, all our MapReduce jobs are written as Java classes. I am wondering if there is a good way to use Python scripts for MapReduce against HBase.
|
OpenERP 6.1 web client - set default value (Configuration parameters)
| 14,257,259 | 0 | 0 | 762 | 0 |
python,openerp,erp
|
I found another solution. When I click the Create button for res.users, a Set Default action appears in the Customize section of the right sidebar. There you can choose the default value that is applied when the Create button is pressed.
UPDATE:
All these values you can see in
Settings --> Customization --> Low Level Objects --> Actions --> User-defined Defaults
Of course, here you can create new default values.
| 0 | 0 | 0 | 0 |
2013-01-10T08:59:00.000
| 2 | 0 | false | 14,254,053 | 0 | 0 | 1 | 2 |
I need to set up a default Home Action for res.users. Currently it is Home Page, but I want to set my custom action. So I tried creating a new record under Settings --> Configuration --> Configuration Parameters, but when I select Home Action in the Field column and set the type to Many2One in the Type column, the Value field remains an empty list. I can't choose my custom action for new users! Please correct me if I'm doing something wrong. Is this a bug or normal behavior? Any other solution is welcome.
Cheers
|
OpenERP 6.1 web client - set default value (Configuration parameters)
| 14,424,152 | 0 | 0 | 762 | 0 |
python,openerp,erp
|
Just an additional note: you can also apply these user-defined defaults to many2many fields, like the taxes_id field in the product model.
However, there is a small bug: if you set a default value for a many2many field, the field is shown empty when you create a new record. When you save the record you will see it is recorded with your default value, so if you want to make a record different from the default you have to save first and then edit again.
| 0 | 0 | 0 | 0 |
2013-01-10T08:59:00.000
| 2 | 0 | false | 14,254,053 | 0 | 0 | 1 | 2 |
I need to set up a default Home Action for res.users. Currently it is Home Page, but I want to set my custom action. So I tried creating a new record under Settings --> Configuration --> Configuration Parameters, but when I select Home Action in the Field column and set the type to Many2One in the Type column, the Value field remains an empty list. I can't choose my custom action for new users! Please correct me if I'm doing something wrong. Is this a bug or normal behavior? Any other solution is welcome.
Cheers
|
How can I upload images with text parameters using Flask
| 14,270,654 | 1 | 0 | 925 | 0 |
python,flask,http-post
|
Access the text parameters from request.form and the uploaded image from request.files, just like any other form data.
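A minimal Flask sketch of both (the field names 'image' and 'caption' are assumptions):

    from flask import Flask, request

    app = Flask(__name__)

    @app.route('/upload', methods=['POST'])
    def upload():
        caption = request.form.get('caption', '')  # text parameter
        image = request.files['image']             # uploaded file
        image.save('/tmp/' + image.filename)
        return 'saved %s with caption %r' % (image.filename, caption)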
| 0 | 0 | 0 | 0 |
2013-01-10T14:43:00.000
| 1 | 1.2 | true | 14,260,447 | 0 | 0 | 1 | 1 |
I want to send an image together with some text parameters to a Flask server (HTTP POST).
How can I use Flask to receive both (e.g. save an image and print the text)?
|
Sometimes getting "API requires authorization" from intuit anywhere api after a fresh oAuth handshake
| 27,932,175 | 0 | 2 | 377 | 0 |
python,intuit-partner-platform
|
I received this error as well and am posting this as a pointer for others who stumble upon it. Error code 22 (Authentication required) for me meant that the OAuth signature was wrong. This was confusing because I couldn't find this error listed in the QuickBooks documentation for reconnect.
I was signing the request as a "POST" request instead of a "GET" request which is what Quickbooks requires for calls to the reconnect endpoint.
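For illustration, a sketch of signing the reconnect call as a GET with the third-party requests_oauthlib library (the credential values are placeholders, and RECONNECT_URL stands in for Intuit's reconnect endpoint, which is not reproduced here):

    # Signing the request as a GET, per the fix described above
    from requests_oauthlib import OAuth1Session

    session = OAuth1Session('consumer_key', client_secret='consumer_secret',
                            resource_owner_key='access_token',
                            resource_owner_secret='access_token_secret')
    response = session.get(RECONNECT_URL)  # GET, not POST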
| 0 | 0 | 1 | 0 |
2013-01-10T15:36:00.000
| 1 | 0 | false | 14,261,512 | 0 | 0 | 1 | 1 |
After completing the oAuth handshake with Intuit Anywhere (AI), I use the API to get the HTML for the blue dot menu. Sometimes, the expected HTML is returned. Other times, I get this message
This API requires Authorization. 22 2013-01-10T15:32:33.43741Z
Typically, this message is returned when the oAuth token is expired. However, on the occasions when I get it, I can click around in my website for a bit or do a refresh, and the expected HTML is returned. I checked the headers being sent and, in both cases (i.e., when the expected HTML is returned, and an error is returned), the request is exactly the same. I wouldn't be surprised if this was a bug in Intuit's API, but I'm trying to rule out any other possibilities first. Please let me know if you have any thoughts on how to fix this. Thanks.
Update: It seems the problem occurs only when I do a refresh. This seems to be the case both in Firefox and Safari on OS X. It sounds like a JavaScript caching issue.
|
Huge Django project
| 14,273,720 | -1 | 3 | 228 | 0 |
python,django
|
If you use Firefox, you can install Firebug, and when you, for example, submit an AJAX form, you can see which URL the request is sent to; from that you can easily find the controller that handles the form data. In Chrome this utility is embedded by default and opened with the F12 key.
| 0 | 0 | 0 | 0 |
2013-01-11T07:29:00.000
| 5 | -0.039979 | false | 14,273,593 | 0 | 0 | 1 | 3 |
I have a new job and a huge Django project (15 apps, more than 30 loc). It's pretty hard to understand its architecture from scratch. Are there any techniques to simplify my work in the beginning? Sometimes it's even hard to understand where to find a form or a view that I need... Thanks in advance.
|
Huge Django project
| 14,273,986 | 4 | 3 | 228 | 0 |
python,django
|
When I come to this kind of problem I open up a notebook and answer the following:
1. Infrastructure
Server configuration, OS etc
Check out the database type (mysql, postgres, nosql)
External APIS (e.g Facebook Connect)
2. Backend
Write a simple description
Write its input/output from user (try to be thorough; which fields are required and which aren't)
Write its FK and its relation to any other apps (and why)
List down each plugin the app is using. And for what purpose. For example in rails I'd write: 'gem will_paginate - To display guestbook app results on several pages'
3. Frontend
Check out the JS framework
Check the main stylesheet files (for the template)
The main html/haml (etc) files for creating a new template based page.
When you are done doing that, I think you are much better prepared and able to go deeper into developing/debugging the app. Good luck.
| 0 | 0 | 0 | 0 |
2013-01-11T07:29:00.000
| 5 | 1.2 | true | 14,273,593 | 0 | 0 | 1 | 3 |
I have a new job and a huge Django project (15 apps, more than 30 loc). It's pretty hard to understand its architecture from scratch. Are there any techniques to simplify my work in the beginning? Sometimes it's even hard to understand where to find a form or a view that I need... Thanks in advance.
|
Huge Django project
| 14,274,066 | 2 | 3 | 228 | 0 |
python,django
|
1) Try to install the site from scratch. You will find what external apps are needed for the site to run.
2) Reverse engineer. Browse through the site and try to find out what you would have to do to change something on that page. Start with the URL, look it up in urls.py, read the view, check the model. Are there any hints of other processes?
3) Try to write down everything you don't understand, and document the answers for future reference.
| 0 | 0 | 0 | 0 |
2013-01-11T07:29:00.000
| 5 | 0.07983 | false | 14,273,593 | 0 | 0 | 1 | 3 |
I have a new job and a huge Django project (15 apps, more than 30 loc). It's pretty hard to understand its architecture from scratch. Are there any techniques to simplify my work in the beginning? Sometimes it's even hard to understand where to find a form or a view that I need... Thanks in advance.
|
How to make a WSGI app configurable by any server?
| 14,285,052 | 0 | 0 | 253 | 0 |
python,wsgi
|
In uWSGI (if using the uwsgi protocol) you can pass additional variables with uwsgi_param key value in nginx, with SetEnv in Apache (both mod_uwsgi and mod_proxy_uwsgi), with CGI vars in Cherokee, and with --http-var in the uwsgi HTTP router.
For the HTTP protocol (in gunicorn or a uWSGI http-socket), the only solution popping into my mind is adding special headers in the proxy configuration that you then parse in your WSGI app (HTTP headers are rewritten as CGI vars prefixed with HTTP_).
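For instance, a sketch of the nginx side of the first option (the variable name DEPLOY_ENV is made up):

    # nginx location passing an extra WSGI environ variable to uWSGI
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:3031;
        uwsgi_param DEPLOY_ENV production;
    }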
| 0 | 0 | 0 | 0 |
2013-01-11T11:24:00.000
| 2 | 0 | false | 14,277,172 | 0 | 0 | 1 | 1 |
Is there a way to distribute a WSGI application that will work out of the box with any server and that will be configurable using the server's built-in features only?
This means that the only configuration file the administrator would have to touch would be the server's configuration file. It wouldn't be necessary to write a custom WSGI script in Python.
mod_wsgi adds configuration variables set with SetEnv to the WSGI environ dictionary that gets passed to the app, but I didn't find a way to do something similar with Gunicorn or uWSGI. Using os.environ works with Gunicorn and uWSGI but not with mod_wsgi because SetEnv doesn't affect os.environ.
|
Which setup is more efficient? Flask with pypy, or Flask with gevent?
| 14,294,862 | 1 | 25 | 15,866 | 0 |
python,performance,gevent,pypy
|
The built-in Flask server is based on BaseHTTPServer or so - never use it in production. The best scenario is very likely Tornado + PyPy or something like that. Benchmark before using, though. It also depends quite drastically on what you're doing. Web server + web framework benchmarks are typically hello-world kinds of benchmarks. Is your application really like that?
Cheers, fijal
| 0 | 0 | 0 | 1 |
2013-01-12T15:10:00.000
| 3 | 0.066568 | false | 14,294,643 | 0 | 0 | 1 | 1 |
Both 'pypy' and 'gevent' are supposed to provide high performance. Pypy is supposedly faster than CPython, while gevent is based on co-routines and greenlets, which supposedly makes for a faster web server.
However, they're not compatible with each other.
I'm wondering which setup is more efficient (in terms of speed/performance):
The builtin Flask server running on pypy
or:
The gevent server, running on CPython
|
Django-nonrel broke after installing new version of Google App Engine SDK
| 14,368,275 | 1 | 0 | 191 | 1 |
python,google-app-engine,django-nonrel
|
Did you update djangoappengine without updating django-nonrel and djangotoolbox?
While I haven't upgraded to GAE 1.7.4 yet, I'm running 1.7.2 with no problems. I suspect your problem is not related to the GAE SDK but rather your django-nonrel installation has mismatching pieces.
| 0 | 1 | 0 | 0 |
2013-01-13T20:03:00.000
| 2 | 0.099668 | false | 14,307,581 | 0 | 0 | 1 | 2 |
I had GAE 1.4 installed on my local Ubuntu system and everything was working fine. The only warning I was getting at that time was something like "You are using old GAE SDK 1.4." So, to get rid of that, I did the following:
I removed old version of GAE and installed GAE 1.7. Along with that I have
also changed my djangoappengine folder with latest version.
I have copied the new version of GAE to /usr/local, since the PATH variable in my ~/.bashrc file points to GAE in this directory.
Now, I am getting error
django.core.exceptions.ImproperlyConfigured: 'djangoappengine.db' isn't an available database backend.
Try using django.db.backends.XXX, where XXX is one of:
'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3'
Error was: No module named utils
I don't think there is any problem of directory structure since earlier it was running fine.
Does anyone has any idea ?
Your help will be highly appreciated.
-Sunil
.
|
Django-nonrel broke after installing new version of Google App Engine SDK
| 14,382,654 | 0 | 0 | 191 | 1 |
python,google-app-engine,django-nonrel
|
Actually, I changed the Google App Engine path in the ~/.bashrc file and restarted the system. That solved the issue. I think the problem was that I was not reloading .bashrc after changing it.
| 0 | 1 | 0 | 0 |
2013-01-13T20:03:00.000
| 2 | 1.2 | true | 14,307,581 | 0 | 0 | 1 | 2 |
I had GAE 1.4 installed on my local Ubuntu system and everything was working fine. The only warning I was getting at that time was something like "You are using old GAE SDK 1.4." So, to get rid of that, I did the following:
I removed old version of GAE and installed GAE 1.7. Along with that I have
also changed my djangoappengine folder with latest version.
I have copied the new version of GAE to /usr/local, since the PATH variable in my ~/.bashrc file points to GAE in this directory.
Now, I am getting error
django.core.exceptions.ImproperlyConfigured: 'djangoappengine.db' isn't an available database backend.
Try using django.db.backends.XXX, where XXX is one of:
'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3'
Error was: No module named utils
I don't think there is any problem of directory structure since earlier it was running fine.
Does anyone has any idea ?
Your help will be highly appreciated.
-Sunil
.
|
Count vs len on a Django QuerySet
| 14,327,315 | 29 | 113 | 92,515 | 0 |
python,django,performance
|
I think using len(qs) makes more sense here as you need to iterate over the results. qs.count() is a better option if all that you want to do it print the count and not iterate over the results.
len(qs) will hit the database with select * from table whereas qs.count() will hit the db with select count(*) from table.
Also, qs.count() returns an integer; you cannot iterate over it.
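In code, the difference looks like this (Entry is a placeholder model):

    qs = Entry.objects.filter(published=True)

    n = qs.count()      # issues SELECT COUNT(*), fetches no rows
    entries = list(qs)  # issues SELECT *, fetches and builds all objects
    n = len(entries)    # free: the rows are already in memory, reuse them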
| 0 | 0 | 0 | 0 |
2013-01-14T21:29:00.000
| 5 | 1 | false | 14,327,036 | 0 | 0 | 1 | 1 |
In Django, given that I have a QuerySet that I am going to iterate over and print the results of, what is the best option for counting the objects? len(qs) or qs.count()?
(Also given that counting the objects in the same iteration is not an option.)
|
sub domains in tornado web app for SAAS
| 14,342,790 | 0 | 2 | 791 | 0 |
python,webserver,subdomain,tornado,saas
|
Tornado itself does not handle subdomains.
You will need something like NGINX to control subdomain access.
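A sketch of that NGINX setup, assuming Tornado listens on port 8888:

    # Catch-all subdomain server block proxying to Tornado
    server {
        listen 80;
        server_name mywebsite.com *.mywebsite.com;

        location / {
            proxy_pass http://127.0.0.1:8888;
            proxy_set_header Host $host;  # preserve the subdomain for Tornado
        }
    }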
| 0 | 1 | 0 | 0 |
2013-01-15T09:03:00.000
| 2 | 0 | false | 14,334,222 | 0 | 0 | 1 | 2 |
I have a web app which runs at www.mywebsite.com.
I am asking users to register and enter a subdomain name for their login, e.g. if a user enters the subdomain "demo", then his login URL should be "www.demo.mywebsite.com".
How can this be done in a Tornado web app, given that Tornado itself is the web server?
Or is serving the app with nginx or another web server the only way?
Thanks for your help in advance.
|
sub domains in tornado web app for SAAS
| 14,419,302 | 3 | 2 | 791 | 0 |
python,webserver,subdomain,tornado,saas
|
self.request.host in a tornado.web.RequestHandler will contain the subdomain, so you can change the application logic according to the subdomain, e.g. load current_user based on cookie + subdomain.
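A minimal sketch of that (the handler name and tenant logic are made up):

    import tornado.web

    class DashboardHandler(tornado.web.RequestHandler):
        def get(self):
            # e.g. 'demo.mywebsite.com' -> 'demo'
            subdomain = self.request.host.split('.')[0]
            self.write('tenant: %s' % subdomain)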
| 0 | 1 | 0 | 0 |
2013-01-15T09:03:00.000
| 2 | 0.291313 | false | 14,334,222 | 0 | 0 | 1 | 2 |
I have a web app which runs at www.mywebsite.com.
I am asking users to register and enter a subdomain name for their login, e.g. if a user enters the subdomain "demo", then his login URL should be "www.demo.mywebsite.com".
How can this be done in a Tornado web app, given that Tornado itself is the web server?
Or is serving the app with nginx or another web server the only way?
Thanks for your help in advance.
|
Change CharField value after init
| 14,334,871 | 0 | 0 | 232 | 0 |
python,django,django-forms,django-widget
|
This seems overly complicated. Apart from anything else, tying up an entire process waiting for someone to fill in a form is a bad idea.
Although I can't really understand exactly what you want to do, it seems likely that there are better solutions. Here are a few possibilities:
Page A redirects to Page B before initializing the form, and B redirects back to A on submit;
Page A loads the popup then loads the form via Ajax;
Page B dynamically fills in the form fields in A on submit via client-side Javascript;
and so on.
| 0 | 0 | 0 | 0 |
2013-01-15T09:31:00.000
| 1 | 0 | false | 14,334,667 | 0 | 0 | 1 | 1 |
I have a page "A" with some CharField to fill programmatically. The value to fill come from another page "B", opened by javascript code executed only when the page is showed (after the init). This is the situation:
page A __init__
during the init, start a thread listening on the port 8080
page A initialized and showed --> javascript in the template is executed
the javascript tag opens a new webpage, that sends data to the 8080
the thread reads data sent by page B, and try to fill CharFields
Is there a way to do this? I don't know...a refresh method..
If it is not possible...
I need a way to call the JavaScript function before the init of the form
OR
A way to modify the HTML code of the page created
|
Restrict so users only view a certain part of website-Django
| 14,335,895 | 1 | 1 | 1,610 | 0 |
python,django,user-interface,django-views
|
This is a very wide-ranging question. One solution would be to store a trial flag on each user. On an authenticated request, check for User.trial in your controller (and probably view) and selectively allow/deny access to the endpoint or selectively render parts of the page.
If you wish to use built-in capabilities of Django, you could view 'trial' as a permission, or a user group.
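A sketch of the group-based variant (the group and view names are made up):

    from django.contrib.auth.decorators import user_passes_test

    def not_trial(user):
        # trial users sit in a 'trial' group; everyone else gets full access
        return user.is_authenticated() and not user.groups.filter(name='trial').exists()

    @user_passes_test(not_trial)
    def full_feature_view(request):
        ...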
| 0 | 0 | 0 | 0 |
2013-01-15T10:34:00.000
| 3 | 0.066568 | false | 14,335,832 | 0 | 0 | 1 | 1 |
I am using Django and require that certain 'trial' users can activate only a certain part of the website - any ideas on an efficient way to do this?
I was thinking about giving a paying customer a certain ID and linking this to the URL of the sites for permission.
Thanks,
Tom
|
Best strategy for storing precomputed sunrise/sunset data?
| 14,365,980 | 0 | 1 | 537 | 1 |
python,google-app-engine,python-2.7
|
I would say precompute those structures and output them into hardcoded python structures that you save in a generated python file.
Just read those structures into memory as part of your instance startup.
From your description, there's no reason to compute these values at runtime, and there's no reason to store it in the datastore since that has a cost associated with it, as well as some latency for the RPC.
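A sketch of the precompute step with PyEphem (coordinates and the start date are placeholders):

    import ephem

    def year_of_sunrises(lat, lon, start='2013/01/01'):
        obs = ephem.Observer()
        obs.lat, obs.lon = lat, lon  # strings, e.g. '37.77', '-122.42'
        obs.date = start
        sun = ephem.Sun()
        rises = []
        for _ in range(365):
            rise = obs.next_rising(sun)
            rises.append(rise.datetime())
            obs.date = rise + ephem.minute  # step just past this sunrise
        return rises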
| 0 | 1 | 0 | 0 |
2013-01-15T17:59:00.000
| 3 | 0 | false | 14,343,871 | 0 | 0 | 1 | 2 |
I'm working on an NDB-based Google App Engine application that needs to keep track of the day/night cycle of a large number (~2000) of fixed locations. Because the latitude and longitude don't ever change, I can precompute the sunrises ahead of time using something like PyEphem. I'm using NDB. As I see it, the possible strategies are:
Precompute a year's worth of sunrises into datetime objects, put them into a list, pickle the list and put it into a PickleProperty
The same as above, but put the list into a JsonProperty
Go with DateTimeProperty and set repeated=True
Now, I'd like the very next sunrise/sunset to be indexed, but that one can be popped from the list and placed into its own DateTimeProperty, so that I can periodically use a query to determine which locations have changed to a different part of the cycle. The whole list does not need to be indexed.
Does anyone know the relative effort - in terms of indexing and CPU load - for these three approaches? Does repeated=True have an effect on the indexing?
Thanks,
Dave
|
Best strategy for storing precomputed sunrise/sunset data?
| 14,345,283 | 1 | 1 | 537 | 1 |
python,google-app-engine,python-2.7
|
For 2000 immutable data points - just calculate them when instance starts or on first use, then keep it in memory. This will be the cheapest and fastest.
| 0 | 1 | 0 | 0 |
2013-01-15T17:59:00.000
| 3 | 0.066568 | false | 14,343,871 | 0 | 0 | 1 | 2 |
I'm working on an NDB-based Google App Engine application that needs to keep track of the day/night cycle of a large number (~2000) of fixed locations. Because the latitude and longitude don't ever change, I can precompute the sunrises ahead of time using something like PyEphem. I'm using NDB. As I see it, the possible strategies are:
Precompute a year's worth of sunrises into datetime objects, put them into a list, pickle the list and put it into a PickleProperty
The same as above, but put the list into a JsonProperty
Go with DateTimeProperty and set repeated=True
Now, I'd like the very next sunrise/sunset to be indexed, but that one can be popped from the list and placed into its own DateTimeProperty, so that I can periodically use a query to determine which locations have changed to a different part of the cycle. The whole list does not need to be indexed.
Does anyone know the relative effort - in terms of indexing and CPU load - for these three approaches? Does repeated=True have an effect on the indexing?
Thanks,
Dave
|
add data to table periodically in mysql
| 27,122,957 | 0 | 1 | 214 | 1 |
python,mysql,django
|
I used the django-celery package and created a job in it to update the data periodically.
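A sketch of such a periodic job with django-celery (the task path and schedule are made up):

    # settings.py - a celery beat entry running a task once a day
    from celery.schedules import crontab

    CELERYBEAT_SCHEDULE = {
        'add-recurring-income': {
            'task': 'myapp.tasks.add_recurring_income',
            'schedule': crontab(hour=0, minute=0),
        },
    }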
| 0 | 0 | 0 | 0 |
2013-01-15T18:33:00.000
| 2 | 0 | false | 14,344,473 | 0 | 0 | 1 | 2 |
I have an income table which contains a recurrence field. If the user selects recurrence_type "Monthly" or "Daily", then I have to add a row to the income table monthly or daily. Is there any way in MySQL to add data to a table periodically? I am using the Django framework for developing the web application.
|
add data to table periodically in mysql
| 14,344,610 | 1 | 1 | 214 | 1 |
python,mysql,django
|
As far as I know, there is no such function in MySQL. Even if MySQL could do it, this should not be its job. Such functions should be part of the business logic in your application.
The normal way is to set up a cron job on the server. The cron job wakes up at the time you set and then calls your Python script or SQL to do the data-adding work. Scripts are much better than direct SQL.
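For example, a crontab entry calling a hypothetical custom Django management command once a day at midnight:

    # m h dom mon dow  command
    0 0 * * * /usr/bin/python /path/to/manage.py add_recurring_income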
| 0 | 0 | 0 | 0 |
2013-01-15T18:33:00.000
| 2 | 1.2 | true | 14,344,473 | 0 | 0 | 1 | 2 |
I have an income table which contains a recurrence field. If the user selects recurrence_type "Monthly" or "Daily", then I have to add a row to the income table monthly or daily. Is there any way in MySQL to add data to a table periodically? I am using the Django framework for developing the web application.
|
How to implement a 'Vote up' System for posts in my blog?
| 14,347,324 | 1 | 4 | 1,105 | 1 |
python,mysql,google-app-engine,jinja2
|
If voting is only for subscribed users, then enable voting after members log in to your site.
If not, then you can track users' IP addresses so one IP address can vote once for a single article in a day.
By the way, what kind of security do you need?
| 0 | 0 | 0 | 0 |
2013-01-15T21:27:00.000
| 2 | 0.099668 | false | 14,347,244 | 0 | 0 | 1 | 2 |
I have written a simple blog using Python on Google App Engine. I want to implement a voting system for each of my posts. My posts are stored in a SQL database and I have a column for the number of votes received. Can somebody help me set up voting buttons for individual posts? I am using Jinja2 as the templating engine.
How can I make the voting secure? I was thinking of sending a POST/GET when someone clicks on the vote button, which my Python script would then read, updating the database accordingly. But then I realized that this was insecure. All suggestions are welcome.
|
How to implement a 'Vote up' System for posts in my blog?
| 14,349,144 | 4 | 4 | 1,105 | 1 |
python,mysql,google-app-engine,jinja2
|
First, keep in mind that there is no such thing as "secure", just "secure enough for X". There's always a tradeoff—more secure means more annoying for your legitimate users and more expensive for you.
Getting past these generalities, think about your specific case. There is nothing that has a 1-to-1 relationship with users. IP addresses or computers are often shared by multiple people, and at the same time, people often have multiple addresses or computers. Sometimes, something like this is "good enough", but from your question, it doesn't sound like it would be.
However, with user accounts, the only false negatives come from people intentionally creating multiple accounts or hacking others' accounts, and there are no false positives. And there's a pretty linear curve in the annoyance/cost vs. security tradeoff, all the way from "Please don't create sock puppets" to CAPTCHA to credit card checks to web of trust/reputation score to asking for real-life info and hiring an investigator to check it out.
In real life, there's often a tradeoff between more than just these two things. For example, if you're willing to accept more cheating if it directly means more money for you, you can just charge people real money to vote (as with those 1-900 lines that many TV shows use).
How do Reddit and Digg check multiple voting from a single registered user?
I don't know exactly how Reddit or Digg does things, but the general idea is simple: Keep track of individual votes.
Normally, you've got your users stored in a SQL RDBMS of some kind. So, you just add a Votes table with columns for user ID, question ID, and answer. (If you're using some kind of NoSQL solution, it should be easy to translate appropriately. For example, maybe there's a document for each question, and the document is a dictionary mapping user IDs to answers.) When a user votes, just INSERT a row into the database.
When putting together the voting interface, whether via server-side template or client-side AJAX, call a function that checks for an existing vote. If there is one, instead of showing the vote controls, show some representation of "You already voted Yes." You also want to check again at vote-recording time, to make sure someone doesn't hack the system by opening 200 copies of the page, all of which allow voting (because the user hasn't voted yet), and then submitting 200 Yes votes, but with a SQL database, this is as simple as making Question, User into a multi-column unique key.
If you want to allow vote changing or undoing, just add more controls to the interface, and handle them with UPDATE and DELETE calls. If you want to get really fancy—like this site, which allows undoing if you have enough rep and if either your original vote was in the past 5 minutes or the answer has been edited since your vote (or something like that)—you may have to keep some extra info, like record a row for each voting action, with a timestamp, instead of just a single answer for each user.
This design also means that, instead of keeping a count somewhere, you generate the vote tally on the fly by, e.g., SELECT COUNT(*) FROM Votes WHERE Question=? GROUP BY Answer. But, as usual, if this is too slow, you can always optimize-by-denormalizing and keep the totals along with the actual votes. Similarly, if your user base is huge, you may want to archive votes on old questions and get them out of the operational database. And so on.
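For concreteness, a sketch of the Votes table described above, with the multi-column unique key (column types are assumptions):

    -- one row per (user, question); the primary key enforces one vote each
    CREATE TABLE Votes (
        User     INTEGER NOT NULL,
        Question INTEGER NOT NULL,
        Answer   INTEGER NOT NULL,
        PRIMARY KEY (User, Question)
    );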
| 0 | 0 | 0 | 0 |
2013-01-15T21:27:00.000
| 2 | 1.2 | true | 14,347,244 | 0 | 0 | 1 | 2 |
I have written a simple blog using Python on Google App Engine. I want to implement a voting system for each of my posts. My posts are stored in a SQL database and I have a column for the number of votes received. Can somebody help me set up voting buttons for individual posts? I am using Jinja2 as the templating engine.
How can I make the voting secure? I was thinking of sending a POST/GET when someone clicks on the vote button, which my Python script would then read, updating the database accordingly. But then I realized that this was insecure. All suggestions are welcome.
|
How can I run Bitnami OSQA with other web service (wordpress, joomla)?
| 14,351,798 | 0 | 0 | 133 | 0 |
php,python,bitnami,osqa
|
You can install BitNami modules for each one of those stacks: just go to the stack download page, select the module for your platform, execute it on the command line and point it to the existing installation. Then you will need to configure httpd.conf to point each domain to each app.
| 0 | 0 | 0 | 1 |
2013-01-16T02:45:00.000
| 1 | 0 | false | 14,350,605 | 0 | 0 | 1 | 1 |
I installed OSQA Bitnami on my VPS, and I also point 3 domains to this VPS. Now I want each domain to point to a different web service (I have another PHP website I want to host here).
How can I run a PHP service along with OSQA Bitnami (it's a Python stack)?
|
Saving models in django
| 14,354,584 | 0 | 1 | 62 | 0 |
python,django,django-models,multiple-sites
|
I would caution against writing the same thing twice. Something will definitely go wrong, and you will have two non-matching DBs. Why don't you make the site-product relationship M2M, so that a product can have more than one site_id?
| 0 | 0 | 0 | 0 |
2013-01-16T05:24:00.000
| 1 | 1.2 | true | 14,351,879 | 0 | 0 | 1 | 1 |
I am working on a multisite project and I am using mezzanine+cartridge for this. I want to use the same inventory for both sites. But there are some issues with this: there is a field site_id in the product table which stores the ID of the current site. Thus, I cannot reuse products across sites.
Is there any way (with the help of signals or anything else) to save an entry twice in the database, with changes to some fields' values?
If this is possible, then I have to overwrite only site_id; the rest of the values stay the same as in the previous entry. This would remove the workload of entering products twice for different sites.
Thanks.
|
Adding external Ids to Partners in OpenERP withouth a new module
| 14,356,856 | 0 | 6 | 4,853 | 1 |
python,xml-rpc,openerp
|
Add an integer field to the res.partner table for storing the external ID, in both databases. When data is retrieved from the external server and added to your OpenERP database, store the external ID in the res.partner record on the local server, and also save the ID of the newly created partner record in the external server's partner record. Next time the external partner record is updated, you can search for the external ID on your local server and update that record.
Please check the OpenERP module base_synchronization and read its code, which will be helpful for you.
| 0 | 0 | 0 | 0 |
2013-01-16T10:27:00.000
| 3 | 0 | false | 14,356,218 | 0 | 0 | 1 | 1 |
My question is a bit complex and I am new to OpenERP.
I have an external database and an OpenERP one; the external one isn't PostgreSQL.
My job is to synchronize the partners in the two databases.
The external one is the more important. This means that if the external one's data changes, so does OpenERP's, but if OpenERP's data changes, nothing changes on the external one.
I can access the external database, and using XML-RPC I have access to OpenERP's as well.
I can import data from the external database simply with XML-RPC, but the problem is the sync.
I can't just INSERT the modified partner and delete the old one, because I have no way to identify the old one.
I need to UPDATE it. But then I need an ID that says which is which:
an external ID.
To my knowledge OpenERP can handle external IDs.
How does this work? And how can I add an external ID to my res.partner using this?
I was told that I can't create a new module for this alone; I need to use the way the internal ID mechanism works.
|
How to approach updating an database-driven application after release?
| 14,364,804 | 2 | 1 | 120 | 1 |
python,database,migration,sqlalchemy
|
Some thoughts for managing databases for a production application:
Make backups nightly. This is crucial because if you try to do an update (to the data or the schema), and you mess up, you'll need to be able to revert to something more stable.
Create environments. You should have something like a local copy of the database for development, a staging database for other people to see and test before going live and of course a production database that your live system points to.
Make sure all three environments are in sync before you start development locally. This way you can track changes over time.
Start writing scripts and version them for releases. Make sure you store these in a source control system (SVN, Git, etc.) You just want a historical record of what has changed and also a small set of scripts that need to be run with a given release. Just helps you stay organized.
Do your changes to your local database and test it. Make sure you have scripts that do two things, 1) Scripts that modify the data, or the schema, 2) Scripts that undo what you've done in case things go wrong. Test these over and over locally. Run the scripts, test and then rollback. Are things still ok?
Run the scripts on staging and see if everything is still ok. Just another chance to prove your work is good and that if needed you can undo your changes.
Once staging is good and you feel confident, run your scripts on the production database. Remember you have scripts to change data (update, delete statements) and scripts to change schema (add fields, rename fields, add tables).
In general take your time and be very deliberate in your actions. The more disciplined you are the more confident you'll be. Updating the database can be scary, so don't rush things, write out your plan of action, and test, test, test!
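Since the project already uses SQLAlchemy, the paired change/undo scripts from point 5 map naturally onto a migration tool such as Alembic (not mentioned in the answer; shown here only as one possible shape, with the table and column names made up):

    # An Alembic-style migration: one function changes the schema,
    # the other undoes it if the release goes wrong.
    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        op.add_column('assets', sa.Column('status', sa.String(20), nullable=True))

    def downgrade():
        op.drop_column('assets', 'status')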
| 0 | 0 | 0 | 0 |
2013-01-16T17:29:00.000
| 2 | 1.2 | true | 14,364,214 | 0 | 0 | 1 | 1 |
I'm working on a web app that's very heavily database driven. I'm nearing the initial release, so I've locked down the features for this version, but there are going to be lots of other features implemented after release. These features will inevitably require some modification to the database models, so I'm concerned about the complexity of migrating the database on each release. What I'd like to know is how much I should concern myself with locking down a solid database design now so that I can release quickly, versus trying to anticipate certain features now so that I can build them into the database before release. I'm also anticipating finding flaws in my current model and would probably then want to make changes to it, but if I release the app and then data starts coming in, migrating the data would be a difficult task, I imagine. Are there conventional methods to tackle this type of problem? A point in the right direction would be very useful.
For a bit of background I'm developing an asset management system for a CG production pipeline. So lots of pieces of data with lots of connections between them. It's web-based, written entirely in Python and it uses SQLAlchemy with a SQLite engine.
|
Differentiating between debug and production environments in WSGI app
| 14,367,674 | 2 | 5 | 213 | 0 |
python,web-applications,wsgi,dev-to-production
|
If your setup allows it, consider using environment variables. Your production servers could have one value for an environment variable while dev servers have another. Then on execution of your application you can detect the value of the environment variable and set "debug" accordingly.
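A sketch of the detection (the variable name APP_ENV is an assumption):

    import os

    # default to production behaviour unless the environment says otherwise
    DEBUG = os.environ.get('APP_ENV', 'production') != 'production'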
| 0 | 0 | 0 | 0 |
2013-01-16T20:30:00.000
| 2 | 1.2 | true | 14,367,237 | 0 | 0 | 1 | 2 |
We need to load different configuration options in our Python WSGI application, according to production or debugging environments (particularly, some server configuration information relating to the task server to which the application needs to post jobs). The way we have implemented it so far is to have a global debug variable that gets set in our deployment script - which does modify the deployment setup correctly. However, when we execute the application, we need to set the debug variable to True - since its default value is False.
So far, it is hard to correctly determine the way the debug variable works, because it is set at deploy time, not execute time. We can set it before calling the serve_forever method of our debug WSGI server, but I am not aware of the implications of this and how good that solution is.
What is the usual pattern for differentiating between debug and production environments in WSGI applications? If I need to pass it in system arguments, or if there is any other, different way, please let me know. Thanks very much!
|
Differentiating between debug and production environments in WSGI app
| 14,589,970 | 1 | 5 | 213 | 0 |
python,web-applications,wsgi,dev-to-production
|
I don't like using environment variables. Try to make it work with your application configuration, which can be overridden by:
importing non-versioned files (in the dev environment), wrapped in try-except with proper log notification (see the sketch after this list);
command line arguments (argparse from the standard library).
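The first bullet might look like this (the module name is made up):

    try:
        from local_settings import *  # file kept out of version control
    except ImportError:
        pass  # fall back to the defaults above; optionally log this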
| 0 | 0 | 0 | 0 |
2013-01-16T20:30:00.000
| 2 | 0.099668 | false | 14,367,237 | 0 | 0 | 1 | 2 |
We need to load different configuration options in our Python WSGI application, according to production or debugging environments (particularly, some server configuration information relating to the task server to which the application needs to post jobs). The way we have implemented it so far is to have a global debug variable that gets set in our deployment script - which does modify the deployment setup correctly. However, when we execute the application, we need to set the debug variable to True - since its default value is False.
So far, it is hard to correctly determine the way the debug variable works, because it is set at deploy time, not execute time. We can set it before calling the serve_forever method of our debug WSGI server, but I am not aware of the implications of this and how good that solution is.
What is the usual pattern for differentiating between debug and production environments in WSGI applications? If I need to pass it in system arguments, or if there is any other, different way, please let me know. Thanks very much!
|
Django: How to query objects immediately after they have been saved from a separate view?
| 14,367,745 | 0 | 0 | 100 | 0 |
python,django,django-models,django-forms,django-views
|
What other fields are there in the model?
You could add an upload_id, which is set automatically after uploading; send it to the next view and query for it in the DB.
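A hedged Django sketch of that idea; the Image model, the 'images' form field and the URL name are assumptions, not part of the question:

    import uuid
    from django.shortcuts import redirect, render

    def upload(request):
        batch_id = uuid.uuid4().hex
        for f in request.FILES.getlist('images'):  # hypothetical field name
            Image.objects.create(file=f, upload_id=batch_id)  # hypothetical model
        return redirect('show_batch', upload_id=batch_id)

    def show_batch(request, upload_id):
        images = Image.objects.filter(upload_id=upload_id)
        return render(request, 'batch.html', {'images': images})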
| 0 | 0 | 0 | 0 |
2013-01-16T20:55:00.000
| 2 | 0 | false | 14,367,670 | 0 | 0 | 1 | 1 |
Suppose I want to query and display all the images that the user just uploaded through a form on the previous page (multiple images are uploaded at once, each is made into a separate object in the db).
What's the best way to do this?
Since the view for uploading the images is different from the view for displaying the images, how does the second view know which images were part of that upload? I thought about creating and saving the image objects in the first view, gathering the pks, and passing them to the second view, but I understand that it is bad practice. So how should I make sure the second view knows which primary keys to query for?
|
Custom Django Database Frontend
| 14,371,043 | 1 | 3 | 2,045 | 1 |
python,database,django,frontend
|
Exporting the Excel sheet into Django and having it rendered as text fields is not a simple two-step process;
you need to know how Django works.
First you need to export the data into a MySQL database, using either some language or some ready-made tool.
Then you need to make a Model for that table, and then you can use the Django admin to edit the records.
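A minimal sketch of that second step, assuming a table of assets with a couple of text columns (all names here are made up):

    # models.py
    from django.db import models

    class Asset(models.Model):
        name = models.CharField(max_length=100)
        status = models.CharField(max_length=20)

    # admin.py
    from django.contrib import admin
    from .models import Asset

    admin.site.register(Asset)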
| 0 | 0 | 0 | 0 |
2013-01-17T01:00:00.000
| 2 | 0.099668 | false | 14,370,576 | 0 | 0 | 1 | 1 |
I have been trying to get my head around Django over the last week or two. Its slowly starting to make some sense and I am really liking it.
My goal is to replace a fairly messy excel spreadsheet with a database and frontend for my users. This would involve pulling the data out of a table, presenting it in a web tabular format, and allowing changes to be made through text fields and drop down menus, with a simple update button that will update all changes to the DB.
My question is, will the built in Django Forms functionality be the best solution? Or would I create some sort of for loop for my objects and wrap them around html form syntax in my template? I'm just not too sure how to approach the solution.
Apologies if this seems like a simple question; I just feel like there are maybe a few ways to do it, but perhaps there is one perfect way.
Thanks
|
Google App Engine Instances keep quickly shutting down
| 14,377,741 | 4 | 8 | 2,656 | 0 |
python,google-app-engine,python-2.7
|
My solution to this was to increase the Pending Latency time.
If a webpage fires 3 ajax requests at once, AppEngine was launching new instances for the additional requests. After configuring the Minimum Pending Latency time - setting it to 2.5 secs, the same instance was processing all three requests and throughput was acceptable.
My project still has little load/traffic... so in addition to raising the Pending Latency, I opened an account at Pingdom and configured it to ping my App Engine project every minute.
The combination of both, makes that I have one instance that stays alive and is serving up all requests most of the time. It will scale to new instances when really necessary.
| 0 | 1 | 0 | 0 |
2013-01-17T03:47:00.000
| 3 | 1.2 | true | 14,371,920 | 0 | 0 | 1 | 3 |
So I've been using app engine for quite some time now with no issues. I'm aware that if the app hasn't been hit by a visitor for a while then the instance will shut down, and the first visitor to hit the site will have a few second delay while a new instance fires up.
However, recently it seems that the instances only stay alive for a very short period of time (sometimes less than a minute), and if I have 1 instance already up and running and I refresh an app webpage, it still fires up another instance (and the page it serves is minimal homepage HTML, which shouldn't require much CPU/memory). Looking at my logs, it's constantly starting up new instances, which was never the case previously.
Any tips on what I should be looking at, or any ideas of why this is happening?
Also, I'm using Python 2.7, threadsafe, python_precompiled, warmup inbound services, NDB.
Update:
So I changed my app to have at least 1 idle instance, hoping that this would solve the problem, but it is still firing up new instances even though one resident instance is already running. So when there is just the 1 resident instance (and I'm not getting any traffic except me), and I go to another page on my app, it is still starting up a new instance.
Additionally, I changed the Pending Latency to 1.5s as koma pointed out, but that doesn't seem to be helping.
The memory usage of the instances is always around 53MB, which is surprising when the pages being called aren't doing much. I'm using the F1 Frontend Instance Class and that has a limit of 128, but either way 53MB seems high for what it should be doing. Is that an acceptable size when it first starts up?
Update 2: I just noticed in the dashboard that in the last 14 hours, the request /_ah/warmup responded with 24 404 errors. Could this be related? Why would they be responding with a 404 response status?
Main question: Why would it constantly be starting up new instances (even with no traffic)? Especially where there are already existing instances, and why do they shut down so quickly?
|
Google App Engine Instances keep quickly shutting down
| 14,416,358 | 1 | 8 | 2,656 | 0 |
python,google-app-engine,python-2.7
|
1 idle instance means that app-engine will always fire up an extra instance for the next user that comes along - that's why you are seeing an extra instance fired up with that setting.
If you remove the idle instance setting (or use the default) and just increase pending latency it should "wait" before firing the extra instance.
With regards to the main question I think @koma might be onto something in saying that with default settings app-engine will tend to fire extra instances even if the requests are coming from the same session.
In my experience app-engine is great under heavy traffic but difficult (and sometimes frustrating) to work with under low traffic conditions. In particular it is very difficult to figure out the nuances of what the criteria for firing up new instances actually are.
Personally, I have a "wake-up" cron-job to bring up an instance every couple of minutes to make sure that if someone comes on the site an instance is ready to serve. This is not ideal because it will eat at my quote, but it works most of the time because traffic on my app is reasonably high.
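For reference, a minimal sketch of such a wake-up handler in webapp2 (the route is an assumption; point a cron.yaml entry at it, e.g. every 2 minutes):

    import webapp2

    class KeepAlive(webapp2.RequestHandler):
        def get(self):
            # the cron hit itself is enough to keep the instance warm
            self.response.write('ok')

    app = webapp2.WSGIApplication([('/keepalive', KeepAlive)])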
| 0 | 1 | 0 | 0 |
2013-01-17T03:47:00.000
| 3 | 0.066568 | false | 14,371,920 | 0 | 0 | 1 | 3 |
So I've been using app engine for quite some time now with no issues. I'm aware that if the app hasn't been hit by a visitor for a while then the instance will shut down, and the first visitor to hit the site will have a few second delay while a new instance fires up.
However, recently it seems that the instances only stay alive for a very short period of time (sometimes less than a minute), and if I have 1 instance already up and running and I refresh an app webpage, it still fires up another instance (and the page it serves is minimal homepage HTML, which shouldn't require much CPU/memory). Looking at my logs, it's constantly starting up new instances, which was never the case previously.
Any tips on what I should be looking at, or any ideas of why this is happening?
Also, I'm using Python 2.7, threadsafe, python_precompiled, warmup inbound services, NDB.
Update:
So I changed my app to have at least 1 idle instance, hoping that this would solve the problem, but it is still firing up new instances even though one resident instance is already running. So when there is just the 1 resident instance (and I'm not getting any traffic except me), and I go to another page on my app, it is still starting up a new instance.
Additionally, I changed the Pending Latency to 1.5s as koma pointed out, but that doesn't seem to be helping.
The memory usage of the instances is always around 53MB, which is surprising when the pages being called aren't doing much. I'm using the F1 Frontend Instance Class and that has a limit of 128, but either way 53MB seems high for what it should be doing. Is that an acceptable size when it first starts up?
Update 2: I just noticed in the dashboard that in the last 14 hours, the request /_ah/warmup responded with 24 404 errors. Could this be related? Why would they be responding with a 404 response status?
Main question: Why would it constantly be starting up new instances (even with no traffic)? Especially where there are already existing instances, and why do they shut down so quickly?
|
Google App Engine Instances keep quickly shutting down
| 14,742,365 | 0 | 8 | 2,656 | 0 |
python,google-app-engine,python-2.7
|
I only started having this type of issue on Monday February 4 around 10 pm EST, and is continuing until now. I first started noticing then that instances kept firing up and shutting down, and latency increased dramatically. It seemed that the instance scheduler was turning off idle instances too rapidly, and causing subsequent thrashing.
I set minimum idle instances to 1 to stabilize latency, which worked. However, there is still thrashing of new instances. I tried the recommendations in this thread to only set minimum pending latency, but that does not help. Ultimately, idle instances are being turned off too quickly. Then when they're needed, the latency shoots up while trying to fire up new instances.
I'm not sure why you saw this a couple weeks ago, and it only started for me a couple days ago. Maybe they phased in their new instance scheduler to customers gradually? Are you not still seeing instances shutting down quickly?
| 0 | 1 | 0 | 0 |
2013-01-17T03:47:00.000
| 3 | 0 | false | 14,371,920 | 0 | 0 | 1 | 3 |
So I've been using app engine for quite some time now with no issues. I'm aware that if the app hasn't been hit by a visitor for a while then the instance will shut down, and the first visitor to hit the site will have a few second delay while a new instance fires up.
However, recently it seems that the instances only stay alive for a very short period of time (sometimes less than a minute), and if I have 1 instance already up and running and I refresh an app webpage, it still fires up another instance (and the page it serves is minimal homepage HTML, which shouldn't require much CPU/memory). Looking at my logs, it's constantly starting up new instances, which was never the case previously.
Any tips on what I should be looking at, or any ideas of why this is happening?
Also, I'm using Python 2.7, threadsafe, python_precompiled, warmup inbound services, NDB.
Update:
So I changed my app to have at least 1 idle instance, hoping that this would solve the problem, but it is still firing up new instances even though one resident instance is already running. So when there is just the 1 resident instance (and I'm not getting any traffic except me), and I go to another page on my app, it is still starting up a new instance.
Additionally, I changed the Pending Latency to 1.5s as koma pointed out, but that doesn't seem to be helping.
The memory usage of the instances is always around 53MB, which is surprising when the pages being called aren't doing much. I'm using the F1 Frontend Instance Class and that has a limit of 128, but either way 53MB seems high for what it should be doing. Is that an acceptable size when it first starts up?
Update 2: I just noticed in the dashboard that in the last 14 hours, the request /_ah/warmup responded with 24 404 errors. Could this be related? Why would they be responding with a 404 response status?
Main question: Why would it constantly be starting up new instances (even with no traffic)? Especially where there are already existing instances, and why do they shut down so quickly?
|
WSGI apps with python 2 and python 3 on the same server?
| 14,375,870 | -1 | 5 | 1,947 | 0 |
python,apache,wsgi
|
It's quite possible. This is what virtualenv is all about. Set up the second app in a virtualenv with Python 3.
You can then add it to a virtual host configuration in Apache.
| 0 | 1 | 0 | 0 |
2013-01-17T09:09:00.000
| 2 | -0.099668 | false | 14,375,520 | 0 | 0 | 1 | 1 |
I already have a web application in written in Python 2 that runs over WSGI (specifically, OpenERP web server).
I would like to write a new web application that would run on the same server (Apache 2 on Ubuntu), but using WSGI and Python 3. The two applications would be on different ports.
Is that possible?
|
Jinja2 substring integer
| 14,386,224 | 2 | 0 | 6,415 | 0 |
python,substring,jinja2
|
Or you can create a filter for phonenumbers like {{ phone_number|phone }}
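A minimal sketch of such a filter, assuming 10-digit US-style numbers like the one in the question:

    from jinja2 import Environment

    def phone(value):
        # keep only the digits, then format as (XXX) XXX-XXXX
        digits = ''.join(ch for ch in str(value) if ch.isdigit())
        if len(digits) == 10:
            return '(%s) %s-%s' % (digits[:3], digits[3:6], digits[6:])
        return value  # leave unexpected formats untouched

    env = Environment()
    env.filters['phone'] = phone
    print(env.from_string('{{ number|phone }}').render(number='907333-5000'))
    # -> (907) 333-5000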
| 0 | 0 | 0 | 0 |
2013-01-17T10:30:00.000
| 2 | 0.197375 | false | 14,377,002 | 0 | 0 | 1 | 1 |
I have a question that I have tried to solve in different ways with Jinja2. I have a number that is saved in the database. When I print it, the original number is, for example: 907333-5000. I want that number to be printed in this format: (907) 333-5000, but I don't know exactly how to do that with Jinja2. Thank you
|
Python 3 - SQL Result - where to store it
| 14,377,893 | 0 | 1 | 105 | 1 |
python,database,multithreading
|
This would be a good use for something like memcached.
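A hedged sketch with the python-memcached client; the job-id keys and the execute_sql helper are assumptions:

    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])

    def run_query(job_id, sql):
        # runs in the worker thread
        rows = execute_sql(sql)  # hypothetical DB call
        mc.set('result-%s' % job_id, rows, time=600)

    def poll(job_id):
        # called by the ajax endpoint; None until the query finishes
        return mc.get('result-%s' % job_id)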
| 0 | 0 | 0 | 0 |
2013-01-17T10:43:00.000
| 1 | 1.2 | true | 14,377,250 | 0 | 0 | 1 | 1 |
I'm writing a webapp in Bottle.
I have a small interface that lets user run sql statements.
Sometimes it takes about 5 seconds until the user get's a result because the DB is quite big and old.
What I want to do is the following:
1. Start the query in a thread
2. Give the user a response right away and have AJAX poll for the result
There is one thing that I'm not sure of: where do I store the result of the query?
Should I store it in a DB ?
Should I store it in a variable inside my webapp ?
What do you guys think would be best ?
|
Python framework to create pure backend project
| 14,391,975 | 5 | 0 | 2,258 | 0 |
python,django,pylons,web-frameworks
|
Any of them will work. Arguably the most popular Python web frameworks these days are Django, Flask, and Pyramid.
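To give a feel for the REST part, a minimal Flask sketch (the route and payload are made up):

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route('/api/status')
    def status():
        return jsonify(ok=True)

    if __name__ == '__main__':
        app.run()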
| 0 | 0 | 0 | 0 |
2013-01-17T13:48:00.000
| 4 | 1.2 | true | 14,380,519 | 0 | 0 | 1 | 1 |
Some details about the project:
pure backend project, no front
expose a rest api (maybe custom routes?)
connect to other rest apis
query MySQL & MongoDB using an ORM
have unit tests
What Python framework would you recommend for it?
|
Pydev: How to import a gae project to eclipse Pydev gae project?
| 14,387,118 | 0 | 1 | 712 | 0 |
python,google-app-engine,pydev
|
If you want to use Eclipse's Import feature, go with General -> File system.
| 0 | 1 | 0 | 0 |
2013-01-17T15:58:00.000
| 3 | 0 | false | 14,383,025 | 0 | 0 | 1 | 2 |
Created a GAE project with the Google App Engine Launcher and have been building it with TextMate.
Now, I'd like to import it to the Eclipse PyDev GAE project. Tried to import it, but it doesn't work.
Anyone know how to do that?
Thanks in advance.
|
Pydev: How to import a gae project to eclipse Pydev gae project?
| 14,383,720 | 2 | 1 | 712 | 0 |
python,google-app-engine,pydev
|
You could try not using the eclipse import feature. Within Eclipse, create a new PyDev GAE project, and then you can copy in your existing files.
| 0 | 1 | 0 | 0 |
2013-01-17T15:58:00.000
| 3 | 1.2 | true | 14,383,025 | 0 | 0 | 1 | 2 |
Created a GAE project with the Google App Engine Launcher and have been building it with TextMate.
Now, I'd like to import it to the Eclipse PyDev GAE project. Tried to import it, but it doesn't work.
Anyone know how to do that?
Thanks in advance.
|
Interacting with a flash application using python
| 14,387,317 | 1 | 0 | 1,778 | 0 |
python,flash,testing,automated-tests,jython
|
There are a number of ways I've seen around the net, but most seem to involve exposing Flash through JS and then using the JS interface, which is a bit of a problem if you are trying to test things that you don't have dev access to, or need to be in a prod-like state for your tests. Of course, even if you do that, you aren't really simulating user interaction, since you are working through an API.
If you can reliably model your Flash components with fixed pixel positions relative to the page element the Flash component is running in, you should be able to use Selenium Webdriver to position the mouse cursor and send click commands without actually cracking Flash itself. I'm not 100% sure that would work, but it seems at least worth a shot. Validation will be a bit trickier, but I think you should be able to do it with some form of image comparison. A few of the Flash automators I saw are actually using image processing under the hood to control both input and output, so it seems like a legitimate way to interact with it.
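A hedged Selenium (Python) sketch of that offset-click idea; the URL, element id and offsets are assumptions:

    from selenium import webdriver
    from selenium.webdriver.common.action_chains import ActionChains

    driver = webdriver.Firefox()
    driver.get('http://example.com/ad-page')              # hypothetical URL
    flash = driver.find_element_by_id('flash-container')  # hypothetical id
    # click 40px right and 60px down from the element's top-left corner
    ActionChains(driver).move_to_element_with_offset(flash, 40, 60).click().perform()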
| 0 | 0 | 0 | 0 |
2013-01-17T19:53:00.000
| 1 | 0.197375 | false | 14,387,018 | 0 | 0 | 1 | 1 |
I'm contemplating using python for some functional testing of flash ad-units for work. Currently, we have an ad (in flash) that has N locations (can be defined as x,y) that need to be 'clicked'. I'd like to use python, but I know Java will do this.
I also considered Jython + Sikuli, but wanted to know if there is a python only library or tool to do this. I'd prefer to not run Jython + Sikuli if there is a native python option.
TIA.
@user1929959 From the pyswftools page, "At the moment, the library can be used in Python applications (including WebBased applications) to generate Flash animations on the fly.". And from the bottle-flash page, "This plugin enables flash messages in bottle.". Neither helps me, unless I'm overlooking something ...
|
How to delete or reset a search index in Appengine
| 14,390,379 | 7 | 5 | 2,747 | 0 |
python,google-app-engine,full-text-search
|
If you empty out your index and call index.delete_schema() (index.deleteSchema() in Java) it will clear the mappings that we have from field name to type, and you can index your new documents as expected. Thanks!
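A minimal sketch of that sequence in Python (the index name is an assumption):

    from google.appengine.api import search

    index = search.Index(name='myindex')

    # empty the index in batches, then drop the schema
    while True:
        ids = [d.doc_id for d in index.get_range(ids_only=True)]
        if not ids:
            break
        index.delete(ids)
    index.delete_schema()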
| 0 | 1 | 0 | 0 |
2013-01-17T21:14:00.000
| 1 | 1.2 | true | 14,388,251 | 0 | 0 | 1 | 1 |
The Situation
Alright, so we have our app in appengine with full text search activated. We had an index set on a document with a field named 'date'. This field is a DateField and now we changed the model of the document so the field 'date' is now a NumericField.
The problem is, on the production server, even after I cleared all the documents from the index, the server responds with this type of error: Failed to parse search request ""; SortSpec numeric default value does not match expression type 'TEXT' in 'date'
The Solution
The problem, I think, is that the model on the server doesn't fit the model of the search query. So basically, one way to fix it would be to delete the whole index, but I don't know how to do that on the production server.
The dev server works flawlessly.
|
Why request.get_full_path is not available in the templates?
| 14,392,480 | 4 | 2 | 2,821 | 0 |
python,django
|
Try {{ request.get_full_path }} in your template.
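If that renders empty, make sure the request context processor is enabled and that the view passes a RequestContext; a minimal Django 1.x-era sketch:

    # settings.py
    TEMPLATE_CONTEXT_PROCESSORS = (
        'django.contrib.auth.context_processors.auth',
        'django.core.context_processors.request',  # exposes `request` to templates
    )

    # views.py
    from django.template import RequestContext
    from django.shortcuts import render_to_response

    def my_view(request):
        return render_to_response('page.html', {},
                                  context_instance=RequestContext(request))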
| 0 | 0 | 0 | 0 |
2013-01-18T04:30:00.000
| 2 | 0.379949 | false | 14,392,423 | 0 | 0 | 1 | 1 |
I am currently passing 'request_path': request.get_full_path() through render_to_response. I was wondering if this is really necessary, since I was told it's unnecessary and the context processor takes care of that, but {{ get_full_path }} is empty.
Please advise.
|
changing GAE db.Model schema on dev_server with ListProperties? BadValueError
| 14,401,694 | 0 | 3 | 94 | 0 |
python,google-app-engine,google-cloud-datastore
|
Here's my workaround to get it working on the dev_server:
1) update your model in production and deploy it
2) use appcfg.py download_data and grab all entities of the type you've updated
3) use appcfg.py upload_data and push all the entities into your local datastore
Voilà: your local datastore entities can now be retrieved without generating a BadValueError.
| 0 | 1 | 0 | 0 |
2013-01-18T13:11:00.000
| 1 | 1.2 | true | 14,399,722 | 0 | 0 | 1 | 1 |
My understanding of changing db.Model schemas is that it 'doesn't matter' if you add a property and then try and fetch old entities without that property.
Indeed, adding the following property to my SiteUser db.Model running on dev_server:
category_subscriptions = db.StringProperty()
Still allows me to retrieve an old SiteUser entity that doesn't have this property (via a GQL query).
However, changing the property to a list property (either StringListProperty or ListProperty):
category_subscriptions = db.StringListProperty()
results in the following error when I try and retrieve the user:
BadValueError: Property category_subscriptions is required
This is on the SDK dev server version 1.7.4. Why is that and how would I work around it?
|
Django Gunicorn Long Polling
| 14,403,008 | 2 | 0 | 1,026 | 0 |
python,django,event-handling,gunicorn
|
Gunicorn by default will spawn regular synchronous WSGI processes. You can however tell it to spawn processes that use gevent, eventlet or tornado instead. I am only familiar with gevent which can certainly be used instead of Node.js for long polling.
The memory footprint per process is about the same for mod_wsgi and gunicorn (in my limited experience), but you get more bells-and-whistles with gunicorn. If you change the default worker class to gevent (or eventlet or tornado) you also get a LOT more performance out of each process.
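For illustration, a minimal Gunicorn config sketch using gevent workers (values are assumptions; run with gunicorn -c gunicorn_conf.py myproject.wsgi:application):

    # gunicorn_conf.py
    bind = '127.0.0.1:8000'
    workers = 4               # number of worker processes
    worker_class = 'gevent'   # async workers, suitable for long-polling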
| 0 | 1 | 0 | 0 |
2013-01-18T13:28:00.000
| 1 | 1.2 | true | 14,399,958 | 0 | 0 | 1 | 1 |
Is using Django with Gunicorn considered a replacement for using evented/async servers like Tornado, Node.js, and similar? Additionally, will that be helpful in handling long-polling/comet services?
Finally, is Gunicorn only replacing the memory-consuming Apache threads (in the case of Apache/mod_wsgi) with lightweight threads, or are there additional benefits?
|
Accessing a Java RMI API from Python program
| 14,403,624 | 2 | 5 | 5,222 | 0 |
java,python,rmi
|
You're going to have a very hard time, I would imagine. RMI and Java serialization are very Java-specific. I don't know if anyone has already attempted to implement this in Python (I'm sure Google knows), but your best bet would be to find an existing library.
That aside, I would look at finding a way to do the RMI in some client-side Java shim (maybe some sort of Python<->Java bridge library?). Or maybe you could run your Python in Jython and leverage the underlying JVM to handle the RMI stuff.
| 0 | 0 | 0 | 1 |
2013-01-18T16:41:00.000
| 2 | 0.197375 | false | 14,403,472 | 0 | 0 | 1 | 2 |
I have a Python program that needs to access a Java RMI API from a third party system in order to fetch some data.
I have no control over the third party system so it MUST be done using RMI.
What should be my approach here? I have never worked with RMI using Python so I'm kind of lost as to what I should do..
Thanks in advance!
|
Accessing a Java RMI API from Python program
| 14,403,646 | 2 | 5 | 5,222 | 0 |
java,python,rmi
|
How about a little Java middleware piece that you can talk to via REST, which in turn can talk to the remote API?
| 0 | 0 | 0 | 1 |
2013-01-18T16:41:00.000
| 2 | 1.2 | true | 14,403,472 | 0 | 0 | 1 | 2 |
I have a Python program that needs to access a Java RMI API from a third party system in order to fetch some data.
I have no control over the third party system so it MUST be done using RMI.
What should be my approach here? I have never worked with RMI using Python so I'm kind of lost as to what I should do..
Thanks in advance!
|
Is Brython entirely client-side?
| 14,418,885 | 5 | 17 | 5,868 | 0 |
python,brython
|
Brython itself seems to be completely client side, but whether that will be enough really depends on the code you wrote. It is not a full blown Python interpreter and does not have the libraries. You might want a backend to support it or use another client side solution as suggested in the comments.
Given how few real web hosters support Python, I think it is very unlikely that Dropbox would be suitable for this, in case you do need processing on the server as well.
| 0 | 0 | 0 | 0 |
2013-01-19T20:57:00.000
| 4 | 1.2 | true | 14,418,774 | 0 | 0 | 1 | 2 |
I have a piece of code written in Python. I would like to put that code in a webpage. Brython seems like the simplest way to glue the two things together, but I don't have a server that can actually run code on the server side.
Does Brython require server-side code, or can I host a page using it on the cheap with (say) Dropbox?
|
Is Brython entirely client-side?
| 17,833,373 | 2 | 17 | 5,868 | 0 |
python,brython
|
Brython doesn't always work with Python code, I've learned.
Something I think needs to be clarified is that while you can run Brython in a very limited capacity by accessing the files locally, you can't import libraries (because of the AJAX requirement) - not even the most basic (e.g., html, time). You really need a basic web server in order to run Brython.
I've found it's good for basic scripts, since my Python is better than my JS. It seems to break with more complicated syntax, though.
| 0 | 0 | 0 | 0 |
2013-01-19T20:57:00.000
| 4 | 0.099668 | false | 14,418,774 | 0 | 0 | 1 | 2 |
I have a piece of code written in Python. I would like to put that code in a webpage. Brython seems like the simplest way to glue the two things together, but I don't have a server that can actually run code on the server side.
Does Brython require server-side code, or can I host a page using it on the cheap with (say) Dropbox?
|
Initializing a class at Django startup and then referring to it in views
| 14,419,559 | 2 | 2 | 771 | 0 |
python,django
|
Try to use the singleton design pattern.
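A minimal sketch of that pattern (all names are made up); note that since Python modules are imported only once per process, module-level state already behaves like a per-process singleton:

    class Startup(object):
        _instance = None

        @classmethod
        def instance(cls):
            if cls._instance is None:
                cls._instance = cls()  # do the expensive init (e.g. DB reads) here
            return cls._instance

    # in a view: data = Startup.instance()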
| 0 | 0 | 0 | 0 |
2013-01-19T22:25:00.000
| 2 | 1.2 | true | 14,419,532 | 0 | 0 | 1 | 1 |
I am trying to do some pre-processing at Django startup (I put a startup script that runs once in urls.py) and then use the created instance of an object in my views. How would I go about doing that?
|
Learning Python for web development. Python 2 or 3?
| 14,425,489 | 6 | 4 | 766 | 0 |
web-applications,python-3.x,python-2.x
|
If you are looking for a Python framework for web development, I would highly recommend using Django. Regarding the Python 2 or 3 issue, Python 3 is of course the future. Django will soon be migrated to Python 3. For now you can stick with Django and Python 2.7, but avoid using features of Python 2.7 that were removed in Python 3.
| 0 | 0 | 0 | 0 |
2013-01-20T14:17:00.000
| 1 | 1.2 | true | 14,425,379 | 0 | 0 | 1 | 1 |
I am a quite experienced PHP developer (using OOP features where possible) and currently searching for another language to play with. I've used Ruby, Python and Node and found Python to be the best choice so far (in terms of maturity, ease of use and the learning curve). As I am mostly focusing on web-centric development, Django seems to be the obvious framework of choice. But here is my question: Django is still based on Python 2, as is Flask, but nearly every Python tutorial out there suggests you start learning Python 3. It seems that version 2 is the one you have to start with if you want to use the more popular web frameworks, and that doesn't seem likely to change any time soon. Is this true? Do you know alternatives?
I know there are similar questions like that but they are either outdated or not focused on web development so I guess this is a valid one. Thanks in advance!
|
echoprint - stopping the service Solr, I lose the database
| 14,444,869 | 2 | 0 | 696 | 0 |
python,solr,audio-fingerprinting,echoprint
|
Well, I found my mistake, and it was with the ttserver. Thanks Alexandre for that information. The right way to make it work would be this:
/usr/local/tokyotyrant-1.1.33/bin/ttserver casket.tch
There I indicated the name of the on-disk hash file, which makes the store persistent. Then start Solr normally, and I can ingest and view songs without problems :)
| 0 | 1 | 0 | 0 |
2013-01-20T17:28:00.000
| 1 | 0.379949 | false | 14,427,160 | 0 | 0 | 1 | 1 |
How do I stop the Tokyo Tyrant (ttserver) and Solr services correctly? What I do is restart the PC and then bring the services up again, but when I then try to match a song, I get a message as if the database had been damaged. I wonder what the right way is to shut the services down so that I can run and test songs afterwards without the database being damaged. Greetings and thanks.
Start the ttserver: /usr/local/tokyotyrant-1.1.33/bin/ttserver
cd echoprint-server/solr/solr
java -Dsolr.solr.home=/home/user01/echoprint-server/solr/solr/solr/ -Djava.awt.headless=true -DSTOP.PORT=8079 -DSTOP.KEY=mykey -jar start.jar
Ingest a new song
Stop Solr: java -DSTOP.PORT=8079 -DSTOP.KEY=mykey -jar start.jar --stop
Now, when I start the services again and want to compare a song against some that I have in the database, I get an error:

    Traceback (most recent call last):
      File "lookup.py", line 51, in <module>
        lookup(sys.argv[1])
      File "lookup.py", line 35, in lookup
        result = fp.best_match_for_query(decoded)
      File "../API/fp.py", line 194, in best_match_for_query
        tcodes = get_tyrant().multi_get(trackids)
      File "../API/pytyrant.py", line 296, in multi_get
        raise KeyError("Missing a result, unusable response in 1.1.10")
    KeyError: 'Missing a result, unusable response in 1.1.10'

How should I start and stop the services without losing any information?
|
How to set up a django test server when using gunicorn?
| 20,411,022 | 1 | 11 | 1,308 | 0 |
python,django,selenium,gunicorn
|
Off the top of my head, you can try to override LiveServerTestCase.setUpClass and spin up Gunicorn instead of LiveServerThread.
| 0 | 0 | 0 | 0 |
2013-01-21T02:18:00.000
| 2 | 0.099668 | false | 14,431,639 | 0 | 0 | 1 | 2 |
I am running an app in django with gunicorn. I am trying to use selenium to test my app but have run into a problem.
I need to create a test server like is done with djangos LiveServerTestCase that will work with gunicorn.
Does anyone have any ideas of how i could do this?
note: could also someone confirm me that LiveServerTestCase is executed as a thread not a process
|
How to set up a django test server when using gunicorn?
| 20,448,450 | 2 | 11 | 1,308 | 0 |
python,django,selenium,gunicorn
|
I've read the code. Looking at LiveServerTestCase for inspiration makes sense but trying to cook up something by extending or somehow calling LiveServerTestCase is asking for trouble and increased maintenance costs.
A robust way to run which looks like what LiveServerTestCase does is to create from unittest.TestCase a test case class with custom setUpClass and tearDownClass methods. The setUpClass method:
Sets up an instance of the Django application with settings appropriate for testing: a database in a location that won't interfere with anything else, logs recorded to the appropriate place and, if emails are sent during normal operations, with email settings that won't make your sysadmins want to strangle you, etc.
[In effect, this is a deployment procedure. Since we want to eventually deploy our application, the process above is one which we should develop anyway.]
Load whatever fixtures are necessary into the database.
Starts a Gunicorn instance to run this instance of the Django application, using the usual OS commands for this.
The tearDownClass:
Shuts down the Gunicorn instance, again using normal OS commands.
Deletes the database that was created for testing, deletes whatever log files may have been created, etc.
And between the setup and teardown our tests contact the application on the port assigned to Gunicorn and they load more fixtures if needed, etc.
Why not try to use a modified LiveServerTestCase?
LiveServerTestCase includes the entire test setup in one process: the tests, the WSGI server and the Django application. Gunicorn is not designed for operating like this. For one thing, it uses a master process and worker processes.
If LiveServerTestCase is modified to somehow start the Django app in an external process, then a good deal of the benefits of this class go out the window. LiveServerTestCase relies on the fact that it can just modifies settings or database connections in its process space and that these modifications will carry over to the Django app, because it lives in the same process. If the app is in a different process, these tricks can't work. Once LiveServerTestCase is modified to take care of this, the end result is close to what I've outlined above.
Additional: Could someone get Gunicorn and Django to run in the same process?
I'm sure someone could glue them together, but consider the following. This would certainly mean changing the core code of Gunicorn since Gunicorn is designed to use master and worker processes. Then, this person who created the glue would be responsible for keeping this glue up to date when Gunicorn or Django's internals change in such a way that makes the glue break. At the end of the day doing this requires more work than using the method outlined at the start of this answer.
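To make the outline concrete, a hedged sketch of such a test case; the port, settings module and crude sleep are all assumptions:

    import subprocess
    import time
    import unittest

    class GunicornLiveTest(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            # launch Gunicorn against a test settings module in its own process
            cls.server = subprocess.Popen([
                'gunicorn', '--bind', '127.0.0.1:8081',
                '--env', 'DJANGO_SETTINGS_MODULE=myproject.test_settings',
                'myproject.wsgi:application',
            ])
            time.sleep(2)  # crude wait; poll the port in real code

        @classmethod
        def tearDownClass(cls):
            cls.server.terminate()
            cls.server.wait()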
| 0 | 0 | 0 | 0 |
2013-01-21T02:18:00.000
| 2 | 0.197375 | false | 14,431,639 | 0 | 0 | 1 | 2 |
I am running an app in django with gunicorn. I am trying to use selenium to test my app but have run into a problem.
I need to create a test server like is done with djangos LiveServerTestCase that will work with gunicorn.
Does anyone have any ideas of how i could do this?
note: could also someone confirm me that LiveServerTestCase is executed as a thread not a process
|
How can I check if a checkbox is checked in Selenium Python WebDriver?
| 18,076,075 | 6 | 40 | 62,290 | 0 |
python,selenium-webdriver
|
I'm using driver.find_element_by_name("< check_box_name >").is_selected()
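Expanded into a runnable sketch (the URL and element name are assumptions):

    from selenium import webdriver

    driver = webdriver.Firefox()
    driver.get('http://example.com/form')  # hypothetical page

    checkbox = driver.find_element_by_name('my_checkbox')  # hypothetical name
    if not checkbox.is_selected():
        checkbox.click()
    assert checkbox.is_selected()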
| 0 | 0 | 1 | 0 |
2013-01-21T16:09:00.000
| 6 | 1 | false | 14,442,636 | 0 | 0 | 1 | 1 |
I've been searching for a week for how to check if a checkbox is checked in Selenium WebDriver with Python, but I've found only Java examples. I've read the WebDriver docs and they don't have an answer for that.
Anyone have a solution?
|
Communicate between two servers - Amazon SQS / SNS?
| 14,500,253 | 1 | 2 | 479 | 0 |
python,linux,amazon-web-services,debian
|
Assuming Server #1 is running a script through cron, then there should be no reason you can't just use ssh to remotely change Server #2. I believe if you use the elastic-ip addresses it might not count as bandwidth usage.
Barring that, I'd use SNS. The model would instead be something like:
Server #1 notifies Server #2 (script starting)
Server #1 starts running script
(optional) Server #1 notifies Server #2 of progress
Server #1 notifies Server #2 (script complete), starting Server #2's scripts
Server #2 notifies Server #1 when it's complete
In this case you'd set up some sort of simple webserver to accept the notifications. Simple CGI scripts would cut it, though they aren't the most secure option.
I'd only bring SQS into the picture if too many scripts were trying to run at once. If you are chunking it as "all of Server #1, then Server #2", it's a level you don't really need.
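For the notify steps, a minimal boto SNS sketch (the region, topic ARN and message are assumptions):

    import boto.sns

    conn = boto.sns.connect_to_region('us-east-1')
    topic_arn = 'arn:aws:sns:us-east-1:123456789012:script-events'  # hypothetical
    conn.publish(topic=topic_arn, message='script complete', subject='server1')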
| 0 | 0 | 1 | 0 |
2013-01-24T11:09:00.000
| 2 | 0.099668 | false | 14,499,893 | 0 | 0 | 1 | 1 |
I have two servers. Basically Server #1 runs a script throughout the day and when its done I need to send a notification to Server #2 to get it to run some other scripts.
I am currently leaning on Amazon AWS tools and using Python so I was wondering if someone could recommend a simple, secure and easy to program way of:
Setting up a flag on Server #1 when it is finished running its script
Polling this flag from Server #2
Run scripts on Server #2 when the flag is active
Remove the flag from Server #1 when the scripts have finished running on Server #2
Should I be using Amazon SNS or SQS? Alternatively, are these both a poor choice, and if so can you recommend anything better? I am leaning towards AWS tools because I already have boto installed and I like the ease of use.
|
Make a Python app package/install for Mac
| 14,504,112 | 0 | 1 | 1,071 | 0 |
python,macos,installation,py2app,dmg
|
I believe what you are looking for is to add #!/usr/bin/python as the first line of your code; that will allow your friend to just double-click on the file and it should open. Just as a warning: OS X does not tell us which version of Python, or which standard libraries, are going to be present.
Also, if they have played around with their settings too much and double-clicking the file does not work, they will have to choose to open the file with "Terminal.app" in the Utilities folder (/Applications/Utilities/Terminal.app).
The other idea is to borrow a Mac and compile it with the py2app program that you already mentioned. Otherwise, there is no generic binary file that you can compile on Windows and have run on a Mac.
| 0 | 0 | 0 | 0 |
2013-01-24T11:24:00.000
| 1 | 1.2 | true | 14,500,185 | 1 | 0 | 1 | 1 |
I have developed an application for a friend. The application is not that complex, involving only two .py files, main.py and main_web.py, main being the application code and _web being the web interface for it. As the web part was done later, it's kept in this format; I know it could be done as one app, but to avoid complicating it too much, I kept it that way. The two communicate through some files, and the web part uses Flask, so there's a "templates" directory too.
Now, I want to make a package, or somehow make this easier for distribution, on an OS X system. I see that there is a nice py2app thingy, but I am running Windows and I can't really use it, since it won't work on Windows. I also don't know whether py2app will cause problems, since some configs are in text files in the directory, and they change during runtime.
So, I am wondering, is there any other way to make a package of this, some sort of setup like program, or maybe some script or something? Some simple "way" of doing this would be to just copy the files in the directory in the "Documents", and add some shortcuts to the desktop to run those two apps, and that would be it, no need for anything else. DMG would be fine, but not mandatory.
|
How to crawl a web site where page navigation involves dynamic loading
| 14,503,228 | 1 | 3 | 4,658 | 0 |
python,web-crawler
|
If you are using Google Chrome, you can check the URL which is dynamically being called in Network -> Headers of the developer tools, so based on that you can identify whether it is a GET or POST request.
If it is a GET request, you can find the parameters straight away from the URL.
If it is a POST request, you can find the parameters from the Form Data section in Network -> Headers of the developer tools.
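Once you have that URL, you can replay the request yourself; a hedged Python 2 sketch (the URL and the page parameter are assumptions copied from DevTools):

    import urllib
    import urllib2

    data = urllib.urlencode({'page': 2})  # hypothetical POST parameter
    response = urllib2.urlopen('http://example.com/ajax/list', data)  # POST
    print(response.read())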
| 0 | 0 | 1 | 0 |
2013-01-24T13:58:00.000
| 6 | 0.033321 | false | 14,503,078 | 0 | 0 | 1 | 1 |
I want to crawl a website having multiple pages, where the content is dynamically loaded when a page number is clicked. How do I screen-scrape it?
i.e., as the URL is not present in an href or <a> tag, how do I crawl to the other pages?
Would be grateful if someone helped me with this.
PS: The URL remains the same when a different page is clicked.
|
Implementing a Flask blueprint so that it can be safely mounted more than once?
| 14,527,032 | 2 | 3 | 636 | 0 |
python,flask
|
It's hard to answer a question like this, because it is so general.
First, your Blueprint needs to be implemented in a way that makes no assumptions about the state of the app object it will be registered with. Second, you'll want to use a configurable url scheme to prevent route conflicts.
There are far more nuanced components of this, but without seeing your specific code and problem it's about as specific as I feel I can get.
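For the URL-scheme point, a minimal sketch of one blueprint mounted twice under different prefixes (in old Flask the two registrations share endpoint names, so url_for resolves to only one of them; newer Flask versions require a unique name per registration):

    from flask import Flask, Blueprint

    bp = Blueprint('pages', __name__)

    @bp.route('/hello')
    def hello():
        return 'hello'

    app = Flask(__name__)
    app.register_blueprint(bp, url_prefix='/a')  # serves /a/hello
    app.register_blueprint(bp, url_prefix='/b')  # serves /b/hello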
| 0 | 0 | 0 | 0 |
2013-01-24T14:15:00.000
| 1 | 0.379949 | false | 14,503,387 | 0 | 0 | 1 | 1 |
The Flask documentation says :
that you can register blueprints multiple times though not every blueprint might respond properly to that. In fact it depends on how the blueprint is implemented if it can be mounted more than once.
But I can't seem to find out what must be done to mount a blueprint safely more than once.
|
Exception while performing a SOAP request using suds in python
| 14,530,273 | 0 | 0 | 189 | 0 |
soap-client,python-2.6,suds
|
This may not exactly be a suds issue. Could you paste your full code?
Have you validated that the same service call returns the results when you use a tool (SoapUI, maybe) to invoke the service?
2013-01-24T21:00:00.000
| 1 | 0 | false | 14,510,670 | 0 | 0 | 1 | 1 |
I am accessing our Jira system to get some information through SOAP. They updated the Jira system lately and I started seeing some problems. The problem is very well known, but I couldn't wrap my head around the solution that was posted.
File "C:\CRMAPPS\APPS\PYTHON~1\lib\xml\sax\handler.py", line 38, in fatalError
raise exception
xml.sax._exceptions.SAXParseException: :756:14: no element found
Query function –
JQLStr = """ project=%s and ( summary ~ "\"%s"" OR description ~ "\"%s"" ) and key >= %s ORDER BY key ASC"""%(self.proj, buildID, buildID, ind)
issues = self.client.service.getIssuesFromJqlSearch (self.auth , JQLStr , 500)
It used to work properly with a quick and dirty fix I had, changing the 500 results to 1.
issues = self.client.service.getIssuesFromJqlSearch (self.auth , JQLStr , 1)
But lately it's not returning me the proper number of tickets.
I wanted to know if I need to update suds/Python, or complain to the IT people responsible for keeping Jira up and running.
I am responsible for more than 10 reporting scripts which are currently failing or crashing due to the above two reasons. Using a different SOAP client is not something I can afford with such a tight schedule.
Django : Call a method only once when the django starts up
| 14,516,949 | 1 | 6 | 8,360 | 0 |
python,django,django-views,django-database
|
There are some cheats for this. The general solution is to include the initialization code in some special place, so that when the server starts up, it will run those files and also run the code.
Have you ever tried to put print 'haha' in the settings.py file :) ?
Note: be aware that settings.py runs twice during start-up.
| 0 | 0 | 0 | 0 |
2013-01-25T06:45:00.000
| 2 | 0.099668 | false | 14,516,737 | 0 | 0 | 1 | 1 |
I want to initialize some variables (from the database) when Django starts.
I am able to get the data from the database, but the problem is how I should call the initialize method, and it should only be called once.
Tried looking in other pages, but couldn't find an answer to it.
The code currently looks something like this ::
def get_latest_dbx(request, ....):
#get the data from database
def get_latest_x(request):
get_latest_dbx(request,x,...)
def startup(request):
get_latest_x(request)
|
Python 2.7 or 3.3 for learning Django
| 14,519,663 | 7 | 8 | 6,488 | 0 |
python,django,python-3.x,python-2.7
|
Django only has experimental support for Python 3, so you'll have to go with Python 2.7 for now.
| 0 | 0 | 0 | 0 |
2013-01-25T10:13:00.000
| 4 | 1.2 | true | 14,519,625 | 1 | 0 | 1 | 2 |
I am interested in learning Python but I don't know which version I should choose. When I Googled, I got answers posted over a year ago. If I want to learn Django, which version will be useful and will get support?
Note that I know C, C++, Java and C#.
|
Python 2.7 or 3.3 for learning Django
| 14,519,922 | 0 | 8 | 6,488 | 0 |
python,django,python-3.x,python-2.7
|
If you already know several languages, then learn both Python 2 and 3. The difference is small enough to allow many projects to support both versions from a single (the same) source code base.
For actual deployment you might prefer Python 2.7 if you need to use dependencies that are not ported to Python 3.
| 0 | 0 | 0 | 0 |
2013-01-25T10:13:00.000
| 4 | 0 | false | 14,519,625 | 1 | 0 | 1 | 2 |
I am interested in learning Python but I don't know which version I should choose. When I Googled, I got answers posted over a year ago. If I want to learn Django, which version will be useful and will get support?
Note that I know C, C++, Java and C#.
|
In Pyramid Framework what is the difference between default Unencrypted Session Factory and setting cookies manually?
| 14,539,402 | 8 | 2 | 451 | 0 |
python,python-2.7,pyramid
|
The UnencryptedCookieSessionFactory manages one cookie, that is signed. This means that the client can read1 what is in the cookie, but cannot change the values in the cookie.
If you set cookies directly using response.set_cookie(), the client can not only read the cookie, they can change the value of the cookie and you won't be able to detect that the contents have been tampered with.
Moreover, the UnencryptedCookieSessionFactory lets you store any Python structure, and it'll take care of encoding these to fit within the limitations of a cookie; you'd have to do the same work manually with .set_cookie().
1 You'd have to base64-decode the cookie, then use the pickle module to decode the contents. Because the cookie is cryptographically signed, the usual security concerns that apply to pickle are mitigated.
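A minimal Pyramid 1.x sketch of the signed-session setup (the secret is an assumption):

    from pyramid.session import UnencryptedCookieSessionFactoryConfig

    session_factory = UnencryptedCookieSessionFactoryConfig('mysecret')
    # during configuration: config.set_session_factory(session_factory)

    # in a view, any picklable value survives the round trip, tamper-proof:
    # request.session['counter'] = {'visits': 1}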
| 0 | 0 | 0 | 1 |
2013-01-25T22:31:00.000
| 1 | 1.2 | true | 14,531,396 | 0 | 0 | 1 | 1 |
I do not understand the difference between setting up an Unencrypted Session Factory in order to set cookies, as compared to using request.response.set_cookie(..) and request.cookies[key].
|
Flask: Heroku with custom domain name breaks sessions?
| 14,571,069 | 1 | 1 | 768 | 0 |
python,session,cookies,flask
|
After much testing and many permutations of SESSION_COOKIE_DOMAIN and SERVER_NAME, I concluded that the problem was with Heroku. Something about the way Heroku currently routes/hooks up to custom domains breaks domain cookies.
I verified this by moving to EC2...now everything works.
| 0 | 0 | 0 | 0 |
2013-01-27T03:48:00.000
| 1 | 1.2 | true | 14,544,225 | 0 | 0 | 1 | 1 |
I have a Flask application hosted on Heroku, and the Heroku instance (say, "helloworld.herokuapp.com") has a custom domain name, say "www.helloworld.com".
When I access the app at the native heroku URL, sessions work perfectly fine. When I access it at www.helloworld.com, they don't work. I assume that this is because the session cookie that Flask is signing is for the wrong domain.
I tried assigning app.SESSION_COOKIE_DOMAIN and app.SERVER_NAME to 'helloworld.com', but it still only signs the session cookies for helloworld.herokuapp.com.
Is there any way I can force the session cookies to sign as my custom domain?
|
web2py - setting up my own environment
| 14,554,639 | 3 | 1 | 95 | 0 |
python,web2py
|
You can use the admin interface to install (i.e., unpack) the app. From that point, the app is just a bunch of files in folders, so you can use any editor, IDE, and version control system on those files as you see fit.
| 0 | 0 | 0 | 0 |
2013-01-28T02:14:00.000
| 1 | 1.2 | true | 14,554,551 | 0 | 0 | 1 | 1 |
I've created a web2py app using the admin interface, but I want to use my own editor and version control. I've downloaded the packed app, but what do I do with it?
|
Distributing a local Flask app
| 14,559,747 | 1 | 5 | 1,761 | 0 |
python,flask,distribution
|
Why distribute it at all? If the user you want to use it is on the same local network as the Flask application, just give them the IP address and they can access it via a browser just as you are doing, and no access to the source code either!
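A minimal sketch of that: bind the dev server to all interfaces so others on the LAN can reach it by your IP (host/port are assumptions; use a real WSGI server for anything serious):

    from flask import Flask

    app = Flask(__name__)

    @app.route('/')
    def index():
        return 'hello'

    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=5000)  # reachable as http://<your-ip>:5000/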
| 0 | 0 | 0 | 0 |
2013-01-28T04:23:00.000
| 2 | 0.099668 | false | 14,555,393 | 0 | 0 | 1 | 1 |
I've made a simple Flask app which is essentially a wrapper around sqlite3. It basically runs the dev server locally and you can access the interface from a web browser. At present, it functions exactly as it should.
I need to run it on a computer operated by someone with less-than-advanced computing skills. I could install Python on the computer, and then run my .py file, but I am uncomfortable with the files involved being "out in the open". Is there a way I can put this app into an executable file? I've attempted to use both py2exe and cx_freeze, but both of those raised an ImportError on "image". I also tried zipping the file (__main__.py and all that) but was greeted with 500 errors attempting to run the file (I am assuming that the file couldn't access the templates for some reason.)
How can I deploy this Flask app as an executable?
|
where I host apps developed using tornado webserver
| 14,639,967 | 1 | 2 | 1,996 | 0 |
python,google-app-engine,webserver,hosting,tornado
|
At Heroku, the WebSockets protocol is not yet supported on the Cedar stack.
| 0 | 1 | 0 | 0 |
2013-01-28T06:42:00.000
| 3 | 0.066568 | false | 14,556,744 | 0 | 0 | 1 | 1 |
Is there any hosting service for hosting simple apps developed using Tornado (like we host on Google App Engine)? Is it possible to host on Google App Engine? The app just manages some student data (adding, removing, searching, etc.). I developed it using Python.
Thanks in advance
|
OpenERP 7 with modules from OpenERP 6.1
| 14,564,692 | 6 | 2 | 3,217 | 1 |
python,openerp,erp
|
OpenERP 6.1 modules cannot be used directly in OpenERP 7. You have to make some basic changes
to the OpenERP 6.1 modules, e.g. the tree and form tags require a string attribute, and version="7.0" must be included in the form tag. If you have inherited some base modules like sale or purchase, then you have to change the inherit xpath expressions, etc. The object res.partner.address was removed, so you have to take care of this and replace it with res.partner.
Thanks
| 0 | 0 | 0 | 0 |
2013-01-28T14:06:00.000
| 1 | 1.2 | true | 14,563,801 | 0 | 0 | 1 | 1 |
I have a couple of OpenERP modules implemented for OpenERP version 6.1. When I installed OpenERP 7.0, I copied these modules into the addons folder for OpenERP 7. After that, I tried to update the modules list through the web interface, but nothing changed. I also started the server again with the options --database=mydb --update=all, but the modules list didn't change. Did I miss something? Is it possible, in OpenERP version 7, to use modules from version 6.1?
Thanks for the advice.
UPDATE:
I already exported my database from version 6.1 to a *.sql file. Will OpenERP 7 work if I just import this data into the new database that I created with OpenERP 7?
|
How to retrieve the real SQL from the Django logger?
| 14,567,526 | 0 | 0 | 189 | 1 |
python,sql,django,django-database
|
select * from app_model where name = %s is a prepared statement. I would recommend you log the statement and the parameters separately. In order to get a well-formed query you need to do something like "select * from app_model where name = %s" % quote_string("user"), or more generally query % map(quote_string, params).
Please note that quote_string is DB-specific, and the Python DB-API 2.0 does not define a quote_string method, so you need to write one yourself. For logging purposes I'd recommend keeping the queries and parameters separate, as it allows for far better profiling: you can easily group the queries without taking the actual values into account.
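A hedged sketch of such a helper for log output only (the escaping here is simplistic and backend-specific; never use it to build real queries):

    def quote_string(value):
        return "'%s'" % str(value).replace("'", "''")

    sql = "select * from app_model where name = %s"
    params = ("admin",)
    print(sql % tuple(quote_string(p) for p in params))
    # -> select * from app_model where name = 'admin'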
| 0 | 0 | 0 | 0 |
2013-01-28T17:00:00.000
| 4 | 0 | false | 14,567,172 | 0 | 0 | 1 | 1 |
I am trying to analyse the SQL performance of our Django (1.3) web application. I have added a custom log handler which attaches to django.db.backends and set DEBUG = True, this allows me to see all the database queries that are being executed.
However the SQL is not valid SQL! The actual query is select * from app_model where name = %s with some parameters passed in (e.g. "admin"), however the logging message doesn't quote the params, so the sql is select * from app_model where name = admin, which is wrong. This also happens using django.db.connection.queries. AFAIK the django debug toolbar has a complex custom cursor to handle this.
Update For those suggesting the Django debug toolbar: I am aware of that tool, it is great. However, it does not do what I need. I want to run a sample interaction of our application and aggregate the SQL that's used. DjDT is great for inspecting individual pages, but not for aggregating and summarizing the interaction of dozens of pages.
Is there any easy way to get the real, legit, SQL that is run?
|
Search index for flat HTML pages
| 14,570,969 | 1 | 1 | 349 | 0 |
python,html,search,indexing,django-flatpages
|
With Solr, you would write code that retrieves the content to be indexed, parses out the target portions from each item, then sends it to Solr for indexing.
You would then interact with Solr for search, and have it return either the entire indexed document, an ID, or some other identifying information about the original indexed content, using that to display results to the user.
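One possible client-side sketch using the pysolr library (the Solr URL and document fields are assumptions):

    import pysolr

    solr = pysolr.Solr('http://localhost:8983/solr/')

    # index a parsed page
    solr.add([{'id': 'page-1', 'title': 'Intro', 'text': 'page body text'}])

    # search and display identifying info for each hit
    for hit in solr.search('body'):
        print(hit['id'])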
| 0 | 0 | 0 | 0 |
2013-01-28T20:44:00.000
| 1 | 1.2 | true | 14,570,901 | 0 | 0 | 1 | 1 |
I'm looking to add search capability into an existing entirely static website. Likely, the new search functionality itself would need to be dynamic, as the search index would need to be updated periodically (as people make changes to the static content), and the search results will need to be dynamically produced when a user interacts with it. I'd hope to add this functionality using Python, as that's my preferred language, though am open to ideas.
The Google Web Search API won't work in this case because the content being indexed is on a private network. Django haystack won't work for this case, as that requires that the content be stored in Django models. A tool called mnoGoSearch might be an option, as I think it can spider a website like Google does, but I'm not sure how active that project is anymore; the project site seems a bit dated.
I'm curious about using tools like Solr, ElasticSearch, or Whoosh, though I believe that those tools are only the indexing engine and don't handle the parsing of search content. Does anyone have any recommendations as to how one may index static html content for retrieving as a set of search results? Thanks for reading and for any feedback you have.
|
How can I detect my robot from an overhead webcam image?
| 14,572,614 | 1 | 0 | 684 | 0 |
python,opencv,computer-vision,robot
|
I'd do the following, and I'm pretty sure it would work:
I assume that the background of the video stream (the robot's vicinity) is pretty static, so the first steps are:
1. background subtraction
2. detect movement in the foreground; this is your robot and everything else that changes from the background model (you'll need some thresholding here)
3. connected-component detection to get the blobs
4. identify the blob corresponding to the robot (biggest?)
5. now you can get the coordinates of the blob
6. you can compute the heading if you track your blob through multiple frames
You can find good examples by googling those keywords.
Distinctive colors would work with color filtering, template matching, and the like, but the above method is more general.
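A hedged OpenCV 2.4-era Python sketch of steps 1-5 (the camera index and tuning values are assumptions):

    import cv2

    cap = cv2.VideoCapture(0)
    subtractor = cv2.BackgroundSubtractorMOG()  # steps 1-2: background model

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)  # foreground mask
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)  # step 3: blobs
        if contours:
            robot = max(contours, key=cv2.contourArea)  # step 4: biggest blob
            x, y, w, h = cv2.boundingRect(robot)        # step 5: coordinates
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow('tracking', frame)
        if cv2.waitKey(30) & 0xFF == 27:  # Esc quits
            break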
| 0 | 0 | 0 | 1 |
2013-01-28T21:55:00.000
| 2 | 0.099668 | false | 14,571,975 | 0 | 0 | 1 | 1 |
Here's my problem:
Suppose there's a course for robots to go through, and there's an overhead webcam that can see the whole of it, which the robot can use to navigate. Now the question is, what's the best way to detect the robot (position and heading) in the image from this webcam? I was thinking about a few solutions, like putting LEDs on it, or two separate colored circles, but those don't seem to be the best ways to do it.
Is there a better solution to this, and if yes, I would really appreciate some opencv2 python code example of it, as I'm new to computer vision.
|
Async URL Fetch and Memcache on Appengine
| 14,630,265 | 0 | 1 | 150 | 0 |
python,google-app-engine
|
No, there is no automated way for an async URL Fetch to store its data in memcache automatically on completion. You have to do it in your code, but this defeats what you are trying to do.
Also remember that memcache is volatile and its content can be purged at any time.
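For contrast, a minimal sketch of doing it by hand; note that get_result() blocks, which is exactly why this defeats the fire-and-forget goal (the URL and key are assumptions):

    from google.appengine.api import memcache, urlfetch

    rpc = urlfetch.create_rpc()
    urlfetch.make_fetch_call(rpc, 'http://example.com/data')  # hypothetical URL
    result = rpc.get_result()  # blocks until the fetch completes
    memcache.set('fetch-result', result.content, time=60)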
| 0 | 1 | 0 | 0 |
2013-01-31T13:50:00.000
| 1 | 0 | false | 14,627,334 | 0 | 0 | 1 | 1 |
Is it possible to make an async URL fetch on App Engine and store the RPC object in memcache?
What I am trying to do is start the async URL fetch within a task, but I don't want the task to wait until the fetch has finished.
Therefore I thought I would just write it to memcache and access it later from outside the task that created the fetch.
|
Unit testing GAE Blobstore (with nose)
| 29,110,829 | 0 | 1 | 123 | 0 |
python,google-app-engine
|
I had the same question so I dug into the nosegae code, and then into the actual testbed code.
All you need to do is set nosegae_blobstore = True where you're setting up all the other stubs. This sets up a dict-backed blobstore stub.
| 0 | 1 | 0 | 0 |
2013-01-31T17:09:00.000
| 2 | 0 | false | 14,631,306 | 0 | 0 | 1 | 1 |
We're using nose with nose-gae for unit testing our controllers and models. We now have code that hits the blobstore and files API. We are having a hard time testing those due to a lack of testing proxies/mocks. Is there a good way to unit test these services, or, lacking unit testing, is there a way to automate acceptance testing of those APIs? TIA.
|
Django: How do I extend the same functionality to many views?
| 14,639,666 | 0 | 1 | 159 | 0 |
python,django,django-templates,django-views,template-inheritance
|
I believe separating your upload-related functionality out into its own views is a better way to go about it. That way all your templates (inheriting from base.html) can post to the same upload views.
You can use the HTTP_REFERER header to redirect back to the appropriate page from the upload views, as sketched below.
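A minimal sketch of one such view; UploadForm is a stand-in for whatever form your modals actually post:

    from django.shortcuts import redirect
    from django.views.decorators.http import require_POST

    from .forms import UploadForm  # hypothetical form class

    @require_POST
    def upload(request):
        form = UploadForm(request.POST, request.FILES)
        if form.is_valid():
            form.save()
        # Send the user back to whichever page the modal was opened from.
        return redirect(request.META.get('HTTP_REFERER', '/'))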
| 0 | 0 | 0 | 0 |
2013-02-01T02:31:00.000
| 5 | 0 | false | 14,638,799 | 0 | 0 | 1 | 3 |
I have a parent template that I'm using in many parts of the site, called base.html. This template holds a lot of functional components, such as buttons that trigger different forms (inside modal windows) allowing users to upload different kinds of content, etc. I want users to be able to click these buttons from almost any part of the site (from all the templates that inherit from base.html).
I've written a view that handles the main page of the site, HomeView (it renders homepage.html, which inherits from base.html). I've written a bunch of functionality into this view, which handles all the uploads.
Since many templates are going to inherit from base.html, and therefore have all that same functionality, do I have to copy-and-paste the hundreds of lines of code from the HomeView into the views that render all the other pages??
There's got to be a better way, right?
How do I make sure that the functionality in a parent base template holds true for all views which call child templates that inherit from this base template?
|
Django: How do I extend the same functionality to many views?
| 14,645,465 | 0 | 1 | 159 | 0 |
python,django,django-templates,django-views,template-inheritance
|
You can render many templates from a single view, for example by passing a unique template name per page or by branching on a value kept in the request session, so the shared upload handling lives in one place.
| 0 | 0 | 0 | 0 |
2013-02-01T02:31:00.000
| 5 | 0 | false | 14,638,799 | 0 | 0 | 1 | 3 |
I have a parent template that I'm using in many parts of the site, called base.html. This template holds a lot of functional components, such as buttons that trigger different forms (inside modal windows) allowing users to upload different kinds of content, etc. I want users to be able to click these buttons from almost any part of the site (from all the templates that inherit from base.html).
I've written a view that handles the main page of the site, HomeView (it renders homepage.html, which inherits from base.html). I've written a bunch of functionality into this view, which handles all the uploads.
Since many templates are going to inherit from base.html, and therefore have all that same functionality, do I have to copy-and-paste the hundreds of lines of code from the HomeView into the views that render all the other pages??
There's got to be a better way, right?
How do I make sure that the functionality in a parent base template holds true for all views which call child templates that inherit from this base template?
|
Django: How do I extend the same functionality to many views?
| 14,646,308 | 0 | 1 | 159 | 0 |
python,django,django-templates,django-views,template-inheritance
|
Load the functionality part with AJAX in your base.html.
That way you have a view method that deals exclusively with that functionality; the server side is sketched below.
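A minimal sketch of the server side, assuming the client pulls the fragment with something like jQuery's $('#uploads').load('/uploads/widget/'); the URL and template name are illustrative:

    from django.shortcuts import render

    def upload_widget(request):
        # Renders only the upload buttons/modal markup, so every page
        # inheriting base.html pulls it in with one AJAX call instead
        # of every page view duplicating the upload logic.
        return render(request, 'fragments/upload_widget.html')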
| 0 | 0 | 0 | 0 |
2013-02-01T02:31:00.000
| 5 | 0 | false | 14,638,799 | 0 | 0 | 1 | 3 |
I have a parent template that I'm using in many parts of the site, called base.html. This template holds a lot of functional components, such as buttons that trigger different forms (inside modal windows) allowing users to upload different kinds of content, etc. I want users to be able to click these buttons from almost any part of the site (from all the templates that inherit from base.html).
I've written a view that handles the main page of the site, HomeView (it renders homepage.html, which inherits from base.html). I've written a bunch of functionality into this view, which handles all the uploads.
Since many templates are going to inherit from base.html, and therefore have all that same functionality, do I have to copy-and-paste the hundreds of lines of code from the HomeView into the views that render all the other pages??
There's got to be a better way, right?
How do I make sure that the functionality in a parent base template holds true for all views which call child templates that inherit from this base template?
|
Python read microphone
| 40,968,039 | 0 | 6 | 19,017 | 0 |
python,audio,random,microphone
|
You can use the speech_recognition module or the PyAudio module for recording speech; a PyAudio sketch is below. I used the PyAudio module with Microsoft's cognitive service and it worked fine for me.
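A minimal PyAudio sketch for reading raw microphone samples in real time, without writing them to disk; the sample rate and chunk size are just reasonable defaults:

    import pyaudio

    CHUNK = 1024  # frames per buffer

    p = pyaudio.PyAudio()
    stream = p.open(format=pyaudio.paInt16, channels=1, rate=44100,
                    input=True, frames_per_buffer=CHUNK)
    try:
        while True:
            data = stream.read(CHUNK)  # raw 16-bit PCM bytes
            # Feed `data` into the generator, e.g. hash it for entropy.
    finally:
        stream.stop_stream()
        stream.close()
        p.terminate()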
| 0 | 0 | 0 | 0 |
2013-02-01T08:40:00.000
| 2 | 0 | false | 14,642,443 | 0 | 0 | 1 | 1 |
I am trying to make python grab data from my microphone, as I want to make a random generator which will use noise from it.
So basically I don't want to record the sounds, but rather read it in as a datafile, but realtime.
I know that Labview can do this, but I dislike that framework and am trying to get better at python.
Any help/tips?
|
Django-Storages not being installed properly
| 15,756,378 | 0 | 0 | 173 | 0 |
python,django,amazon-ec2,python-django-storages
|
If you're using a virtualenv you shouldn't need sudo; in fact, sudo pip usually installs into the system Python rather than the virtualenv, which would explain why the module can't be found at runtime. So just try a plain pip install django-storages with the virtualenv activated.
| 0 | 0 | 0 | 0 |
2013-02-01T16:41:00.000
| 1 | 0 | false | 14,650,989 | 0 | 0 | 1 | 1 |
I'm working on deploying my first django app to an EC2 server. I'm serving my static files from an S3 server, so I'm using the django-storages app.
I installed it using sudo pip install django-storages on the EC2 server. However, I keep getting the error "no module found" when I try to import it. Yet, when I run pip freeze, django-storages shows up as installed.
I followed the exact same procedure on my development machine and everything works perfectly. Any ideas?
I should also mention that the EC2 server is running the Bitnami Ubuntu 64-bit Django stack.
|
Capturing PDF files using Python Selenium Webdriver
| 14,760,698 | 0 | 2 | 1,728 | 0 |
python,pdf,selenium,webdriver,selenium-webdriver
|
We ultimately accomplished this by clearing Firefox's temporary internet files before the test, then looking for the most recently created file after the report was generated; the second step is sketched below.
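A minimal sketch of the most-recently-created-file step, assuming the per-profile cache directory has already been located:

    import glob
    import os

    def newest_file(cache_dir):
        # cache_dir: e.g. the tmp*/cache* path for this browser instance
        entries = glob.glob(os.path.join(cache_dir, '*'))
        files = [e for e in entries if os.path.isfile(e)]
        return max(files, key=os.path.getctime) if files else None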
| 0 | 0 | 1 | 1 |
2013-02-01T17:40:00.000
| 1 | 1.2 | true | 14,651,973 | 0 | 0 | 1 | 1 |
We test an application developed in house using a python test suite which accomplishes web navigations/interactions through Selenium WebDriver. A tricky part of our web testing is in dealing with a series of pdf reports in the app. We are testing a planned upgrade of Firefox from v3.6 to v16.0.1, and it turns out that the way we captured reports before no longer works, because of changes in the directory structure of firefox's temp folder. I didn't write the original pdf capturing code, but I will refactor it for whatever we end up using with v16.0.1, so I was wondering if there's a better way to save a pdf using Python's selenium webdriver bindings than what we're currently doing.
Previously, for Firefox v3.6, after clicking a link that generates a report, we would scan the "C:\Documents and Settings\\Local Settings\Temp\plugtmp" directory for a pdf file (with a specific name convention) to be generated. To be clear, we're not saving the report from the webpage itself, we're just using the one generated in firefox's Temp folder.
In Firefox 16.0.1, after clicking a link that generates a report, the file is generated in "C:\Documents and Settings\ \Local Settings\Temp\tmp*\cache*", with a random file name, not ending in ".pdf". This makes capturing this file somewhat more difficult, if using a technique similar to our previous one - each browser has a different tmp*** folder, which has a cache full of folders, inside of which the report is generated with a random file name.
The easiest solution I can see would be to directly save the pdf, but I haven't found a way to do that yet.
To use the same approach as we used in FF3.6 (finding the pdf in the Temp folder directory), I'm thinking we'll need to do the following:
Figure out which tmp*** folder belongs to this particular browser instance (which we can do by inspecting the tmp*** folders that exist before and after the browser is instantiated)
Look inside that browser's cache for a file generated immediately after the pdf report was generated (which we can do by comparing timestamps)
In cases where multiple files are generated in the cache, we could possibly sort based on size, and take the largest file, since the pdf will almost certainly be the largest temp file (although this seems flaky and will need to be tested in practice).
I'm not feeling great about this approach, and was wondering if there's a better way to capture pdf files. Can anyone suggest a better approach?
Note: the actual scraping of the PDF file is still working fine.
|
< > changed to &lt; and &gt; while parsing html with beautifulsoup in python
| 55,544,880 | 0 | 7 | 8,456 | 0 |
python,html,parsing,beautifulsoup
|
It can be due to an invalid character (introduced by charset encoding/decoding), which leaves BeautifulSoup unable to parse the input cleanly.
I solved it by passing my string directly to BeautifulSoup without doing any encoding/decoding, as sketched below.
In my case, I was trying to convert UTF-16 to UTF-8 myself.
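A minimal sketch of that fix, assuming raw_bytes is the undecoded document body:

    from bs4 import BeautifulSoup

    def parse(raw_bytes):
        # Hand BeautifulSoup the raw bytes and let its built-in encoding
        # detection (UnicodeDammit) work out the charset, rather than
        # re-encoding by hand (e.g. raw.decode('utf-16').encode('utf-8')),
        # which can corrupt the markup.
        return BeautifulSoup(raw_bytes, 'html.parser')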
| 0 | 0 | 1 | 0 |
2013-02-03T03:42:00.000
| 2 | 0 | false | 14,669,283 | 0 | 0 | 1 | 1 |
While processing html with BeautifulSoup, the < and > were converted to &lt; and &gt;. Since the tag brackets were all converted, the whole soup lost its structure. Any suggestions?
|
Is google app engine right for me (hosting a few rapidly updating text files created w/ python)
| 14,670,069 | 0 | 0 | 108 | 0 |
python,google-app-engine
|
Yes and no.
App Engine is great in terms of reliability, server speed, features, etc. However, it has two main drawbacks: you are in a sandboxed environment (no filesystem access; you must use the datastore), and you pay by the instance hour. Normally, if you're just hosting a small server accessed once in a while, you can get free hosting; if you are running a cron job all day every day, you must keep at least one instance up at all times, which costs you money.
Your concerns about speed and propagation across Google's servers are moot; they have a global time service running through their datacenters ensuring your operations are atomic, and if you request data with consistency=STRONG, then so long as your get begins after the put, you will see the updated data.
| 0 | 1 | 0 | 0 |
2013-02-03T05:38:00.000
| 2 | 0 | false | 14,669,819 | 0 | 0 | 1 | 1 |
I have a python script that creates a few text files, which are then uploaded to my current web host. This is done every 5 minutes. The text files are used in a software program which fetches the latest version every 5 min. Right now I have it running on my web host, but I'd like to move to GAE to improve reliability. (Also because my current web host does not allow for just plain file hosting, per their TOS.)
Is google app engine right for me? I have some experience with python, but none related to web technologies. I went through the basic hello world tutorial and it seems pretty straightforward for a website, but I don't know how I would implement my project. I also worry about any caching which could cause the latest files not to propagate fast enough across google's servers.
|
subprocess.call() of 'sort' command within Django script is adding \M to the end of my files
| 14,671,017 | 0 | 0 | 129 | 0 |
python,django,linux,unix,sorting
|
It is highly unlikely that sort would, of its own volition, change line endings from Unix to Windows. It is more likely that A.csv already contains Windows line endings, and sort merely preserves them. If it is your script that's creating A.csv in the first place, double-check the newline convention being used; a quick check is sketched below.
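A minimal check for Windows line endings in the input file, reading in binary so Python doesn't translate the endings for you (the '^M' shown by pagers is a carriage return, i.e. the \r of a \r\n ending):

    with open('A.csv', 'rb') as f:
        data = f.read()

    crlf = data.count(b'\r\n')
    print('CRLF (Windows) endings:', crlf)
    print('LF-only (Unix) endings:', data.count(b'\n') - crlf)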
| 0 | 0 | 0 | 0 |
2013-02-03T09:12:00.000
| 1 | 0 | false | 14,670,990 | 0 | 0 | 1 | 1 |
I am messing around with Django. I have a custom admin script in one of my apps (inside the management/commands folder) that has a subprocess.call() line. I am doing a 'sort A.csv -o A_sorted.csv' call. The sorted file that gets written is full of '^M' at the end of every line. I find this doesn't happen when running the sort command from the command line or calling the same command through subprocess.call() from within a normal python script not running in Django.
Any ideas on why this is happening and what I can do to keep this from happening?
Thanks.
|
Handling multiple requests in Flask
| 14,673,087 | 3 | 65 | 72,055 | 0 |
python,flask
|
For requests that take a long time, you might want to consider starting a background job for them; a sketch is below. Note also that Flask's built-in development server handles one request at a time unless you start it with app.run(threaded=True) or run the app behind a multi-worker WSGI server such as gunicorn.
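A minimal sketch of the background-job idea using a plain thread and an in-memory result dict; a real task queue (Celery, RQ) would be the robust version, and the route names here are illustrative:

    import threading
    import uuid

    from flask import Flask, jsonify

    app = Flask(__name__)
    results = {}  # in-memory only: lost on restart, not shared across workers

    def long_calculation(job_id):
        results[job_id] = 42  # stand-in for the expensive computation

    @app.route('/start')
    def start():
        job_id = str(uuid.uuid4())
        threading.Thread(target=long_calculation, args=(job_id,)).start()
        return jsonify(job=job_id)  # respond immediately; the client polls

    @app.route('/result/<job_id>')
    def result(job_id):
        return jsonify(done=job_id in results, value=results.get(job_id))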
| 0 | 0 | 1 | 0 |
2013-02-03T13:02:00.000
| 2 | 0.291313 | false | 14,672,753 | 0 | 0 | 1 | 1 |
My Flask application has to do quite a large calculation to fetch a certain page. While Flask is running that calculation, another user cannot access the website, because Flask is busy with it.
Is there any way that I can make my Flask application accept requests from multiple users?
|
ndb.query.count() failed with 60s query deadline on large entities
| 14,713,169 | 2 | 4 | 2,669 | 1 |
python,google-app-engine,app-engine-ndb,bigtable
|
This is indeed a frustrating issue. I've been doing some work in this area lately to get some general count stats; basically, the number of entities that satisfy some query. count() is a great idea, but it is hobbled by the datastore RPC timeout.
It would be nice if count() supported cursors somehow, so that you could cursor across the result set and simply add up the resulting integers rather than returning a large list of keys only to throw them away. With cursors, you could continue across all 1-minute / 10-minute boundaries, using the "pass the baton" deferred approach; a cursor-batched sketch is below. With count() (as opposed to fetch(keys_only=True)) you could greatly reduce the waste and hopefully increase the speed of the RPC calls; it takes a shocking amount of time to count to 1,000,000 using the fetch(keys_only=True) approach, an expensive proposition on backends.
Sharded counters are a lot of overhead if you only need/want periodic count statistics (e.g., a daily count of all the accounts in the system by, e.g., country).
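A minimal sketch of counting in cursor-sized batches with fetch_page, which keeps each individual RPC small; the batch size is a guess, and on a frontend instance you would still need to checkpoint total and cursor into a deferred task before the 60s deadline:

    def count_all(query, batch_size=1000):
        # query: an ndb.Query; counts keys page by page via cursors.
        total, cursor, more = 0, None, True
        while more:
            keys, cursor, more = query.fetch_page(
                batch_size, keys_only=True, start_cursor=cursor)
            total += len(keys)
        return total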
| 0 | 1 | 0 | 0 |
2013-02-03T14:41:00.000
| 3 | 0.132549 | false | 14,673,642 | 0 | 0 | 1 | 1 |
For 100k+ entities in the Google datastore, ndb.query().count() is going to be cancelled by the deadline, even with an index. I've tried the produce_cursors option, but only iter() or fetch_page() returns a cursor; count() doesn't.
How can I count large entities?
|
How to properly manage application configurations
| 15,259,314 | 0 | 2 | 175 | 0 |
python,configuration
|
Create a default settings module which contains your desired default settings. Create a second module intended to be used by the user, with a from default_settings import * statement at the top, and instruct the user to write any replacements into this second module instead; a sketch is below.
Python is rather expressive, so in most cases, if you can expect the user to understand it on any level, you can use a Python module itself as the configuration file.
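A minimal sketch of the two modules; the file names and properties are just illustrative conventions:

    # default_settings.py -- every available property, with its default
    TIMEOUT = 30
    VERBOSE = False
    CACHE_DIR = '/tmp/myapp'

    # settings.py -- the only file the user edits; overrides go after
    # the import
    from default_settings import *
    TIMEOUT = 60

The application then does import settings and reads settings.TIMEOUT; anything the user did not override falls through to the default, which gives you the full default list, the user-changeable list, and the lookup order with no extra machinery.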
| 0 | 0 | 0 | 0 |
2013-02-03T16:14:00.000
| 1 | 0 | false | 14,674,487 | 1 | 0 | 1 | 1 |
What is the most universal and best application configuration management method? I want these properties in order to have "good configuration management":
A list of all available properties and their default values, in one place.
A list of properties which can be changed by an app user, also in one place.
When I retrieve a specific property, its value is returned from the 2nd list (user-changeable configs) or, if it's not there, from the first list.
So far, what I did was hard-code the 1st list as an object (more specifically, as a dict), write a .conf file used by ConfigParser to let an app user easily change some of the properties (the 2nd list), and write a public method on the config object to retrieve a property by its name or raise an exception if it's not there. In the end, one object was responsible for managing all the stuff (parsing the file, raising exceptions, overriding properties, etc.). But I was wondering if there's a built-in library which does more or less the same thing, or an even better way to manage configuration which takes into account the KISS, DRY and other principles (I'm not always successful at doing that with this method)?
Thanks in advance.
|