Title (string, 11 to 150 chars) | A_Id (int64, 518 to 72.5M) | Users Score (int64, -42 to 283) | Q_Score (int64, 0 to 1.39k) | ViewCount (int64, 17 to 1.71M) | Database and SQL (int64, 0 to 1) | Tags (string, 6 to 105 chars) | Answer (string, 14 to 4.78k chars) | GUI and Desktop Applications (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | Other (int64, 0 to 1) | CreationDate (string, 23 chars) | AnswerCount (int64, 1 to 55) | Score (float64, -1 to 1.2) | is_accepted (bool, 2 classes) | Q_Id (int64, 469 to 42.4M) | Python Basics and Environment (int64, 0 to 1) | Data Science and Machine Learning (int64, 0 to 1) | Web Development (int64, always 1) | Available Count (int64, 1 to 15) | Question (string, 17 to 21k chars)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Running Google App Engine locally behind a proxy
| 22,313,654 | 0 | 1 | 426 | 0 |
google-app-engine,python-2.7,ubuntu
|
Reset your proxy environment variables (http_proxy and https_proxy) while running the app server locally. You need them only when you are deploying your app to the actual Google servers.
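As a minimal sketch of that idea (the app path is an assumption; the SDK path is taken from the traceback below), you could drop the proxy variables just for the local server process:

    import os
    import subprocess

    env = os.environ.copy()
    # Drop the proxy settings only for the dev server process.
    for var in ('http_proxy', 'https_proxy', 'HTTP_PROXY', 'HTTPS_PROXY'):
        env.pop(var, None)

    # The SDK location matches the traceback below; the app path is a placeholder.
    subprocess.call(['python', '/home/yash/google_appengine/dev_appserver.py',
                     '/path/to/myapp'], env=env)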
| 0 | 1 | 0 | 0 |
2013-12-22T10:32:00.000
| 1 | 0 | false | 20,728,436 | 0 | 0 | 1 | 1 |
I have been trying to run a small app using Google App Engine (Python) on port 8080. I am behind my college proxy, which requires a username and password to log in.
Here is what I get:
INFO 2013-12-22 10:16:19,516 sdk_update_checker.py:245] Checking for updates to the SDK.
INFO 2013-12-22 10:16:19,518 __init__.py:94] Connecting through tunnel to: appengine.google.com:443
INFO 2013-12-22 10:16:19,525 sdk_update_checker.py:261] Update check failed:
WARNING 2013-12-22 10:16:19,527 api_server.py:331] Could not initialize images API; you are likely missing the Python "PIL" module.
INFO 2013-12-22 10:16:19,529 api_server.py:138] Starting API server at: >localhost:35152
INFO 2013-12-22 10:16:19,545 dispatcher.py:171] Starting module "default" running at: >localhost:8080
INFO 2013-12-22 10:16:19,552 admin_server.py:117] Starting admin server at: >localhost:8000
But when I go to port 8080 in my browser, I get:
HTTPError()
HTTPError()
Traceback (most recent call last):
File "/home/yash/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 1302, in communicate
req.respond()
File "/home/yash/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 831, in respond
self.server.gateway(self).respond()
File "/home/yash/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 2115, in respond
response = self.req.server.wsgi_app(self.env, self.start_response)
File "/home/yash/google_appengine/google/appengine/tools/devappserver2/wsgi_server.py", line 269, in call
return app(environ, start_response)
File "/home/yash/google_appengine/google/appengine/tools/devappserver2/request_rewriter.py", line 311, in _rewriter_middleware
response_body = iter(application(environ, wrapped_start_response))
INFO 2013-12-22 10:22:05,095 module.py:617] default: "GET / HTTP/1.1" 500 -
File "/home/yash/google_appengine/google/appengine/tools/devappserver2/python/request_handler.py", line 148, in call
self._flush_logs(response.get('logs', []))
File "/home/yash/google_appengine/google/appengine/tools/devappserver2/python/request_handler.py", line 284, in _flush_logs
apiproxy_stub_map.MakeSyncCall('logservice', 'Flush', request, response)
File "/home/yash/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 94, in MakeSyncCall
return stubmap.MakeSyncCall(service, call, request, response)
File "/home/yash/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 328, in MakeSyncCall
rpc.CheckSuccess()
File "/home/yash/google_appengine/google/appengine/api/apiproxy_rpc.py", line 156, in _WaitImpl
self.request, self.response)
File "/home/yash/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 200, in MakeSyncCall
self._MakeRealSyncCall(service, call, request, response)
File "/home/yash/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 226, in _MakeRealSyncCall
encoded_response = self._server.Send(self._path, encoded_request)
File "/home/yash/google_appengine/google/appengine/tools/appengine_rpc.py", line 409, in Send
f = self.opener.open(req)
File "/usr/local/lib/python2.7/urllib2.py", line 410, in open
response = meth(req, response)
File "/usr/local/lib/python2.7/urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/local/lib/python2.7/urllib2.py", line 448, in error
return self._call_chain(*args)
File "/usr/local/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/usr/local/lib/python2.7/urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 403: Forbidden
[the same traceback, ending in "HTTPError: HTTP Error 403: Forbidden", repeats three more times for the subsequent requests]
INFO 2013-12-22 10:22:05,141 module.py:617] default: "GET /favicon.ico HTTP/1.1" 500 -
I have set my proxy connection (with username and password) as environment variables and in the apt.conf file, and my terminal works fine with it.
I use Ubuntu 12.04.
|
Automating web tasks?
| 20,749,267 | 1 | 2 | 406 | 0 |
python,selenium,automation,web-scraping,capybara
|
You may want to check out CasperJS. I use Python to fire CasperJS scripts to do web scraping and return data to Python to parse further or store to a database etc...
Python itself has BeautifulSoup and Mechanize, but that combination is not great with Ajax-based sites.
Python and CasperJS together work very well.
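For illustration, a hedged sketch of firing a CasperJS script from Python and reading its output (the script name and its JSON output format are assumptions):

    import json
    import subprocess

    # Run a CasperJS script that scrapes a page and prints JSON to stdout.
    output = subprocess.check_output(['casperjs', 'scrape_games.js'])
    games = json.loads(output)
    print(len(games))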
| 0 | 0 | 1 | 0 |
2013-12-23T18:25:00.000
| 3 | 1.2 | true | 20,749,102 | 0 | 0 | 1 | 1 |
I play on chess.com and I'd like to download a history of my games. Unfortunately, they don't make it easy: I can access 100 pages of 50 games one at a time, click "Select All" and "Download" and then they e-mail it to me.
Is there a way to write a script, in python or another language, that helps me automate any part of the process? Something that simulates clicking a link? Is Capybara useful for things like this outside of unit testing? Selenium?
I don't have much experience with web development yet. Thanks for your help!
|
AJAX and Django using the polls app from the tutorial: 2 problems
| 20,750,187 | 1 | 0 | 356 | 0 |
jquery,python,ajax,django
|
This is a frequent mistake when writing JavaScript. You haven't disabled the default actions on click or submit. This means that the JS executes, making the Ajax call, but then the normal browser submit also runs immediately, causing a refresh.
voteBehavior should accept an event parameter, and you should call event.preventDefault() at the start of the function.
| 0 | 0 | 0 | 0 |
2013-12-23T19:42:00.000
| 2 | 0.099668 | false | 20,750,119 | 0 | 0 | 1 | 1 |
I'm a newbie here, and much of what I have learned about Django and Python has come from this website. Thank you all for being so helpful! This is my first question post.
I've got 2 problems as I try to extend what I've learned from the Django tutorial (1.6) and try to get the Polls app to load via AJAX. I want to use the main mysite app as a home page, and pull in content from other apps in the mysite project using ajax. The tutorial doesn't really cover integrating content from different apps on a single page.
I have 2 ajax elements already working on the main mysite page (a "trick or treat" button that retrieves some silly text, and a small dns lookup form/button) but those are part of the mysite app, so all of the logic is handled using the mysite app urlconf, views and templates.
There is another div on the page which is for a "Featured App" that will get pulled in, also via ajax. Basically, mysite.views builds a list of apps that have a 'ajaxFeaturedAppView', and then chooses one at random to display in the "Featured App" section on the mysite page. This is my novice attempt at decoupling the mysite app from the other apps as much as possible.
Problem 1) The initial poll question and choices and vote button all appear correctly on page load, but the vote button just loads another poll question. It should display poll results.
Problem 2) The other ajax elements on the page get triggered when I hit the Vote button, also. I think this is because the Vote button action triggers the document ready() event, which initializes the ajax elements. But the other ajax elements don't do that; they do not trigger the document ready() event.
I think that it may be one problem with two symptoms, actually. So, how do I get the vote button to not trigger a document ready event, and will that allow me to see the poll results? Or am I doing something else wrong?
EDIT:
Okay, there were a few problems with that pieced-together code. Thanks for the help.
|
peewee vs django for DB processing
| 32,690,398 | 1 | 1 | 3,159 | 0 |
python,mysql,django,peewee
|
Using peewee in Django is totally OK. Actually, I recently did a project like that. But I still recommend using the Django ORM if you do not have a particular reason to use peewee.
Here are some issues that may occur when you use peewee in Django:
You may need to write your own DB middleware and test-case base class to make Django work with peewee.
A lot of Django open source apps will no longer work, because they depend on the Django ORM.
Migrating tables in peewee is more difficult than with the Django ORM.
But obviously peewee also brings some benefits:
A standalone database-processing module: if one day, for some reason, you do not want to use Django in a project any more, it is very easy to reuse all of the peewee code.
It is much easier to make two or more projects use the same database if you use peewee, and the table structures are totally under your control.
And maybe more. So, in conclusion, I would say peewee is great, but it still does not work perfectly with Django right now.
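To illustrate the "standalone database module" point, a minimal peewee sketch (the model and credentials are made up):

    from peewee import MySQLDatabase, Model, CharField, DateTimeField

    # Credentials and database name are placeholders.
    db = MySQLDatabase('mysite', user='app', password='secret', host='localhost')

    class Entry(Model):
        title = CharField()
        created = DateTimeField()

        class Meta:
            database = db

    # This module knows nothing about Django and can be reused anywhere.
    db.connect()
    recent = Entry.select().order_by(Entry.created.desc()).limit(10)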
| 0 | 0 | 0 | 0 |
2013-12-24T10:02:00.000
| 4 | 0.049958 | false | 20,758,839 | 0 | 0 | 1 | 2 |
Working on a python & Django project with mysql (newbie)
Trying to figure out if it is preferable to use peewee in the python DB part & Django models in the Django forms or go ahead and use Django for the entire thing
Related answers claim that Django is high overhead but could not find a base for that assumption
Thanks,
Shimon
|
peewee vs django for DB processing
| 20,759,485 | 3 | 1 | 3,159 | 0 |
python,mysql,django,peewee
|
This is entirely opinionated, but I think you should use Django for the entire thing. It's not that I don't like peewee. On the contrary, it might very well be a better ORM. But I have a few reasons I think you'd prefer the Django ORM:
I think the Django ORM is more intuitive for beginners and covers most use cases pretty well. In the future, when you feel comfortable with Django in general and the Django ORM, it will be easier to learn how to use peewee and see if you prefer it over the default.
There's a larger community of Django users who don't use peewee than of those who do. That means more people who can help you (including here on SO) and an easier time finding answers to any question you will have.
I think peewee is more SQL-like in its syntax, which I find easier to understand once you have learned a little SQL, while using Django's ORM doesn't require much SQL knowledge beyond the very basics.
So peewee is a very viable option, but I think you shouldn't start using it straight away, not before you have run into any problems with the default.
| 0 | 0 | 0 | 0 |
2013-12-24T10:02:00.000
| 4 | 0.148885 | false | 20,758,839 | 0 | 0 | 1 | 2 |
Working on a python & Django project with mysql (newbie)
Trying to figure out if it is preferable to use peewee in the python DB part & Django models in the Django forms or go ahead and use Django for the entire thing
Related answers claim that Django is high overhead but could not find a base for that assumption
Thanks,
Shimon
|
Apache restart when developing python wsgi apps
| 20,779,815 | 0 | 0 | 217 | 0 |
python,windows,apache,wsgi
|
According to the links in the comments above, restarts after source changes are always necessary on Windows. On Linux you still need to touch the WSGI file after source changes. Is it only me who finds this a major drawback compared to PHP?
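On Linux the "touch" step can be scripted; a minimal sketch (the .wsgi path is an assumption):

    import os

    WSGI_FILE = '/srv/myapp/myapp.wsgi'  # assumed path to the deployed WSGI script

    # Updating the file's mtime makes mod_wsgi (daemon mode) reload the application.
    os.utime(WSGI_FILE, None)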
| 0 | 1 | 0 | 0 |
2013-12-24T16:14:00.000
| 1 | 0 | false | 20,763,775 | 0 | 0 | 1 | 1 |
I am evaluating Python for web development (mod_wsgi) and have noticed that on Windows I have to restart Apache after changing my Python source code. On Ubuntu the problem doesn't exist, probably because Linux supports the WSGI daemon mode.
Is there any way to have hot deployment during web development on Windows, such as configuring Apache, replacing the web server, some IDE, etc.?
|
Django ModelForm not saving NULL values
| 20,789,744 | 3 | 0 | 277 | 0 |
python,django,model,django-forms
|
This behavior is by design. Django would rather have empty strings than NULLs, for various reasons. Google will tell you, but long story short, Django finds NULL and "" overlapping in meaning, and does away with the former.
What you can do is intercept the value passed to and from the database driver and change empty strings to NULL and the other way round. Implementing a custom CharField subclass will get you there.
In doing so, you'll experience the ambiguity between NULL and "" for yourself :).
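For reference, a hedged sketch of such a subclass (model and field names are made up; details vary by Django version):

    from django.db import models

    class NullableCharField(models.CharField):
        """CharField that stores empty strings as NULL."""

        def get_prep_value(self, value):
            value = super(NullableCharField, self).get_prep_value(value)
            # Convert '' coming from forms into None before it hits the database.
            return value or None

    class Profile(models.Model):
        nickname = NullableCharField(max_length=50, blank=True, null=True)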
| 0 | 0 | 0 | 0 |
2013-12-26T18:52:00.000
| 1 | 1.2 | true | 20,789,687 | 0 | 0 | 1 | 1 |
I've looked all around the documentation and internet, but I can't find an answer for this. I have a model with several fields that have (blank=True, null=True). When I save the ModelForm, the columns get set as empty strings in the database. I would like it to save them as NULL in the database (if they are empty).
How can I tell ModelForm to save empty values as NULL?
|
Strip unnecessary whitespace and html comments from Tornado-rendered Pages
| 20,817,404 | 1 | 0 | 662 | 0 |
python,tornado
|
Tornado doesn't know anything about html comments, so any html comments will be passed through as-is. (You can use {# #} to add comments to your templates). There is limited support for stripping whitespace, which is enabled by default based on file extension (.html and .js). There's also a half-implemented compress_whitespace setting, although there is no clean way to set it unless you implement your own template loader.
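If post-processing the rendered output is acceptable, a rough sketch of stripping HTML comments and collapsing inter-tag whitespace by overriding render_string (the regexes are simplistic assumptions; they ignore conditional comments and pre/textarea content):

    import re
    import tornado.web

    class MinifyingHandler(tornado.web.RequestHandler):
        def render_string(self, template_name, **kwargs):
            html = super(MinifyingHandler, self).render_string(template_name, **kwargs)
            html = re.sub(br'<!--.*?-->', b'', html, flags=re.S)  # drop HTML comments
            html = re.sub(br'>\s+<', b'><', html)                 # collapse whitespace between tags
            return html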
| 0 | 0 | 0 | 0 |
2013-12-27T02:44:00.000
| 2 | 1.2 | true | 20,794,033 | 0 | 0 | 1 | 1 |
I am using tornado and the tornado templating engine. Even when debug is set to False, tornado-rendered pages still have HTML comments and unnecessary whitespace in them. Is there a setting to automatically strip this out when rendering pages (essentially minifying the rendered pages)?
|
gae-boilerplate and gcs client
| 20,847,501 | 0 | 0 | 156 | 0 |
python,google-app-engine,google-cloud-storage
|
Stupid question, now that I have found the solution.
It was because I was running old_dev_appserver.py in my server startup script.
GCS is only supported from SDK 1.8.1 and greater.
| 0 | 1 | 0 | 0 |
2013-12-29T19:23:00.000
| 1 | 1.2 | true | 20,829,163 | 0 | 0 | 1 | 1 |
I'm trying to work with gae-boilerplate on Google App Engine, and I want to
communicate with Cloud Storage from the local development server (for now).
I took the test app example and it runs perfectly, but when trying to integrate it with
gae-boilerplate it falls apart.
If I extend my class with webapp2.RequestHandler it will work, but I can't call it from routes.py. When I extend it with the boilerplate BaseHandler, I can call it, but I get a DeadlineExceeded exception:
TimeoutError: ('Request to Google Cloud Storage timed out.', DeadlineExceededError('Deadline exceeded while waiting for HTTP response from URL: http://localhost:8080/_ah/gcs/yey-cloud-storage-trial/demo-testfile',))
|
Google Finance Lock Out - Robot
| 20,929,338 | 0 | 0 | 447 | 0 |
python,captcha,bots,google-finance
|
Yahoo YQL works fairly well, but it throws numerous HTTP 500 errors that need to be handled; they are all benign. TradeKing is an option; however, the oauth2 package is required, and that is very difficult to install properly.
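A hedged Python 2 sketch of swallowing those benign 500s with a simple retry (the YQL URL below is a placeholder, not a working query):

    import time
    import urllib2

    def fetch_with_retry(url, attempts=3, delay=1.0):
        """Retry benign HTTP 500 responses a few times before giving up."""
        for attempt in range(attempts):
            try:
                return urllib2.urlopen(url).read()
            except urllib2.HTTPError as err:
                if err.code != 500 or attempt == attempts - 1:
                    raise
                time.sleep(delay)

    # Placeholder URL: fill in a real YQL query.
    data = fetch_with_retry('https://query.yahooapis.com/v1/public/yql?q=...')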
| 0 | 0 | 1 | 0 |
2013-12-30T00:43:00.000
| 2 | 1.2 | true | 20,831,821 | 0 | 0 | 1 | 1 |
So I have run into the issue of getting data from Google Finance. They have an html access system that you can use to access webpages that give stock data in simple text format (ideal for minimizing parsing). However, if you access this service too frequently, Google locks you out and you need to enter a captcha. I currently have a list of about 50 stocks and I want to update my price data every 15 seconds, but I soon get locked out (after about 3-4 minutes).
Does anyone have a solution to this, or understand how frequently I could ping Google for this information at most?
Not sure why a feature like this would be on a service designed to give data like this... but similar alternative services with realtime data would also be accepted.
|
How to achieve realtime updates on my website (with Flask)?
| 31,151,979 | 1 | 3 | 5,285 | 0 |
javascript,python
|
In my opinion, the best option for achieving real-time data streaming to a frontend UI is to use a messaging service like PubNub. They have libraries for any language you are going to want to use. Basically, your user interfaces subscribe to a data channel; things which create data then publish to that channel, and all subscribers receive the published message very quickly. It is also extremely simple to implement.
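Not the pub/sub approach described above, but for completeness, a minimal sketch of the polling fallback the asker describes, in plain Flask (the in-memory counter stands in for a real visit store):

    from flask import Flask, jsonify

    app = Flask(__name__)
    visit_count = {'total': 0}  # stand-in for a real datastore

    @app.route('/api/visits')
    def visits():
        # The page polls this endpoint with Ajax every few seconds.
        return jsonify(count=visit_count['total'])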
| 0 | 0 | 0 | 0 |
2013-12-30T20:36:00.000
| 4 | 0.049958 | false | 20,847,122 | 0 | 0 | 1 | 1 |
I am using Flask and I want to show the user how many visits he has on his website, in real time.
Currently, I think one way is to create an infinite loop with some delay after every iteration that makes an Ajax request to get the current number of visits.
I have also heard about Node.js; however, I think that running another process might slow down the computer it's running on (I'm assuming)?
How can I achieve the realtime updates on my site? Is there a way to do this with Flask?
Thank you in advance!
|
PyDev won't start in Aptana Studio3
| 21,073,033 | 0 | 0 | 666 | 0 |
python,aptana,pydev
|
Aptana 3.5.0 and PyDev 3.0 do not work under Mac OS X 10.9 Mavericks yet. PyDev reports that builtin symbols such as None cannot be recognized.
I rolled back to 3.4.2 as well.
| 0 | 1 | 0 | 0 |
2013-12-30T21:11:00.000
| 3 | 0 | false | 20,847,649 | 0 | 0 | 1 | 2 |
Aptana Studio is my primary Python IDE and I have been using it for years with much joy and success! Recently, when I start Aptana Studio it fails to recognize any PyDev projects that I have previously created. I noticed that this was happening after installing a recent update of the IDE. I tried uninstalling Aptana and reinstalling the latest version from the website. Nada... I updated Java, thinking there might be a misalignment between Java versions or something like that. Nada... The latest version of Eclipse works fine, and Aptana seems to be functioning correctly for everything except for PyDev (Python).
I am running a current version of Windows 8. Does anyone know how to fix this, or how to troubleshoot the problem? PyDev worked perfectly in Aptana Studio until I installed the update. Has anyone come across this and found a way to fix it?
|
PyDev won't start in Aptana Studio3
| 20,851,621 | 0 | 0 | 666 | 0 |
python,aptana,pydev
|
I went back to the Aptana website, and this time around it gave me Aptana Studio 3, build 3.4.2.201308081805, which works fine. 3.5.0 still does not appear to work for Python development at the moment.
| 0 | 1 | 0 | 0 |
2013-12-30T21:11:00.000
| 3 | 0 | false | 20,847,649 | 0 | 0 | 1 | 2 |
Aptana Studio is my primary Python IDE and I have been using it for years with much joy and success! Recently, when I start Aptana Studio it fails to recognize any PyDev projects that I have previously created. I noticed that this was happening after installing a recent update of the IDE. I tried uninstalling Aptana and reinstalling the latest version from the website. Nada... I updated Java, thinking there might be a misalignment between Java versions or something like that. Nada... The latest version of Eclipse works fine, and Aptana seems to be functioning correctly for everything except for PyDev (Python).
I am running a current version of Windows 8. Does anyone know how to fix this, or how to troubleshoot the problem? PyDev worked perfectly in Aptana Studio until I installed the update. Has anyone come across this and found a way to fix it?
|
Practical use of Python as Chrome Native Client
| 20,912,809 | 2 | 1 | 779 | 0 |
python,google-nativeclient
|
The interpreter is currently the only Python example in naclports. However, it should be possible to link libpython into any NaCl binary and use it just as you would embed Python in any other C/C++ application. A couple of caveats: you must initialize nacl_io before making any Python calls, and you should not make Python calls on the main (PPAPI) thread.
In terms of interacting with the HTML page, as with all NaCl applications this must be done by sending messages back and forth between native and javascript code using PostMessage(). There is no way to directly access the HTML or JavaScript from native code.
| 0 | 0 | 1 | 0 |
2013-12-31T08:24:00.000
| 1 | 1.2 | true | 20,854,222 | 0 | 0 | 1 | 1 |
There is a Python interpreter in naclports (to run as a Google Chrome Native Client app).
Are there any examples of bundling the interpreter with a custom Python application, and of how to integrate such an application with an HTML page?
|
Update apk file on Google Play
| 20,856,571 | 0 | 4 | 487 | 1 |
android,python-2.7,sqlite,apk,kivy
|
If you're using local SQLite then you have to embed the database file within the app; failing to do so means there is no database. As for updates, databases have version numbers, and the app cannot upgrade the database if the version number is the same as in the previous update.
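One common way to express that version number with local SQLite is the user_version pragma; a rough sketch (the actual migration is left out):

    import sqlite3

    CURRENT_SCHEMA_VERSION = 2  # bump only when the table structure changes

    conn = sqlite3.connect('app.db')
    (db_version,) = conn.execute('PRAGMA user_version').fetchone()

    if db_version < CURRENT_SCHEMA_VERSION:
        # Run schema migrations here, preserving existing rows, then record the new version.
        conn.execute('PRAGMA user_version = %d' % CURRENT_SCHEMA_VERSION)
        conn.commit()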
| 1 | 0 | 0 | 0 |
2013-12-31T11:14:00.000
| 2 | 0 | false | 20,856,465 | 0 | 0 | 1 | 2 |
I want to publish an Android application that I have developed but have a minor concern.
The application will load with a database file (or sqlite3 file). If updates arise in the future and these updates are only targeting the application's functionality without the database structure, I wish to allow users to keep their saved entries in their sqlite3 files.
So what is the best practice to send updates? Compile the apk files with the new updated code only and without the database files? Or is there any other suggestion?
PS: I am not working with Java and Eclipse, but with python for Android and the Kivy platform which is an amazing new way for developing Android applications.
|
Update apk file on Google Play
| 46,767,741 | 0 | 4 | 487 | 1 |
android,python-2.7,sqlite,apk,kivy
|
I had the same issue when I started my app, but since Kivy has no solution for this, I tried creating a directory outside my app directory on Android with a simple os.mkdir('../##') and put all the files there. Hope this helps!
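A hedged sketch of that idea: on first run, copy the bundled database to a directory that survives APK updates (the target path is an assumption and will differ per device and packaging):

    import os
    import shutil

    BUNDLED_DB = 'data/app.db'                   # shipped inside the APK
    STORAGE_DIR = os.path.join('..', 'appdata')  # assumed writable dir outside the app dir
    USER_DB = os.path.join(STORAGE_DIR, 'app.db')

    if not os.path.isdir(STORAGE_DIR):
        os.mkdir(STORAGE_DIR)
    if not os.path.exists(USER_DB):
        # First run only: later APK updates leave the user's copy untouched.
        shutil.copy(BUNDLED_DB, USER_DB)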
| 1 | 0 | 0 | 0 |
2013-12-31T11:14:00.000
| 2 | 0 | false | 20,856,465 | 0 | 0 | 1 | 2 |
I want to publish an Android application that I have developed but have a minor concern.
The application will load with a database file (or sqlite3 file). If updates arise in the future and these updates are only targeting the application's functionality without the database structure, I wish to allow users to keep their saved entries in their sqlite3 files.
So what is the best practice to send updates? Compile the apk files with the new updated code only and without the database files? Or is there any other suggestion?
PS: I am not working with Java and Eclipse, but with python for Android and the Kivy platform which is an amazing new way for developing Android applications.
|
auth is not available in module
| 54,032,848 | 0 | 1 | 2,882 | 0 |
python,web2py
|
I was getting a very similar error ("name 'auth' is not defined"). Had to add from django.contrib import auth at the top of views.py and it worked.
| 0 | 0 | 0 | 1 |
2013-12-31T11:44:00.000
| 2 | 0 | false | 20,856,854 | 0 | 0 | 1 | 1 |
I have a web2py application where I have written various modules which hold business logic and database-related code. In one of the files I am trying to access auth.settings.table_user_name, but it doesn't work and throws an error: global name 'auth' is not defined. If I write the same line in a controller, it works. But I want to access it in the module file. Please suggest how I can do that.
|
Django Multiple Views for index/dashboard template
| 20,860,360 | 0 | 1 | 693 | 0 |
python,django
|
I think you're better off pulling the data into your dashboard via Ajax; it will be a better UX when you have a lot of data to fetch. For that you can use one of the well-known third-party apps for creating a REST API, or change your existing views to deliver a JSON response as well.
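A minimal sketch of one such JSON view for Django 1.6-era code (the numbers are placeholders for real queries against your apps' models):

    import json
    from django.http import HttpResponse

    def dashboard_stats(request):
        # Replace these constants with real queries, e.g. SomeModel.objects.count().
        stats = {'signups': 42, 'visits': 1234}
        return HttpResponse(json.dumps(stats), content_type='application/json')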
| 0 | 0 | 0 | 0 |
2013-12-31T16:10:00.000
| 2 | 0 | false | 20,860,045 | 0 | 0 | 1 | 1 |
So I'm trying to use my Django installation to create a dashboard: a combination of all the data from the 4 other models and views. We mainly use Django for stats, so it's generally just pulling numbers out onto the main index page. Right now I have my index template set up as a redirect_to_template, and it goes straight to a template (since everything is still static). I'm trying to figure out whether I'm going to have to create another app and pull all the data into a new view and model for this dashboard page, or whether I should create sub-templates, if that would work to pull the data.
Thanks again!
|
In PowerShell django-admin.py not recognized. Already added Scripts directory to PATH
| 27,697,984 | -1 | 1 | 725 | 0 |
python,django
|
In Windows, where you can edit PATH, you also have PATHEXT. At the end of it, add .PY so PowerShell will know that .py files are executable.
| 0 | 0 | 0 | 0 |
2013-12-31T20:31:00.000
| 3 | -0.066568 | false | 20,862,898 | 0 | 0 | 1 | 2 |
This has happened to me on multiple machines already... typing django-admin.py startproject test yields the "The term 'django-admin.py' is not recognized as the name of a cmdlet..." error, while calling any other module or script in the Python Scripts folder works...
Typing python [Scripts path]\django-admin.py startproject test works perfectly, so does copying django-admin.py to my working directory... it just won't call it straight up.
I've been googling for a while and it seems like this problem is always people not having Scripts added in their PATH. I did, however. Is there something else I am missing? Much appreciated.
|
In PowerShell django-admin.py not recognized. Already added Scripts directory to PATH
| 23,613,376 | 0 | 1 | 725 | 0 |
python,django
|
This seems to be related to Django not being in your PATH variable properly. I'll play around with it more later, but in the meantime a quick fix is just to call it directly:
C:\Python27\Scripts\django-admin.py startproject YourProjectName
(the path will be different with Python 3, obviously).
It worked for me, and so I hope it helps.
| 0 | 0 | 0 | 0 |
2013-12-31T20:31:00.000
| 3 | 0 | false | 20,862,898 | 0 | 0 | 1 | 2 |
This has happened to me on multiple machines already... typing django-admin.py startproject test yields the "The term 'django-admin.py' is not recognized as the name of a cmdlet..." error, while calling any other module or script in the Python Scripts folder works...
Typing python [Scripts path]\django-admin.py startproject test works perfectly, so does copying django-admin.py to my working directory... it just won't call it straight up.
I've been googling for a while and it seems like this problem is always people not having Scripts added in their PATH. I did, however. Is there something else I am missing? Much appreciated.
|
Establishing connection between client and Google App Engine server
| 20,874,529 | 3 | 0 | 98 | 0 |
python,google-app-engine,http,rest,client-server
|
What you suggest is the right way. Steps 1 and 2 are a single POST (the response to that POST tells the client what is needed). Then you POST again to the server.
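As a rough client-side sketch of that flow with Python 2's urllib2 (the endpoint URL and the JSON contract are assumptions):

    import json
    import urllib2

    URL = 'https://your-app-id.appspot.com/exchange'  # placeholder endpoint

    # POST 1: "this is the data I have"; the response says what the server needs.
    reply = urllib2.urlopen(URL, json.dumps({'have': ['a', 'b']})).read()
    needed = json.loads(reply)['need']

    # POST 2: send only what was asked for.
    urllib2.urlopen(URL, json.dumps({'data': {k: 'value' for k in needed}}))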
| 0 | 1 | 1 | 0 |
2014-01-01T20:16:00.000
| 1 | 1.2 | true | 20,872,804 | 0 | 0 | 1 | 1 |
I have a need for my client(s) to send data to my app engine application that should go something like this:
Client --> Server (This is the data that I have)
Server --> Client (Based on what you've just given me, this is what I'm going to need)
Client --> Server (Here's the data that you need)
I don't have much experience working with REST interfaces, but it seems that GET and POST are not entirely appropriate here. I'm assuming that the client needs to establish some kind of persistent connection with the server so they can both have a proper "conversation". My understanding is that sockets are reserved for paid apps, and I'd like to keep this on the free tier. However, I'm not sure of how to go about this. Is it the Channel API I should be using? I'm a bit confused by the documentation.
The app engine app is Python, as is the client. The solution that I'm leaning towards right now is that the client does a POST to the server (here's what I have), and subsequently does a GET (tell me what you need) and lastly does a POST (here's the data you wanted). But it seems messy.
Can anyone point me in the right direction please?
EDIT:
I didn't realize that you could get the POST response with Python's urllib using the 'read' function of the object returned by urlopen. That makes things a lot nicer, but if anyone has any other suggestions I'd be glad to hear them.
|
Autocomplete a specific set of possibilities in Django-admin
| 21,414,137 | 0 | 0 | 125 | 0 |
python,django,autocomplete,django-admin,django-autocomplete-light
|
The only way to do this is to have a separate model with only those possibilities and to make the field we want to limit into a foreign key field.
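A minimal sketch of that layout (model and field names are made up):

    from django.db import models

    class Category(models.Model):
        # One row per allowed value; autocomplete against this model.
        name = models.CharField(max_length=50, unique=True)

        def __unicode__(self):
            return self.name

    class Item(models.Model):
        category = models.ForeignKey(Category)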
| 0 | 0 | 0 | 0 |
2014-01-01T23:00:00.000
| 2 | 1.2 | true | 20,874,180 | 0 | 0 | 1 | 1 |
I have a field in a Django model, and I want there to be a small (~20) set of possibilities (which are strings) which can be autocompleted (preferably with django-autocomplete-light, which I am using already) in django-admin. Should I make this a foreign key field and create a model containing just these 20 possibilities? or is there a better way?
|
How to get the path of the function call in Python?
| 20,877,968 | 4 | 3 | 1,301 | 0 |
python,django,introspection,traceback
|
The Zen of Python says: Explicit is better than implicit.
Why not call it like this: your.autodiscover(__file__), or even your.autodiscover(dirname(__file__)). That way, someone who reads your code doesn't have to look for the magic in your autodiscover function, or look it up in the documentation.
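For comparison, a sketch that takes the path explicitly but still allows the caller-introspection magic as a fallback (the discovery walk itself is omitted):

    import inspect
    import os

    def autodiscover(start_path=None):
        if start_path is None:
            # Fallback: the file of whoever called autodiscover().
            start_path = inspect.stack()[1][1]
        base_dir = os.path.dirname(os.path.abspath(start_path))
        # ... walk base_dir for apps/libraries and register their models here ...
        return base_dir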
| 0 | 0 | 0 | 0 |
2014-01-02T06:52:00.000
| 2 | 1.2 | true | 20,877,808 | 1 | 0 | 1 | 1 |
I'm designing something similar to Django's admin.autodiscover().
The first hurdle that I'm facing is getting the path of the file from which admin.autodiscover() is called, so that I can traverse the apps/libraries in that folder and figure out which models should be kept in admin.
How do I do that?
|
Lazy psql connection with Django
| 21,235,393 | 2 | 1 | 380 | 1 |
python,django,django-models
|
The original confusion is the idea that Django tries to connect to its databases on startup. This is actually not true. Django does not connect to a database until some app tries to access it.
Since my web application uses the auth and sites apps, it looks as if it tries to connect on startup. But it's not tied to startup; it's tied to the fact that those apps access the database "early".
If one defines a second database backend (non-default), Django will not try connecting to it unless the application tries to query it.
So the solution was very trivial. Originally I had one database that hosted both the auth/site data and the "real" data that I expose to users. I wanted the "real" database connection to be volatile, so I defined a separate PostgreSQL backend for it and switched the default backend to SQLite.
Now, when trying to access the "real" database through a query, I can easily wrap it with try/except and hand a "Sorry, try again later" message to the user.
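A hedged sketch of that wrapper (Django 1.6+, where OperationalError is re-exported from django.db; the model is made up):

    from django.db import OperationalError
    from myapp.models import Measurement  # hypothetical model living on the 'real' database

    def fetch_real_data():
        try:
            # 'real' is the non-default, sometimes-offline PostgreSQL backend.
            return list(Measurement.objects.using('real').all()[:100])
        except OperationalError:
            return None  # the view shows "Sorry, try again later" when this is None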
| 0 | 0 | 0 | 0 |
2014-01-02T08:01:00.000
| 1 | 0.379949 | false | 20,878,709 | 0 | 0 | 1 | 1 |
I have a Django app that has several database backends, all connected to different PostgreSQL instances. One of them is not guaranteed to always be online; it can even be offline when the application starts up.
Can I somehow configure Django to use lazy connections? I would like to:
Try querying
return "sorry, try again later" if database is offline
or return the results if database is online
Is this possible?
|
ReadOnly field saved with NULL value
| 20,882,835 | 1 | 1 | 2,529 | 0 |
python,openerp,crm
|
There are two sides involved: the client side and the server side. On the server side, defaults are coded, e.g. a float value defaults to 0.0. A read-only field doesn't take a value from the client side, because it's read only. In the view, we see 0.0 for the float value because of that server-side default. If you remove the readonly attribute, the value is taken from the client side, passed to the server and stored in the database. With the readonly attribute, the field gets no value from the client side, and NULL is stored in the database.
Hope this will help you.
| 0 | 0 | 0 | 0 |
2014-01-02T10:03:00.000
| 4 | 0.049958 | false | 20,880,422 | 0 | 0 | 1 | 3 |
On the CRM opportunity form view, I added readonly="1" to the probability field. When I save, whatever the value of the probability, it is stored as NULL.
Is this a bug in OpenERP?
|
ReadOnly field saved with NULL value
| 21,577,500 | 1 | 1 | 2,529 | 0 |
python,openerp,crm
|
In OpenERP, a read-only field is used just to display content; it will not store any data in the database, so it displays a NULL value.
Read-only is for informative purposes only.
| 0 | 0 | 0 | 0 |
2014-01-02T10:03:00.000
| 4 | 0.049958 | false | 20,880,422 | 0 | 0 | 1 | 3 |
On the CRM opportunity form view, I added readonly="1" to the probability field. When I save, whatever the value of the probability, it is stored as NULL.
Is this a bug in OpenERP?
|
ReadOnly field saved with NULL value
| 35,739,858 | 0 | 1 | 2,529 | 0 |
python,openerp,crm
|
Change your probability field to a function field and write a function (e.g. _get_probability), keeping the current probability-calculating logic as it is. The default probability calculation will display the value, and the second function (_get_probability) will save the value.
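As a rough outline in the old OSV API (OpenERP 7-era; treat the exact arguments and the stage-based computation as assumptions that vary between versions):

    from openerp.osv import osv, fields

    class crm_lead(osv.osv):
        _inherit = 'crm.lead'

        def _get_probability(self, cr, uid, ids, field_name, arg, context=None):
            res = {}
            for lead in self.browse(cr, uid, ids, context=context):
                # Reuse whatever logic currently computes the probability.
                res[lead.id] = lead.stage_id.probability or 0.0
            return res

        _columns = {
            'probability': fields.function(_get_probability, type='float',
                                           string='Probability', store=True),
        }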
| 0 | 0 | 0 | 0 |
2014-01-02T10:03:00.000
| 4 | 0 | false | 20,880,422 | 0 | 0 | 1 | 3 |
On the CRM opportunity form view, I added readonly="1" to the probability field. When I save, whatever the value of the probability, it is stored as NULL.
Is this a bug in OpenERP?
|
How to configure Jenkins to run Nosetests as build action in Windows
| 20,882,158 | 1 | 0 | 2,371 | 0 |
python,jenkins,nosetests
|
It's hard to help you when you don't provide more information. What's the error message you get? Check "Console Output" and add it to your question.
It sounds like you're using the build step "Execute shell". On Windows you should use "Execute Windows batch command" instead.
| 0 | 1 | 0 | 0 |
2014-01-02T11:35:00.000
| 1 | 1.2 | true | 20,882,048 | 0 | 0 | 1 | 1 |
I have installed Python 3.3 on Windows. When I run nosetests from the command prompt, giving the absolute path to the Python test scripts folder, it runs. However, when I configure the build shell in Jenkins as 'nosetests path/to/tests --with-xunit', the build fails. I am trying to install a build monitor to see the reasons for the build failure. The build shell has nosetests C:\seltests\RHIS_Tests\ --with-xunit . I did not set the post-build step to nosetests.xml since it rejects that entry.
Thanks.
Sorry, I am adding the console output here:
Building in workspace C:\Program Files\Jenkins\jobs\P1\workspace
Updating svn://godwin:3691/SVNRepo at revision '2014-01-02T18:28:06.781 +0530'
U Allows SQL in text fields.html
At revision 3
[workspace] $ sh -xe C:\WINDOWS\TEMP\hudson796644116462335904.sh
The system cannot find the file specified
FATAL: command execution failed
java.io.IOException: Cannot run program "sh" (in directory "C:\Program Files\Jenkins\jobs\P1\workspace"): CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessBuilder.start(Unknown Source)
at hudson.Proc$LocalProc.<init>(Proc.java:244)
at hudson.Proc$LocalProc.<init>(Proc.java:216)
at hudson.Launcher$LocalLauncher.launch(Launcher.java:773)
at hudson.Launcher$ProcStarter.start(Launcher.java:353)
at hudson.Launcher$ProcStarter.join(Launcher.java:360)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:94)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:63)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:785)
at hudson.model.Build$BuildExecution.build(Build.java:199)
at hudson.model.Build$BuildExecution.doRun(Build.java:160)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:566)
at hudson.model.Run.execute(Run.java:1678)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:231)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessImpl.create(Native Method)
|
Using Java GAE Datastore on Python locally
| 21,276,528 | 0 | 0 | 78 | 0 |
java,python,google-app-engine,google-cloud-datastore
|
Passing --datastore_path=Location/datastore.db worked for me.
| 0 | 1 | 0 | 0 |
2014-01-02T11:42:00.000
| 1 | 0 | false | 20,882,174 | 0 | 0 | 1 | 1 |
How can I use my Java application's Datastore with the Python runtime locally? As the Python environment has a built-in Interactive Console (for custom queries), I want to use my application's Datastore, which currently runs on GAE Java 1.8.2, from another version of the app on GAE Python.
|
Uninstall Django completely
| 36,064,213 | 0 | 18 | 137,290 | 0 |
python,django,python-2.7,pip
|
I used the same method mentioned by @S-T after the pip uninstall command. Even after that I got the message that Django was already installed. So I deleted the 'Django-1.7.6.egg-info' folder from '/usr/lib/python2.7/dist-packages', and then it worked for me.
| 0 | 0 | 0 | 0 |
2014-01-03T06:24:00.000
| 11 | 0 | false | 20,897,851 | 1 | 0 | 1 | 5 |
I uninstalled Django on my machine using pip uninstall Django. It says it was successfully uninstalled, yet when I check the Django version in the Python shell it still gives the older version I installed.
To remove it from the Python path, I deleted the django folder under /usr/local/lib/python-2.7/dist-packages/.
However, the sudo pip search Django | more /^Django command still shows Django as installed. How do I completely remove it?
|
Uninstall Django completely
| 29,770,392 | 0 | 18 | 137,290 | 0 |
python,django,python-2.7,pip
|
Remove any old versions of Django
If you are upgrading your installation of Django from a previous version, you will need to uninstall the old Django version before installing the new version.
If you installed Django using pip or easy_install previously, installing with pip or easy_install again will automatically take care of the old version, so you don’t need to do it yourself.
If you previously installed Django using python setup.py install, uninstalling is as simple as deleting the django directory from your Python site-packages. To find the directory you need to remove, you can run the following at your shell prompt (not the interactive Python prompt):
$ python -c "import django; print(django.path)"
| 0 | 0 | 0 | 0 |
2014-01-03T06:24:00.000
| 11 | 0 | false | 20,897,851 | 1 | 0 | 1 | 5 |
I uninstalled Django on my machine using pip uninstall Django. It says it was successfully uninstalled, yet when I check the Django version in the Python shell it still gives the older version I installed.
To remove it from the Python path, I deleted the django folder under /usr/local/lib/python-2.7/dist-packages/.
However, the sudo pip search Django | more /^Django command still shows Django as installed. How do I completely remove it?
|
Uninstall Django completely
| 57,724,958 | 5 | 18 | 137,290 | 0 |
python,django,python-2.7,pip
|
Open the CMD and use this command:
pip uninstall django
It will be easily uninstalled.
| 0 | 0 | 0 | 0 |
2014-01-03T06:24:00.000
| 11 | 0.090659 | false | 20,897,851 | 1 | 0 | 1 | 5 |
I uninstalled Django on my machine using pip uninstall Django. It says it was successfully uninstalled, yet when I check the Django version in the Python shell it still gives the older version I installed.
To remove it from the Python path, I deleted the django folder under /usr/local/lib/python-2.7/dist-packages/.
However, the sudo pip search Django | more /^Django command still shows Django as installed. How do I completely remove it?
|
Uninstall Django completely
| 33,917,949 | 0 | 18 | 137,290 | 0 |
python,django,python-2.7,pip
|
On Windows, I had this issue with static files cropping up under PyDev/Eclipse with Python 2.7, due to an instance of Django (1.8.7) that had been installed under Cygwin. This caused a conflict between Windows-style paths and Cygwin-style paths, and hence unfindable static files despite all the above fixes. I removed the extra distribution (so that all packages were installed by pip under Windows) and this fixed the issue.
| 0 | 0 | 0 | 0 |
2014-01-03T06:24:00.000
| 11 | 0 | false | 20,897,851 | 1 | 0 | 1 | 5 |
I uninstalled Django on my machine using pip uninstall Django. It says it was successfully uninstalled, yet when I check the Django version in the Python shell it still gives the older version I installed.
To remove it from the Python path, I deleted the django folder under /usr/local/lib/python-2.7/dist-packages/.
However, the sudo pip search Django | more /^Django command still shows Django as installed. How do I completely remove it?
|
Uninstall Django completely
| 31,096,784 | 0 | 18 | 137,290 | 0 |
python,django,python-2.7,pip
|
I had to use pip3 instead of pip in order to get the right packages for the right version of Python (Python 3.4 instead of Python 2.x).
Check what you have installed at:
/usr/local/lib/python3.4/dist-packages
Also, when you run python, you might have to write python3.4 instead of python in order to use the right version of Python.
| 0 | 0 | 0 | 0 |
2014-01-03T06:24:00.000
| 11 | 0 | false | 20,897,851 | 1 | 0 | 1 | 5 |
I uninstalled Django on my machine using pip uninstall Django. It says it was successfully uninstalled, yet when I check the Django version in the Python shell it still gives the older version I installed.
To remove it from the Python path, I deleted the django folder under /usr/local/lib/python-2.7/dist-packages/.
However, the sudo pip search Django | more /^Django command still shows Django as installed. How do I completely remove it?
|
How to Write Python Script in html
| 20,901,453 | 1 | 0 | 152 | 0 |
javascript,html,python-2.7,pyjamas
|
There are more than just security problems; it's just not possible. You can't use the Python socket library inside the client browser. You can convert Python code to JS (probably badly), but you can't use a C-based library that is probably not present on the client. You can access the browser only, and you cannot reliably get the hostname of the client PC. Maybe ask another question describing what you are trying to achieve, and someone might be able to help.
| 0 | 0 | 1 | 0 |
2014-01-03T09:36:00.000
| 1 | 0.197375 | false | 20,900,530 | 0 | 0 | 1 | 1 |
I want to execute my Python code on the client side, even though there might be security problems.
How can I write it, including importing modules and all?
I have tried using pyjs to convert the code below to JS:
import socket
print socket.gethostbyname_ex(socket.gethostname())[2][0]
but I cannot find out how to do it.
Please help me with how I can convert this to JS, how to write other Python scripts, and how to import modules in HTML.
|
How to tell if there's a render error when manually rendering a Django template?
| 20,910,146 | 1 | 1 | 248 | 0 |
python,django,templates
|
Sadly enough, hacking around the problem like that is the best solution available. It should be noted that the setting is called TEMPLATE_STRING_IF_INVALID, by the way.
Alternatively, I would recommend using Jinja2 together with Coffin; that makes debugging a lot easier as well, and Jinja2 actually does give proper stack traces for errors like these.
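If you do stick with the sentinel hack, it can at least be scoped to the manual render so normal template rendering is unaffected; a hedged sketch using override_settings (Django 1.4+):

    from django.template import Context, Template
    from django.test.utils import override_settings

    SENTINEL = '!!INVALID!!'  # arbitrary marker unlikely to appear in real output

    def render_or_fail(template_source, context_dict):
        # Only this render sees the sentinel value; global settings are untouched afterwards.
        with override_settings(TEMPLATE_STRING_IF_INVALID=SENTINEL):
            output = Template(template_source).render(Context(context_dict))
        if SENTINEL in output:
            raise ValueError('template referenced an unknown variable')
        return output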
| 0 | 0 | 0 | 0 |
2014-01-03T17:30:00.000
| 2 | 1.2 | true | 20,909,383 | 0 | 0 | 1 | 1 |
I'm using the Django template system for some form emails, with the templates editable by the end user.
A template's render method returns the rendered content and silently passes over invalid variables and tags. Is there a reasonable way to tell if there was an error during rendering?
I've considered setting settings.TEMPLATE_STRING_IF_INVALID to a unique string and then testing for the presence of this string, but that would affect normal template rendering, which isn't acceptable. I've scanned Django's source code in the hope there is a "render invalid variable/tag" method I can override cleanly to raise an exception, but no such luck.
Any ideas?
|
Strange Error Django Runtime
| 20,915,876 | 0 | 1 | 243 | 0 |
python,django,django-models,centos6
|
I strongly recommend that you use the latest Django release (currently 1.6.1) instead of the development version.
| 0 | 0 | 0 | 0 |
2014-01-04T01:43:00.000
| 1 | 1.2 | true | 20,915,779 | 0 | 0 | 1 | 1 |
I have a VPS with CentOS 6, Python 2.7 and Django 3.0 installed. I have created a new app and corrected my system path, but every time I run the server this is what I get:
RuntimeError: App registry isn't ready yet.
I understand this is already discussed in the Django community, but the information is very brief.
Can someone help me overcome this issue, please?
Many thanks in advance.
|
Python - Unrecognized Arguments: your_module.YourApi
| 20,942,935 | 0 | 0 | 91 | 0 |
android,python,google-app-engine,google-cloud-endpoints
|
I figured out the problem. When executing the command endpointscfg.py get_client_lib java \ -o . your_module.YourApi, make sure you exclude the "\".
endpointscfg.py get_client_lib java -o . your_module.YourApi
| 0 | 0 | 0 | 0 |
2014-01-05T00:27:00.000
| 1 | 1.2 | true | 20,928,511 | 0 | 0 | 1 | 1 |
If you are experiencing issues with an error that says Unrecognized Arguments, then try this solution.
When executing the endpointscfg.py get_client_lib java \ -o . your_module.YourApi make sure to exclude the "\".
This solution worked for me and the .zip file was generated no problem.
New command from the root of the python project endpointscfg.py get_client_lib java -o . your_module.YourApi
|
Bypassing Cloudflare Scrapeshield
| 23,142,928 | 1 | 7 | 4,617 | 0 |
python,selenium,web-scraping,cloudflare
|
What ScrapeShield does is check whether you are using a real browser; it is essentially probing your browser for certain quirks. Say Chrome can't process an IFrame if there is a 303 error on the line at the same time; different web browsers react differently to different tests, so webdriver must not be reacting to these the way a real browser does, causing the system to say "We got an intruder, change the page!". I might be correct, not 100% sure though...
More info on the source:
I found most of this information in a Defcon talk about web sniffers and preventing them from getting proper vulnerability information about the server; the speaker made a web browser identifier in PHP too.
| 0 | 0 | 1 | 0 |
2014-01-05T08:04:00.000
| 1 | 0.197375 | false | 20,931,426 | 0 | 0 | 1 | 1 |
I'm working on a web-scraping project, and I am running into problems with Cloudflare ScrapeShield. Does anyone know how to get around it? I'm using Selenium WebDriver, which is getting redirected to some LightSpeed page by ScrapeShield. It is built with Python on top of Firefox. Browsing normally does not cause the redirect. Is there something that WebDriver does differently from a regular browser?
|
Creating a tastypie resource for a "singleton" non-model object
| 21,005,266 | 1 | 8 | 1,156 | 0 |
python,django,rest,tastypie
|
This sounds like something completely outside of TastyPie's wheelhouse. Why not have a single view somewhere decorated with @require_GET, if you want to control headers, and return an HttpResponse object with the desired payload as application/json?
The fact that your object is a singleton and all other RESTful interactions with it are prohibited suggests that a REST library is the wrong tool for this job.
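For what that view could look like, a hedged sketch (the ini path and section name are made up):

    import json
    from ConfigParser import ConfigParser          # Python 2 stdlib
    from django.http import HttpResponse
    from django.views.decorators.http import require_GET

    @require_GET
    def settings_view(request):
        parser = ConfigParser()
        parser.read('/etc/myapp/settings.ini')     # assumed location
        payload = dict(parser.items('main'))       # assumed [main] section
        return HttpResponse(json.dumps(payload), content_type='application/json')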
| 0 | 0 | 0 | 0 |
2014-01-05T11:51:00.000
| 2 | 0.099668 | false | 20,933,214 | 0 | 0 | 1 | 1 |
I'm using tastypie and I want to create a Resource for a "singleton" non-model object.
For the purposes of this question, let's assume what I want the URL to represent is some system settings that exist in an ini file.
What this means is that...:
The fields I return for this URL will be custom created for this Resource - there is no model that contains this information.
I want a single URL that will return the data, e.g. a GET request on /api/v1/settings.
The returned data should return in a format that is similar to a details URL - i.e., it should not have meta and objects parts. It should just contain the fields from the settings.
It should not be possible to GET a list of such object nor is it possible to perform POST, DELETE or PUT (this part I know how to do, but I'm adding this here for completeness).
Optional: it should play well with tastypie-swagger for API exploration purposes.
I got this to work, but I think my method is kind of ass-backwards, so I want to know what is the common wisdom here. What I tried so far is to override dehydrate and do all the work there. This requires me to override obj_get but leave it empty (which is kind of ugly) and also to remove the need for id in the details url by overriding override_urls.
Is there a better way of doing this?
|
Run web server locally without exposing it publicly?
| 20,941,889 | -1 | 0 | 147 | 0 |
python,nginx
|
The short answer is no.
On a hosting plan, anything you run is reachable from the outside by default, since you yourself have to access it remotely, like everyone else.
You have two options: first, configure the Digital Ocean server to only accept connections to that port from your own IP (or bind the service to 127.0.0.1 so it never listens on a public interface); second, keep using your development server locally until you are ready for prime time.
| 0 | 0 | 0 | 0 |
2014-01-06T02:05:00.000
| 1 | -0.197375 | false | 20,941,829 | 0 | 0 | 1 | 1 |
I am running a dedicated server on Digital Ocean. My site uses Flask on NGINX through Gunicorn. During development I plopped a search engine (solr) on a local VM (through VMWare Fusion) which happens to be running Tomcat. It could have been running any web server per my question. In my app I make all search requests to that local ip: 192.168.1.5. Now, when I install Tomcat on my server and run it you can see it publicly at mysite.com:8080. There's the old welcome screen of Tomcat for the world to see. I want my app to be able to access it locally through localhost:8080 but not show it to the world. Is this possible?
|
Flask assets searching in the wrong directory
| 20,943,669 | 2 | 0 | 1,175 | 0 |
python,flask,flask-assets
|
Flask was incorrectly identifying the location of my static folder; that was the issue. To solve it, I told Flask explicitly where my static folder sits when creating the application object.
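A minimal sketch of that fix, assuming the package layout quoted in the question (the flask.ext.* import style matches the era of this question, flask_assets works too; the absolute path is taken from the question and would normally be derived from __file__ instead):
from flask import Flask
from flask.ext.assets import Environment, Bundle

app = Flask(__name__,
            static_folder='/home/myname/projects/py/myapp/myapp/static')
assets = Environment(app)

css = Bundle('css/lib/somecsslib.css', output='gen/packed.css')
assets.register('css_all', css)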
| 0 | 0 | 0 | 0 |
2014-01-06T04:47:00.000
| 1 | 0.379949 | false | 20,943,169 | 0 | 0 | 1 | 1 |
I am trying to get Flask-Assets to load my assets.
My css is here: /home/myname/projects/py/myapp/myapp/static/css/lib/somecsslib.css
It is by default looking in the wrong directory. I get this:
No such file or directory: '/home/myname/projects/py/myapp/static/css/lib/somecsslib.css'
I am initializing it normally;
assets = Environment(app)
I tried setting the load_path:
assets.load_path = '/home/myname/projects/py/myapp/myapp/static/'
When I do that I get the following error:
BundleError: 'css/lib/somecsslib.css' not found in load path: /home/myname/projects/py/myapp/myapp/static/
EDIT
I just found out that load_path is a list.
I tried this instead:
assets.load_path.append('/home/myname/projects/py/myapp/myapp/static/')
I got this as a result:
BuildError: [Errno 2] No such file or directory: '/css/lib/somecsslib.css'
|
Dynos field is blank after pushing Django app to Heroku
| 20,981,795 | 0 | 0 | 853 | 0 |
python,django,heroku
|
Could it be that you forgot to run heroku ps:scale web=1?
If not, could your Procfile be missing? Your Procfile should be named Procfile (no extension, capital P) and be placed in your project's root. You can check that with heroku run bash, then change into your app's directory and cat Procfile.
Finally, if that's already the case, could your app have failed to start and given up? Are there any other errors in the log?
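For reference, the Procfile from the question is a single line in a file with no extension:
web: python manage.py runserver 0.0.0.0:$PORT --noreload
Commit it and scale the web process; heroku ps should then show a running web dyno:
git add Procfile && git commit -m "Add Procfile" && git push heroku master
heroku ps:scale web=1
heroku ps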
| 0 | 0 | 0 | 0 |
2014-01-06T07:59:00.000
| 2 | 0 | false | 20,945,494 | 0 | 0 | 1 | 1 |
Programming newb, Trying to use Heroku for the first time for a Django app. After I push it to Heroku, the Dynos field is blank. I expected to see my procfile: web: python manage.py runserver 0.0.0.0:$PORT --noreload
Of course, when I try to open the application on Heroku, I get: An error occurred in the application and your page could not be served. Please try again in a few moments.
If you are the application owner, check your logs for details
Could this be because I don't have an extension on my procfile?
My Procfile should just be a file I created in my text editor, right?
Here is the log:
2014-01-06T07:34:17.321925+00:00 heroku[router]: at=error code=H14
desc="No web processes running" method=GET path=/
host=aqueous-dawn-4712.herokuapp.com fwd="98.232.45.58" dyno= connect=
service= status=503 bytes=
2014-01-06T07:34:17.778360+00:00 heroku[router]: at=error code=H14
desc="No web processes running" method=GET path=/favicon.ico
host=aqueous-dawn-4712.herokuapp.com fwd="98.232.45.58" dyno= connect=
service= status=503 bytes=
2014-01-06T07:35:01.608749+00:00 heroku[router]: at=error code=H14
desc="No web processes running" method=GET path=/
host=aqueous-dawn-4712.herokuapp.com fwd="98.232.45.58" dyno= connect=
service= status=503 bytes=
2014-01-06T07:35:01.868486+00:00 heroku[router]: at=error code=H14
desc="No web processes running" method=GET path=/favicon.ico
host=aqueous-dawn-4712.herokuapp.com fwd="98.232.45.58" dyno= connect=
service= status=503 bytes=
2014-01-06T07:46:57.862560+00:00 heroku[router]: at=error code=H14
desc="No web processes running" method=GET path=/
host=aqueous-dawn-4712.herokuapp.com fwd="98.232.45.58" dyno= connect=
service= status=503 bytes=
2014-01-06T07:46:58.114270+00:00 heroku[router]: at=error code=H14
desc="No web processes running" method=GET path=/favicon.ico
host=aqueous-dawn-4712.herokuapp.com fwd="98.232.45.58" dyno= connect=
service= status=503 bytes=
|
Django - replacing string in whole project
| 20,985,948 | 0 | 0 | 375 | 0 |
python,django,string,performance,replace
|
This is more of a project architecture problem.
The best way of doing what I think you want is to:
Create a list of the available string options in the database.
In the user model, create a field like "chosen_string".
When the user selects the option to be used (the string), just update the user model.
Whenever you want to use the string, just do a query (a minimal sketch of this follows below).
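A minimal sketch of that idea; the Profile model, field names and template name are my own placeholders, not something from the question:
from django.contrib.auth.models import User
from django.db import models
from django.shortcuts import render

class Profile(models.Model):
    user = models.OneToOneField(User)
    # the word this user wants to see instead of "object"
    chosen_string = models.CharField(max_length=50, default='object')

def main(request):
    word = request.user.profile.chosen_string
    # main.html then contains: this {{ word }} is very useful
    return render(request, 'main.html', {'word': word})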
| 0 | 0 | 0 | 0 |
2014-01-06T08:37:00.000
| 4 | 0 | false | 20,945,972 | 0 | 0 | 1 | 1 |
Currently I'm creating a quite big project and I need to implement functionality which will replace a string with a string provided by the user. Moreover, each user can have his own custom string. I will give an example for better understanding:
there is a string "object" and user1 wants to change the string "object" to "tree"; in the whole project (all templates etc.) the string "object" is then replaced by "tree".
My ideas are as folllow:
Creating middleware which would replace strings
Creating js plugin
Creating blockreplace(something like blocktrans) which would replace strings only in block ( I would also need to connect it with trans)
Do you have any other ideas which would be better? And which idea for you is the best option?
Examples:
Text in template main.html
...
this object is very useful
...
and every user can personalize site by his custom string
user1 wants "tree" instead of "object"
user2 wants "apple"
user3 wants "grape"
They save their settings and then when they enter main.html they see
user1: this tree is very useful
user2: this apple is very useful
user3: this grape is very useful
hope it helps
|
Unknown command: 'clearsessions'
| 20,947,402 | 7 | 4 | 1,392 | 0 |
python,django,django-admin
|
You should always use manage.py rather than django-admin.py to run any commands that depend on an existing project, as that sets up DJANGO_SETTINGS_MODULE for you.
| 0 | 0 | 0 | 0 |
2014-01-06T09:03:00.000
| 1 | 1 | false | 20,946,366 | 0 | 0 | 1 | 1 |
I am using Django 1.5.1, and according to the Django documentation cleanup is deprecated in this version and clearsessions should be used.
When I try using clearsessions it says unknown command, and when I type django-admin.py help I don't see it listed in the commands; I instead see cleanup listed.
And on using django-admin.py cleanup, I get the following error -
ImproperlyConfigured: Requested setting USE_I18N, but settings are not
configured. You must either define the environment variable
DJANGO_SETTINGS_MODULE or call settings.configure() before accessing
settings.
Any Idea what is causing so.
|
OpenERP IOError: decoder zip not available
| 21,006,017 | 2 | 0 | 2,118 | 0 |
python-2.7,ubuntu,python-imaging-library,openerp
|
You can solve this by uninstalling PIL, but it's a bit like preventing fillings by pulling out your teeth; you solve the immediate problem but...
The IOError you are seeing usually means PIL was built without the zlib ("zip", used for PNG) and/or jpeg decoders. This happens because PIL uses hard-coded library paths and cannot find the development libraries at build time.
To fix (on Ubuntu 12.04):
pip uninstall PIL
sudo apt-get install libjpeg8-dev zlib1g-dev
sudo ln -s /usr/lib/x86_64-linux-gnu/libjpeg.so /usr/lib
sudo ln -s /usr/lib/x86_64-linux-gnu/libz.so /usr/lib
pip install PIL
Note the output at the end of the PIL install; it will tell you which image formats it can now handle.
| 0 | 0 | 0 | 0 |
2014-01-06T11:22:00.000
| 2 | 0.197375 | false | 20,948,960 | 0 | 0 | 1 | 1 |
I have installed openERP 7 multiple times on my Ubuntu 13.04 machine.
I am unable to create a new user in OpenERP 7. When I try to create a new user it shows the message
IOError: decoder zip not available
Unable to post complete output of the Error message.
I have already installed all required python packages. But have not solved it yet.
|
manage.py help has different python path in virtualenv
| 21,575,289 | 0 | 0 | 1,129 | 0 |
python,django,virtualenv,pythonpath,manage.py
|
The problem was solved by adding the project directory to the virtualenv's Python path: add2virtualenv '/home/robert/Vadain/vadain.webservice.curtainconfig/'
| 0 | 1 | 0 | 0 |
2014-01-06T13:02:00.000
| 2 | 1.2 | true | 20,950,640 | 1 | 0 | 1 | 1 |
I have a problem in virtualenv that a wrong python path is imported.
The reason is that by running the command:
manage.py help --pythonpath=/home/robert/Vadain/vadain.webservice.curtainconfig/
The result is right, but when I run plain manage.py help, some imports are missing.
I searched on the internet, but nothing helped. The last change I made was to add the following line at the end of the file virtualenvs/{account}/bin/activate:
export PYTHONPATH=/home/robert/Vadain/vadain.webservice.curtainconfig
But this is not solving the problem; does anybody have another suggestion to fix it?
|
Keep URL while surfing data Structure in Django web app
| 20,954,419 | 0 | 0 | 163 | 0 |
python,django,url,filepath,file-structure
|
I would suggest looking into forms in the Django documentation. When the user submits the form, the appropriate file-structure information will be passed to your view code. The view code can then pass the new file-structure information to your template, the template renders the links/forms for the next level, and the process starts over again.
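Alternatively, a minimal sketch of the approach mentioned in the question, i.e. passing the current directory in the request while keeping a single URL; the path parameter name, the list_entries helper and the template name are my own placeholders:
from django.shortcuts import render

def browse(request):
    # the current location inside the stored file structure travels as a query
    # parameter, so the URL pattern itself never changes
    current_path = request.GET.get('path', '/')
    # remember to validate current_path against the structure stored in your database
    entries = list_entries(current_path)  # hypothetical: fetch the children of this level
    return render(request, 'browse.html',
                  {'entries': entries, 'current_path': current_path})

# in browse.html, each entry links one level deeper without changing the view:
# <a href="?path={{ current_path }}{{ entry.name }}/">{{ entry.name }}</a>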
| 0 | 0 | 0 | 0 |
2014-01-06T16:07:00.000
| 2 | 0 | false | 20,954,090 | 0 | 0 | 1 | 1 |
I am attempting to create a django based website. The goal of one part of the site is to show the contents of a database in reference to its file structure.
I would like to keep the URL the same while traveling deeper into the file structure, as opposed to developing another view for each level of the file structure a person goes into.
The simplest way I can think to achieve this is to pass the current directory path in the request variable but I am not sure how to do this or if it is possible since it would have to be linked in the html file, and not the view file that is written in python.
If you are able to at very least point me in the right direction it would be greatly appreciated!
|
Allow admins to control Django ORM through a view
| 20,961,607 | 0 | 0 | 76 | 0 |
python,django
|
If your admins know SQL, give them phpMyAdmin with read-only privileges.
| 0 | 0 | 0 | 0 |
2014-01-06T22:40:00.000
| 4 | 0 | false | 20,960,667 | 0 | 0 | 1 | 1 |
I want to expose ORM control to my users. I don't mean I want them to add stuff to part of the code, I mean I want to actually allow them to write django code. I only need to allow specific models, and only allow fetching of data (not add or change anything). There'll be like a console and each line will be executed (sort of like ipython notebook), and the returned data (if it's a QuerySet object) will then be displayed in a table of some sort.
This feature will only be available to my super-user admins and so won't be a security concern. What's the best way to do this (if at all possible)?
update
Maybe I should give some background for the intended usage here. See I have built an app that collects and saves statistical information. My users have many filters I have built for them, But they constantly ask for more and more flexibility, sometimes they need to filter on something very specific and it's only a one-time thing, so I can't just keep adding more and more features.
Now my superusers know a little python, and I got the idea that maybe I can give them some sort of way to filter on their own. The idea is they will be able to save queries and name them, then adding those custom filters to a list present on the main site.
The way it would work is they get a QuerySet object containing all objects, which they can filter using a list of pre-determined commands. After running the command, the server will evaluate it, look for errors or forbidden code, and only then will run it. But I'm guessing I can't just use eval() in a production server, now can I? So it there some other way?
|
Serving executable file on App Engine changes file permissions
| 20,971,741 | 4 | 2 | 109 | 0 |
python,google-app-engine,pyinstaller
|
HTTP doesn't carry file permissions, i.e. there is no way to make a downloaded file executable by default.
If your concern is to save users from messing with chmod, you can serve a .tar.gz archive, which does record whether a file is executable or not.
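A minimal sketch of building such an archive before uploading it to your static directory (the file name is taken from the question):
import tarfile

# the tar format stores the Unix mode bits, so the executable flag
# survives the download; users just run: tar xzf my_file.tar.gz
with tarfile.open('my_file.tar.gz', 'w:gz') as archive:
    archive.add('my_file')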
| 0 | 1 | 0 | 0 |
2014-01-07T12:14:00.000
| 1 | 1.2 | true | 20,971,366 | 0 | 0 | 1 | 1 |
I generated a Unix executable with PyInstaller. I then changed the permissions of the file using chmod +x+x+x my_file
-rwxr-xr-x my_file
When I serve that file from mysite.appspot.com/static/filename, I successfully download my app but the file permissions change and it can't be run as an executable anymore.
-rw-r--r my_file_after_being_downloaded
How can I serve my file while keeping its permissions unchanged?
(note that I can confirm that manually chmod-ing this downloaded file does turn it back into a Unix executable, and hence opens with double-click.)
|
Caching dynamic web pages (page may be 99% static but contain some dynamic content)
| 20,981,669 | 4 | 1 | 839 | 0 |
javascript,php,python,ruby-on-rails,caching
|
You can load the page as a static page and then load the small amount of dynamic content using AJAX. Then you can cache the page for as long as you'd like without problems. If the amount of dynamic content or some other aspect keeps you from doing that, you still have several options to improve performance.
If your site is hit very frequently (like several times a second) you can cache the entire dynamically generated page for short intervals, such as a minute or thirty seconds. This will give you a tremendous performance improvement and will likely not be noticeable to the user, if reasonable intervals are used.
For further improvements, consider caching database queries and other portions of the application, even if you do so for short intervals.
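As an illustration of the short-interval idea in Django (one of the tagged frameworks; the 30-second timeout and the view itself are just examples):
from django.shortcuts import render
from django.views.decorators.cache import cache_page

@cache_page(30)  # serve the fully rendered page from cache for 30 seconds
def article(request, article_id):
    # the expensive rendering runs at most once every 30 seconds per URL
    return render(request, 'article.html', {'article_id': article_id})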
| 0 | 0 | 0 | 1 |
2014-01-07T20:46:00.000
| 1 | 1.2 | true | 20,981,545 | 0 | 0 | 1 | 1 |
Having a layer of caching for static web pages is a pretty straight forward concept. On the other hand, most dynamically generated web pages in PHP, Python, Ruby, etc. use templates that are static and there's just a small portion of dynamic content. If I have a page that's hit very frequently and that's 99% static, can I still benefit from caching when that 1% of dynamic content is specific to each user that views the page? I feel as though there are two different versions of the same problem.
Content that is static for a user's entire session, such as a static top bar that's shown on each and every page (e.g. top bar on a site like Facebook that may contain a user's picture and name). Can this user specific information be cached locally in Javascript to prevent needing to request this same information for each and every page load?
Pages that are 99% static and that contain 1% of dynamic content that is mostly unique for a given viewer and differs from page to page (e.g. a page that only differs by indicating whether the user 'likes' some of the content on the page via a thumbs up icon. So most of the content is static except for the few 'thumbs up' icons for certain items on the page).
I appreciate any insight into this.
|
Increase memory in Pydev using run configurations
| 21,041,901 | 6 | 2 | 2,095 | 0 |
python,eclipse,out-of-memory,pydev
|
Python requires no such flag (so, not really PyDev related).
Python (unlike Java) will happily use all the memory you have available in your computer, so in this case your algorithm is really using up all the memory it can.
Note that if you are running a Python which is compiled in 32 bits, the max memory you'll have for the process is 2GB. If you need more memory (and have it available in your computer), you need to use a 64-bit compiled Python (usually marked as x86_64).
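A quick way to check which kind of interpreter PyDev is actually running (paste into the interactive console or a scratch script):
import struct
import sys

print('%d-bit interpreter' % (struct.calcsize('P') * 8))
print(sys.maxsize > 2 ** 32)  # True on a 64-bit build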
| 0 | 0 | 0 | 0 |
2014-01-07T23:04:00.000
| 1 | 1.2 | true | 20,983,858 | 1 | 0 | 1 | 1 |
I'm working on indexing system and I need so much of ram, as I know in java we can pass some parameter to JVM to increase the heap size, but in python I couldn't figure out it how, and every time I run my application I get MemoryError after indexing ten thousands documents.
|
How can I scroll a web page using selenium webdriver in python?
| 65,731,313 | 2 | 209 | 415,578 | 0 |
python,selenium,selenium-webdriver,automated-tests
|
Insert this line: driver.execute_script("window.scrollBy(0, 925)")
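For pages that keep loading content as you scroll (like the friends page in the question), a common pattern is to scroll to the bottom repeatedly until the page height stops growing; the 2-second pause is an arbitrary choice and driver is assumed to be an existing WebDriver instance:
import time

last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)  # give the AJAX content time to load
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height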
| 0 | 0 | 1 | 0 |
2014-01-08T03:44:00.000
| 21 | 0.019045 | false | 20,986,631 | 0 | 0 | 1 | 1 |
I am currently using selenium webdriver to parse through facebook user friends page and extract all ids from the AJAX script. But I need to scroll down to get all the friends. How can I scroll down in Selenium. I am using python.
|
TwistedWeb: Custom 404 Not Found pages
| 20,999,393 | 1 | 4 | 396 | 0 |
python,twisted,twisted.web
|
There is no API in Twisted Web like something.set404(someResource). A NOT FOUND response is generated as the default when resource traversal reaches a point where the next child does not exist - as indicated by the next IResource.getChildWithDefault call. Depending on how your application is structured, this means you may want to have your own base class implementing IResource which creates your custom NOT FOUND resource for all of its subclasses (or, better, make a wrapper since composition is better than inheritance).
If you read the implementation of twisted.web.resource.Resource.getChild you'll see where the default NOT FOUND behavior comes from and maybe get an idea of how to create your own similar behavior with different content.
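A minimal, self-contained sketch of that idea; the resource names and JSON payloads are my own, and the default HTML NOT FOUND page is simply replaced by a JSON resource returned from getChild:
import json

from twisted.internet import reactor
from twisted.web import resource, server

class JsonNotFound(resource.Resource):
    isLeaf = True

    def render(self, request):
        request.setResponseCode(404)
        request.setHeader('Content-Type', 'application/json')
        return json.dumps({'error': 'not found'})

class ApiRoot(resource.Resource):
    def getChild(self, path, request):
        # any child that was not registered with putChild ends up here,
        # so unknown URLs get a JSON body instead of the default HTML page
        return JsonNotFound()

    def render_GET(self, request):
        request.setHeader('Content-Type', 'application/json')
        return json.dumps({'status': 'ok'})

reactor.listenTCP(8080, server.Site(ApiRoot()))
reactor.run()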
| 0 | 0 | 1 | 0 |
2014-01-08T05:12:00.000
| 1 | 1.2 | true | 20,987,496 | 0 | 0 | 1 | 1 |
I am quite surprised I couldn't find anything on this in my Google searching.
I'm using TwistedWeb to make a simple JSON HTTP API. I'd like to customize the 404 page so it returns something in JSON rather than the default HTML. How might I do this?
|
Python or Java as Backend Language in Google App engine?
| 20,999,802 | 3 | 1 | 4,232 | 0 |
java,android,python,google-app-engine
|
You can really go with either, to be honest, and use whatever suits your style.
When I started using App Engine, I was Java all the way. I recently switched to Python and love it too!
If you have a lot of existing java dependencies, such as libraries etc. that you want to continue using, then stick with it. Otherwise, it's worth dipping your toe in the Python waters.
| 0 | 1 | 0 | 0 |
2014-01-08T15:14:00.000
| 2 | 1.2 | true | 20,999,456 | 0 | 0 | 1 | 2 |
I am developing an application in Android using Google App engine and Google Compute Engine as backend .
I have followed the Google's demo code in python as base for my application.
Now I have a question in my mind: since I am more familiar with Java than Python, and also considering that Google supports Python more than Java in most of its demo code, should I change my GAE backend language to Java?
Or should I stick with Python and hope that I will come around to it eventually?
Any suggestions are appreciated. Thanks
|
Python or Java as Backend Language in Google App engine?
| 20,999,591 | 4 | 1 | 4,232 | 0 |
java,android,python,google-app-engine
|
Here are some points to consider:
Both Python and Java are capable languages and App Engine Services are available to a large extent in both the environments.
You should use the environment that you are most comfortable with. This will help when debugging issues on the Server side. I would go with the language that I am most familiar with in case the application is critical, is on a tight deadline, etc. If you are learning the environment and have the time, it is great to look at a new language.
Since you are writing an Android application that is interacting with your Server side application in App Engine, one assumes that you would be exposing this functionality over Web Services. Both Python and Java environments are capable of hosting Web Services. In fact, with Google Cloud Endpoints, you should be able to even generate client side bindings (client libraries) for Android that integrate easily.
| 0 | 1 | 0 | 0 |
2014-01-08T15:14:00.000
| 2 | 0.379949 | false | 20,999,456 | 0 | 0 | 1 | 2 |
I am developing an application in Android using Google App engine and Google Compute Engine as backend .
I have followed the Google's demo code in python as base for my application.
Now I have a question in my mind: since I am more familiar with Java than Python, and also considering that Google supports Python more than Java in most of its demo code, should I change my GAE backend language to Java?
Or should I stick with Python and hope that I will come around to it eventually?
Any suggestions are appreciated. Thanks
|
Trying to download html pages to create a very simple web crawler
| 21,000,334 | 1 | 1 | 162 | 0 |
python,html,regex,web-crawler
|
I think the best way to do this would be to create some sort of mapping file. The file would map the original URL on the BBC site => the path to the file on your machine. You could generate this file very easily during the process when you are scraping the links from the homepage. Then, when you want to crawl this site offline you can simply iterate over this document and visit the local file paths. Alternatively you could crawl over the original homepage and do a search for the links in the mapping file and find out what file they lead to.
There are some clear downsides to this approach, the most obvious being that changing the directory structure/filenames of the downloaded pages will break your crawl...
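A minimal sketch of the mapping idea; the example URLs follow the question's description, the sub0.html-style names mirror the naming scheme mentioned there, and plain string replacement is just one simple way to rewrite the links:
import json

# built while downloading: original URL -> local file name (sub0.html, sub1.html, ...)
url_to_file = {
    'http://www.bbc.co.uk/news/world': 'sub0.html',
    'http://www.bbc.co.uk/sport': 'sub1.html',
}
with open('url_map.json', 'w') as f:
    json.dump(url_to_file, f)

# later: rewrite the saved homepage so every link points at the local copy
with open('homepage.html') as f:
    html = f.read()
for url, local_name in url_to_file.items():
    html = html.replace('href="%s"' % url, 'href="%s"' % local_name)
with open('homepage_offline.html', 'w') as f:
    f.write(html)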
| 0 | 0 | 1 | 0 |
2014-01-08T15:39:00.000
| 1 | 0.197375 | false | 21,000,038 | 0 | 0 | 1 | 1 |
I'm new to working with html pages on python.
I'm trying to run the BBC site offline from my PC, and I wrote a python code for that.
I've already made functions that download all html pages on the site, by going through the links found on homepage (with regex).
I have all links on a local directory, but they are all called sub0,sub1,sub2.
How can I edit the homepage so it would direct all links to the html pages on my directory instead of the pages online?
again, the pages aren't called in their original name-
so replacing the domain with a local directory won't work.
I need a way to go through all links on main page and change their whole path.
|
s3cmd tool on Windows server with progress support
| 21,165,278 | 2 | 1 | 701 | 1 |
python,windows,progress-bar,progress,s3cmd
|
OK, I have found a decent workaround for this:
Just navigate to C:\Python27\Scripts\s3cmd and comment out lines 1837-1845.
This essentially skips a Windows-only check, and progress is then printed on the cmd.
However, since it works normally once the check is removed, I have no clue why the authors put it there in the first place.
Cheers.
| 0 | 0 | 1 | 0 |
2014-01-09T10:38:00.000
| 2 | 1.2 | true | 21,017,853 | 0 | 0 | 1 | 1 |
As the title suggests, I am using the s3cmd tool to upload/download files on Amazon.
However I have to use Windows Server and bring in some sort of progress reporting.
The problem is that on windows, s3cmd gives me the following error:
ERROR: Option --progress is not yet supported on MS Windows platform. Assuming -
-no-progress.
Now, I need this --progress option.
Are there any workarounds for that? Or maybe some other tool?
Thanks.
|
Sphinx: force rebuild of html, including autodoc
| 22,141,973 | 7 | 29 | 12,474 | 0 |
python,python-sphinx,sphinx-apidoc
|
I do not use sphinx-build, but with make html I always do touch *.rst on my source files first. Then make html can pick up the changes.
| 0 | 0 | 0 | 0 |
2014-01-09T11:53:00.000
| 4 | 1 | false | 21,019,505 | 1 | 0 | 1 | 1 |
Currently, whenever I run sphinx-build, only when there are changes to the source files are the inline docstrings picked up and used. I've tried calling sphinx-build with the -a switch but this seems to have no effect.
How can I force a full rebuild of the HTML output and force autodoc execution?
|
Verify file authenticity from Flash client without revealing key
| 21,033,015 | 2 | 1 | 68 | 0 |
python,flash,security,hash,digital-signature
|
Is there another way that I can verify a file's signature on the client side, without exposing the method used to create that signature?
Public key crypto. You have only a public key at the client end, and require the private key on the server side to generate a signature for it to verify.
What is the attack you're trying to prevent? If you are concerned about a man-in-the-middle attack between an innocent user and your server, the sensible choice would be TLS (HTTPS). This is a pre-cooked, known-good implementation including public key cryptography. It's far preferable to rolling your own crypto, which is very easy to get wrong.
| 0 | 0 | 0 | 0 |
2014-01-09T12:38:00.000
| 1 | 1.2 | true | 21,020,507 | 0 | 0 | 1 | 1 |
I'm building a Flash application to run on the web, where users can visit and create their own content in conjunction with my service (built with Python). Specifically: the user sends in some data; some transformation is performed on the server; then the finished content is sent back to the user, where it is rendered by the client app.
I want to be able to prevent the client from rendering bogus content, which I can do by passing a keyed hash along with the main content, generated by the server. The client would then use the same key to hash the content once again, and confirm that the hashes/signatures match. If there's a mismatch, it can be assumed that the content is inauthentic.
The problem I have is that keeping the key inside the SWF is insecure. I've considered a number of ways to obfuscate the key, but am learning that if an attacker wants it, they can get it quite easily. Once an attacker has that, they can start creating their own content to be unknowingly accepted by the client.
Is there another way that I can verify a file's signature on the client side, without exposing the method used to create that signature?
|
Git pre-pushed object on remote server? git ls-tree
| 21,030,409 | 0 | 0 | 146 | 0 |
python,git,bash
|
java code formatter as a pre-receive hook
Don't do it. You're trying to run the equivalent of git filter-branch behind your developer's back. Don't do it.
Is there any other way of doing this?
If you want inbound code formatted in a particular way, validate the inbound files. If any aren't done right list them and reject the push.
How to get that object on a remote server?
You can't fetch arbitrary objects, you can only fetch by ref (branch or tag) name. The pre-receive hook runs before any refs have been updated, so no ref names the inbound commits.
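A minimal sketch of validating inbound files directly in a pre-receive hook, where the pushed objects are already available so no extra fetch is needed; check_format is a hypothetical validator, and note that Stash normally wants hooks installed through its own plugin mechanism, so treat this purely as an illustration of the idea:
#!/usr/bin/env python
import subprocess
import sys

def check_format(contents):
    # hypothetical validator: return True if the Java source is formatted correctly
    return True

bad = []
for line in sys.stdin:
    oldrev, newrev, refname = line.split()
    # note: oldrev is all zeros for a brand-new branch; handle that case separately
    changed = subprocess.check_output(
        ['git', 'diff', '--name-only', oldrev, newrev]).splitlines()
    for path in changed:
        if not path.endswith('.java'):
            continue
        contents = subprocess.check_output(['git', 'show', '%s:%s' % (newrev, path)])
        if not check_format(contents):
            bad.append(path)

if bad:
    sys.stderr.write('Badly formatted files: %s\n' % ', '.join(bad))
    sys.exit(1)  # a non-zero exit rejects the push
sys.exit(0)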
| 0 | 1 | 0 | 0 |
2014-01-09T18:59:00.000
| 1 | 0 | false | 21,028,845 | 0 | 0 | 1 | 1 |
I have a Atlassian Stash server for git.
I am looking to write a script that will run java code formatter as a pre-receive hook (before it pushes the changes to the repository).
So, what I am looking to do is NOT to do the work on the stash server itself rather perform the work on another server and send the status back (0 or 1) to the Stash server.
I have written a script in Python that calls a CGI (Python) script on the remote server with "ref oldrev newrev" via an HTTP GET. Once I have the STDIN values (ref oldrev newrev) on the remote server, I created a directory, ran git init, git remote add origin URL, and git fetch (I even tried git pull) to get the latest contents/objects of the repository, hoping to get the object that has not been pushed to the repository yet but is sitting in a pre-push staging state.
The hash or SHA key or "newrev" key of the object that is in the pre-pushed stage: 36ac63fe7b15049c132c310e1ee153e044b236b7
Now, when I run 'git ls-tree 36ac63fe7b15049c132c310e1ee153e044b236b7 Test.java' inside the directory I created above, it gives me error.
'fatal: not a tree object'
Now, My questions are:
How to get that object on a remote server?
What might be the git command that I run that will give me that object in that stage?
Is there any other way of doing this?
Does it make any sense of what I've asked above. Let me know if I am not clear and I will try to clear things up more.
Thanks very much in advanced for any/all the help?
|
Web2py postgreSQL database
| 21,050,586 | 0 | 0 | 535 | 1 |
python,web2py
|
fake_migrate_all doesn't do any actual migration (hence the "fake") -- it just makes sure the metadata in the .table files matches the current set of table definitions (and therefore the actual database, assuming the table definitions in fact match the database).
If you want to do an actual migration of the database, then you need to make sure you do not have migrate_enabled=False in the call to DAL(), nor migrate=False in the relevant db.define_table() calls. Unless you explicitly set those to false, migrations are enabled by default.
Always a good idea to back up your database before doing a migration.
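A minimal sketch of a real (non-fake) migration in a web2py model file, where DAL and Field are available as globals; the connection string, table and field names are placeholders:
db = DAL('postgres://user:password@localhost/mydb', migrate_enabled=True)

db.define_table('mytable',
    Field('existing_field'),
    Field('new_field_one'),
    Field('new_field_two'),
    migrate=True)  # let web2py ALTER the table to add the new fields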
| 0 | 0 | 0 | 0 |
2014-01-10T13:55:00.000
| 1 | 0 | false | 21,046,136 | 0 | 0 | 1 | 1 |
Recently I have been working with web2py and PostgreSQL. I made a few changes to my table, adding new fields with fake_migration_all = true. It does update my .table file, but the two newly added fields were not altered/added in the PostgreSQL database table. I also tried fake_migration_all = false and deleted my .table file, but that still didn't help to alter my table.
Is there a better solution so that I don't have to drop my data table? The fields should be altered/added in my table without my data being lost.
|
Deploying Django - Most static files are appearing except images
| 21,052,730 | 0 | 0 | 106 | 0 |
python,django,deployment,amazon-s3
|
It's probably impossible to give an accurate assessment with the limited info on your setup. If your CSS files are working, what folder are they sitting in on your server?
Why not put the images folder in the same directory and point the MEDIA_URL (or STATIC_URL) setting in your settings.py file at that directory?
In your browser, check a broken image's full path and compare it to a working CSS file's path: where do they point, does that directory actually exist on your server or bucket, and do you receive an Access Denied error if you put the image URL directly into your browser?
| 0 | 0 | 0 | 0 |
2014-01-10T18:29:00.000
| 1 | 0 | false | 21,051,731 | 0 | 0 | 1 | 1 |
Most of my static files on my newly deployed Django website are working (CSS), but not the images. All the images are broken links for some reason and I cannot figure out why. I am serving my static files via Amazon AWS S3.
I believe all my settings are configured correctly as the collectstatic command works (and the css styling sheets are up on the web). What could be the problem?
|
App Engine dev server: bad runtime process port [''] No module named google.appengine.dist27.threading
| 22,256,340 | 1 | 4 | 1,955 | 0 |
python,google-app-engine
|
A recent upgrade of the development SDK started causing this problem for me. After much turmoil, I found that the problem was that the SDK was in a sub-directory of my project code. When I ran the SDK from a different (parent) directory the error went away.
| 0 | 1 | 0 | 0 |
2014-01-10T19:10:00.000
| 2 | 0.099668 | false | 21,052,461 | 0 | 0 | 1 | 1 |
When I try to run any of my App Engine projects with the Python GoogleAppEngineLauncher,
I get the error log below.
Does anyone have any idea what's going on?
I tried removing the SDK and reinstalling it; nothing changed, I still get the same error.
Everything was working fine before, and I don't think I made any changes before this happened.
The only thing I can think of is that I installed the bigquery command line tool before this happened, but I don't think that should be the reason.
bad runtime process port ['']
Traceback (most recent call last):
  File "/Users/txzhang/Documents/App/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/_python_runtime.py", line 197, in <module>
    _run_file(__file__, globals())
  File "/Users/txzhang/Documents/App/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/_python_runtime.py", line 193, in _run_file
    execfile(script_path, globals_)
  File "/Users/txzhang/Documents/App/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/python/runtime.py", line 175, in <module>
    main()
  File "/Users/txzhang/Documents/App/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/python/runtime.py", line 153, in main
    sandbox.enable_sandbox(config)
  File "/Users/txzhang/Documents/App/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/python/sandbox.py", line 159, in enable_sandbox
    __import__('%s.threading' % dist27.__name__)
  File "/Users/txzhang/Documents/App/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/python/sandbox.py", line 903, in load_module
    raise ImportError('No module named %s' % fullname)
ImportError: No module named google.appengine.dist27.threading
|
Handle Redirects one by one with scrapy
| 21,065,555 | 0 | 0 | 302 | 0 |
redirect,python-2.7,scrapy,http-post
|
I found my own solution to this problem. Instead of building a list of requests and returning them all at once, I build a chain of them and pass the next one along inside each request's meta data.
Inside the callback I either return the next request (storing the parsed item in a spider member), or return the parsed list of items if there is no next request to execute (a minimal sketch follows).
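A sketch of that chaining idea; the URLs, attribute names and the two helper methods are my own placeholders, and dont_filter is set because every redirected response lands on the same URL:
import scrapy
from scrapy.http import FormRequest

class ChainSpider(scrapy.Spider):
    name = 'chain'
    start_urls = ['http://example.com/form-page']
    post_url = 'http://example.com/post-endpoint'

    def parse(self, response):
        payloads = self.build_form_payloads(response)  # hypothetical: list of formdata dicts
        self.collected = []
        return self.make_request(payloads)

    def make_request(self, remaining):
        return FormRequest(self.post_url, formdata=remaining[0],
                           meta={'remaining': remaining[1:]},
                           callback=self.parse_result, dont_filter=True)

    def parse_result(self, response):
        self.collected.append(self.extract_item(response))  # hypothetical extractor
        remaining = response.meta['remaining']
        if remaining:
            return self.make_request(remaining)
        return self.collected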
| 0 | 0 | 1 | 0 |
2014-01-11T16:01:00.000
| 1 | 1.2 | true | 21,064,467 | 0 | 0 | 1 | 1 |
I've written a spider which has one start_url. The parse method of my spider scrapes some data and returns a list of FormRequests.
The problem comes with the responses to those POST requests. Each one redirects me to another site with some irrelevant GET parameters. The only parameter that seems to matter is a SESSION_ID posted along in the header. Unfortunately Scrapy's behavior is to execute my requests one after another and queue the redirect responses at the end of the queue. Once all the returned FormRequests are executed, Scrapy starts to execute all the redirects, which then all return the same site.
How can I circumvent this behavior, so that a FormRequest is executed and the redirect returned in that request's response is followed before any new FormRequest? Maybe there is another way, like somehow forcing the site to issue a new SESSION_ID cookie for each FormRequest. I'm open to any idea that could solve the problem.
|
DatastoreInputReader using entity kind with ancestor
| 21,069,451 | 0 | 0 | 57 | 0 |
python,google-app-engine
|
You cannot specify an ancestor for the DatastoreInputReader -- except for a namespace -- so the pipeline will always go through all your Domain entities in a given namespace.
| 0 | 1 | 0 | 0 |
2014-01-11T21:48:00.000
| 1 | 1.2 | true | 21,068,311 | 0 | 0 | 1 | 1 |
Is there a way to use the standard DatastoreInputReader from AppEngine's mapreduce with entity kind requiring ancestors ?
Let's say I have an entity kind Domain with ancestor kind SuperDomain (useful for transactions), where do I specify in mapreduce_pipeline.MapreducePipeline how to use a specific SuperDomain entity as ancestor to all queries?
|
Background thread behind django project
| 21,082,317 | 0 | 0 | 267 | 0 |
python,django,multithreading,background
|
Celery is good if you have tasks that need to run in the background, for example interaction with web workers (sending emails, massive updates in stores, etc.), or parallel tasks where one master worker sends tasks to a Celery server (or servers).
In your case, I think the better solution is:
Create one daemon which talks to your serial port in an infinite loop and saves the data somewhere (see the sketch after this list).
Web workers then read this data and present it to the user.
If you later need something like long queries with heavy calculations for users, you can add Celery to your stack; Celery workers would then act like extra web workers that just read data and return results to the web workers.
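A minimal sketch of such a daemon using pyserial together with the Django ORM outside the web process; the settings module name, serial device, baud rate and the Reading model are my own placeholders, and on Django 1.7+ you would also call django.setup():
import os
import time

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

import serial  # pyserial
from myapp.models import Reading  # hypothetical model that stores collected data

port = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)
while True:
    line = port.readline().strip()
    if line:
        Reading.objects.create(raw_value=line)
    time.sleep(0.1)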
| 0 | 0 | 0 | 0 |
2014-01-12T06:08:00.000
| 1 | 0 | false | 21,071,853 | 0 | 0 | 1 | 1 |
I never worked on web application/service side and not sure if this is the right way for my work:
I have data collection system collecting data from serial port, and also want to present the data to user using web service. I'm thinking of creating a Django project to show my data on website. Also, to collecting the data, I need some background thread running when the website started. I'm trying to re-use the models defined in my django project in the data collecting thread.
First, I'd like to know if this is a reasonable design? If yes, is there any easy way to do that? I saw a lot topics about background tasks using celery but those are very complicate scenarios. Isn't there an easy way for this?
|
Django fandjango migration 4.2
| 21,109,766 | 1 | 0 | 547 | 0 |
python,django,facebook,fandjango
|
Ok, I get it. The problem was with the MySQL database. The new version added a JSON field, extra_data. MySQL left it as a text field with a NULL value, but fandjango wants empty JSON, not NULL. I updated the extra_data field with '{}' and it worked.
Now I have a standard problem: "The mobile version of the app is unavailable because it is misconfigured for mobile access."
It is the same as it was earlier, before the new version.
Now I will try to figure out what this is. :)
| 0 | 0 | 0 | 0 |
2014-01-12T16:11:00.000
| 1 | 0.197375 | false | 21,076,983 | 0 | 0 | 1 | 1 |
After migration fandjango to version 4.2., I've got an error when I access my facebook application:
Exception Value: [u'Enter valid JSON']
Exception Location: /usr/local/lib/python2.7/dist-packages/jsonfield/fields.py in pre_init, line 77
Trace (from the Django debug page):
/usr/local/lib/python2.7/dist-packages/jsonfield/subclassing.py in __set__
    obj.__dict__[self.field.name] = self.field.pre_init(value, obj)
    Local vars: self = <jsonfield.subclassing.Creator object at 0x2a5c750>, obj = <User: My User>, value = u''
/usr/local/lib/python2.7/dist-packages/jsonfield/fields.py in pre_init
    raise ValidationError(_("Enter valid JSON"))
    Local vars: self = <jsonfield.fields.JSONField: extra_data>, obj = <User: My User>, value = u''
I upgraded fandjango using pip install --upgrade fandjango and python manage.py migrate fandjango.
There were other problems:
- No module named jsonfield, so I installed it using pip
- No module named dateutil.tz, so I installed it as well
- It also asked for the property DJANGO_SITE_URL, which was not defined in the settings object; I added it to the settings file, although I didn't find any documentation about this property.
So now I am trying to figure out what else is needed.
|
Integration between a Python script and Java program
| 21,082,222 | 2 | 1 | 69 | 0 |
java,python
|
I'm not sure if this is even possible, but is there any way to keep the python script in a state where it wouldn't have to completely re-run from the start every single time?
The correct and most obvious way to do this is to re-implement (if you can) the Python script and turn it into some kind of remote service, accessed through some kind of interface:
Examples:
Web Service over JSON
Web Service over RPC, JSON-RPC, XML-RPC
You would then access the service(s) remotely over a network connection from your Java program, serializing the parameters passed to the Python program and the results sent back to Java via something both sides can speak easily, e.g. JSON.
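A minimal sketch of the Python side as a long-running JSON-over-HTTP service that builds the expensive object once at startup; Flask and all the names here are my own choices for illustration, not something from the question:
from flask import Flask, jsonify, request

class ExpensiveObject(object):
    """Stands in for the slow imports/object construction mentioned in the question."""
    def process(self, value):
        return value.upper()

app = Flask(__name__)
heavy = ExpensiveObject()  # built once when the process starts, then reused

@app.route('/compute')
def compute():
    value = request.args.get('value', '')
    return jsonify(result=heavy.process(value))

if __name__ == '__main__':
    app.run(port=5000)
The Java program then just issues HTTP requests to http://localhost:5000/compute?value=... and parses the JSON response, so the Python process never has to restart or rebuild the object.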
| 0 | 0 | 0 | 1 |
2014-01-13T00:31:00.000
| 2 | 0.197375 | false | 21,082,196 | 0 | 0 | 1 | 1 |
I fiddled around with calling a python script from a Java program for a little while and was finally able to get it working. However, When I called it I noticed that there is a certain call in the python script that creates an object that takes a couple of seconds (which is longer than I'd like). So in an essence every time the script runs it has to re-import a few libraries and create a new object. I'm not sure if this is even possible, but is there any way to keep the python script in a state where it wouldn't have to completely re-run from the start every single time?
Any help would be greatly appreciated. I do not have much experience with the integration of programs with different languages.
Thank you very much!!! Any suggestions are welcome.
|
Android automation using APPIUM framework
| 21,126,944 | 1 | 0 | 751 | 0 |
python,android-testing,appium
|
Appium for Android is based on the UIAutomator framework. Selendroid is based on instrumentation.
There are no drawbacks to using python, Appium works with all languages with Selenium/WebDriver bindings which includes python, node.js, objective-c, java, c#, ruby, and more.
| 0 | 0 | 0 | 1 |
2014-01-13T08:37:00.000
| 2 | 0.099668 | false | 21,086,872 | 0 | 0 | 1 | 2 |
Somebody said to me 'python does not do automation for android app, as the python stack does not exist in android OS'.
Is it true?
Is Appium based on Android instrumentation framework?
Are there any drawbacks of using Python for writing my test cases? Should I use some other language?
|
Android automation using APPIUM framework
| 36,628,768 | 0 | 0 | 751 | 0 |
python,android-testing,appium
|
I believe Appium does not have any drawbacks if Python is used. I suggest using Java, as a lot of examples and Q&A can be found on the web easily.
| 0 | 0 | 0 | 1 |
2014-01-13T08:37:00.000
| 2 | 0 | false | 21,086,872 | 0 | 0 | 1 | 2 |
Somebody said to me 'python does not do automation for android app, as the python stack does not exist in android OS'.
Is it true?
Is Appium based on Android instrumentation framework?
Are there any drawbacks of using Python for writing my test cases? Should I use some other language?
|
Liclipse/Eclipse: setup debugging environment for a django project alongwith its virtualenv
| 21,090,721 | 1 | 1 | 3,362 | 0 |
python,django,eclipse,debugging,pydev
|
[Update]
The error got resolved after setting the Python environment via:
Right click on the project in Project Explorer -> PyDev -> Source PyDev Project Config
Project Explorer -> Properties -> PyDev Interpreter
Project Explorer -> Properties -> PyDev PYTHONPATH
There, add the exact path within the virtualenv where the Python site-packages are installed.
After this, one also needs to fill in two fields under PyDev - Django:
Django manage.py = your manage.py file
Django settings module = settings.local or whichever is your settings file
Hope it helps.
I am able to run the Django server from Eclipse but still not able to make the code stop at a breakpoint. :(
| 0 | 0 | 0 | 0 |
2014-01-13T10:54:00.000
| 1 | 1.2 | true | 21,089,507 | 0 | 0 | 1 | 1 |
I am looking for a clearly written set of steps to import an existing django project stored in a GIT repository into Liclipse (Eclipse configured for python) configured using virtualenv and running successfully.
I used File->Import to import an existing project from its top level directory /home/comiventor/ProjectXYZ/ containing .git
Now when I run ProjectXYZ->Django-> Sync DB (manage.py syncdb)
It says "pydev nature is not properly set"
I could not derive much help on this error from any other source. :(
[Update]
I am able to run the django server from eclipse (steps in my answer below) but still not able to make the code stop at breakpoint. :(
|
How should I schedule my task in django
| 21,097,588 | 0 | 0 | 114 | 0 |
python,django,scheduled-tasks
|
Basically you can use Celery's periodic tasks with the expires option, which makes sure your tasks will not pile up and be executed twice.
Alternatively, you could run your own script with an infinite loop which runs the collection. If a single collection run takes more than a minute you can spawn the work using eventlet or gevent. As another option, you could have that script create Celery tasks and be sure that your tasks execute every N seconds, as you prefer (a sketch of the periodic-task configuration follows).
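A minimal sketch of the periodic-task approach with celery beat, using Celery 3.x style settings; the task path is a hypothetical name and the expires value is just an example:
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'collect-remote-data': {
        'task': 'monitoring.tasks.collect_data',  # hypothetical task that pulls from one server
        'schedule': timedelta(seconds=30),
        'options': {'expires': 25},  # skip a run that has not started before the next one is due
    },
}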
| 0 | 1 | 0 | 0 |
2014-01-13T11:39:00.000
| 1 | 0 | false | 21,090,365 | 0 | 0 | 1 | 1 |
In my django project, I need to collect data from about 50 remote servers into the local database minutely or every 30-seconds. Though it works with crontab in the remote servers, I want to do this in the project. Firstly, I consider the django-celery. However it does well in asynchronous processing and the collect-data task could not be delayed. Therefore i think, it may be not fit. How if i do this use the timer for python and what need i to pay more attention. Excuse for my ignorance of python and django. I'll appreciate other advice or ideas. Many thanks
|
How can one assert in Django that a model field has already been populated from the DB?
| 21,092,602 | 6 | 6 | 1,129 | 0 |
python,django,django-models
|
In the particular case of a ForeignKey, you can check the existence of the _FOO_cache attribute. For instance, if your Employee object has a ForeignKey to Company, then if my_employee.company is populated then my_employee._company_cache will exist, so you can do hasattr(my_employee, '_company_cache').
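In a test this pairs naturally with assertNumQueries to prove no extra query is triggered; Employee, Company and the fixture values are hypothetical names continuing the example above:
from django.test import TestCase

from myapp.models import Company, Employee  # hypothetical models

class EmployeePrefetchTest(TestCase):
    def test_company_is_already_loaded(self):
        company = Company.objects.create(name='Acme')
        Employee.objects.create(name='Jane', company=company)

        employee = Employee.objects.select_related('company').get(name='Jane')
        self.assertTrue(hasattr(employee, '_company_cache'))
        with self.assertNumQueries(0):
            self.assertEqual(employee.company.name, 'Acme')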
| 0 | 0 | 0 | 0 |
2014-01-13T13:07:00.000
| 2 | 1.2 | true | 21,092,110 | 0 | 0 | 1 | 1 |
In Django, is there an easy way to test that a model field on an object has already been queried from the database (e.g. an object coming from a foreign-key relationship)?
I would like to make an assertion like this in one of my tests to ensure that accessing a particular attribute on one of my objects won't trigger an additional database query.
|
How to run specific tests with django-admin.py?
| 21,094,376 | 0 | 3 | 768 | 0 |
python,django,testing
|
You need to use manage.py instead of django-admin.py. So run ./manage.py test app
| 0 | 0 | 0 | 0 |
2014-01-13T14:56:00.000
| 1 | 0 | false | 21,094,299 | 0 | 0 | 1 | 1 |
I have an app located at app/ and tests which reside at app/tests/tests.py. How can I run those tests with django-admin.py?
I tried django-admin.py test app, django-admin.py test app.tests and django-admin.py test app.tests.tests but with no success.
I add that I am also adding the --settings param to the above commands but cut it off for readability.
|
ImportError: No module named flask.ext.sqlalchemy
| 21,124,613 | 0 | 0 | 2,246 | 1 |
python,deployment,amazon-ec2,flask,flask-sqlalchemy
|
You should be building your python apps in a virtualenv rather than using the system's installation of python. Try creating a virtualenv for your app and installing all of the extensions in there.
| 0 | 0 | 0 | 0 |
2014-01-14T07:23:00.000
| 1 | 0 | false | 21,107,967 | 0 | 0 | 1 | 1 |
I am deploying my Flask app to EC2; however, I get the error in my error.log file once I visit the link of my app.
My extensions are present in the site-packages of my Flask environment and not in the "usr" folder of the server; however, it tries to search the usr folder to find the hook:
File "/usr/local/lib/python2.7/dist-packages/flask/exthook.py", line 87, in load_module
It is located in
/var/www/sample/flask/lib/python2.7/site-packages
How to get over this issue?
|
How to attach zip file in email at OpenERP?
| 21,127,961 | 1 | 0 | 666 | 0 |
python,python-2.7,openerp,openerp-7
|
All attachments to emails are stored in the ir.attachment model. The basic procedure is to create your attachment in whatever binary format you like (png, zip, gzip, etc.) and then base64 encode it. All attachments stored in OpenERP are base64 encoded, and the standard attachment functionality will encode and decode as required; if you are doing it by hand you must encode it yourself. Emails have a many2many relationship with ir.attachment IIRC, so you create a values dictionary for ir.attachment, create the record, and write it onto the email using the magic numbers (6, 0, [list_of_attachment_ids]).
| 0 | 0 | 0 | 0 |
2014-01-14T09:32:00.000
| 1 | 0.197375 | false | 21,110,022 | 0 | 0 | 1 | 1 |
I want to attach a zip file to an email in OpenERP.
I see that for a purchase order the PDF is automatically attached when the email wizard form opens, but I have no idea how to create an email wizard with an attached file.
I can create the zip file in the backend, but I have no idea how to put it inside the wizard together with the form.
Please guide me if someone has already done this.
Thanks in advance.
Phyo
|
How to use CherrPy as Web server and Bottle as Application to support multiple virtual hosts?
| 21,117,703 | 0 | 2 | 882 | 0 |
python,virtualhost,cherrypy,bottle
|
Perhaps you can simply put nginx in front as a reverse proxy and configure it to send the traffic for the two domains to the right upstream (the CherryPy web server).
| 0 | 1 | 0 | 0 |
2014-01-14T15:13:00.000
| 4 | 0 | false | 21,117,002 | 0 | 0 | 1 | 1 |
I have a website (which running in Amazon EC2 Instance) running Python Bottle application with CherryPy as its front end web server.
Now I need to add another website with a different domain name already registered. To reduce the cost, I want to utilize the existing website host to do that.
Obviously, virtual host is the solution.
I know Apache mod_wsgi could play the trick. But I don't want to replace CherryPy.
I've googled a lot; there are some articles showing how to make virtual hosts on CherryPy, but they all assume CherryPy as web server + web application, not CherryPy as web server and Bottle as application.
How can I use CherryPy as the web server and Bottle as the application to support multiple virtual hosts?
|
Conditionally Show Form Elements in Django
| 21,173,464 | 0 | 0 | 90 | 0 |
jquery,python,django,forms
|
Breaking this into 2 separate models (Student, Industry) would not be a problem; it would actually help you if in the future you need to add more fields to each individual model.
Since a person can only belong to 1 university or 1 industry, your query stays simple as well, with not much additional overhead.
Your initial approach is not wrong either, but you need to think about whether in the future you will need to add additional information to the related models; if for instance you need to add courses or sectors, then you start overloading your initial model.
| 0 | 0 | 0 | 0 |
2014-01-16T21:27:00.000
| 1 | 0 | false | 21,173,241 | 0 | 0 | 1 | 1 |
I'm working on a small Django project and for a form, i want to capture the details of the person signing in. There is a radio option which has the values 'Student' or 'Industry'. If Student is chosen, I want two input boxes to be shown, one for 'graduating year' and other for 'university name'. If 'Industry' is chosen I want 2 text boxes, one for 'Company name' other for 'Job title'.
Right now, I'm able to get this working using jQuery to hide the un-needed text boxes and attaching a changelistener to the radiobuttons. However is there a django way of doing the same? Right now, my model has:
name - common for both cases
student_or_industry - ChoiceField
job_title
company_name
univeristy
graduating_year
And my form is created using the simple ModelForm, which leads to loads of NULLs in the table. Should I be creating a different model for Student and Industry and linking these with a foreign key? If yes, how does this tie in with the forms? Do I create multiple forms?
Thanks in Advance
|
Use template html page with BaseHttpRequestHandler
| 21,260,040 | 1 | 0 | 205 | 0 |
python,html,webserver
|
This has nothing to do with BaseHTTPRequestHandler, as its purpose is just to serve the response; how you generate the HTML is another topic.
You should use a templating tool. There are a lot available for Python; I would suggest Mako or Jinja2. Then, in your code, just render the real HTML from the template and use it in your handler's response (a small sketch follows).
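A minimal sketch with Jinja2; the template directory, template name, URL paths and placeholder content are my own assumptions:
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader('templates'))

class ConfigHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # both pages share the same base.html; only the values passed in differ
        template = env.get_template('base.html')
        if self.path == '/settings':
            body = template.render(title='Settings', content='settings form here')
        else:
            body = template.render(title='Status', content='status info here')
        self.send_response(200)
        self.send_header('Content-Type', 'text/html')
        self.end_headers()
        self.wfile.write(body.encode('utf-8'))

HTTPServer(('localhost', 8000), ConfigHandler).serve_forever()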
| 0 | 0 | 1 | 0 |
2014-01-18T16:14:00.000
| 1 | 1.2 | true | 21,206,568 | 0 | 0 | 1 | 1 |
I am building a small program with Python, and I would like to have a GUI for some configuration stuff. Now I have started with a BaseHTTPServer, and I am implementing a BaseHTTPRequestHandler to handle GET and POST requests. But I am wondering what would be best practice for the following problem.
I have two separate requests that result in very similar responses. That is, the two pages that I return have a lot of html in common. I could create a template html page that I retrieve when either of these requests is done and fill in the missing pieces according to the specific request. But I feel like there should be a way where I could directly retrieve two separate html pages, for the two requests, but still have one template page so that I don't have to copy this.
I would like to know how I could best handle this, e.g. something scalable. Thanks!
|
Advance Django app
| 21,207,658 | 1 | 0 | 118 | 0 |
python,django,angularjs,web-applications,backbone.js
|
Django is a full-featured MVC application where you generate the views on the serverside. I would say that is redundant with a single-page web application framework like Angular. If you use that, and you want to stick with Python, then you would probably be better served with a REST API library like Flask.
Neither is "better." It depends on which programming model you prefer and the requirements for your application.
| 0 | 0 | 0 | 0 |
2014-01-18T17:46:00.000
| 2 | 0.099668 | false | 21,207,628 | 0 | 0 | 1 | 2 |
I am a little new to Django framework. I have pass the Django's tutorial and I would like to ask a very simple question. if I want to build an advance web app with database except of django framework(server side), do I really need to choose also a client framework like angular.js or backbone?
Can I do the client side without involving a specific framework?
I ask this question as a matter of cautious and saving time.
|
Advance Django app
| 21,207,803 | 1 | 0 | 118 | 0 |
python,django,angularjs,web-applications,backbone.js
|
You don't need to choose any other client framework, you can use solely Django - it's a full featured framework which is designed to be flexible enough for all your needs.
There's a small learning curve (as with all good frameworks) but it's really not hard, especially if you have a background in Python.
My advice would be to just play with it. Follow the tutorial making the voting application and then move onto creating forms, playing with the models and forms, making everything work cohesively and then once you're familiar with things you can begin writing your advanced web application.
Also if you get stuck then there's the #django channel on Freenode (IRC) which can be useful.
| 0 | 0 | 0 | 0 |
2014-01-18T17:46:00.000
| 2 | 1.2 | true | 21,207,628 | 0 | 0 | 1 | 2 |
I am a little new to Django framework. I have pass the Django's tutorial and I would like to ask a very simple question. if I want to build an advance web app with database except of django framework(server side), do I really need to choose also a client framework like angular.js or backbone?
Can I do the client side without involving a specific framework?
I ask this question as a matter of cautious and saving time.
|
Can libffi be used for Python and Java to communicate?
| 21,247,326 | 1 | 0 | 232 | 0 |
java,python,c,libffi
|
In general, things get complicated when you're talking about two managed runtimes (CPython and the JVM, for instance). libffi only really deals with a subset of the issues here. I would look more at remote method invocations as a way to integrate code written in different managed runtime environments.
| 0 | 0 | 0 | 1 |
2014-01-19T12:23:00.000
| 1 | 0.197375 | false | 21,216,706 | 0 | 0 | 1 | 1 |
I'm wondering if it is possible for an app to run in Python and call Java methods (and vice versa) through libffi?
|
Google App Engine: Using Ajax
| 21,220,947 | 2 | 1 | 281 | 0 |
python,ajax,google-app-engine
|
AJAX has nothing to do with PHP: it's a fancy name for a technique whose goal is to provide a way for the browser to communicate asynchronously with an HTTP server. It is independent of whatever is powering that server (be it PHP, Python or anything).
I fear that you might not be able to understand this yet, so I recommend reading up on it and experimenting a lot before starting your project.
| 0 | 1 | 0 | 0 |
2014-01-19T18:10:00.000
| 2 | 0.197375 | false | 21,220,592 | 0 | 0 | 1 | 1 |
I was planning to develop an ecommerce site using Google App Engine in Python. Now, I want to use Ajax for some added dynamic features. However, I read somewhere that I need to know PHP in order to use AJAX on my website. So, is there no way I can use Ajax in Python in Google App Engine? Also, I would be using the webapp2 framework for my application.
Also, if its possible to use Ajax in Google App Engine with Python, can anyone suggest some good tutorials for learning Ajax for the same?
|
Python - Transferring session between two browsers
| 21,532,850 | 0 | 1 | 384 | 0 |
javascript,python,asp.net,pyqt,qtwebkit
|
Since nobody answered, I will post my work-around.
Basically, I wanted to "transfer" my session from Mechanize (the python module) to QtWebKit's QWebView (PyQt4 module) because the vast majority of my project was automated and headless, but I had encountered a road block where I had no choice but to have the user manually enter data into a resulting page (as the form was different each time depending on circumstances).
Instead of transferring sessions, I met this requirement by utilizing QWebView's javascript functionality. My method went like this:
Load page in Mechanize, and save the downloaded HTML to a local temporary file.
Load this local file in QWebView.
The user can now enter required data into the local copy of this page.
Locate the form fields on this page, and pull the data the user entered using javascript. You can do this by getting the main frame object for the page (QWebView->Page()->MainFrame()), and then evaluating javascript code to accomplish the above task (use evaluateJavaScript()).
Take the data you have extracted from the form fields, and use it to submit the form with the connection you still have open with mechanize.
That's it! A bit of a work-around, but it works nonetheless :\
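A minimal PyQt4 sketch of step 4 above, extracting a field value with evaluateJavaScript; the file path and the 'username' element id are placeholders, and in the real flow you would run the extraction after the user has filled in the form (e.g. from a button's clicked handler) rather than on loadFinished.

    import sys
    from PyQt4.QtCore import QUrl
    from PyQt4.QtGui import QApplication
    from PyQt4.QtWebKit import QWebView

    app = QApplication(sys.argv)
    view = QWebView()
    view.load(QUrl('file:///tmp/local_copy.html'))  # the HTML saved from mechanize

    def on_load_finished(ok):
        frame = view.page().mainFrame()
        # read whatever is currently in the (hypothetical) 'username' field
        value = frame.evaluateJavaScript("document.getElementById('username').value")
        print(value)
        app.quit()

    view.loadFinished.connect(on_load_finished)
    view.show()
    app.exec_()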
| 1 | 0 | 0 | 0 |
2014-01-19T18:57:00.000
| 1 | 1.2 | true | 21,221,141 | 0 | 0 | 1 | 1 |
The issue:
I have written a ton of code (to automate some pretty laborious tasks online), and have used the mechanize library for Python to handle network requests. It is working very well, except now I have encountered a page for which I need javascript functionality... mechanize does not handle javascript.
Proposed Solution:
I am using PyQt to write the GUI for this app, and it comes packaged with QtWebKit, which DOES handle javascript. I want to use QtWebKit to evaluate the javascript on the page that I am stuck on, and the easiest way of doing this would be to transfer my web session from mechanize over to QtWebKit.
I DO NOT want to use PhantomJS, Selenium, or QtWebKit for the entirety of my web requests; I 100% want to keep mechanize for this purpose. I'm wondering how I might be able to transfer my logged in session from mechanize to QtWebKit.
Would this work?
Transfer all cookies from mechanize to QtWebView
Transfer the values of all state variables (like _VIEWSTATE, etc.) from mechanize to QWebView (the page is an ASP.net page...)
Change the User-Agent header of QWebView to be identical to mechanize...
I don't really see how I could make the two "browsers" appear more identical to the server... would this work? Thanks!
|
Django as a service login and logout
| 21,223,261 | 0 | 0 | 378 | 0 |
python,django,rest,login
|
I don't think so. If this is safe for use on web pages, why should it be a problem for API calls?
If you are really worried about someone getting session IDs, use SSL to encrypt your communication. But that should be the same for web resources as well, you should use https if you don't want session cookies to be stolen.
| 0 | 0 | 0 | 0 |
2014-01-19T21:57:00.000
| 1 | 1.2 | true | 21,223,230 | 0 | 0 | 1 | 1 |
I have a REST API in Django 1.6 but I'm not using any library like django-tastypie or similar. I just write my endpoints (urls.py) and return json data in my views.py. For authentication I'm using the basic auth Django provides. So in every request made by the front-end I check request.user.id and use that to determine whether the user has access to a certain resource; in other words, I'm using the login session data that Django sets when the front-end calls the login endpoint. Am I incurring security issues by doing this?
|
How do I run a Django 1.6 project with multiple instances running off the same server, using the same db backend?
| 21,233,816 | 2 | 1 | 222 | 0 |
python,django,deployment,paas
|
If I was doing it (and I did a similar thing with a PHP application I inherited), I'd have a fabric command that allows me to provision a new instance.
This could be broken up into the requisite steps (check-out code, create database, syncdb/migrate, create DNS entry, start web server).
I'd probably do something sane like use the DNS entry as the database name, or at least use a reversible function to derive one from the other.
You could then string these together to easily create a new instance.
You will also need a way to tell the newly created instance which database and domain name it needs to use. You could have the provisioning script write some data to a file in the checked-out repository that is then used by Django in its initialisation phase.
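A hedged sketch of what such a fabfile might look like with Fabric 1.x; the host, paths and shell commands are placeholders rather than a known-good recipe.

    # fabfile.py -- one task that strings the provisioning steps together
    from fabric.api import env, run, task

    env.hosts = ['vps.example.com']

    @task
    def provision(customer):
        domain = '%s.example.com' % customer
        dbname = customer  # reversible mapping: subdomain <-> database name
        run('git clone git@bitbucket.org:me/project.git /srv/%s' % customer)
        run('createdb %s' % dbname)
        run('echo "DOMAIN=%s" > /srv/%s/instance.cfg' % (domain, customer))
        run('echo "DATABASE=%s" >> /srv/%s/instance.cfg' % (dbname, customer))
        run('cd /srv/%s && python manage.py syncdb --noinput' % customer)
        run('supervisorctl reread && supervisorctl update')

It would then be invoked as something like fab provision:customer_name.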
| 0 | 1 | 0 | 0 |
2014-01-20T02:31:00.000
| 1 | 0.379949 | false | 21,225,368 | 0 | 0 | 1 | 1 |
I have a Django 1.6 project (stored in a Bitbucket Git repo) that I wish to host on a VPS.
The idea is that when someone purchases a copy of the software I have written, I can type in a few simple commands that will take a designated copy of the code from Git, create a new instance of the project with its own subdomain (e.g. <customer_name>.example.com), and create a new Postgres database (on the same server).
I should hopefully be able to create and remove these 'instances' easily.
What's the best way of doing this?
I've looked into writing scripts using some sort of combination of Supervisor/Gunicorn/Nginx/Fabric etc. Other options could be something more serious like using Docker or Vagrant. I've also looked into various PaaS options.
Thanks in advance.
(EDIT: I have looked at the following services/things: Dokku (can't use Heroku due to data constraints), Vagrant (inc Puppet), Docker, Fabfile, Deis, Cherokee, Flynn (under dev))
|
Passing arguments to Django social-auth Facebook login
| 21,248,297 | 1 | 0 | 170 | 0 |
python,django,django-socialauth
|
Set SOCIAL_AUTH_FIELDS_STORED_IN_SESSION = ['foo_id'] in your settings, then you will be able to access foo_id in the session in your update_user_details by doing the usual request.session['foo_id'].
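A minimal sketch of the two pieces involved, using foo_id purely as the example parameter; the helper function below is hypothetical and only shows where the session value surfaces.

    # settings.py -- keep the extra request parameter in the session
    SOCIAL_AUTH_FIELDS_STORED_IN_SESSION = ['foo_id']

    # anywhere that has access to the request (e.g. your overridden
    # update_user_details), the value from /login/facebook/?foo_id=bar is:
    def associate_foo(request, user):
        foo_id = request.session.get('foo_id')
        if foo_id is not None:
            # look up the Foo instance and attach it to the user here
            pass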
| 0 | 0 | 0 | 0 |
2014-01-20T07:21:00.000
| 1 | 0.197375 | false | 21,228,282 | 0 | 0 | 1 | 1 |
My Django social-auth Facebook login works fine, using the default url /login/facebook/. I'm also able to do stuff with the new user by overriding the update_user_details method. But I would like to pass some more arguments to process in update_user_details. For instance, if I wanted to associate a model Foo with the user after it's been created, I would like to call a url such as /login/facebook/?foo_id=bar, so that I can get back the foo_id in update_user_details. Any ideas?
|
Is there any way to search for tracks by city in the SoundCloud API?
| 21,263,821 | 0 | 0 | 934 | 0 |
python,api,soundcloud
|
You can't filter tracks by city. The city is actually stored with the user. So you would have to search for the tracks you want, then perform an additional step to check if the user for each of the tracks is from the city you want.
I wanted to do something similar, but too many users do not have their city saved in their profile so the results are very limited.
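For what it's worth, a sketch of that two-step approach with the (old) soundcloud Python SDK; the client id, search term and city are placeholders, and the fields assume the classic /tracks and /users resources.

    import soundcloud

    client = soundcloud.Client(client_id='YOUR_CLIENT_ID')
    tracks = client.get('/tracks', q='ambient', limit=50)

    for track in tracks:
        # the city lives on the uploader, so fetch each track's user and compare
        user = client.get('/users/%d' % track.user_id)
        if user.city and user.city.lower() == 'berlin':
            print('%s -- %s' % (track.title, user.city))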
| 0 | 0 | 1 | 0 |
2014-01-20T22:19:00.000
| 1 | 0 | false | 21,245,341 | 0 | 0 | 1 | 1 |
I had read about an app called citycounds.fm, which is no longer active, where they made city-based playlists. Unfortunately, I can't seem to find any way to search for tracks by city in the soundcloud api documentation.
Anyone know if this is possible?
|
How to speed up JSON for a flask application?
| 21,251,197 | 3 | 1 | 2,480 | 0 |
python,json,rest,flask
|
Here are some ideas:
If the source data that you use for your calculations is not likely to change often then you can run the calculations once and save the results. Then you can serve the results directly for as long as the source data remains the same.
You can save the results back to your database, or as you suggest, you can save them in a faster storage such as Redis. Based on your description I suspect the big performance gain will be in not doing calculations so often, the difference between storing in a regular database vs. Redis or similar is probably not significant in comparison.
If the data changes often then you will still need to do calculations frequently. For such a case an option that you have is to push the calculations to the client. Your Flask app can just return the source data in JSON format and then the browser can do the processing on the user's computer.
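As a sketch of the first idea (compute once, then serve the stored result), here is what a Redis-backed cache in front of a Flask endpoint might look like; the key name, timeout and payload are arbitrary choices for illustration.

    import json

    import redis
    from flask import Flask, Response

    app = Flask(__name__)
    cache = redis.StrictRedis(host='localhost', port=6379, db=0)

    def expensive_calculation():
        # stand-in for the slow computation on the gathered data
        return {'points': [1, 2, 3]}

    @app.route('/data')
    def data():
        cached = cache.get('viz:data')
        if cached is None:
            cached = json.dumps(expensive_calculation())
            cache.setex('viz:data', 300, cached)  # recompute at most every 5 minutes
        return Response(cached, mimetype='application/json')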
I hope this helps.
| 0 | 0 | 0 | 0 |
2014-01-21T03:10:00.000
| 2 | 1.2 | true | 21,248,395 | 0 | 0 | 1 | 1 |
I'm currently implementing a webapp in flask. It's an app that does a visualization of data gathered. Each page or section will always have a GET call and each call will return a JSON response which then will be processed into displayed data.
The current problem is that some calculation is needed before the function can return a JSON response. This causes some responses to arrive more slowly than others, making page loads a bit slow. How do I properly deal with this? I have read about caching in Flask and wonder whether that is what the app needs right now. I have also researched implementing a Redis queue. I'm not really sure which is the correct method.
Any help or insights would be appreciated. Thanks in advance
|
How can I enable activex controls on IE for auto loading of applets
| 21,260,619 | 1 | 1 | 923 | 0 |
java,python,internet-explorer,activex
|
I found one solution to this.
We can make the modification below to the registry so that applets run automatically without pop-ups:
C:\Windows\system32>reg add "HKCU\Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_LOCALMACHINE_LOCKDOWN" /v iexplore.exe /t REG_DWORD /d 0 /f
| 0 | 0 | 1 | 1 |
2014-01-21T05:49:00.000
| 1 | 0.197375 | false | 21,250,136 | 0 | 0 | 1 | 1 |
I am working on some applets, and whenever I try to open them in IE using my Python script, it stops and waits for manual input to enable the ActiveX control.
I tried doing it from the IE settings, but I need a command-line way of doing it so that I can integrate it into my Python script.
|
Flask: asynchronous response to client
| 21,262,500 | 7 | 6 | 3,502 | 0 |
python,asynchronous,background,flask,response
|
What you ask cannot be done with the HTTP protocol. Each request receives a response synchronously. The closest thing to achieve what you want would be this:
The client sends the request and the server responds with a job id immediately, while it also starts a background task for this long calculation.
The client can then poll the server for status by sending the job id in a new request. The response is again immediate and contains a job status, such as "in progress", "completed", "failed", etc. The server can also return a progress percentage, which the client can use to render a progress bar.
You could also implement web sockets, but that will require socket enabled server and client.
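A minimal sketch of the job-id-plus-polling pattern described above, using an in-process thread and dict purely for illustration; a real deployment would hand the work to something like Celery or RQ, and the routes and payload are made up.

    import threading
    import uuid

    from flask import Flask, jsonify

    app = Flask(__name__)
    jobs = {}  # job_id -> status; fine for a sketch, not for production

    def long_calculation(job_id):
        # ... the slow work happens here ...
        jobs[job_id] = {'state': 'completed', 'result': 42}

    @app.route('/start', methods=['POST'])
    def start():
        job_id = str(uuid.uuid4())
        jobs[job_id] = {'state': 'in progress'}
        threading.Thread(target=long_calculation, args=(job_id,)).start()
        return jsonify(job_id=job_id), 202

    @app.route('/status/<job_id>')
    def status(job_id):
        return jsonify(jobs.get(job_id, {'state': 'unknown'}))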
| 0 | 0 | 0 | 0 |
2014-01-21T13:28:00.000
| 1 | 1.2 | true | 21,259,553 | 0 | 0 | 1 | 1 |
I'm using Flask to develop a web server in a python app. I'm trying to achieve this scenario: the client (it won't be a browser) sends a request, the server does some long task in the background and, on completion, sends the response back to the client asynchronously. Is it possible to do that?
|
Ajax with Python as backend
| 21,264,918 | 1 | 0 | 1,351 | 0 |
php,python,ajax,google-app-engine
|
The backend is irrelevant when doing Ajax. You could write it in PHP, Python, or even COBOL if that's what floats your boat. The main thing is that your Javascript is asynchronously requesting data, and that your backend is providing it in the format your frontend expects. These days, that's mostly JSON. Python is of course perfectly capable of providing JSON data (via the json module from the standard library).
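Since the question mentions webapp2, here is a minimal sketch of a handler returning JSON for an Ajax call on App Engine; the URL and payload are invented for the example, and the browser side would fetch it with XMLHttpRequest or jQuery's $.getJSON.

    import json

    import webapp2

    class DataHandler(webapp2.RequestHandler):
        def get(self):
            # whatever the Ajax call needs, serialised with the stdlib json module
            self.response.headers['Content-Type'] = 'application/json'
            self.response.write(json.dumps({'status': 'ok', 'items': [1, 2, 3]}))

    app = webapp2.WSGIApplication([('/api/data', DataHandler)])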
| 0 | 0 | 0 | 0 |
2014-01-21T17:11:00.000
| 3 | 0.066568 | false | 21,264,710 | 0 | 0 | 1 | 1 |
I'm currently working on a web application, hosted on Google App Engine with the back-end written in Python. Now I feel the need to add Ajax-like features to my website. When I went through some of the Ajax tutorials on the internet, I found that all of them taught it in the context of a back-end written in PHP.
So my question is, can't I use Ajax-like features on my application written in Python, hosted on Google App Engine? And if yes, can someone suggest some good tutorials for learning Ajax which uses Python as the back-end example?
EDIT: I'm using webapp2 framework, and am not familiar with Django.
|
Turn off SSL to Google cloud storage
| 21,271,776 | 3 | 3 | 1,475 | 0 |
python,google-cloud-storage
|
Try to PUT in larger blocks, since latency is probably the gating factor. You can edit the DEFAULT_CHUNK_SIZE in apiclient/http.py as a workaround.
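Rather than patching the library, the chunk size can usually be set per upload via the media helpers; a sketch under that assumption, with the bucket/object names, sizes and the 'storage', 'v1' service as placeholders.

    import io

    from apiclient.http import MediaIoBaseUpload

    # e.g. batch several 512k pieces into one larger resumable upload
    block = io.BytesIO(b'\0' * (8 * 1024 * 1024))
    media = MediaIoBaseUpload(block, mimetype='application/octet-stream',
                              chunksize=8 * 1024 * 1024, resumable=True)
    # then hand it to an authorized storage service built with something like
    # apiclient.discovery.build('storage', 'v1', http=authorized_http):
    # service.objects().insert(bucket='my-bucket', name='my-object',
    #                          media_body=media).execute()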
| 0 | 0 | 1 | 0 |
2014-01-21T23:07:00.000
| 3 | 0.197375 | false | 21,270,951 | 0 | 0 | 1 | 3 |
How would I go about turning off SSL when sending data to Google cloud storage? I'm using their apiclient module.
The data that I'm putting to the cloud is already encrypted. I'm also trying to put data from AWS to GCS in 512k sized blocks. I'm seeing about 600ms+ in putting just one block. I was thinking if I don't have to set up a secure connection then I can cut down that PUT time a little.
The code is server side code that lives on AWS and for some god awful reason my company wants to have two (S3 and GCS) as production storage regions.
|
Turn off SSL to Google cloud storage
| 21,678,526 | 1 | 3 | 1,475 | 0 |
python,google-cloud-storage
|
You should keep SSL. When using OAuth2 (as GCS does), any request may include an http header (access_token) that you don't want third parties to see. Otherwise, hijacking your account would be extremely easy.
| 0 | 0 | 1 | 0 |
2014-01-21T23:07:00.000
| 3 | 0.066568 | false | 21,270,951 | 0 | 0 | 1 | 3 |
How would I go about turning off SSL when sending data to Google cloud storage? I'm using their apiclient module.
The data that I'm putting to the cloud is already encrypted. I'm also trying to put data from AWS to GCS in 512k sized blocks. I'm seeing about 600ms+ in putting just one block. I was thinking if I don't have to set up a secure connection then I can cut down that PUT time a little.
The code is server side code that lives on AWS and for some god awful reason my company wants to have two (S3 and GCS) as production storage regions.
|
Turn off SSL to Google cloud storage
| 21,271,640 | 3 | 3 | 1,475 | 0 |
python,google-cloud-storage
|
apiclient uses the Google Cloud Storage JSON API, which requires HTTPS.
Can you say a bit about why you would like to disable SSL?
Thanks.
| 0 | 0 | 1 | 0 |
2014-01-21T23:07:00.000
| 3 | 1.2 | true | 21,270,951 | 0 | 0 | 1 | 3 |
How would I go about turning off SSL when sending data to Google cloud storage? I'm using their apiclient module.
The data that I'm putting to the cloud is already encrypted. I'm also trying to put data from AWS to GCS in 512k sized blocks. I'm seeing about 600ms+ in putting just one block. I was thinking if I don't have to set up a secure connection then I can cut down that PUT time a little.
The code is server side code that lives on AWS and for some god awful reason my company wants to have two (S3 and GCS) as production storage regions.
|
Safely removing program from usr/local/bin on Mac OSX 10.6.8?
| 21,274,416 | 5 | 2 | 10,163 | 0 |
python,macos,command-line,scrapy,bin
|
First, next time you get a Permission Denied from pip uninstall foo, try sudo pip uninstall foo rather than trying to do it manually.
But it's too late to do that now, you've already erased the files that pip needs to do the uninstall.
Also:
Up until this point, I've resisted the urge to just delete it. But I know that those folders are hidden by Apple for a reason...
Yes, they're hidden so that people who don't use command-line programs, write their own scripts, etc. will never have to see them. That isn't you. You're a power-user, and sometimes you will need to see stuff that Apple hides from novices. You already looked into /Library, so why not /usr/local?
The one thing to keep in mind is learning to distinguish stuff installed by OS X itself from stuff installed by third-party programs. Basically, anything in /System/Library or /usr is part of OS X, so you should generally not touch it or you might break the OS; anything installed in /Library or /usr/local is not part of OS X, so the worst you could do is break some program that you installed.
Also, remember that you can always back things up. Instead of deleting a file, move it to some quarantine location under your home directory. Then, it it turns out you made a mistake, just move it back.
Anyway, yes, it's safe to delete /usr/local/bin/scrapy. Of course it will break scrapy, but that's the whole point of what you're trying to do, right?
On the other hand, it's also safe to leave it there, except for the fact that if you accidentally type scrapy at a shell prompt, you'll get an error about scrapy not being able to find its modules, instead of an error about no such program existing. Well, that, and it may get in the way of you trying to re-install scrapy.
Anyway, what I'd suggest is this: pip install scrapy again. When it complains about files that it doesn't want to overwrite, those files must be from the previous installation, so delete them, and try again. Repeat until it succeeds.
If at some point it complains that you already have scrapy (which I don't think it will, given what you posted), do pip install --upgrade scrapy instead.
If at some point it fails because it wants to update some Apple pre-installed library in /System/Library/…/lib/python, don't delete that; instead, switch to pip install --no-deps scrapy. (Combine this with the --upgrade flag if necessary.) Normally, the --no-deps option isn't very useful; all it does is get you a non-working installation. But if you're only installing to uninstall, that's not a problem.
Anyway, once you get it installed, pip uninstall scrapy, and now you should be done, all traces gone.
| 0 | 1 | 0 | 0 |
2014-01-22T04:43:00.000
| 1 | 1.2 | true | 21,274,359 | 0 | 0 | 1 | 1 |
So I've been having a lot of trouble lately with a messy install of Scrapy. While I was learning the command line, I ended up installing with pip and then easy_install at the same time. Idk what kinda mess that made.
I tried the command pip uninstall scrapy, and it gave me the following error:
OSError: [Errno 13] Permission denied: '/Library/Python/2.6/site-packages/Scrapy-0.22.0-py2.6.egg/EGG-INFO/dependency_links.txt'
so, I followed the path and deleted the text file... along with anything else that said "Scrapy" within that path. There were two versions in the /site-packages/ directory.
When I tried again with the pip uninstall scrapy command, I was given the following error:
Cannot uninstall requirement scrapy, not installed
That felt too easy, so I went exploring through my directory hierarchy and I found the following in the usr/local/bin directory:
-rwxr-xr-x 1 greyelerson staff 173 Jan 21 06:57 scrapy*
Up until this point, I've resisted the urge to just delete it. But I know that those folders are hidden by Apple for a reason...
1.) Is it safe to just delete it?
2.) Will that completely remove Scrapy, or are there more files that I need to remove as well? (I haven't found any robust documentation on how to remove Scrapy once it's installed)
|
Running a testsuite in Robotframework
| 21,280,237 | 4 | 1 | 2,773 | 0 |
python,selenium,automation,automated-tests,robotframework
|
pybot --suite mytestsuite /path/to/mytestsuite-dir
So drop the .txt and put the path to the directory containing the suite at the end of the command.
| 0 | 0 | 0 | 1 |
2014-01-22T09:56:00.000
| 1 | 1.2 | true | 21,279,553 | 0 | 0 | 1 | 1 |
I have a Robot Framework test suite file named 'mytestsuite.txt'. It has a few test cases. I can run this suite using
pybot mytestsuite.txt
But when I tried to execute it using the --suite option,
pybot --suite mytestsuite.txt
I get the following error:
[ ERROR ] Expected at least 1 argument, got 0.
Is anything wrong with this, or can anyone suggest how to execute the test suite file?
Thanks in advance.
|
Pass data from view in app1 to a view in app2 in django
| 21,279,991 | 0 | 1 | 56 | 0 |
python,django
|
I think you should use the redirect function and pass additional arguments along with it.
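A minimal sketch of that idea; the URL name, view and parameter below are made up for illustration, and anything too large for a URL could go in request.session instead.

    from django.shortcuts import redirect

    def customer_detail(request, customer_id):
        # ...whatever app1 needs to do with the customer...
        # hand off to a view in app2, passing the identifier through the URL
        return redirect('app2_customer_view', customer_id=customer_id)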
| 0 | 0 | 0 | 0 |
2014-01-22T10:05:00.000
| 1 | 1.2 | true | 21,279,742 | 0 | 0 | 1 | 1 |
I have some details on a customer and I would like to get those details from a view function in app1 to a view in another app, app2. How can this be done?
|
how to wire all the django apps
| 21,280,027 | 0 | 0 | 66 | 0 |
python,django
|
You could have a base app if you want to, but you don't need one. All apps are wired together when you declare them in INSTALLED_APPS in the settings; each app has a urls.py file that will catch the route and call one of the views in that app if there's a match.
I use a base app to define global templates, global static files, helpers.
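For illustration, a minimal project-level urls.py in the Django 1.6 style that wires two hypothetical apps together; the app names are placeholders.

    # project/urls.py -- each app keeps its own urls.py, included here
    from django.conf.urls import include, patterns, url

    urlpatterns = patterns('',
        url(r'^blog/', include('blog.urls')),
        url(r'^shop/', include('shop.urls')),
    )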
Hope this helps
| 0 | 0 | 0 | 0 |
2014-01-22T10:08:00.000
| 1 | 1.2 | true | 21,279,835 | 0 | 0 | 1 | 1 |
I'm pretty new to Django; I've been reading and watching videos, but there is one thing that is confusing me, and it is related to apps. I've watched a video where a guy said that it is convenient to have apps that do a single thing, so if I have a big project I will have a lot of apps. I made an analogy to a bunch of classes, where each app would be a class with its own functions and elements; is this a correct interpretation? In that case, is there something like a main app, the way a class has a main method? I mean, I don't know how to wire together all the applications I have; is there a principal app in charge of managing the others, or how does it work?
thanks!
|