| Column | Type | Range / Lengths |
|---|---|---|
| Title | string | lengths 11-150 |
| A_Id | int64 | 518-72.5M |
| Users Score | int64 | -42-283 |
| Q_Score | int64 | 0-1.39k |
| ViewCount | int64 | 17-1.71M |
| Database and SQL | int64 | 0-1 |
| Tags | string | lengths 6-105 |
| Answer | string | lengths 14-4.78k |
| GUI and Desktop Applications | int64 | 0-1 |
| System Administration and DevOps | int64 | 0-1 |
| Networking and APIs | int64 | 0-1 |
| Other | int64 | 0-1 |
| CreationDate | string | lengths 23-23 |
| AnswerCount | int64 | 1-55 |
| Score | float64 | -1-1.2 |
| is_accepted | bool | 2 classes |
| Q_Id | int64 | 469-42.4M |
| Python Basics and Environment | int64 | 0-1 |
| Data Science and Machine Learning | int64 | 0-1 |
| Web Development | int64 | 1-1 |
| Available Count | int64 | 1-15 |
| Question | string | lengths 17-21k |
Steps to connect Epson LK 300 II with python application
| 33,211,416 | 0 | 0 | 48 | 0 |
python,django,printers
|
This should not be a Django matter. It doesn't matter whether the web app runs on your local machine; a web app is a web app. You have to decide in your design whether printing should happen on the server or on the client.
Since I assume it is more reasonable to print as the end user, you can simply use a bit of JavaScript calling window.print() on click.
| 0 | 0 | 0 | 0 |
2015-10-19T09:36:00.000
| 1 | 0 | false | 33,211,130 | 0 | 0 | 1 | 1 |
I have created a Python-based web application that prints customer bills according to their purchases. I have created the bill format, but I am not sure how to print it by clicking the print button I have added to the page.
My requirement is that when I click the print button in my application, the bill should be printed by the printer.
If anyone has an idea of the steps I should follow to solve this, please feel free to add comments.
|
Robot Framework Internet explorer not opening
| 35,595,207 | 1 | 3 | 5,285 | 0 |
python,internet-explorer,internet-explorer-11,robotframework
|
I have also encountered the same problem. Below are the steps I followed:
1. Enable the proxy in IE.
2. Set the environment variable no_proxy to 127.0.0.1 before launching the browser,
e.g. Set Environment Variable no_proxy 127.0.0.1
3. Set all internet zones to the same level (medium to high), except Restricted Sites:
open the browser > Tools > Internet Options > Security tab.
4. Enable "Enable Protected Mode" in all zones.
Please let me know your feedback.
| 0 | 0 | 1 | 0 |
2015-10-19T10:42:00.000
| 2 | 0.099668 | false | 33,212,370 | 0 | 0 | 1 | 1 |
I am writing some test cases in the Robot Framework using RIDE. I can run the tests on both Chrome and Firefox, but for some reason Internet Explorer is not working.
I have tested with IEDriverServer.exe (32-bit version 2.47.0.0).
One thing to add is that I am using a proxy. When I disable the proxy in IE and enable the automatic proxy configuration, IE can start up, but it cannot load the website. For Chrome and Firefox the proxy works fine.
Error message:
WebDriverException: Message: Can not connect to the IEDriver.
|
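A minimal Python/Selenium sketch of the environment-variable step from the answer above; the URL is a placeholder, and IEDriverServer.exe is assumed to be on the PATH:

```python
import os
from selenium import webdriver

# Mirror the Robot Framework step "Set Environment Variable  no_proxy  127.0.0.1"
# so traffic to the local IEDriverServer endpoint bypasses the proxy.
os.environ["no_proxy"] = "127.0.0.1"

driver = webdriver.Ie()  # assumes IEDriverServer.exe is on the PATH
driver.get("http://example.com/")
driver.quit()
```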
Selenium on server
| 33,231,191 | 0 | 0 | 63 | 0 |
python,django,selenium,selenium-webdriver
|
I suggest you use a continuous integration solution like Jenkins to run your tests periodically.
| 0 | 0 | 1 | 0 |
2015-10-20T08:01:00.000
| 1 | 0 | false | 33,231,156 | 0 | 0 | 1 | 1 |
I'm quite new to the whole Selenium thing and I have a simple question.
When I run tests (for a Django application) on my local machine, everything works great. But how should this be done on a server? There is no X server, so how can I start up the webdriver there? What's the common way?
Thanks
|
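The answer covers scheduling with Jenkins; for the "no X" part of the question, one common approach is a virtual display such as Xvfb. A minimal sketch, assuming the pyvirtualdisplay package and the Xvfb system package are installed:

```python
# pip install pyvirtualdisplay  (plus the Xvfb system package)
from pyvirtualdisplay import Display
from selenium import webdriver

display = Display(visible=0, size=(1366, 768))  # headless X server
display.start()

driver = webdriver.Firefox()
driver.get("http://localhost:8000/")  # placeholder: the app under test
# ... run test assertions here ...
driver.quit()
display.stop()
```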
Non-db fields for Django model or admin
| 33,277,390 | -2 | 0 | 1,083 | 0 |
python,django,django-models,django-admin,django-grappelli
|
I think you can use a non-model class that wraps the Model class and adds some extra fields, whose values you can get/set or save elsewhere.
| 0 | 0 | 0 | 0 |
2015-10-22T08:05:00.000
| 1 | -0.379949 | false | 33,276,126 | 0 | 0 | 1 | 1 |
I have a model, which has some fields stored in db. On top of it, I need to implement non-db fields, which would be loaded and saved using a custom API.
Users should interact with the model using the admin interface, Grappelli is used to enhance the standard Django admin.
I am interested in one of the following:
Model virtual fields or properties, where I can override how to read and save custom fields. (Simple python properties won't work with Django admin)
Editable callables for admin (not sure if it is even possible)
Any other means to display and process custom fields in admin, except of creating custom forms and moving the logic into the forms.
|
How to enable lazy-apps in uwsgi to use fork() in the code?
| 58,931,038 | 1 | 3 | 2,374 | 0 |
python,django-views,fork,uwsgi
|
use lazy-apps = true instead of 1
| 0 | 1 | 0 | 1 |
2015-10-22T21:17:00.000
| 1 | 0.197375 | false | 33,290,927 | 0 | 0 | 1 | 1 |
I use Debian + Nginx + Django + uWSGI.
One of my functions uses fork() in views.py (the fork itself works well), then immediately returns render(request, ...).
After the fork() the page loads for a long time, and then the browser shows the error "Web page not available". The error doesn't occur if I reload the page while it is loading (because the fork() isn't launched again).
The uWSGI documentation says:
uWSGI tries to (ab)use the Copy On Write semantics of the fork() call whenever possible. By default it will fork after having loaded your applications to share as much of their memory as possible. If this behavior is undesirable for some reason, use the lazy-apps option. This will instruct uWSGI to load the applications after each worker's fork(). Beware as there is an older option named lazy that is way more invasive and highly discouraged (it is still here only for backward compatibility).
I did not understand all of it, so I set the uWSGI configuration option lazy-apps: 1 in my uwsgi.yaml.
It does not help. What am I doing wrong?
What do I do about this problem?
P.S. Options other than fork() do not fit my case.
P.P.S. Sorry, I used Google Translate.
|
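For reference, a minimal uwsgi.yaml fragment reflecting the suggested fix; the module and socket values are placeholders:

```yaml
uwsgi:
  module: myproject.wsgi:application   # placeholder
  socket: /tmp/myproject.sock          # placeholder
  workers: 4
  # load the application after each worker's fork(), so that fork()
  # calls inside view code behave as expected
  lazy-apps: true
```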
Logging in from a Phone App
| 33,294,839 | 0 | 1 | 57 | 0 |
javascript,python,django,cordova,authentication
|
You would need to either expose the Django token in the settings file so that it can be accessed via jQuery, or that decorator won't be accessible via mobile. Alternatively, you can start using something like OAuth.
| 0 | 0 | 0 | 0 |
2015-10-23T04:12:00.000
| 2 | 0 | false | 33,294,758 | 0 | 0 | 1 | 1 |
I have developed a Python/Django application for a company. In this app all the employees of the company have a username and a password to log in. Now there is a need for a phone application that can provide some of the functionality.
In some functions I use the decorator @login_required.
For security reasons I would like to work with this decorator rather than against it, so how do I?
I'm using PhoneGap (JavaScript/jQuery) to make the phone app, if that helps. I can do my own research, but I just need a starting point. Do I get some sort of token and keep it in all my HTTP request headers?
First attempt:
I was thinking that maybe I POST to the server and get some kind of authentication token. Maybe there is some JavaScript code that hashes my password using the same algorithm, so that I can compare it to the database.
Thanks
|
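A hedged sketch of the token flow from the phone app's point of view, using the requests library; the base URL and endpoint paths are assumptions (on the server side, DRF's obtain_auth_token view exposes something similar):

```python
import requests

BASE = "https://example.com"  # placeholder server URL

# 1) exchange username/password for a token; the endpoint path is an
#    assumption (DRF's obtain_auth_token provides a comparable view)
resp = requests.post(BASE + "/api/token/",
                     data={"username": "alice", "password": "s3cret"})
token = resp.json()["token"]

# 2) send the token in the Authorization header of every later request,
#    so views behind token-aware auth accept the mobile client
headers = {"Authorization": "Token %s" % token}
print(requests.get(BASE + "/api/protected/", headers=headers).status_code)
```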
MoviePy error: FFMPEG permission error
| 33,412,882 | 0 | 3 | 2,767 | 0 |
python,django,apache,ffmpeg,moviepy
|
After spending lots of time and trying lots of things, I have finally solved this issue.
You can pass the full path of the temp video along with its name, and it will then create the temp video at the given path. Make sure you have write permissions on the directory you set for the temp video.
| 0 | 0 | 0 | 0 |
2015-10-23T05:29:00.000
| 2 | 1.2 | true | 33,295,439 | 0 | 0 | 1 | 1 |
I am using MoviePy through a Django application on an Ubuntu 14.04 system. It gives me a permissions error when it tries to write the video file. Here are the details of the error:
MoviePy error: FFMPEG encountered the following error while writing file test1TEMP_MPY_wvf_snd.mp3:
test1TEMP_MPY_wvf_snd.mp3: Permission denied
It seems it doesn't have the correct permissions on the directory where it is trying to write temporary files.
I have set 777 on the /tmp directory, but no luck.
Please help me fix this issue.
Thanks
|
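A minimal sketch of the fix described in the accepted answer, using MoviePy's temp_audiofile argument to write_videofile; the paths are placeholders and must be writable by the web-server user:

```python
from moviepy.editor import VideoFileClip

clip = VideoFileClip("/var/www/media/test1.mp4")  # placeholder input

# Point the temporary audio file at a directory the web-server user owns,
# instead of the process's current working directory.
clip.write_videofile(
    "/var/www/media/out.mp4",
    temp_audiofile="/var/www/media/tmp/test1TEMP_MPY_wvf_snd.mp3",
)
```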
Access to Amazon S3 Bucket from EC2 instance
| 33,375,622 | 0 | 5 | 6,069 | 1 |
python,amazon-web-services,amazon-s3,amazon-ec2,amazon-iam
|
As mentioned above, you can do this with Boto. To make it more secure and avoid worrying about user credentials, you could use an IAM role to grant the EC2 machine access to the specific bucket only. Hope that helps.
| 0 | 0 | 1 | 0 |
2015-10-23T09:17:00.000
| 5 | 0 | false | 33,298,821 | 0 | 0 | 1 | 1 |
I have an EC2 instance and an S3 bucket in different regions. The bucket contains some files that are used regularly by my EC2 instance.
I want to programmatically download the files to my EC2 instance (using Python).
Is there a way to do that?
|
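A minimal boto3 sketch of the IAM-role approach from the answer; the bucket, key, and region names are placeholders:

```python
import boto3

# With an IAM role attached to the EC2 instance, boto3 picks up temporary
# credentials from the instance metadata service; no keys appear in code.
s3 = boto3.client("s3", region_name="eu-west-1")  # the bucket's region

s3.download_file("my-bucket", "path/to/remote.csv", "/tmp/local.csv")
```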
Return dictionary instead of array in REST framework
| 68,921,035 | 0 | 8 | 6,333 | 0 |
python,django,django-rest-framework
|
If you get a single record back in array format, use the .get() method instead of the .filter() method: .get() returns a single object, so you get a single-item response only.
| 0 | 0 | 0 | 0 |
2015-10-23T15:24:00.000
| 4 | 0 | false | 33,306,071 | 0 | 0 | 1 | 1 |
I am converting a set of existing APIs from tastypie to REST framework. By default, when doing list APIs, tastypie returns a dictionary containing the list of objects and a dictionary of metadata, whereas REST framework just returns an array of objects. For example, I have a model called Site. Tastypie returns a dictionary that looks like
{
"meta":
{ ... some data here ...},
"site":
[
{... first site...},
{...second site...}
...
]
}
where REST framework returns just the array
[
{... first site...},
{...second site...}
...
]
We are not using the metadata from tastypie in any way. What is the least invasive way to change the return value in REST framework? I could override list(), but I would rather have REST framework do its thing wherever possible.
|
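A minimal sketch of the list() override the question mentions, wrapping the array in a tastypie-style envelope; Site and SiteSerializer are assumed to exist elsewhere in the project:

```python
from rest_framework import viewsets
from rest_framework.response import Response

class SiteViewSet(viewsets.ModelViewSet):
    queryset = Site.objects.all()          # Site model assumed to exist
    serializer_class = SiteSerializer      # serializer assumed to exist

    def list(self, request, *args, **kwargs):
        queryset = self.filter_queryset(self.get_queryset())
        serializer = self.get_serializer(queryset, many=True)
        # wrap the plain array in a tastypie-style envelope
        return Response({"meta": {}, "site": serializer.data})
```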
Django Rest Framework -- no module named rest_framework
| 67,793,330 | 1 | 116 | 221,940 | 0 |
python,django,python-3.x,pip,django-rest-framework
|
Also, if you're getting this error while running docker-compose up, make sure to run docker-compose up --build, because Docker needs to install the djangorestframework dependency as well.
| 0 | 0 | 0 | 0 |
2015-10-23T18:05:00.000
| 29 | 0.006896 | false | 33,308,781 | 0 | 0 | 1 | 15 |
I've installed Django REST framework using pip install djangorestframework, yet I still get this error when I run "python3 manage.py syncdb":
ImportError: No module named 'rest_framework'
I'm using python3, is this my issue?
|
Django Rest Framework -- no module named rest_framework
| 68,277,646 | 0 | 116 | 221,940 | 0 |
python,django,python-3.x,pip,django-rest-framework
|
After installing the necessary packages with python3/pip3 inside my virtual environment, it all came down to running my server with python manage.py runserver instead of python3 manage.py runserver. This was because the virtual environment and the other packages were installed using python3/pip3, and running the server with the system python3 again resulted in the error. I'm sure this will help someone else.
| 0 | 0 | 0 | 0 |
2015-10-23T18:05:00.000
| 29 | 0 | false | 33,308,781 | 0 | 0 | 1 | 15 |
I've installed Django REST framework using pip install djangorestframework, yet I still get this error when I run "python3 manage.py syncdb":
ImportError: No module named 'rest_framework'
I'm using python3, is this my issue?
|
Django Rest Framework -- no module named rest_framework
| 67,944,634 | 0 | 116 | 221,940 | 0 |
python,django,python-3.x,pip,django-rest-framework
|
I faced the same problem. In my case, I solved it by updating my Windows Defender configuration.
| 0 | 0 | 0 | 0 |
2015-10-23T18:05:00.000
| 29 | 0 | false | 33,308,781 | 0 | 0 | 1 | 15 |
I've installed Django REST framework using pip install djangorestframework, yet I still get this error when I run "python3 manage.py syncdb":
ImportError: No module named 'rest_framework'
I'm using python3, is this my issue?
|
Django Rest Framework -- no module named rest_framework
| 61,280,125 | 0 | 116 | 221,940 | 0 |
python,django,python-3.x,pip,django-rest-framework
|
I know there is an accepted answer for this question, and many other answers too, but I just wanted to add another case that happened to me: updating Django and Django REST framework to the latest versions made them work properly without any error.
So all you have to do is uninstall both Django and Django REST framework using:
pip uninstall django and pip uninstall djangorestframework
and then install them again using:
pip install django and pip install djangorestframework
| 0 | 0 | 0 | 0 |
2015-10-23T18:05:00.000
| 29 | 0 | false | 33,308,781 | 0 | 0 | 1 | 15 |
I've installed Django REST framework using pip install djangorestframework, yet I still get this error when I run "python3 manage.py syncdb":
ImportError: No module named 'rest_framework'
I'm using python3, is this my issue?
|
Django Rest Framework -- no module named rest_framework
| 61,382,641 | 2 | 116 | 221,940 | 0 |
python,django,python-3.x,pip,django-rest-framework
|
Yeah, for me it was the Python version as well...
It's much better to use pipenv...
Create a virtual env using Python 3:
install pipenv: pip3 install pipenv
create the virtualenv: pipenv --python 3
activate the virtual env: pipenv shell
| 0 | 0 | 0 | 0 |
2015-10-23T18:05:00.000
| 29 | 0.013792 | false | 33,308,781 | 0 | 0 | 1 | 15 |
I've installed Django REST framework using pip install djangorestframework, yet I still get this error when I run "python3 manage.py syncdb":
ImportError: No module named 'rest_framework'
I'm using python3, is this my issue?
|
Django Rest Framework -- no module named rest_framework
| 59,230,374 | 0 | 116 | 221,940 | 0 |
python,django,python-3.x,pip,django-rest-framework
|
(I would assume that folks using containers know what they're doing, but here's my two cents)
Let's say you setup your project using cookiecutter-django and enabled the docker container support, be sure to update the pip requirements file with djangorestframework==<x.yy.z> (or whichever python dependency you're trying to install) and re-build the docker images (local and production).
| 0 | 0 | 0 | 0 |
2015-10-23T18:05:00.000
| 29 | 0 | false | 33,308,781 | 0 | 0 | 1 | 15 |
I've installed Django REST framework using pip install djangorestframework, yet I still get this error when I run "python3 manage.py syncdb":
ImportError: No module named 'rest_framework'
I'm using python3, is this my issue?
|
Django Rest Framework -- no module named rest_framework
| 68,626,596 | 0 | 116 | 221,940 | 0 |
python,django,python-3.x,pip,django-rest-framework
|
Install it with pip3 install djangorestframework first,
and add rest_framework to INSTALLED_APPS in settings.py.
This is how I sorted out the problem.
| 0 | 0 | 0 | 0 |
2015-10-23T18:05:00.000
| 29 | 0 | false | 33,308,781 | 0 | 0 | 1 | 15 |
I've installed Django REST framework using pip install djangorestframework, yet I still get this error when I run "python3 manage.py syncdb":
ImportError: No module named 'rest_framework'
I'm using python3, is this my issue?
|
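Putting the two steps from the answer above together, a sketch of the relevant settings.py fragment (install the package for the same interpreter that runs manage.py):

```python
# shell: python3 -m pip install djangorestframework

# settings.py
INSTALLED_APPS = [
    "django.contrib.contenttypes",
    "django.contrib.auth",
    # ...
    "rest_framework",  # underscore, not a dash
]
```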
Django Rest Framework -- no module named rest_framework
| 50,729,985 | 5 | 116 | 221,940 | 0 |
python,django,python-3.x,pip,django-rest-framework
|
If you're using some sort of virtual environment do this!
Exit from your virtual environment.
Activate your virtual environment.
After you've done this you can try running your command again and this time it probably won't have any ImportErrors.
| 0 | 0 | 0 | 0 |
2015-10-23T18:05:00.000
| 29 | 0.034469 | false | 33,308,781 | 0 | 0 | 1 | 15 |
I've installed Django REST framework using pip install djangorestframework, yet I still get this error when I run "python3 manage.py syncdb":
ImportError: No module named 'rest_framework'
I'm using python3, is this my issue?
|
Django Rest Framework -- no module named rest_framework
| 52,544,748 | 18 | 116 | 221,940 | 0 |
python,django,python-3.x,pip,django-rest-framework
|
Also, check for the possibility of a tiny typo:
It's rest_framework with an underscore (_) in between!
Took me a while to figure out that I was using a dash instead...
| 0 | 0 | 0 | 0 |
2015-10-23T18:05:00.000
| 29 | 1 | false | 33,308,781 | 0 | 0 | 1 | 15 |
I've installed Django REST framework using pip install djangorestframework, yet I still get this error when I run "python3 manage.py syncdb":
ImportError: No module named 'rest_framework'
I'm using python3, is this my issue?
|
Django Rest Framework -- no module named rest_framework
| 56,489,277 | -1 | 116 | 221,940 | 0 |
python,django,python-3.x,pip,django-rest-framework
|
On Windows, with PowerShell, I had to close and reopen the console and then reactive the virtual environment.
| 0 | 0 | 0 | 0 |
2015-10-23T18:05:00.000
| 29 | -0.006896 | false | 33,308,781 | 0 | 0 | 1 | 15 |
I've installed Django REST framework using pip install djangorestframework, yet I still get this error when I run "python3 manage.py syncdb":
ImportError: No module named 'rest_framework'
I'm using python3, is this my issue?
|
Django Rest Framework -- no module named rest_framework
| 54,212,984 | 1 | 116 | 221,940 | 0 |
python,django,python-3.x,pip,django-rest-framework
|
If you used pipenv:
if you installed rest_framework through pipenv,
you need to run your command through the virtual environment:
1. pipenv shell
2. (env) now, run your command (for example python manage.py runserver)
| 0 | 0 | 0 | 0 |
2015-10-23T18:05:00.000
| 29 | 0.006896 | false | 33,308,781 | 0 | 0 | 1 | 15 |
I've installed Django REST framework using pip install djangorestframework, yet I still get this error when I run "python3 manage.py syncdb":
ImportError: No module named 'rest_framework'
I'm using python3, is this my issue?
|
Django Rest Framework -- no module named rest_framework
| 52,474,178 | 2 | 116 | 221,940 | 0 |
python,django,python-3.x,pip,django-rest-framework
|
If you are working with PyCharm, I found that restarting the program and closing all prompts after adding 'rest_framework' to my INSTALLED_APPS worked for me.
| 0 | 0 | 0 | 0 |
2015-10-23T18:05:00.000
| 29 | 0.013792 | false | 33,308,781 | 0 | 0 | 1 | 15 |
I've installed Django REST framework using pip install djangorestframework, yet I still get this error when I run "python3 manage.py syncdb":
ImportError: No module named 'rest_framework'
I'm using python3, is this my issue?
|
Django Rest Framework -- no module named rest_framework
| 71,253,035 | 1 | 116 | 221,940 | 0 |
python,django,python-3.x,pip,django-rest-framework
|
If it persists after installing the package and adding it to your INSTALLED_APPS, then it's most likely because you're using python3 to run the server, and that's okay. So when installing, use python3 -m pip install djangorestframework.
| 0 | 0 | 0 | 0 |
2015-10-23T18:05:00.000
| 29 | 0.006896 | false | 33,308,781 | 0 | 0 | 1 | 15 |
I've installed Django REST framework using pip install djangorestframework, yet I still get this error when I run "python3 manage.py syncdb":
ImportError: No module named 'rest_framework'
I'm using python3, is this my issue?
|
Django Rest Framework -- no module named rest_framework
| 43,004,127 | 0 | 116 | 221,940 | 0 |
python,django,python-3.x,pip,django-rest-framework
|
If you are using JWT, try this: pip install djangorestframework-jwt
| 0 | 0 | 0 | 0 |
2015-10-23T18:05:00.000
| 29 | 0 | false | 33,308,781 | 0 | 0 | 1 | 15 |
I've installed Django REST framework using pip install djangorestframework, yet I still get this error when I run "python3 manage.py syncdb":
ImportError: No module named 'rest_framework'
I'm using python3, is this my issue?
|
Django Rest Framework -- no module named rest_framework
| 38,469,924 | 2 | 116 | 221,940 | 0 |
python,django,python-3.x,pip,django-rest-framework
|
When using a virtual environment like virtualenv without having django-rest-framework installed globally, you might also get this error.
The solution would be:
activate the environment first, with {{your environment name}}/bin/activate on Linux or {{your environment name}}/Scripts/activate on Windows,
and then run the command again.
| 0 | 0 | 0 | 0 |
2015-10-23T18:05:00.000
| 29 | 0.013792 | false | 33,308,781 | 0 | 0 | 1 | 15 |
I've installed Django REST framework using pip install djangorestframework, yet I still get this error when I run "python3 manage.py syncdb":
ImportError: No module named 'rest_framework'
I'm using python3, is this my issue?
|
How to get token value while sending post requests
| 33,321,212 | 0 | 0 | 70 | 0 |
python,session,cookies,python-requests
|
You need to get the response from the page, then use a regex to match the token.
| 0 | 0 | 1 | 0 |
2015-10-24T17:19:00.000
| 2 | 0 | false | 33,321,076 | 0 | 0 | 1 | 2 |
I am trying to extract data from a webpage after logging in. When logging in to the website, I can see the token (authenticity_token) in the Form Data section; it seems the token is generated automatically. I am trying to get the token value, but with no luck. Please, can anyone help me with how to get the token value when sending POST requests?
|
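A hedged sketch of the answer's approach with requests and a regex; the URL and form field names are assumptions, and the pattern assumes name="..." precedes value="..." in the markup (an HTML parser is more robust):

```python
import re
import requests

session = requests.Session()
login_page = session.get("https://example.com/login")  # placeholder URL

# Pull the hidden authenticity_token out of the login form's HTML.
match = re.search(r'name="authenticity_token"[^>]*value="([^"]+)"',
                  login_page.text)
token = match.group(1) if match else None

session.post("https://example.com/login",
             data={"username": "alice",          # placeholder fields
                   "password": "s3cret",
                   "authenticity_token": token})
```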
How to get token value while sending post requests
| 37,413,165 | 0 | 0 | 70 | 0 |
python,session,cookies,python-requests
|
The token value is stored in the cookie file. Check the cookie file and extract the value from it.
For example, a cookie file after login contains jsessionID=A01~xxxxxxx,
where 'xxxxxxx' is the token value. Extract this value and post it.
| 0 | 0 | 1 | 0 |
2015-10-24T17:19:00.000
| 2 | 0 | false | 33,321,076 | 0 | 0 | 1 | 2 |
I am trying to extract data from a webpage after logging in. When logging in to the website, I can see the token (authenticity_token) in the Form Data section; it seems the token is generated automatically. I am trying to get the token value, but with no luck. Please, can anyone help me with how to get the token value when sending POST requests?
|
Defining a default URL prefix using markdown / django-wiki
| 33,327,523 | 0 | 0 | 226 | 0 |
python,django,markdown,django-wiki
|
Well, it looks like there is just such an extension for this in Markdown (WikiLinkExtension, which takes a base_url parameter).
I've had to modify my copy of django-wiki to add a new setting to use it (I submitted an upstream pull request for it too, since I suspect this will be useful to others). I'm kind of surprised django-wiki didn't have this support already built in, but there you go.
EDIT: OK, it looks like this approach doesn't play nicely with a hierarchical wiki layout (which django-wiki is designed for). I've cobbled together a hack that allows me to link to child pages of the current page, which is enough to be workable even if it's kind of limited.
| 0 | 0 | 0 | 0 |
2015-10-25T07:20:00.000
| 1 | 1.2 | true | 33,327,217 | 0 | 0 | 1 | 1 |
I've got a Django app that I'm working on, with a wiki (powered by django-wiki) sitting under the wiki/ folder.
The problem I'm having is that if I create links using Markdown, they all point to the root /, whereas I want any links generated from the Markdown to go into the same subdirectory (under /wiki). The documentation doesn't appear to be particularly forthcoming on this (mainly directing me to the source code, which so far has revealed nothing).
The other avenue I'm looking into is how to direct Markdown itself to prefix all links with a specified path fragment. Is there a Markdown extension or trick that might be useful for accomplishing this?
|
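For reference, a minimal sketch of the Python-Markdown wikilinks extension with the base_url parameter the answer mentions:

```python
import markdown
from markdown.extensions.wikilinks import WikiLinkExtension

text = "Link to [[AnotherPage]] from here."
html = markdown.markdown(
    text,
    extensions=[WikiLinkExtension(base_url="/wiki/", end_url="/")],
)
# -> <p>Link to <a class="wikilink" href="/wiki/AnotherPage/">AnotherPage</a> from here.</p>
print(html)
```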
How to organize groups in Django?
| 33,329,001 | 0 | 0 | 77 | 0 |
python,django
|
Groups in Django (django.contrib.auth) are used mainly to grant certain users rights to view content in the admin. I think your group functionality is more custom than this, and you're better off creating your own group models and building your own user and group management structure that suits the way your website is used.
| 0 | 0 | 0 | 0 |
2015-10-25T10:41:00.000
| 1 | 0 | false | 33,328,730 | 0 | 0 | 1 | 1 |
I am currently learning how to use Django. I want to make a web app where you, as a user, can join groups. These groups have content that only members of the group should be able to see. I have learned about users, groups, and a bit of authentication.
My first impression is that this is more about the administration of the website itself, and I cannot really believe that I can solve my idea with it.
I just want to know if that's the way to go in Django. I probably have to create groups in Django that have the right to see the group's content on the website. But that means that every time a group is created, I have to create a Django group. Is that overkill, or the right way?
|
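A minimal sketch of the custom group model the answer suggests; the names are placeholders:

```python
from django.conf import settings
from django.db import models

class CommunityGroup(models.Model):
    """A site-level group, independent of django.contrib.auth groups."""
    name = models.CharField(max_length=100, unique=True)
    members = models.ManyToManyField(settings.AUTH_USER_MODEL,
                                     related_name="communities")

    def can_view(self, user):
        # only members may see this group's content
        return self.members.filter(pk=user.pk).exists()
```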
How to deal with excessive requests on heroku
| 33,346,079 | 0 | 0 | 46 | 0 |
heroku,python-requests
|
Have your host-based firewall throttle those requests. Depending on your setup, you can also add Nginx into the mix, which can throttle requests too.
| 0 | 0 | 1 | 0 |
2015-10-26T12:30:00.000
| 1 | 0 | false | 33,345,960 | 0 | 0 | 1 | 1 |
I am experiencing a once-per-60-90-minute spike in traffic that's causing my Heroku app to slow to a crawl for the duration of the spike. New Relic is reporting response times of 20-50 seconds per request, with 99% of that down to the Heroku router queuing requests up. The request count goes from an average of around 50-100 rpm up to 400-500 rpm.
Looking at the logs, it looks to me like a scraping bot or spider trying to access a lot of content pages on the site. However, it's not all coming from a single IP.
What can I do about it?
My sysadmin / devops skills are pretty minimal.
Guy
|
How to set a default value for a Django Form Field, that would be saved even in the absence of user initiated changes?
| 33,350,576 | 1 | 1 | 1,119 | 0 |
python,django,django-forms
|
I don't think there is a premade solution for you. You'll have to do one of two things:
When the form is submitted, examine the value of the field in question. If it is equal to the default value, then ignore the result of has_changed and save it. (Be aware that this could result in duplicate items being saved, depending on your schema.)
When the form is submitted, search for an existing record with those field values. If no such record exists, save it. Otherwise do nothing. (If these records contain a last-updated timestamp, you might update that value, depending on your application.)
| 0 | 0 | 0 | 0 |
2015-10-26T15:41:00.000
| 2 | 0.099668 | false | 33,349,846 | 0 | 0 | 1 | 1 |
When looking for this feature, one is flooded with answers pointing toward the form's initial member.
Yet, by design, this approach does not save anything to the database if the user does not change at least one value in the form, because the has_changed method returns False when only initial values are submitted.
And if one were to override has_changed to always return True, the behaviour would be to try to save forms for which no value (neither initial nor user input) is found.
Is it possible to have a real default value in Django: a value that the user can change if he wants, but that would still save the form to the DB when the form is submitted with only default values?
|
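A minimal sketch of the answer's first suggestion; "en" stands in for the real default, and the always-True override is the blunt variant (a finer-grained version would compare cleaned values against the declared initials field by field):

```python
from django import forms

class SettingsForm(forms.Form):
    # "en" is a real default: it should be saved even when the user
    # submits the form untouched
    language = forms.CharField(initial="en")

    def has_changed(self):
        # Blunt version of the first suggestion: always report a change,
        # so a submission consisting only of defaults is still persisted.
        return True
```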
Escape characters in Google AppEngine full text search
| 33,351,187 | 0 | 0 | 92 | 0 |
python,google-app-engine
|
It might help to include the code in question, but try putting a \ before the +; that's what escapes things within quotes in Python, so it might work here. E.g.: C\+
| 0 | 1 | 0 | 0 |
2015-10-26T16:30:00.000
| 1 | 0 | false | 33,350,869 | 0 | 0 | 1 | 1 |
I'm using full-text search and I'd like to search for items that have a property with the value 'C+'.
Is there a way I can escape the '+' character so that this search would work?
|
Architectural pattern for CLI tool
| 33,353,621 | 0 | 0 | 355 | 0 |
python,design-patterns,command-line-interface,restful-architecture,n-tier-architecture
|
Since your app is not very complex, I see two layers here:
ServerClient: provides an API for remote calls and hides the details. It knows how to access the HTTP server, handle auth, deal with errors, etc. It has methods like do_something_good() which anyone may call without caring whether it is a remote method or not.
CommandLine: uses optparse (or argparse) to implement the CLI; it may support history, etc. This layer uses ServerClient to access the remote service.
Neither layer knows anything about the other (only the protocol, i.e. the list of known methods). This will allow you to use something other than HTTP REST while the CLI still works, or to replace the CLI with batch files while HTTP keeps working.
| 0 | 1 | 0 | 0 |
2015-10-26T18:50:00.000
| 1 | 1.2 | true | 33,353,398 | 0 | 0 | 1 | 1 |
I am going to write an HTTP (REST) client in Python. It will be a command-line interface tool with no GUI. I won't use any business-logic objects or a database, just an API to communicate with the server (using curl). Would you recommend some architectural patterns for doing that, other than Model View Controller?
Note: I am not asking for design patterns like Command or Strategy. I just want to know how to segregate and decouple abstraction layers.
I think using MVC is pointless given the absence of business logic; please correct me if I'm wrong, and give me your suggestions!
Do you know any examples of CLI projects (in any language, not necessarily Python) that are well maintained and have clean code?
Cheers
|
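A minimal sketch of the two layers described above; the endpoint and method names are hypothetical:

```python
import argparse
import requests

class ServerClient(object):
    """Layer 1: hides the HTTP/REST details behind plain method calls."""

    def __init__(self, base_url):
        self.base_url = base_url

    def do_something_good(self, name):
        # endpoint path is hypothetical
        resp = requests.post(self.base_url + "/good", json={"name": name})
        resp.raise_for_status()
        return resp.json()

class CommandLine(object):
    """Layer 2: argument parsing only; delegates all work to ServerClient."""

    def __init__(self, client):
        self.client = client

    def run(self, argv=None):
        parser = argparse.ArgumentParser()
        parser.add_argument("name")
        args = parser.parse_args(argv)
        print(self.client.do_something_good(args.name))

if __name__ == "__main__":
    CommandLine(ServerClient("https://api.example.com")).run()
```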
Can Flask use the async feature of Tornado Server?
| 33,369,914 | 4 | 3 | 1,151 | 0 |
python,flask,tornado,python-asyncio
|
No. It is possible to run Flask on Tornado's WSGIContainer, but since Flask is limited by the WSGI interface it will be unable to take advantage of Tornado's asynchronous features. gunicorn or uwsgi is generally a much better choice than Tornado's WSGIContainer unless you have a specific need to run a Flask application in the same process as native Tornado RequestHandlers.
| 0 | 1 | 0 | 0 |
2015-10-27T12:58:00.000
| 1 | 0.664037 | false | 33,368,621 | 1 | 0 | 1 | 1 |
We have a project using Flask + Gunicorn (sync workers). This has worked well for a long time; however, I recently learned that asyncio (Python 3.5) supports async I/O in the standard library.
However, before asyncio there were both the Twisted and Tornado async servers. So I wonder whether Flask can use the async features of Tornado, since Gunicorn supports a tornado worker class.
|
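For reference, a sketch of the WSGIContainer pattern the answer describes (and cautions against): the Flask app is served by Tornado, but each request still blocks:

```python
from flask import Flask
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.wsgi import WSGIContainer

app = Flask(__name__)

@app.route("/")
def index():
    return "served by Tornado, but each request still blocks"

http_server = HTTPServer(WSGIContainer(app))
http_server.listen(5000)
IOLoop.current().start()
```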
Where the new pages get stored after creation in django CMS?
| 33,373,378 | 1 | 1 | 408 | 0 |
python,django-cms
|
They are stored in the database you configured for Django. By default you can inspect the pages in the administration interface at /admin/cms/page/. In the database the table for them is by default named cms_page.
| 0 | 0 | 0 | 0 |
2015-10-27T14:12:00.000
| 1 | 1.2 | true | 33,370,301 | 0 | 0 | 1 | 1 |
I am creating a new page in django CMS, and I want to see where the HTML pages get stored.
I tried to find it everywhere, including in site-packages, but I was not able to find it. Can anyone tell me: when I create a new page in the django CMS GUI view, where does it get stored?
|
How to choose an AWS profile when using boto3 to connect to CloudFront
| 57,297,264 | 8 | 214 | 163,599 | 0 |
python,amazon-web-services,boto3,amazon-iam,amazon-cloudfront
|
Just add profile to session configuration before client call.
boto3.session.Session(profile_name='YOUR_PROFILE_NAME').client('cloudwatch')
| 0 | 0 | 1 | 0 |
2015-10-27T21:02:00.000
| 4 | 1 | false | 33,378,422 | 0 | 0 | 1 | 1 |
I am using the Boto 3 python library, and want to connect to AWS CloudFront.
I need to specify the correct AWS Profile (AWS Credentials), but looking at the official documentation, I see no way to specify it.
I am initializing the client using the code:
client = boto3.client('cloudfront')
However, this results in it using the default profile to connect.
I couldn't find a method where I can specify which profile to use.
|
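Expanding the answer into a runnable sketch; the profile name is a placeholder:

```python
import boto3

# Use a named profile from ~/.aws/credentials instead of the default.
session = boto3.session.Session(profile_name="my-profile")  # placeholder
client = session.client("cloudfront")

print(client.list_distributions())
```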
Is there a JavaScript equivalent of the Python pass statement that does nothing?
| 33,383,865 | 200 | 141 | 105,967 | 0 |
javascript,python,function,lexical
|
Python's pass mainly exists because in Python whitespace matters within a block. In JavaScript, the equivalent would be putting nothing within the block, i.e. {}.
| 0 | 0 | 0 | 0 |
2015-10-28T05:53:00.000
| 10 | 1.2 | true | 33,383,840 | 0 | 0 | 1 | 2 |
I am looking for a JavaScript equivalent of the Python pass statement, one that does nothing and does not run a function, like the ... notation.
Is there such a thing in JavaScript?
|
Is there a JavaScript equivalent of the Python pass statement that does nothing?
| 61,111,819 | 6 | 141 | 105,967 | 0 |
javascript,python,function,lexical
|
JavaScript does not have a Python pass equivalent, unfortunately.
For example, it is not possible in JavaScript to do something like this:
process.env.DEV ? console.log('Connected..') : pass
Instead, we must do this:
if (process.env.DEV) console.log('Connected..')
The advantage of the pass statement, among others, is that during development we can evolve from the above ternary-operator example without having to turn it into a full if statement.
| 0 | 0 | 0 | 0 |
2015-10-28T05:53:00.000
| 10 | 1 | false | 33,383,840 | 0 | 0 | 1 | 2 |
I am looking for a JavaScript equivalent of the Python pass statement, one that does nothing and does not run a function, like the ... notation.
Is there such a thing in JavaScript?
|
django: exclude models from migrations
| 44,014,653 | 2 | 17 | 7,061 | 1 |
python,django,django-models,django-migrations
|
So far, I've tried different things, all without any success:
used the managed=False Meta option on both Models
That option (the managed = False attribute on the model's meta options) seems to meet the requirements.
If not, you'll need to expand the question to say exactly what is special about your model that managed = False doesn't do the job.
| 0 | 0 | 0 | 0 |
2015-10-28T07:53:00.000
| 3 | 0.132549 | false | 33,385,618 | 0 | 0 | 1 | 2 |
In my Django application (Django 1.8) I'm using two databases: one 'default', which is MySQL, and another one which is a schemaless, read-only database.
I have two models which access this database, and I'd like to exclude these two models permanently from data and schema migrations:
makemigrations should never detect any changes, and create migrations for them
migrate should never complain about missing migrations for that app
So far, I've tried different things, all without any success:
used the managed=False Meta option on both Models
added a allow_migrate method to my router which returns False for both models
Does anyone have an example of how this scenario can be achieved?
Thanks for your help!
|
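A minimal sketch combining the two attempts from the question, using Django 1.8's router signature; the model, app, and database alias names are placeholders:

```python
# models.py
from django.db import models

class ReadOnlyThing(models.Model):
    name = models.TextField()

    class Meta:
        managed = False  # makemigrations emits no schema operations

# routers.py
class SchemalessRouter(object):
    def db_for_read(self, model, **hints):
        return "schemaless" if model.__name__ == "ReadOnlyThing" else None

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        # Django 1.8 signature: never migrate the read-only connection
        return False if db == "schemaless" else None
```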
django: exclude models from migrations
| 68,460,381 | 1 | 17 | 7,061 | 1 |
python,django,django-models,django-migrations
|
You have the correct solution:
used the managed=False Meta option on both Models
It may appear that it is not working but it is likely that you are incorrectly preempting the final result when you see - Create model xxx for models with managed = False when running makemigrations.
How have you been checking/confirming that migrations are being made?
makemigrations will still print to terminal - Create model xxx and create code in the migration file but those migrations will not actually result in any SQL code or appear in Running migrations: when you run migrate.
| 0 | 0 | 0 | 0 |
2015-10-28T07:53:00.000
| 3 | 0.066568 | false | 33,385,618 | 0 | 0 | 1 | 2 |
In my Django application (Django 1.8) I'm using two databases: one 'default', which is MySQL, and another one which is a schemaless, read-only database.
I have two models which access this database, and I'd like to exclude these two models permanently from data and schema migrations:
makemigrations should never detect any changes, and create migrations for them
migrate should never complain about missing migrations for that app
So far, I've tried different things, all without any success:
used the managed=False Meta option on both Models
added a allow_migrate method to my router which returns False for both models
Does anyone have an example of how this scenario can be achieved?
Thanks for your help!
|
App Engine Returning Error 500 on Post Requests from
| 33,405,768 | 0 | 0 | 150 | 0 |
jquery,python,google-app-engine
|
If you're not seeing anything in your server logs about the error, that suggests to me that you might have a configuration error in one of your .yaml files. Are GET requests working? Are you sure that you are sending your POST requests to an endpoint that is handled by your application? Check for typos in your JavaScript and application Route definitions, and check for a catch-all request handler (e.g. /*) that might be receiving the request and failing to respond.
Sharing the contents of your app.yaml, your server-side URL routes, and a snippet of your JavaScript would really help us to help you.
| 0 | 1 | 0 | 0 |
2015-10-28T19:05:00.000
| 1 | 0 | false | 33,399,526 | 0 | 0 | 1 | 1 |
I am getting error 500 on every second POST request made from a browser (Chrome and Firefox), irrespective of whether it is a jQuery POST or a form submission; App Engine is alternating between error 500 and a successful POST. The 500 errors are not appearing anywhere in the logs.
I have tested this with over 5 different POST handlers; the errors only occur in production, not on the local SDK server.
Note that the requests are perfectly successful when made from a Python script using the requests module.
|
Django: Remove dependency between apps in a project
| 33,403,212 | 1 | 0 | 284 | 0 |
python,django
|
You'll have to create a new Django project, and move app2 to that project.
| 0 | 0 | 0 | 0 |
2015-10-28T23:11:00.000
| 1 | 0.197375 | false | 33,403,197 | 0 | 0 | 1 | 1 |
I have a django project. I have created 2 apps(app1 and app2) under the project.
Each app has its own urls.py and views.py.
Settings.py is under the project folder.
What I want to do is:
When I edit the views.py file for app1 and save the file with incorrect indentation,
it brings down app2 as well.
I want to make them independent, so that no matter what change I make locally to app1, it does not affect app2.
Is that possible ?
|
How to deploy a Django/Tornado based web app that's built with platter?
| 33,409,062 | 0 | 1 | 167 | 0 |
python,django,git,deployment,tornado
|
My first rule of deployment is "whatever works". Every production environment has different requirements. But to give opinions on your questions:
Not everything should be in your Python project. Perhaps there is a way to do it, but I think it's using the wrong hammer.
You can create a separate Git repo that handles configuration and asset files for your production deployment (these do not even need to be managed by Git if you don't care about old, irrelevant configuration files). This does not have to be a Python project, just the files for the production deployment. You may optionally put a Python script or two in here (or just a README.txt, fab files, or a Buildout config) to automate tasks such as unpacking your platter or copying config files around.
It's tempting (and possible) to put production config things in your main Git repo. This is even suggested by apps that create boilerplate files for development and production configuration. This doesn't mean it's the best way to do things though.
My rule is that the main Git repo is "development only". It's cloned by developers who are setting up and working in development environments. It conflates a Python project far too much to try and be an Python application and also be a place to manage a production system, IMHO.
Production is managed separately. Sometimes by people different from the developers or at least the developer is wearing a different hat when thinking about a production deployment. This way you can also have a small, clean repo that tracks just changes to your production system.
Playing with symlinks within a single deployment that represents different builds is an extra layer of confusion. And the impetus to do so comes from trying to do everything from a single Python project.
Deploy your python application to something like /var/myapp/build-2015-10-29/. Then create a symlink at /var/myapp/current/ that points to this location. This way you can create a full deployment at /var/myapp/build-2015-11-05/ and tweak the config to start on a separate port, bring the app up and ensure everything works, then just switch from the symlink from the old build to the new build with minimal downtime.
| 0 | 0 | 0 | 0 |
2015-10-29T07:39:00.000
| 1 | 0 | false | 33,408,408 | 0 | 0 | 1 | 1 |
This question is mostly about the technical details + some best practices of how to efficiently deploy a python web app that's built using platter.
Taking Django for instance, I have a project that's already built into a tarball distribution. This includes all wheels of all deps + the package of the app itself.
My repo directory also contains some other files that need to be distributed with the deployed code, such as: manage.py, a fabfile package with fabric utils, and some configuration files (for supervisor, nginx, etc).
So my questions are:
How can I wrap these extra files into the distribution that contains the project?
If I simply use git to clone/pull the project on the server I have these files, but then I have a duplicate of the source code, both in the project and zipped in the tarball. How can I avoid that? By committing the tarball into a separate repo?
Perhaps the duplication is not so bad, and I'll end up with multiple tarballs in my dist/ directory and only one symlinked to the current from which I deploy?
Same goes for a Tornado based app.
|
How to get events which can be postponed for a given date from a table with change log and a table for event in Django?
| 33,408,635 | 0 | 1 | 103 | 0 |
python,django,postgresql
|
If there is no other use for the EventLog table, just put the entries in the Event table along with a date of execution. If the date is postponed, update the date of execution in the Event table itself.
This also assumes that you don't need the previous dates (as your requirement says you just need to show the events happening on that day).
| 0 | 0 | 0 | 0 |
2015-10-29T07:44:00.000
| 3 | 0 | false | 33,408,497 | 0 | 0 | 1 | 1 |
I am stuck in a scenario where I am making an API that returns events in the given a month and year combination.
The Database structure:
Event - has basic details of an event.
EventLog - has a foreignkey to the event, a from_date and a to_date.
When Events are created, an entry is made to both Event table and EventLog table (with from_date set as null).
When an Event is postponed an entry is made to EventLog with previous date and current date.
Now given a date I want to show events occurring on that date as well as the postponed events with latest dates that were supposed to happen on that day.
How should I go about it without making too many calls to the database ?
|
pass arguments to `before_first_request_funcs` in flask
| 33,418,843 | 2 | 0 | 262 | 0 |
python,flask
|
Yes, just access current_app; this is the way to do it. The before_first_request callbacks run inside the app context.
| 0 | 0 | 0 | 0 |
2015-10-29T14:58:00.000
| 1 | 1.2 | true | 33,417,670 | 0 | 0 | 1 | 1 |
I want the function I pass to before_first_request_funcs to be able to access the app.config object.
Can I pass an argument to the function somehow?
Or should I access the "current app object" (it is not really global, so I can't just access it, right?)
|
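A minimal sketch of the answer, using Flask's before_first_request decorator of that era; the config key is a placeholder:

```python
from flask import Flask, current_app

app = Flask(__name__)
app.config["GREETING"] = "hello"  # placeholder config key

@app.before_first_request
def warm_up():
    # runs inside the app context, so current_app needs no argument passing
    print(current_app.config["GREETING"])
```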
Run django application without django.contrib.admin
| 33,421,173 | 16 | 9 | 5,473 | 0 |
python,django,django-admin,django-admin-tools
|
django.contrib.admin is simply a Django app.
Remove or comment out django.contrib.admin in INSTALLED_APPS in your settings.py file.
Also remove or comment out from django.contrib import admin in admin.py, urls.py, and any other files having this import statement.
Remove url(r'^admin/', include(admin.site.urls)) from urlpatterns in urls.py.
| 0 | 0 | 0 | 0 |
2015-10-29T17:29:00.000
| 2 | 1.2 | true | 33,420,918 | 0 | 0 | 1 | 2 |
I am trying to run my Django application without the Django admin panel, because I don't need it right now, but I'm getting an exception value:
Put 'django.contrib.admin' in your INSTALLED_APPS setting in order to
use the admin application.
Could I run my application without django.contrib.admin? Even if I go to my localhost:8000, it says I need to add django.contrib.admin to my INSTALLED_APPS.
|
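A sketch of the removals the answer lists:

```python
# settings.py -- leave the admin app out
INSTALLED_APPS = (
    # 'django.contrib.admin',  # removed
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
)

# urls.py -- drop the matching import and route
# from django.contrib import admin                   # removed
# url(r'^admin/', include(admin.site.urls)),         # removed from urlpatterns
```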
Run django application without django.contrib.admin
| 33,421,110 | 1 | 9 | 5,473 | 0 |
python,django,django-admin,django-admin-tools
|
I have resolved this issue.
I had url(r'^admin/', include(admin.site.urls)), in my urls.py, which I just commented out.
| 0 | 0 | 0 | 0 |
2015-10-29T17:29:00.000
| 2 | 0.099668 | false | 33,420,918 | 0 | 0 | 1 | 2 |
I am trying to run my Django application without the Django admin panel, because I don't need it right now, but I'm getting an exception value:
Put 'django.contrib.admin' in your INSTALLED_APPS setting in order to
use the admin application.
Could I run my application without django.contrib.admin? Even if I go to my localhost:8000, it says I need to add django.contrib.admin to my INSTALLED_APPS.
|
How to keep a check box "unchecked" in odoo?
| 33,492,872 | 0 | 1 | 1,459 | 0 |
python-2.7,openerp,odoo-8,openerp-8
|
Just change the field to fields.Boolean("string", default=False, readonly=False, required=False).
It will work. Thanks.
| 0 | 0 | 0 | 0 |
2015-10-30T09:59:00.000
| 2 | 0 | false | 33,433,231 | 0 | 0 | 1 | 2 |
I am trying to keep a check box "unchecked" in my custom module.
Any idea on this?
|
How to keep a check box "unchecked" in odoo?
| 33,433,277 | 1 | 1 | 1,459 | 0 |
python-2.7,openerp,odoo-8,openerp-8
|
If you want to do that, you must set it as readonly; that way no user will be able to set it to True.
| 0 | 0 | 0 | 0 |
2015-10-30T09:59:00.000
| 2 | 0.099668 | false | 33,433,231 | 0 | 0 | 1 | 2 |
I am trying to keep a check box "unchecked" in my custom module.
Any idea on this?
|
Decoding HappyBase data from HBase
| 38,149,242 | 0 | 3 | 547 | 0 |
python,encoding,decoding,happybase
|
That data is not valid UTF-8, so if you really retrieved it as such from the database, you should check who/what put it in there.
| 0 | 0 | 0 | 0 |
2015-10-30T10:00:00.000
| 1 | 0 | false | 33,433,262 | 0 | 0 | 1 | 1 |
While trying to decode values from HBase, I am seeing an error. Python thinks the data is not in UTF-8 format, but the Java application that put the data into HBase encoded it in UTF-8:
a = '\x00\x00\x00\x00\x10j\x00\x00\x07\xe8\x02Y'
a.decode("UTF-8")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/encodings/utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xe8 in position 9: invalid continuation byte
Any thoughts?
|
App Engine: Few big scripts or many small ones?
| 33,456,580 | 4 | 1 | 419 | 0 |
python,google-app-engine,google-cloud-datastore
|
The are two important considerations here.
The number of roundtrip calls from the client to the server.
One call to update a user profile will execute much faster than 5 calls to update different parts of user profile as you save on roundtrip time between the client and the server and between the server and the datastore.
Write costs.
If you update 5 properties in a user profile and save it, and then update 5 other properties and save it, etc., your writing costs will be much higher because every update incurs writing costs, including updates on all indexed properties - even those you did not change.
Instead of creating a huge user profile with 50 properties, it may be better to keep properties that rarely change (name, gender, date of birth, etc.) in one entity, and separate other properties into a different entity or entities. This way you can reduce your writing costs, but also reduce the payload (no need to move all 50 properties back and forth unless they are needed), and simplify your application logic (i.e. if a user only updates an address, there is no need to update the entire user profile).
| 0 | 1 | 0 | 0 |
2015-10-31T15:44:00.000
| 3 | 0.26052 | false | 33,453,441 | 0 | 0 | 1 | 3 |
I am working on a website that I want to host on App Engine. My App Engine scripts are written in Python. Now let's say you could register on my website and have a user profile. Now, the user Profile is kind of extensive and has more than 50 different ndb properties (just for the sake of an example).
If the user wants to edit his records (which he can, to some extent) he may do so through my website, which sends a request to the App Engine backend.
The way the profile is sectioned, often about 5 to 10 properties fall into a small subsection or container of the page. On the server side I would have multiple small scripts to handle editing small sections of the whole profile. For example, one script would update "address information", another one would update "interests" and an "about me" text. This way I end up with 5 scripts or so. The advantage is that each script is easy to maintain and only does one specific thing. But I don't know if something like this is smart performance-wise, because if I maintain this habit for the rest of the page I'll probably end up with about 100 or more different .py scripts and a very big app.yaml, and I have no idea how efficiently they are cached on the Google servers.
So tl;dr:
Are many small backend scripts to perform small tasks on my App Engine backend a good idea or should I use few scripts which can handle a whole array of different tasks?
|
App Engine: Few big scripts or many small ones?
| 33,457,227 | 3 | 1 | 419 | 0 |
python,google-app-engine,google-cloud-datastore
|
A single big script would have to be loaded every time an instance for your app starts, possibly hurting the instance start time, the response time of every request starting an instance and the memory footprint of the instance. But it can handle any request immediately, no additional code needs to be loaded.
Multiple smaller scripts can be lazy-loaded, on demand, after your app is started, offering advantages maybe appealing to some apps:
the main app/module script can be kept small, which keeps the instance startup time short
the app's memory footprint can be kept smaller, handler code in lazy-loaded files is not loaded until there are requests for such handlers - interesting for rarely used handlers
the extra delay in response time for a request which requires loading the handler code is smaller as only one smaller script needs to be loaded.
Of course, the disadvantage is that some requests will have longer than usual latencies due to loading of the handler scripts: in the worst case the number of affected requests is the number of scripts per every instance lifetime.
Updating a user profile is not something done very often, I'd consider it a rarely used piece of functionality, thus placing its handlers in a separate file looks appealing. Splitting it into one handler per file - I find that maybe a bit extreme. It's really is up to you, you know better your app and your style.
From the GAE (caching) infra perspective - the file quota is 10000 files, I wouldn't worry too much with just ~100 files.
| 0 | 1 | 0 | 0 |
2015-10-31T15:44:00.000
| 3 | 1.2 | true | 33,453,441 | 0 | 0 | 1 | 3 |
I am working on a website that I want to host on App Engine. My App Engine scripts are written in Python. Now let's say you could register on my website and have a user profile. Now, the user Profile is kind of extensive and has more than 50 different ndb properties (just for the sake of an example).
If the user wants to edit his records (which he can, to some extent) he may do so through my website, which sends a request to the App Engine backend.
The way the profile is sectioned, often about 5 to 10 properties fall into a small subsection or container of the page. On the server side I would have multiple small scripts to handle editing small sections of the whole profile. For example, one script would update "address information", another one would update "interests" and an "about me" text. This way I end up with 5 scripts or so. The advantage is that each script is easy to maintain and only does one specific thing. But I don't know if something like this is smart performance-wise, because if I maintain this habit for the rest of the page I'll probably end up with about 100 or more different .py scripts and a very big app.yaml, and I have no idea how efficiently they are cached on the Google servers.
So tl;dr:
Are many small backend scripts to perform small tasks on my App Engine backend a good idea or should I use few scripts which can handle a whole array of different tasks?
|
App Engine: Few big scripts or many small ones?
| 33,858,532 | 0 | 1 | 419 | 0 |
python,google-app-engine,google-cloud-datastore
|
Adding to Dan Cornilescu's answer: writing/saving an instance to the database re-writes the whole instance (i.e. all its attributes). If you're going to use put() multiple times, you're going to re-write the whole instance multiple times, which, aside from being a heavy task to perform, will cost you more money.
| 0 | 1 | 0 | 0 |
2015-10-31T15:44:00.000
| 3 | 0 | false | 33,453,441 | 0 | 0 | 1 | 3 |
I am working on a website that I want to host on App Engine. My App Engine scripts are written in Python. Now let's say you could register on my website and have a user profile. Now, the user Profile is kind of extensive and has more than 50 different ndb properties (just for the sake of an example).
If the user wants to edit his records (which he can, to some extent) he may do so through my website, which sends a request to the App Engine backend.
The way the profile is sectioned, often about 5 to 10 properties fall into a small subsection or container of the page. On the server side I would have multiple small scripts to handle editing small sections of the whole profile. For example, one script would update "address information", another one would update "interests" and an "about me" text. This way I end up with 5 scripts or so. The advantage is that each script is easy to maintain and only does one specific thing. But I don't know if something like this is smart performance-wise, because if I maintain this habit for the rest of the page I'll probably end up with about 100 or more different .py scripts and a very big app.yaml, and I have no idea how efficiently they are cached on the Google servers.
So tl;dr:
Are many small backend scripts to perform small tasks on my App Engine backend a good idea or should I use few scripts which can handle a whole array of different tasks?
|
Django Migrations - how to insert just one model?
| 33,465,379 | 0 | 0 | 941 | 0 |
python,django,django-migrations
|
I was using a models directory. Adding an import of the model to __init__.py allowed me to control whether it's visible to makemigrations or not. I found that using strace.
| 0 | 0 | 0 | 0 |
2015-11-01T17:43:00.000
| 1 | 1.2 | true | 33,465,153 | 0 | 0 | 1 | 1 |
I just made a mess in my local Django project and realized that somehow I'm out of sync with my migrations. I tried to apply the initial migration and realized that some of the tables already exist, so I tried --fake. This made the migration pass, but now I'm missing the one table I just wanted to add... How can I prepare a migration for just one model, or make Django re-discover what my database is missing and create it?
|
How to execute Python file
| 33,471,765 | -2 | 1 | 176 | 0 |
python,linux,django,filesystems
|
Save your Python code file somewhere, using "Save" or "Save as" in your editor. Let's call it 'first.py', in some folder like "pyscripts" that you make on your Desktop. Open a prompt (a Windows 'cmd' shell, which is a text interface into the computer): Start > Run > "cmd".
| 0 | 1 | 0 | 0 |
2015-11-02T06:12:00.000
| 3 | -0.132549 | false | 33,471,710 | 0 | 0 | 1 | 1 |
I am learning Python and Django, and I am relatively new to Linux. When I create a Django project I have a manage.py file which I can execute like ./manage.py runserver. However, when I create a Python program by hand, it looks like Linux is trying to execute it using Bash, not Python, so I need to write python foo.py instead of ./foo.py. The attributes of both files, manage.py and foo.py, are the same (-rwx--x---). So my question is: where is the difference, and how can I execute a Python program without specifying python? Links to any documentation are much appreciated. Thanks.
|
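For context, the difference the asker observed usually comes down to the shebang line: manage.py begins with one, while a hand-written foo.py typically does not. A minimal sketch:

```python
#!/usr/bin/env python
# foo.py -- with this first line, and execute permission granted via
# `chmod +x foo.py`, the kernel hands the file to the Python interpreter
# instead of the default shell, so `./foo.py` works like ./manage.py.

print("hello")
```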
Can scrapy control and show a browser like Selenium does?
| 33,521,521 | 1 | 1 | 1,636 | 0 |
python,selenium,scrapy
|
Scrapy by itself does not control browsers.
However, you could start a Selenium instance from a Scrapy crawler. Some people design their Scrapy crawler like this. They might process most pages only using Scrapy but fire Selenium to handle some of the pages they want to process.
| 0 | 0 | 1 | 0 |
2015-11-03T23:09:00.000
| 3 | 0.066568 | false | 33,510,814 | 0 | 0 | 1 | 1 |
When I use Selenium I can see the Browser GUI, is it somehow possible to do with scrapy or is scrapy strictly command line based?
|
How to check if content of webpage has been changed?
| 34,488,088 | 2 | 8 | 8,152 | 0 |
python-2.7,hash,compare,web-crawler
|
There is no universal solution.
Use If-Modified-Since or HEAD when possible (usually ignored by dynamic pages)
Use RSS when possible.
Extract last modification stamp in site-specific way (news sites have publication dates for each article, easily extractable via XPATH)
Only hash interesting elements of the page (build a site-specific model), excluding volatile parts (see the sketch below)
Hash whole content (useless for dynamic pages)
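A minimal sketch of the "hash only interesting elements" idea above (the URL and XPath are site-specific assumptions):

import hashlib
import requests
from lxml import html

resp = requests.get('http://example.com/page')        # hypothetical URL
tree = html.fromstring(resp.content)
# hash only the article body, ignoring volatile parts like dates and ads
nodes = tree.xpath('//div[@id="content"]')            # site-specific XPath
text = nodes[0].text_content() if nodes else ''
digest = hashlib.sha512(text.encode('utf-8')).hexdigest()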
| 0 | 0 | 0 | 0 |
2015-11-04T07:38:00.000
| 6 | 0.066568 | false | 33,516,192 | 0 | 0 | 1 | 3 |
Basically I'm trying to run some code (Python 2.7) if the content on a website changes, otherwise wait for a bit and check it later.
I'm thinking of comparing hashes; the problem with this is that if the page has changed a single byte or character, the hash would be different. So for example if the page displays the current date, the hash would be different every single time and tell me that the content has been updated.
So... How would you do this? Would you look at the KB size of the HTML? Would you look at the string length and check if, for example, the length has changed more than 5% and treat the content as "changed"? Or is there some kind of hashing algorithm where the hashes stay the same if only small parts of the string/content have been changed?
About last-modified - unfortunately not all servers return this date correctly. I think it is not a reliable solution. I think a better way is to combine the hash and content-length solutions. Check the hash, and if it changed, check the string length.
|
How to check if content of webpage has been changed?
| 34,488,574 | 2 | 8 | 8,152 | 0 |
python-2.7,hash,compare,web-crawler
|
Safest solution:
download the content and create a checksum using a SHA-512 hash of the content, keep it in the db and compare it each time.
Pros: You are not dependent on any server headers and will detect any modifications.
Cons: Too much bandwidth usage. You have to download all the content every time.
Using Head
Request page using HEAD verb and check the Header Tags:
Last-Modified: The server should provide the last time the page was generated or modified.
ETag: A checksum-like value which is defined by the server and should change as soon as the content changes.
Pros: Much less bandwidth usage and very quick updates.
Cons: Not all servers provide and obey these headers. You still need to fetch the real resource with a GET request once you find the data has changed.
Using GET
Request page using GET verb and using conditional Header Tags:
* If-Modified-Since: The server checks whether the resource was modified since the given time and returns either the content or 304 Not Modified.
Pros: Still uses less bandwidth; a single round trip to receive the data.
Cons: Again, not all resources support this header.
Finally, a mix of the above solutions is probably the optimal way to do this; a minimal sketch of the conditional request follows.
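Assuming the requests library, something like this (the URL is hypothetical; last_modified and etag are the values saved from a previous fetch, None on the first run):

import requests

headers = {}
if last_modified:
    headers['If-Modified-Since'] = last_modified
if etag:
    headers['If-None-Match'] = etag
resp = requests.get('http://example.com/page', headers=headers)
if resp.status_code == 304:
    pass  # not modified, nothing to do
else:
    last_modified = resp.headers.get('Last-Modified')
    etag = resp.headers.get('ETag')
    # ... process the new content ...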
| 0 | 0 | 0 | 0 |
2015-11-04T07:38:00.000
| 6 | 0.066568 | false | 33,516,192 | 0 | 0 | 1 | 3 |
Basically I'm trying to run some code (Python 2.7) if the content on a website changes, otherwise wait for a bit and check it later.
I'm thinking of comparing hashes; the problem with this is that if the page has changed a single byte or character, the hash would be different. So for example if the page displays the current date, the hash would be different every single time and tell me that the content has been updated.
So... How would you do this? Would you look at the KB size of the HTML? Would you look at the string length and check if, for example, the length has changed more than 5% and treat the content as "changed"? Or is there some kind of hashing algorithm where the hashes stay the same if only small parts of the string/content have been changed?
About last-modified - unfortunately not all servers return this date correctly. I think it is not a reliable solution. I think a better way is to combine the hash and content-length solutions. Check the hash, and if it changed, check the string length.
|
How to check if content of webpage has been changed?
| 34,584,705 | 2 | 8 | 8,152 | 0 |
python-2.7,hash,compare,web-crawler
|
If you're trying to make a tool that can be applied to arbitrary sites, then you could still start by getting it working for a few specific ones - downloading them repeatedly and identifying exact differences you'd like to ignore, trying to deal with the issues reasonably generically without ignoring meaningful differences. Such a quick hands-on sampling should give you much more concrete ideas about the challenge you face. Whatever solution you attempt, test it against increasing numbers of sites and tweak as you go.
Would you look at the Kb size of the HTML? Would you look at the string length and check if for example the length has changed more than 5%, the content has been "changed"?
That's incredibly rough, and I'd avoid that if at all possible. But, you do need to weigh up the costs of mistakenly deeming a page unchanged vs. mistakenly deeming it changed.
Or is there some kind of hashing algorithm where the hashes stay the same if only small parts of the string/content has been changed?
You can make such a "hash", but it's very hard to tune the sensitivity to meaningful change in the document. Anyway, as an example: you could sort the 256 possible byte values by their frequency in the document and consider that a 2k hash: you can later do a "diff" to see how much that byte value ordering's changed in a later download. (To save memory, you might get away with doing just the printable ASCII values, or even just letters after standardising capitalisation).
An alternative is to generate a set of hashes for different slices of the document: e.g. dividing it into header vs. body, body by heading levels then paragraphs, until you've got at least a desired level of granularity (e.g. 30 slices). You can then say that if only 2 slices of 30 have changed you'll consider the document the same.
You might also try replacing certain types of content before hashing - e.g. use regular expression matching to replace times with "<time>".
You could also do things like lower the tolerance to change more as the time since you last processed the page increases, which could lessen or cap the "cost" of mistakenly deeming it unchanged.
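A rough sketch of the byte-frequency ordering idea above (an approximation only; tuning the comparison threshold is left to the reader):

from collections import Counter

def frequency_signature(data):
    # rank the printable ASCII characters by frequency; small edits to the
    # document barely change the resulting ordering
    counts = Counter(c for c in data if 32 <= ord(c) < 127)
    return [c for c, _ in counts.most_common()]

# later: compare the two orderings (e.g. count positions that moved)
# to decide whether the change is "big enough"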
| 0 | 0 | 0 | 0 |
2015-11-04T07:38:00.000
| 6 | 0.066568 | false | 33,516,192 | 0 | 0 | 1 | 3 |
Basically I'm trying to run some code (Python 2.7) if the content on a website changes, otherwise wait for a bit and check it later.
I'm thinking of comparing hashes; the problem with this is that if the page has changed a single byte or character, the hash would be different. So for example if the page displays the current date, the hash would be different every single time and tell me that the content has been updated.
So... How would you do this? Would you look at the KB size of the HTML? Would you look at the string length and check if, for example, the length has changed more than 5% and treat the content as "changed"? Or is there some kind of hashing algorithm where the hashes stay the same if only small parts of the string/content have been changed?
About last-modified - unfortunately not all servers return this date correctly. I think it is not a reliable solution. I think a better way is to combine the hash and content-length solutions. Check the hash, and if it changed, check the string length.
|
WebDriverException: Message: unknown error: jQuery is not defined error in robot framework
| 43,115,077 | 0 | 3 | 4,119 | 0 |
jquery,python,selenium-webdriver,robotframework
|
From Selenium 3.0 onwards, geckodriver is required to run automation scripts in Firefox.
Selenium versions below 3.0 work without it.
Try with the following versions:
robotframework (3.0.2)
robotframework-selenium2library (1.8.0)
selenium (2.53.1)
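One way to pin those versions (assuming pip):

pip install robotframework==3.0.2 robotframework-selenium2library==1.8.0 selenium==2.53.1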
| 0 | 0 | 1 | 0 |
2015-11-04T18:09:00.000
| 1 | 0 | false | 33,529,029 | 0 | 0 | 1 | 1 |
I am using Selenium2Library '1.7.4' and Robot Framework 2.9.2 (Python 2.7.8 on win32). If I try to use jQuery as a locator, the following exception occurs: WebDriverException: Message: unknown error: jQuery is not defined. Please advise which combination of Selenium2Library and Robot Framework versions works to identify jQuery as a locator.
|
Weird HTML code looks like this b'\xff\xd8\xff\xe0
| 48,959,088 | 4 | 3 | 5,486 | 0 |
python,html
|
This is an image, specifically a JPEG. Since it's a byte stream, Python prints it as b'.............'
A JPEG starts with \xff\xd8\xff
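A minimal check in Python (assuming data holds the fetched bytes):

if data[:3] == b'\xff\xd8\xff':
    print('this is a JPEG image, not HTML')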
| 0 | 0 | 0 | 0 |
2015-11-05T02:51:00.000
| 2 | 0.379949 | false | 33,535,853 | 0 | 0 | 1 | 1 |
I'm using python to retrieve an HTML source, but what comes out looks like this. What is this, and why am I not getting the actual page source?
b'\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01\x00\x00\x01\x00\x01\x00\x00\xff\xdb\x00C
|
Django textile make a link both red and bold
| 33,550,462 | 0 | 1 | 37 | 0 |
python,django,textile
|
Use <strong> tags, like "*<span style="color:#ff0000"><strong>Text</strong></span>*":http://www.example.com/lifestyle/tpage2
| 0 | 0 | 0 | 0 |
2015-11-05T13:29:00.000
| 1 | 1.2 | true | 33,545,829 | 0 | 0 | 1 | 1 |
I want to make this "*<span style="color:#ff0000">Text</span>*":http://www.example.com/lifestyle/tpage2 in textile become bold and red.
|
How to monitor a AWS S3 bucket with python using boto?
| 33,590,521 | 1 | 2 | 4,530 | 0 |
python,api,amazon-web-services,amazon-s3,boto
|
You are correct that AWS Lambda can be triggered when objects are added to, or deleted from, an Amazon S3 bucket. It is also possible to send a message to Amazon SNS and Amazon SQS. These settings needs to be configured by somebody who has the necessary permissions on the bucket.
If you have no such permissions, but you have the ability to call GetBucket(), then you can retrieve a list of objects in the bucket. This returns up to 1000 objects per API call.
There is no API call available to "get the newest files".
There is no raw code to "monitor" uploads to a bucket. You would need to write code that lists the content of a bucket and then identifies new objects.
How would I approach this problem? I'd ask the owner of the bucket to add some functionality to trigger Lambda/SNS/SQS, or to provide a feed of files. If this wasn't possible, I'd write my own code that scans the entire bucket and have it execute on some regular schedule.
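A minimal boto 2 sketch of the scan-the-bucket approach (the bucket name is hypothetical, credentials are assumed to come from the environment, and note this lists every key, so it is slow for large buckets):

import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('my-bucket')
# last_modified is an ISO-8601 string, so lexicographic max gives the newest key
newest = max(bucket.list(), key=lambda k: k.last_modified)
print(newest.name, newest.last_modified)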
| 0 | 0 | 1 | 1 |
2015-11-05T17:34:00.000
| 1 | 1.2 | true | 33,551,143 | 0 | 0 | 1 | 1 |
I have access to a S3 bucket. I do not own the bucket. I need to check if new files were added to the bucket, to monitor it.
I saw that buckets can fire events and that it is possible to make use of Amazon's Lambda to monitor and respond to these events. However, I cannot modify the bucket's settings to allow this.
My first idea was to sift through all the files and get the latest one. However, there are a lot of files in that bucket and this approach proved highly inefficient.
Concrete questions:
Is there a way to efficiently get the newest file in a bucket?
Is there a way to monitor uploads to a bucket using boto?
Less concrete question:
How would you approach this problem? Say you had to get the newest file in a bucket and print its name, how would you do it?
Thanks!
|
Download html with dynamic-css as pdf using python or js
| 33,571,398 | 0 | 0 | 371 | 0 |
javascript,jquery,python,html,css
|
I have used JSPDF to download pdf from html. It is easy to use. It should help you in your case:
JS Fiddle Html to pdf: http://jsfiddle.net/xzz7n/1/
JSPDF Site https://parall.ax/products/jspdf
| 0 | 0 | 0 | 0 |
2015-11-06T13:18:00.000
| 1 | 0 | false | 33,567,719 | 0 | 0 | 1 | 1 |
I have a html-content with dynamic-css in python class which will be later passed to a js file. I need to download this html-content as pdf format.
I have gone through various HTML-to-PDF converter tools (pdfkit, pdfcrowd, wkhtmltopdf) but none of them is able to render the dynamic CSS content.
I have even tried using windows.document.documentElement for obtaining html content with dynamic css rendered.
But this did not work.
My question is: can we generate the dynamic CSS separately in Python, or download the complete PDF using JS?
Thanks in advance
|
How to wait for a response
| 33,570,899 | 0 | 0 | 455 | 0 |
python,get,python-requests,wait
|
No, since requests is just an HTTP client.
It looks like the page is being modified by JS after another request finishes. You should figure out which request changes the page and use it directly (via the network inspector in Chrome, for example).
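Once you have found that request, a hedged sketch of calling it directly (the endpoint and parameters here are purely hypothetical):

import requests

resp = requests.get('http://example.com/api/approximate-results',
                    params={'q': 'search term'})
data = resp.json()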
| 0 | 0 | 1 | 0 |
2015-11-06T15:59:00.000
| 1 | 0 | false | 33,570,762 | 0 | 0 | 1 | 1 |
So I'm trying to get some data from a search engine. This search engine returns some results and then, after for example 2 seconds, it changes its HTML and shows some approximate results.
I want to get these approximate results but that's the problem. I use requests.get, which gets the first response and does not wait those 2 seconds. So I'm curious if it is possible.
I don't want to use Selenium since this has to be as lightweight as possible because it will be part of a web page.
So my question is: Is it possible to make requests.get wait for another data?
|
Different admin backend
| 33,571,196 | 0 | 0 | 62 | 0 |
python,django,admin,backend
|
You can create a second AdminSite etc., but that will still be mainly a standard django admin. If you have more special needs, you'd possibly be better off developing a custom one (using forms and ModelForms and possibly a couple of generic apps for tables etc.) - this is all standard django programming...
As for validations and permissions, they are nothing specific to the admin app.
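A minimal sketch of a second AdminSite (the model and URL prefix are hypothetical):

from django.contrib.admin import AdminSite
from myapp.models import Customer   # hypothetical model

customer_admin = AdminSite(name='customer_admin')
customer_admin.register(Customer)

# urls.py
# url(r'^customer-backend/', include(customer_admin.urls)),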
| 0 | 0 | 0 | 0 |
2015-11-06T16:17:00.000
| 2 | 1.2 | true | 33,571,084 | 0 | 0 | 1 | 1 |
I'm new with django and I was thinking if it's possible to do a thing.
I know I can access /admin/ and get django's default admin, but what if I want to create another admin interface with a different URL using the default admin model/widget?
To be clear, the new interface will be, for example, /customer-backend, and people who use it (what about permissions?) get another graphical interface while reusing things (like validation) that are available in the django admin backend.
Thank you!
|
How do crossover.io, WAMP, twisted (+ klein), and django/flask/bottle interact?
| 34,815,287 | 0 | 0 | 263 | 0 |
python,django,twisted,wamp-protocol,crossbar
|
With a Web app using WAMP, you have two separate mechanisms: Serving the Web assets and the Web app then communicating with the backend (or other WAMP components).
You can use Django, Flask or any other web framework for serving the assets - or the static Web server integrated into Crossbar.io.
The JavaScript you deliver as part of the assets then connects to Crossbar.io (or another WAMP router), as do the backend or other components. This is then used to e.g. send data to display to the Web frontend or to transmit user input.
| 0 | 1 | 0 | 0 |
2015-11-06T23:37:00.000
| 1 | 0 | false | 33,577,252 | 0 | 0 | 1 | 1 |
As I understand it (please do correct misunderstandings, obviously), the mentioned projects/technologies are as follows:-
Crossover.io - A router for WAMP. Cross-language.
WAMP - An async message passing protocol, supporting (among other things) Pub/Sub and RPC. Cross-language.
twisted - An asynchronous loop, primarily used for networking (low-level). Python specific. As far as I can tell, current crossover.io implementation in python is built on top of twisted.
klein - Built on top of twisted, emulating flask but asynchronously (and without the plugins which make flask easier to use). Python specific.
django/flask/bottle - Various stacks/solutions for serving web content. All are synchronous because they implement the WSGI. Python specific.
How do they interact? I can see, for example, how twisted could be used for network connections between various python apps, and WAMP between apps of any language (crossover.io being an option for routing).
For networking though, some form of HTTP/browser based connection is normally needed, and that's where in Python django and alternatives have historically been used. Yet I can't seem to find much in terms of interaction between them and crossover/twisted.
To be clear, there's things like crochet (and klein), but none of these seem to solve what I would assume to be a basic problem, that of saying 'I'd like to have a reactive user interface to some underlying python code'. Or another basic problem of 'I'd like to have my python code update a webpage as it's currently being viewed'.
Traditionally I guess its handled with AJAX and similar on the webpage served by django et. al., but that seems much less scalable on limited hardware than an asynchronous approach (which is totally doable in python because of twisted and tornado et. al.).
Summary
Is there a 'natural' interaction between underlying components like WAMP/twisted and django/flask/bottle? If so, how does it work.
|
How to diff between two HTML codes?
| 33,608,719 | 0 | 0 | 308 | 0 |
python,diff,lxml,difflib
|
You can use the difflib module. It's part of the Python standard library.
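A minimal sketch (assuming html_a and html_b hold the two page sources):

import difflib

for line in difflib.unified_diff(html_a.splitlines(), html_b.splitlines(), lineterm=''):
    print(line)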
| 0 | 0 | 1 | 0 |
2015-11-09T11:58:00.000
| 1 | 0 | false | 33,608,604 | 0 | 0 | 1 | 1 |
I need to run a diff mechanism on two HTML page sources to filter out all the generated data (like user session IDs, etc.).
I'm wondering if there is a Python module that can do that diff and return the element that contains the difference (so I can exclude that element in the rest of my code, on other sources).
|
How to allow an user of a group to modify specific parts of a form in Odoo?
| 33,636,388 | 1 | 1 | 2,097 | 0 |
python,xml,python-2.7,odoo-8,odoo
|
You cannot make only some of the fields read-only in Odoo based on groups. If you need to do that, you can use the custom module 'smile_model_access_extension'.
For loading the appropriate view on a menu click you can create a record of 'ir.actions.act_window' and set its view_ids field, where you can specify the sequence and type of view to be loaded when the menu action is performed. In your case you can specify the specific form view for your action.
| 0 | 0 | 0 | 0 |
2015-11-10T10:23:00.000
| 2 | 0.099668 | false | 33,627,789 | 0 | 0 | 1 | 1 |
I've created a new group named accountant. If an user of this group opens the res.partner form for example, he must be able to read all, but only modify some specific fields (the ones inside the tab Accountancy, for example).
So I set the permissions create, write, unlink, read to 0, 1, 0, 1 in the res.partner model for the accountant group.
The problem: if I'm a user of the accountant group and I go to the res.partner form, I will see the Edit button; if I click on it, I will be able to modify any field I want (and I should not, only the ones inside the tab).
So I thought to duplicate the menuitem (adding the attribute groups="accountant" to the copy) and the form (making all fields readonly except for the content of the tab).
The problem: if I'm a user of a group above the accountant group (with accountant in its implied_ids list), I will see both menuitems (the one which leads to the normal form and the one which leads to the duplicated form with the readonly fields).
Is it possible to create a menuitem which opens a specific set of views depending on the group of the user who is clicking on the mentioned menuitem? Any ideas of how can I succesfully implement this?
|
what is update_against_templates in pootle 2.7?
| 38,104,213 | 1 | 1 | 75 | 0 |
python,django,pootle,translate-toolkit
|
Template updates now happen outside of Pootle. The old update_against_templates had performance problems and could get Pootle into a bad state. To achieve the same functionality as update_against_templates do the following. Assuming your project is myproject and you are updating language af:
sync_store --project=myproject --language=af
pot2po -t af template af
update_store --project=myproject --language=af
You can automate that in a script to iterate through all languages. Use list_languages --project=myproject to get a list of all the active languages for that project.
| 0 | 0 | 0 | 1 |
2015-11-10T12:36:00.000
| 1 | 1.2 | true | 33,630,137 | 0 | 0 | 1 | 1 |
I added a new template file to my project. Now I don't know how to make the languages update or pick up the new template file. I've read that 2.5 has update_against_templates but it's not in 2.7. How can I update my languages?
|
Python, Django - how to store http requests in the middleware?
| 33,646,559 | 2 | 1 | 1,698 | 0 |
python,django,middleware
|
You can implement your own RequestMiddleware (which plugs in before the URL resolution) or ViewMiddleware (which plugs in after the view has been resolved for the URL).
In that middleware, it's standard python. You have access to the filesystem, database, cache server, ... the same you have anywhere else in your code.
Showing the last N requests in a separate web page means you create a view which pulls the data from the place where your middleware is storing them.
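A minimal sketch of such a middleware (old-style, matching the Django versions of this era; the in-process deque is an assumption, a real deployment behind multiple workers would use the cache or database instead):

from collections import deque

LAST_REQUESTS = deque(maxlen=20)

class RequestLogMiddleware(object):
    # add this class's dotted path to MIDDLEWARE_CLASSES in settings.py
    def process_request(self, request):
        # record the method and path of each incoming request
        LAST_REQUESTS.append((request.method, request.get_full_path()))
        return None

A view can then read LAST_REQUESTS and hand it to a template for display.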
| 0 | 0 | 0 | 0 |
2015-11-11T07:23:00.000
| 2 | 0.197375 | false | 33,645,899 | 0 | 0 | 1 | 1 |
It might be that this question sounds pretty silly but I can not figure out how to do what I believe is the simplest of issues (because I just started learning Django).
What I know is I should create a middleware file and connect it to the settings. Then create a view and a *.html page that will show these requests and wire it up in the urls.
how can one store the last (5/10/20 or any) http requests in the middleware and show them in a *.html page? The problem is I don't even know what exactly I should write into middleware.py and views.py so that it could be displayed in the *.html file.
I would be really thankful for any insights and elucidates.
P.S. One more time sorry for a dummy question.
|
Upgrading to Django 1.7 from 1.5 missing textile
| 33,649,907 | 1 | 0 | 40 | 0 |
python,django
|
textile (import textile) is now a separate Python package, which is the reason Django removed it, so I've made a template filter that uses it.
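A minimal sketch of such a filter, assuming the python-textile package is installed (the file and filter names are hypothetical):

# in app/templatetags/textile_tags.py
import textile
from django import template
from django.utils.safestring import mark_safe

register = template.Library()

@register.filter
def textilize(value):
    # render textile markup to HTML and mark it safe for the template
    return mark_safe(textile.textile(value))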
| 0 | 0 | 0 | 0 |
2015-11-11T08:46:00.000
| 2 | 0.099668 | false | 33,646,944 | 0 | 0 | 1 | 1 |
Textile has been deprecated in django; I haven't found any docs on a replacement for this. What's the alternative now for django?
|
No way to adjust baud rate in connect() with Dronekit 2 and 3DR Radios and Pixhawk?
| 33,693,815 | 0 | 0 | 204 | 0 |
dronekit-python
|
Found out from another website that there was a bug in the releases prior to Dronekit 2.0.0.rc9: they forgot to put in any way to adjust the baud rate! The latest release, Dronekit 2.0.0.rc10, has the fix.
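With that release, passing the baud rate looks roughly like this (the port and rate are assumptions for a typical 3DR radio setup):

from dronekit import connect

vehicle = connect('/dev/ttyUSB0', baud=57600, wait_ready=True)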
| 0 | 0 | 0 | 1 |
2015-11-11T18:16:00.000
| 1 | 0 | false | 33,657,161 | 0 | 0 | 1 | 1 |
I am unable to connect to my Pixhawk drone with 3DR radios and Dronekit 2 and Python code.
I am able to connect with a USB cable attached to Pixhawk.
I suspect the problem is the baud rate with the radios are too high.
There seems to be no way to change the baud rate with the radios in the connect command.
Please advise.
Windows 8.1
Thank you!
|
When do we need class-based views in Django?
| 33,677,561 | 1 | 1 | 88 | 0 |
django,python-3.x
|
We use class based views (CBV's) to reduce the amount of code required to do repetitive tasks such as rendering a form or a template, listing the items from a queryset etc.
Using CBV's drastically reduces the amount of code required and should be used where it can be.
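For example, a minimal ListView (the Article model is hypothetical):

from django.views.generic import ListView
from myapp.models import Article   # hypothetical model

class ArticleListView(ListView):
    model = Article
    template_name = 'articles/list.html'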
| 0 | 0 | 0 | 0 |
2015-11-12T17:12:00.000
| 3 | 0.066568 | false | 33,677,303 | 0 | 0 | 1 | 2 |
I'm learning Django and I've finished 2 tutorials - the official one and an amazing tutorial called Tango With Django. Though I got everything I need to work, I have one question:
Tango with Django doesn't use class-based views - it only links to the official tutorial.
Why didn't they include this information?
When should we use class-based views and is it a good practice?
|
When do we need class-based views in Django?
| 33,679,195 | 0 | 1 | 88 | 0 |
django,python-3.x
|
It is good to use in a CRM system. You have a list view, item view, delete view etc., such as in a blog. CBVs could help you write less code.
But that is also because they do too much for you. Sometimes it will be a little bit troublesome to make customizations or add some extra logic. In this situation, it is more suitable for those really experienced and familiar with CBVs, so that they can change things easily.
| 0 | 0 | 0 | 0 |
2015-11-12T17:12:00.000
| 3 | 0 | false | 33,677,303 | 0 | 0 | 1 | 2 |
I'm learning Django and I've finished 2 tutorials - the official one and an amazing tutorial called Tango With Django. Though I got everything I need to work, I have one question:
Tango with Django doesn't use class-based views - it only links to the official tutorial.
Why didn't they include this information?
When should we use class-based views and is it a good practice?
|
Proper version of boto for Eucalyptus cloud
| 33,695,098 | 0 | 1 | 189 | 0 |
python,boto,eucalyptus
|
2.38 is the right version. boto3 is something totally different and I don't have experience with it.
| 0 | 0 | 1 | 0 |
2015-11-12T17:28:00.000
| 2 | 0 | false | 33,677,594 | 0 | 0 | 1 | 1 |
I'm writing some code to interact with an HP Helion Eucalyptus 4.2 cloud server.
At the moment I'm using boto 2.38.0, but I discovered that also exists
the boto3 version.
Which version should I use in order to keep the code up with the times?
I mean, It seems that boto3's proposal is a ground-up rewrite more focused
on the "official" Amazon Web Services (AWS).
|
Reinstalling Django App - Data tables not re-created
| 33,704,110 | 0 | 1 | 205 | 1 |
python,django
|
I think I might have managed to solve the problem. The command python manage.py sqlmigrate app_name 0001 produces the SQL statements required for the table creation. Thus, I copied and pasted the output into the PostgreSQL console and got the tables created. It seems to work for now, but I am not sure if there will be repercussions later.
| 0 | 0 | 0 | 0 |
2015-11-14T00:37:00.000
| 1 | 0 | false | 33,703,866 | 0 | 0 | 1 | 1 |
I am trying to reinstall one of my apps on my project site. These are the steps that I have followed to do so:
Removing the name of the installed app from settings.py
Manually deleting the app folder from the project folder
Manually removing the data tables from PostgreSQL
Copying the app folder back into the project folder; making sure that all files, except __init__.py is removed.
Run python manage.py sqlmigrate app_name 0001
Run python manage.py makemigrations app_name
Run python manage.py migrate app_name
Run python manage.py makemigrations
Run python manage.py migrate
However, after all these steps the message I am getting is that there are "no changes detected" and the data tables have not been recreated in the database, PostgreSQL.
Am I missing some additional steps?
|
No module named app.scripts.file
| 33,711,994 | 0 | 0 | 559 | 0 |
python,django
|
Try also adding an __init__.py file to that folder. It can be a blank file.
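The layout should look like this (each __init__.py can be empty):

app/
    __init__.py
    views.py
    scripts/
        __init__.py
        file.py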
| 0 | 0 | 0 | 0 |
2015-11-14T18:39:00.000
| 2 | 0 | false | 33,711,927 | 0 | 0 | 1 | 1 |
I'm trying to put a file to import under the structure of app/scripts/file.py
I then want to call it similar to how I would anything else by doing in my views.py
from app.scripts.file import *
doing so gives the following error -
No module named app.scripts.file
If I put the file.py directly into the app folder there's no issue.
from app.file import *
|
Does PyPy's garbage collector need to stop the world?
| 33,752,634 | 4 | 2 | 735 | 0 |
python,pypy
|
PyPy GC does not stop the world, it's an incremental garbage collector.
| 0 | 0 | 0 | 0 |
2015-11-14T19:52:00.000
| 2 | 0.379949 | false | 33,712,642 | 1 | 0 | 1 | 1 |
I'm a Java developer so I sometimes need to optimize JVM arguments to improve GC performance (for example, to reduce STW pause times).
Recently I tried to introduce Python to my new web project, and I decided to use PyPy as Python interpreter. My question is how does PyPy's garbage collector work? Does it also need to stop the world?
I've done some search but there are not so many docs about PyPy's GC mechanism.
|
django - Run tests upon starting server
| 33,714,252 | 1 | 2 | 189 | 0 |
python,django
|
Answer from @limelights:
Create a bash alias or run them in sequence?
I've adapted that answer to this line of code (for bash):
alias runserver="sudo python ~/testsite/manage.py test articles; sudo python ~/testsite/manage.py runserver 192.168.1.245:90" (all on one line)
Using runserver runs the test suite and opens the server. An added perk is that I can run it from any location without having to go into the ~/testsite directory.
| 0 | 0 | 0 | 1 |
2015-11-14T21:18:00.000
| 1 | 0.197375 | false | 33,713,481 | 0 | 0 | 1 | 1 |
How can I configure my Django server to run tests from tests.py when starting the server with python manage.py runserver? Right now, I have to run tests through python manage.py test articles. (Note: I am using Django 1.8)
|
Robot Framework - Import library with 2 classes from different location
| 33,723,660 | 0 | 0 | 1,092 | 0 |
python,robotframework
|
You have two choices for importing:
importing a library via PYTHONPATH
importing a library based on the file path to the library.
In the first case you can import each class separately.
In the second case, it's not possible to import multiple classes from a single file. If you give a path to a python file, that file must contain keywords. It can also include classes, but robot won't know about those classes.
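For example, if C:/Robot/Lib has been added to PYTHONPATH (the first case), each class can then be imported separately:

| Library | library.Class1 |
| Library | library.Class2 |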
| 0 | 0 | 0 | 1 |
2015-11-15T16:17:00.000
| 1 | 1.2 | true | 33,721,893 | 0 | 0 | 1 | 1 |
I have a custom library that is in a different location from the test suite.
Meaning the test suite is in "C:/Robot/Test/test_suite.txt" and my library is in "C:/Robot/Lib/library.py".
The library has 2 different classes and I need to import both of them.
I have tried to import it by "Library | ../Lib/library.py" but I got an error saying that the library contains no keywords.
I also tried to import it by "Library | ../Lib/library.Class1" but got a syntax error.
Is there any way to do it without changing the PYTHONPATH?
Thank you!
|
Storing a PDF file in DB with Flask-admin
| 33,724,438 | -2 | 2 | 3,809 | 1 |
python,mongodb,object,flask-sqlalchemy,flask-admin
|
Flask-Admin doesn't store anything. It's just a window into the underlying storage.
So yes, you can have blob fields in a Flask-Admin app -- as long as the engine of your database supports blob types.
In case further explanation is needed, Flask-Admin is not a database. It is an interface to a database. In a flask-admin app, you connect to a pre-existing database. This might be an sqlite database, PostGresSQL, MySQL, MongoDB or any of a variety of databases.
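For example, with Flask-SQLAlchemy a blob column could be sketched like this (the model is hypothetical):

from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class Document(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(255))
    pdf_data = db.Column(db.LargeBinary)   # raw PDF bytes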
| 0 | 0 | 0 | 0 |
2015-11-15T16:40:00.000
| 2 | -0.197375 | false | 33,722,132 | 0 | 0 | 1 | 1 |
can I store PDF files in the database, as object or blob, with Flask-Admin?
I can't find any reference in the documentation.
Thanks.
Cheers
|
does it make sense to use apache as web server for django in development mode
| 33,744,800 | 0 | 0 | 176 | 0 |
python,django,apache,distributed
|
There should be nothing stopping you from installing a copy of Apache on your workstation and using it for developing, and since you're working on something that depends on some of that functionality it makes perfect sense for you to use that for your development server instead of ./manage.py runserver.
Most people use Djangos built-in server because they don't need more than that for what they're trying to do - it sounds like your solution does.
Heck, since you're testing distributed you may even want to consider grabbing a virtualization tool (qemu, virtualbox, et al) so you can have a faux-distributed setup to work with (I'd suggest doing a bit of scripting to make it easy to deploy / restart them all at once though - it'll save you from having to track down issues where the code that's running is older than you thought it was).
Your development environment can be what you need it to be for what you're doing.
| 0 | 0 | 0 | 0 |
2015-11-16T19:07:00.000
| 1 | 1.2 | true | 33,742,716 | 0 | 0 | 1 | 1 |
I'm a newbie in django and have a project that involves distributed remote storage, and it was suggested that I use mod_xsendfile as part of the project.
I have a django app that receives a file and splits it into N segments, each to be stored on a distinct server. Those servers run a django app that receives and stores the segments.
But since mod_xsendfile needs Apache, and I am just at the developing and testing stage, this question occurred to me.
I googled a lot but found nothing in that regard.
So my question is: Is it possible to use Apache as the django web server during the development of django apps? Does it make sense in development mode to replace the django built-in web server with Apache?
|
Error Traceback in Jinja2 when Extending Template
| 33,842,204 | 0 | 1 | 400 | 0 |
python,templates,jinja2,extends
|
So after trying a lot of things, I found that the best way to do this is to use iframes instead of the Jinja extend. This way, not only can I locate the source of the error, I don’t have to send the Python values I am using in the frames to each template that I am going to render. I only send them to the original class that creates the iframe template.
| 0 | 0 | 0 | 0 |
2015-11-17T08:04:00.000
| 2 | 1.2 | true | 33,751,881 | 0 | 0 | 1 | 2 |
I’m using template inheritance in jinja2 because I have a top bar in my website that I need to include in all pages. The problem is that whenever there is an error in any page the traceback always points to the line with the {% extends %} tag and I cannot locate the source of the error.
Is there a way to find out which line is causing the error (aside from reading the whole code myself) or another way to do template inheritance than {% extends %}?
|
Error Traceback in Jinja2 when Extending Template
| 33,946,134 | 1 | 1 | 400 | 0 |
python,templates,jinja2,extends
|
Although iframes are more commonly used for embedding pages from different websites, this might be a good idea. You could also use the Jinja tag {% include %} and then use sessions to cache the data instead of reloading it in every page.
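For example, the shared top bar could live in its own file (the filename is hypothetical) and be pulled in with:

{% include "topbar.html" %}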
| 0 | 0 | 0 | 0 |
2015-11-17T08:04:00.000
| 2 | 0.099668 | false | 33,751,881 | 0 | 0 | 1 | 2 |
I’m using template inheritance in jinja2 because I have a top bar in my website that I need to include in all pages. The problem is that whenever there is an error in any page the traceback always points to the line with the {% extends %} tag and I cannot locate the source of the error.
Is there a way to find out which line is causing the error (aside from reading the whole code myself) or another way to do template inheritance than {% extends %}?
|
How can I set limit to the duration of a job with the APScheduler?
| 33,770,050 | 7 | 5 | 1,679 | 0 |
python,apscheduler
|
APScheduler does not have a way to set the maximum run time of a job. This is mostly due to the fact that the underlying concurrent.futures package that is used for the PoolExecutors does not support such a feature. A subprocess could be killed, but lacking the proper API, APScheduler would have to get a specialized executor to support this, not to mention an addition to the job API that allowed for timeouts. This is something to be considered for the next major version.
The question is, what do you want to do with the thread that is still running the job? Since threads cannot be forcibly terminated, the only option would be to let it run its course, but then it will still keep the thread busy.
| 0 | 1 | 0 | 0 |
2015-11-17T08:40:00.000
| 1 | 1 | false | 33,752,419 | 0 | 0 | 1 | 1 |
I set the scheduler with "max_instances=10". There can be 10 jobs running concurrently. Sometimes some jobs block and just hang there. When more than 10 jobs were blocked, the exception "skipped: maximum number of running instances reached (10)" was raised.
Does APScheduler have a way to set the maximum duration of a job? If a job runs beyond the max time, it would be terminated.
If it doesn't have such a way, what should I do?
|
Programmatically create confluence content from jira and fisheye
| 33,761,564 | 1 | 1 | 649 | 0 |
python,automation,jira,confluence,asciidoctor
|
I did something similar - getting info from Jira and updating confluence info.
I did it in a bash script that ran on Jenkins. The script:
Got Jira info using the Jira REST API
Parsed the JSON from Jira using jq (wonderful tool)
Created/updated the confluence page using the Confluence REST API
I have not used python but the combination of bash/REST/jq was very simple. Running the script from Jenkins allowed me to run this periodically, so confluence is updated automatically every 2 weeks with the new info from Jira.
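A rough Python equivalent of the first two steps (the host and JQL query are hypothetical):

import requests

resp = requests.get('https://jira.example.com/rest/api/2/search',
                    params={'jql': 'project = MYPROJ'},
                    auth=('user', 'password'))
issues = resp.json()['issues']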
| 0 | 0 | 0 | 1 |
2015-11-17T12:06:00.000
| 1 | 0.197375 | false | 33,756,512 | 0 | 0 | 1 | 1 |
I'm curious what a good automated workflow could look like for the process of getting issue/touched-file lists into a confluence page. I describe my current idea here:
Get all issues matching my request from JIRA using REST (DONE)
Get all touched files related to the matching Issues using Fisheye REST
Create a .adoc file with the content
Render it using asciidoctor-confluence to a confluence page
I'm implementing this in python (using requests etc.) and I wonder how I could provide proper .adoc for the ruby-based asciidoctor. I'm planning to use asciidoctor because it has an option to render directly to confluence using asciidoctor-confluence.
So, is there anybody who can kindly elaborate on my idea?
|
How to debug Python script which is automatically called inside a web application?
| 33,758,124 | 1 | 0 | 154 | 0 |
python,debugging,cassandra,pdb,graphite
|
pdb gives control over to gunicorn, which is not what you want. Have a look at rpdb or other remote debugging solutions.
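A minimal rpdb sketch (by default it listens on 127.0.0.1:4444):

import rpdb
rpdb.set_trace()   # execution pauses here and waits for a remote client

# then, from another shell on the server:
# nc 127.0.0.1 4444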
| 0 | 0 | 0 | 0 |
2015-11-17T13:04:00.000
| 1 | 1.2 | true | 33,757,699 | 0 | 0 | 1 | 1 |
I'm developing a cassandra storage finder for graphite-api.
graphite-api is installed via pip and run via gunicorn so I can't just call the script with a debugger but want to use interactive debugging.
When I import pdb in my storage finder and set a breakpoint, the code will halt there, but how can I connect now to the headless running pdb in the script?
Or is my approach to this debugging problem the wrong one and this has to be done in a completely other way?
|
Why app context in flask not a singleton for an app?
| 33,780,922 | 5 | 3 | 1,976 | 0 |
python,flask,singleton,thread-local
|
The app context is not meant for sharing between requests. It is there to share context before the request context is set up, as well as after the request has been torn down already. Yes, this means that there can be multiple g contexts active for different requests.
You can't share 'global' state because a WSGI app is not limited to a single process. Many WSGI servers use multiprocessing to scale request handling, not just threading. If you need to share 'global' state across requests, use something like a database or memcached.
| 0 | 0 | 0 | 0 |
2015-11-18T12:58:00.000
| 1 | 1.2 | true | 33,780,727 | 0 | 0 | 1 | 1 |
I've read flask document and found this:
13.3 Locality of the Context
The application context is created and destroyed as necessary. It never moves between threads and it will not be shared between requests.
This is really odd to me. I think an app context should persist with the app and share objects across all the requests of the app.
So I dove into the source code and found that when the request context is pushed, an application context will be created and pushed if the current app is not the one the request is associated with.
So it seems that the app context stack may have multiple different app contexts pushed for the same app? Why not use a singleton app context? Why is the lifetime of the app context so 'short'? What can such an app context be used for?
|
Django migrations best practice
| 33,784,579 | 5 | 0 | 1,818 | 0 |
python,django,version-control,django-migrations
|
you should create migration files locally, migrate locally and test it, and then commit the files to version control. The django docs say:
The reason that there are separate commands to make and apply
migrations is because you’ll commit migrations to your version control
system and ship them with your app; they not only make your
development easier, they’re also useable by other developers and in
production.
if multiple developers are working on the same project, they don't have to create the migration files; they just run migrate and everything is paradise.
| 0 | 0 | 0 | 0 |
2015-11-18T15:43:00.000
| 2 | 0.462117 | false | 33,784,362 | 0 | 0 | 1 | 2 |
I'm using Django 1.7 with migrations, and I'm not sure what the best practice is: should I add the migration files to my repository, or is this a bad idea?
|
Django migrations best practice
| 33,784,715 | 0 | 0 | 1,818 | 0 |
python,django,version-control,django-migrations
|
Yes, they must be versioned. If you are alone, it's not a problem, because you always have the right database schema: each time you edit a model, you run makemigrations and migrate.
But how can your colleagues get the database schema that corresponds to the new models you committed if they can't run your migrations too?
Commit your migrations to allow your colleagues to run migrate and have the same database schema.
| 0 | 0 | 0 | 0 |
2015-11-18T15:43:00.000
| 2 | 1.2 | true | 33,784,362 | 0 | 0 | 1 | 2 |
I'm using Django 1.7 with migrations, and I'm not sure what the best practice is: should I add the migration files to my repository, or is this a bad idea?
|
Session Cookie HTTPOnly flag not set on response from logout (Django)
| 33,787,443 | 3 | 5 | 1,858 | 0 |
python,django,security,httponly
|
On logout, the server sends back a session cookie update with an empty
value to show that the cookie has been destroyed.
The HTTPOnly flag is set to prevent an XSS vulnerability from disclosing the secret session ID. When the cookie is "deleted" by setting it to an empty value, any sensitive data is removed from the cookie. An attacker doesn't have any use for an empty value, so it is not necessary to set the HTTPOnly flag.
On top of that, the expire date is set in the past, and the max-age is set to 0. The client will delete the cookie immediately, leaving any attacker with no chance to read the cookie through an XSS attack.
| 0 | 0 | 0 | 0 |
2015-11-18T17:39:00.000
| 1 | 1.2 | true | 33,786,736 | 0 | 0 | 1 | 1 |
I have a Django application and am configuring some security settings. One of the settings is the SESSION_COOKIE_HTTPONLY flag. I set this flag to True.
On session creation (login) I can see the session HTTPOnly flag set if I inspect cookies. On logout, the server sends back a session cookie update with an empty value to show that the cookie has been destroyed. This empty cookie is not sent back with the httpOnly flag set.
My question: Is this a security concern? Is there a way to force Django to set this flag on logout? Or is this just expected behavior, and is not a security concern, since the session cookie that is returned is blank?
|
Celery restart loss scheduled tasks
| 52,539,351 | 3 | 15 | 3,035 | 0 |
python,django,redis,celery
|
You have to use RabbitMQ instead of Redis.
RabbitMQ is feature-complete, stable, durable and easy to install. It’s an excellent choice for a production environment.
Redis is also feature-complete, but is more susceptible to data loss in the event of abrupt termination or power failures.
With RabbitMQ, your problem of losing messages on restart should go away.
| 0 | 1 | 0 | 0 |
2015-11-19T10:59:00.000
| 1 | 0.53705 | false | 33,801,985 | 0 | 0 | 1 | 1 |
I use Celery to schedule the sending of emails in the future. I put the task in celery with apply_async() and an ETA set sometime in the future.
When I look in flower I see that all tasks scheduled for the future has status RECEIVED.
If I restart celery, all tasks are gone. Why are they gone?
I use redis as a broker.
EDIT1
In documentation I found:
If a task is not acknowledged within the Visibility Timeout the task will be redelivered to another worker and executed.
This causes problems with ETA/countdown/retry tasks where the time to execute exceeds the visibility timeout; in fact if that happens it will be executed again, and again in a loop.
So you have to increase the visibility timeout to match the time of the longest ETA you are planning to use.
Note that Celery will redeliver messages at worker shutdown, so having a long visibility timeout will only delay the redelivery of ‘lost’ tasks in the event of a power failure or forcefully terminated workers.
Periodic tasks will not be affected by the visibility timeout, as this is a concept separate from ETA/countdown.
You can increase this timeout by configuring a transport option with the same name:
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 43200}
The value must be an int describing the number of seconds.
But the ETA of my tasks can be measured in months or years.
EDIT 2
This is what I get when I type:
$ celery -A app inspect scheduled
{u'priority': 6, u'eta': u'2015-11-22T11:53:00-08:00', u'request': {u'args': u'(16426,)', u'time_start': None, u'name': u'core.tasks.action_due', u'delivery_info': {u'priority': 0, u'redelivered': None, u'routing_key': u'celery', u'exchange': u'celery'}, u'hostname': u'[email protected]', u'acknowledged': False, u'kwargs': u'{}', u'id': u'8ac59984-f8d0-47ae-ac9e-c4e3ea9c4ac6', u'worker_pid': None}}
If you look closely, the task wasn't acknowledged yet, so it should stay in Redis after a celery restart, right?
|
Scraping model information from a program using python
| 33,815,248 | 1 | 1 | 132 | 0 |
python,revit-api,revitpythonshell
|
Great question - my +1 is definitely for Revit Python Shell (RPS).
Likewise I had a basic understanding of Python and none of the Revit API, but with RPS I've coded multiple addins for our office (including rich user interfaces using winforms) and had no limitations so far from coding in Python. It's true that there is some translating of C# API samples into Python - but the reward is in seeing a few paragraphs of code become a few lines...
The maker of RPS (Daren) is also really helpful, so no questions go unanswered.
Disclaimer: (like you) I'm a novice programmer who has simply wanted to use the API to extend Revit. RPS for the win
| 0 | 0 | 0 | 0 |
2015-11-19T11:16:00.000
| 2 | 0.099668 | false | 33,802,391 | 0 | 0 | 1 | 1 |
I'm attempting to pull physical property information (dimensions and resistance values, in particular) from an architectural (Autodesk - Revit) model and organize that information to be exported as specific variables.
To expand slightly, for an independent study I want to perform energy balances on Revit Models, starting simple and building from there. The goal is to write code that collects information from a Revit Model and then organizes it into variables such as "Total Wall Area", "Insulation Resistance", "Drywall depth", "Total Window Area", etc. that could be then sent to a model (or simply a spreadsheet) and stored as such.
I hope that makes some sense.
Given that I am a novice coder and would prefer to write in Python, does anyone have any advice or resources concerning an efficient (simple) path to go about importing and organizing specific parameters from a Revit model?
Is it necessary (or realistically necessary, given the humble extent of my knowledge) to use the API for this program (Revit) to accomplish this task?
I imagine this task is similar to web scraping yet I have no HTML to call and search through and therefore am happily winging my way along, asking folks far more knowledgeable than I if they have any insight.
A brief background, I have next to no knowledge of Revit or APIs in general, basic knowledge of coding in Python and really want to learn more!
Any help you are able to give is absolutely appreciated! I'm also happy to answer any questions that come up.
Thank you for reading and have a terrific day!
|
Socket server performance
| 33,837,539 | 0 | 0 | 611 | 0 |
python,node.js,sockets,nginx,socket.io
|
How many sockets and threads will be created on the server?
As many sockets as there are inbound connections. As for threads, it depends on your architecture. It could be one, could be the same as the number of sockets, could be in between, could be more. Unanswerable.
Can a socket be shared between different connections?
No, of course not. The question doesn't make sense. A socket is an endpoint of a connection.
Is there any tool to analyze the number of open sockets?
The netstat tool.
| 0 | 0 | 1 | 0 |
2015-11-20T06:11:00.000
| 2 | 1.2 | true | 33,820,103 | 0 | 0 | 1 | 1 |
I am working on a web socket application. From the front-end there would be a single socket per application. But I am not sure about the back-end. We are using Python and nginx with Flask-SocketIO and the socket.io client library. This architecture will be used to notify the front-end that a change has occurred and it should update its data.
Following are my doubts -
How many sockets and threads will be created on the server?
Can a socket be shared between different connections?
Is there any tool to analyze the number of open sockets?
|
Need To Select Only One Checkbox In Group
| 33,853,583 | 0 | 0 | 915 | 0 |
python,checkbox,wxpython
|
Have you considered cycling through them on EVT_CHECKBOX?
Each box can be tested with IsChecked(), if the test is True then you can use SetValue(False) on the others or whatever suits your requirements.
Also, there is nothing to stop you creating a radiobutton with the value None.
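A rough sketch of that approach (self.group is an assumed list holding the group's checkboxes, each bound with box.Bind(wx.EVT_CHECKBOX, self.on_check)):

def on_check(self, event):
    clicked = event.GetEventObject()
    if clicked.IsChecked():
        # allow at most one checked box in this group
        for box in self.group:
            if box is not clicked:
                box.SetValue(False)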
| 1 | 0 | 0 | 0 |
2015-11-20T14:43:00.000
| 1 | 1.2 | true | 33,829,421 | 0 | 0 | 1 | 1 |
I am working on a screen written in wxPython, and Python, that has five groups of CheckBoxes. Three of the groups can have between none and all the CheckBoxes selected. However with two of the groups only none or one can be selected. RadioButtons have been considered and disregarded as you cannot select none and their appearance is different making the look and feel of the page inconsistent. Obviously I could write numerous OnCheckBox events that would all be very similar. Is there an easier and more elegant way of achieving this?
|
Google App Engine File Processing
| 33,830,880 | 0 | 0 | 62 | 0 |
python,file,google-app-engine
|
Your best bet could be to upload to the Blobstore or Cloud Storage, then use the Task Queue to process the file, which has no time limits.
| 0 | 1 | 0 | 0 |
2015-11-20T15:46:00.000
| 1 | 0 | false | 33,830,715 | 0 | 0 | 1 | 1 |
I am trying to create a process that will upload a file to GAE to interpret its contents (most are PDFs, so we would use something like PDF Miner), and then store it in Google Cloud Storage.
To my understanding, the problem is that requests are limited to 60 seconds of execution, and uploads to a size limit of, I think, 10MB. Does anyone have any ideas of how to address this issue?
|
AWS worker daemon locks multiple messages even before the first message is processed
| 33,846,596 | 1 | 1 | 179 | 0 |
python,amazon-web-services,flask,amazon-sqs,worker
|
Set the HTTP Connection setting under Worker Configuration to 1. This should prevent each server from receiving more than 1 message at a time.
You might want to look into changing your autoscaling configuration to monitor your SQS queue depth or some other SQS metric instead of worker CPU utilization.
| 0 | 1 | 0 | 1 |
2015-11-21T17:29:00.000
| 1 | 1.2 | true | 33,846,425 | 0 | 0 | 1 | 1 |
I have deployed a python-flask web app on the worker tier of AWS. I send some data into the associated SQS queue and the daemon forwards the request data in a POST request to my web app. The web app takes anywhere between 5 mins to 6 hours to process the request depending upon the size of posted data. I have also configured the worker app into an auto scaling group to scale based on CPU utilization metrics. When I send 2 messages to the queue in quick succession, both messages start showing up as in-flight. I was hoping that the daemon will forward the first message to the web app and then wait for it to be processed before pulling the second message out. In the meantime, auto scaling will spin up another instance (which it is but since the second message is also in-flight, it is not able to pull that message) and the new instance will pull and process the second message. Is there a way of achieving this?
|
How to get a standalone python script to get data from my django app?
| 33,852,782 | 3 | 2 | 832 | 0 |
python,django
|
If I understand you correctly, you're looking to have an external program communicate with your server. To do this, the server needs to expose an API (Application Programming Interface) that communicates with the external program. That interface will receive a message and return a response.
The request will need to have two things:
identifying information for the user - usually a secret key - so that other people can't access the user's data.
a query of some sort indicating what kind of information to return.
The server will get the request, validate the user's secret key, process the query, and return the result.
It's pretty easy to do in Django. Set up a url like /api/cards and a view. Have the view process the request and return the response. Often, these days, these back and forth messages are encoded in JSON - an easy way to encapsulate and send data. Google around with the terms django, api, and json and you'll find a lot of what you need.
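A hedged sketch of such a view (the Card model and the key check are hypothetical placeholders):

from django.http import JsonResponse
from myapp.models import Card   # hypothetical model

def cards_api(request):
    # placeholder secret-key check; a real app would look the key up per user
    if request.GET.get('api_key') != 'expected-secret':
        return JsonResponse({'error': 'invalid key'}, status=403)
    cards = [{'front': c.front, 'back': c.back} for c in Card.objects.all()]
    return JsonResponse({'cards': cards})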
| 0 | 0 | 0 | 0 |
2015-11-22T05:45:00.000
| 1 | 1.2 | true | 33,852,035 | 0 | 0 | 1 | 1 |
I am currently learning how to use django. I have a standalone python script that I want to communicate with my django app. However, I have no clue how to go about doing this. My django app has a login function and a database with usernames and passwords. I want my python script to talk to my app and verify the person's username and password and also get some account info like the person's name. How do I go about doing this? I am very new to web apps and I am not really sure where to begin.
Some Clarifications: My standalone python program is so that the user can access some information about their account. I am not trying to use the script for login functionality. My django app already handles this. I am just trying to find a way to verify that they have said account.
For example: If you have a flashcards web app and you want the user to have a program locally on their computer to access their flashcards, they need to login and download the cards from the web app. So wouldn't the standalone program need to communicate with the app to get login information and access to the cards on that account somehow? That's what I am trying to accomplish.
|
Sharing install files between virtualenv instances
| 33,852,065 | 0 | 1 | 39 | 0 |
python,pip,virtualenv
|
The whole point of virtualenv is to isolate and compartmentalize dependencies. What you are describing directly contradicts its use case. You could go into each individual project and modify the environment variables, but that's a hackish solution.
| 0 | 1 | 0 | 0 |
2015-11-22T05:47:00.000
| 1 | 0 | false | 33,852,048 | 1 | 0 | 1 | 1 |
I have 2-3 dozen Python projects on my local hard drive, and each one has its own virtualenv. The problem is that adds up to a lot of space, and there's a lot of duplicated files since most of my projects have similar dependencies.
Is there a way to configure virtualenv or pip to install packages into a common directory, with each package namespaced by the package version and Python version the same way Wheels are?
For example:
~/.cache/pip/common-install/django_celery-3.1.16-py2-none-any/django_celery/
~/.cache/pip/common-install/django_celery-3.1.17-py2-none-any/django_celery/
Then any virtualenv that needs django-celery can just symlink to the version it needs?
|
passenger stop kill orphan process
| 33,891,874 | 0 | 0 | 361 | 0 |
python,ruby-on-rails,linux
|
I solved my problem by restarting my app instead of restarting Passenger.
Restart app command: passenger-config restart-app [path of my app]
| 0 | 1 | 0 | 0 |
2015-11-23T07:02:00.000
| 1 | 0 | false | 33,865,344 | 0 | 0 | 1 | 1 |
My app uses Rails and Python.
In Rails I create a new thread and start a shell command which executes Python scripts.
This python script (parent process) will exit quickly, but before it exits it will fork a child process, and the child process will be an orphan process after the parent process exits.
Situation 1:
If I start app by rails: rails s -d
When the python parent process exits and python child process is going:
kill pid(./tmp/pids/server.pid)
Then the child process will be ok and not be killed. This is what I want.
Situation 2:
If I start app by passenger:
passenger start -e production -d
When the python parent process exits and python child process is going:
passenger stop;
then the child process will be killed.
So I want to know: in situation 2, how can the orphan child process avoid being killed? Has anyone experienced this or know how to solve it?
|