Dataset schema (column name, dtype, and min/max of the values or string lengths):

| Column | Dtype | Min | Max |
|---|---|---|---|
| Title | string (length) | 11 | 150 |
| A_Id | int64 | 518 | 72.5M |
| Users Score | int64 | -42 | 283 |
| Q_Score | int64 | 0 | 1.39k |
| ViewCount | int64 | 17 | 1.71M |
| Database and SQL | int64 | 0 | 1 |
| Tags | string (length) | 6 | 105 |
| Answer | string (length) | 14 | 4.78k |
| GUI and Desktop Applications | int64 | 0 | 1 |
| System Administration and DevOps | int64 | 0 | 1 |
| Networking and APIs | int64 | 0 | 1 |
| Other | int64 | 0 | 1 |
| CreationDate | string (length) | 23 | 23 |
| AnswerCount | int64 | 1 | 55 |
| Score | float64 | -1 | 1.2 |
| is_accepted | bool (2 classes) | | |
| Q_Id | int64 | 469 | 42.4M |
| Python Basics and Environment | int64 | 0 | 1 |
| Data Science and Machine Learning | int64 | 0 | 1 |
| Web Development | int64 | 1 | 1 |
| Available Count | int64 | 1 | 15 |
| Question | string (length) | 17 | 21k |
How to destroy and create a Session dynamically in QuickFix?
| 38,184,550 | 0 | 0 | 336 | 0 |
python,quickfix
|
You could get the source for QuickFix and derive from FileLog. Then override:
void onIncoming( const std::string& value )
Change this to check for size/date/something else and roll the log based on your criteria.
| 0 | 0 | 0 | 0 |
2015-11-23T15:38:00.000
| 1 | 0 | false | 33,874,877 | 0 | 0 | 1 | 1 |
I am trying to deal with the issue that QuickFix logs grow indefinitely by scheduling a cron-like job to stop the initiator, copy the log file (which looks like 'FIX.4.2-XXX-YYY.messages.current.log') to a different location, then start the initiator again.
This works fine, except that QuickFix does not automatically create a new messages.current.log file to save to. If I create the file manually, QF does not save to it. QF only behaves properly when it is shut down and restarted, in other words, when the Session is destroyed and then created again.
Rather than shutting down my entire application and restarting it (which I am not sure I can do automatically very easily) is there some way of destroying and creating the Session objects from within a running QF instance?
I am using the Python bindings but should be able to figure out QF/J instructions or those from other languages.
|
Can Django's development server route multiple sites? If yes, how?
| 33,891,222 | 1 | 0 | 49 | 0 |
python,django
|
No, you can't.
The SITE_ID is cached in various places, so you can't change it at runtime. You need a separate process for each site, but you can't bind more than one process to a single port. Neither can the development server act as a reverse proxy for separate processes.
Running each site on a different port is the closest you can get. This is what happens with any HTTP-based app server, but in a production environment you use a reverse proxy to forward all requests from port 80 to the appropriate port for that site.
| 0 | 0 | 0 | 0 |
2015-11-23T16:35:00.000
| 1 | 1.2 | true | 33,876,005 | 0 | 0 | 1 | 1 |
I'm starting with Django and want to try out the features of django.contrib.sites. I have added some aliases for 127.0.0.1 in my /etc/hosts, and can run different sites by providing a DJANGO_SETTINGS_MODULE when running manage.py runserver.
What I haven't managed to do is to have both sites available at once, on the same port. I have seen solutions that use WSGI and Apache or similar, but none using the development server.
Can the Django Development server serve multiple sites at once, switching by domain name, or is the nearest I'll get to start multiple servers on different ports?
|
How do I override delete method for an inline model within django?
| 33,892,490 | 1 | 5 | 1,216 | 0 |
python,django,django-admin
|
I was able to work around this by overriding the save() method for my model in models.py itself.
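For the delete side, the same pattern works on the model itself; a minimal sketch, assuming the models A and B from the question (note that the admin's bulk "delete selected" action bypasses this method):

```python
from django.db import models


class B(models.Model):
    a = models.ForeignKey('A', on_delete=models.CASCADE)

    def delete(self, *args, **kwargs):
        # perform the extra backend tasks before the row is removed
        super(B, self).delete(*args, **kwargs)
```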
| 0 | 0 | 0 | 0 |
2015-11-24T09:36:00.000
| 2 | 0.099668 | false | 33,890,020 | 0 | 0 | 1 | 1 |
I have a model 'B' that is linked to another model 'A' as an inline model, for use in my admin site. Now, whenever I delete an object of model 'B' associated with the corresponding object of model 'A' (via the admin site), I want to perform some more tasks at the backend. I was able to override the save function using a formset and then overriding the save_existing and save_new methods. How do I go about overriding the delete method for the inline admin model?
|
Unrecognized VM option 'MaxPermSize=350m'
| 33,914,580 | 0 | 1 | 1,193 | 0 |
python,python-2.7,python-3.x,pycharm
|
64-bit PyCharm does not work here with Java 8 or above: the permanent generation was removed, so the JVM no longer accepts the MaxPermSize option (here set to 350m).
Use Java 7 instead.
| 0 | 0 | 0 | 0 |
2015-11-25T09:53:00.000
| 1 | 0 | false | 33,913,318 | 1 | 0 | 1 | 1 |
Pycharm 5
Problem : Unrecognized VM option 'MaxPermSize=350m'
Java version
java version "1.9.0-ea" Java(TM) SE Runtime Environment (build
1.9.0-ea-b91) Java HotSpot(TM) 64-Bit Server VM (build 1.9.0-ea-b91, mixed mode)
Following a solution I found via Google, I commented out the line in the
pycharm64.vmoptions
file. But unfortunately PyCharm still does not open. How can I get it working?
|
Odoo 8 function call on opening (tree) view
| 34,075,307 | 3 | 1 | 2,610 | 0 |
python,treeview,action,odoo-8,function-call
|
There is one way to make it happen:
add that functional field to the tree view and make it invisible,
so it is also computed when the tree view loads.
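A minimal sketch of that trick in Odoo 8's API (model and field names are illustrative; the matching tree-view XML line would be <field name="onload_trigger" invisible="1"/>):

```python
from openerp import api, fields, models


class MyModel(models.Model):
    _inherit = 'my.model'  # illustrative model name

    onload_trigger = fields.Char(compute='_compute_onload_trigger')

    @api.one
    def _compute_onload_trigger(self):
        # evaluated when the field is read, i.e. when the tree view loads
        self.onload_trigger = ''
```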
| 0 | 0 | 0 | 0 |
2015-11-26T07:54:00.000
| 2 | 1.2 | true | 33,933,195 | 0 | 0 | 1 | 1 |
Is there a way to call a Python function (server action) when a view is being opened? So when I click a menu item, not only does a tree view open (window action) but a Python function also executes (server action).
Maybe something like an onload() function? Or a server action from within the tree view?
Thanks
|
Django: can I block new entries until one is completed?
| 33,936,374 | 0 | 0 | 65 | 0 |
python,django
|
You have to create some kind of lock that indicates that the prize is currently assigned to a user and that prevents other users from getting to the same form.
You could create a random token, store it in the DB (or Redis), and add it as a hidden field to the form. I suggest you also give it an expiration date.
As long as a valid token exists, no other user can access the form. When user 1 submits the form, you check that it contains a valid token.
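A minimal sketch of that lock using Django's cache (the key name and timeout are illustrative):

```python
import uuid

from django.core.cache import cache

WIN_LOCK_KEY = 'win-claim-lock'
LOCK_TTL = 15 * 60  # seconds until an unclaimed prize is released again


def try_acquire_win_lock():
    token = uuid.uuid4().hex
    # cache.add() is atomic: it fails if the key already exists
    if cache.add(WIN_LOCK_KEY, token, LOCK_TTL):
        return token  # embed this as a hidden field in the claim form
    return None  # someone else currently holds the claim form


def is_valid_token(token):
    return token is not None and cache.get(WIN_LOCK_KEY) == token
```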
| 0 | 0 | 0 | 0 |
2015-11-26T10:02:00.000
| 2 | 0 | false | 33,935,637 | 0 | 0 | 1 | 2 |
Sorry for the vague question, here's what's going on:
I will be giving away "win codes" to people. My django app is written so that the first one to enter a valid code XX hours after the last win will again be a winner.
If the user is a winner he will be redirected to a page with a form to claim his prize.
1) User enters code
2) I check the datetime of the last win
3) If it's a winner again, go to the form page
The problem is: if someone wins and then another person enters a code before the first one has filled in the form to claim the prize, the second one will get to that form as well because the last winner is still more than XX hours ago.
How can I avoid this? Can I somehow check if someone already made it to that form?
|
Django: can I block new entries until one is completed?
| 33,937,381 | 0 | 0 | 65 | 0 |
python,django
|
Another approach is to write the last-win datetime right away in step 3, so:
3) If it's a winner again, create the win record and give the user the form to fill in the remaining fields.
As mentioned, after some expiration time you can check for and remove empty win records.
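A minimal sketch of recording the win immediately (the Win model and WINDOW are illustrative):

```python
from datetime import timedelta

from django.shortcuts import redirect
from django.utils import timezone

from myapp.models import Win  # hypothetical model: user, created, plus claim fields

WINDOW = timedelta(hours=12)  # the "XX hours" from the question


def handle_valid_code(request):
    last = Win.objects.order_by('-created').first()
    if last is None or timezone.now() - last.created >= WINDOW:
        win = Win.objects.create(user=request.user, created=timezone.now())
        return redirect('claim-prize', win_id=win.pk)
    return redirect('no-win')
```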
| 0 | 0 | 0 | 0 |
2015-11-26T10:02:00.000
| 2 | 1.2 | true | 33,935,637 | 0 | 0 | 1 | 2 |
Sorry for the vague question, here's what's going on:
I will be giving away "win codes" to people. My django app is written so that the first one to enter a valid code XX hours after the last win will again be a winner.
If the user is a winner he will be redirected to a page with a form to claim his prize.
1) User enters code
2) I check the datetime of the last win
3) If it's a winner again, go to the form page
The problem is: if someone wins and then another person enters a code before the first one has filled in the form to claim the prize, the second one will get to that form as well because the last winner is still more than XX hours ago.
How can I avoid this? Can I somehow check if someone already made it to that form?
|
How do I disable IPython when opening a Django shell
| 37,856,346 | 0 | 5 | 769 | 0 |
django,ipython
|
You can uninstall IPython from the project virtualenv; that's all you need.
| 0 | 0 | 0 | 0 |
2015-11-27T03:47:00.000
| 3 | 0 | false | 33,949,964 | 1 | 0 | 1 | 1 |
If it is a multi-user environment and uninstalling IPython is not an option, how would you go about launching a Django shell without IPython?
|
Boto3: Configuration file location
| 33,959,240 | 9 | 11 | 13,555 | 0 |
python,python-3.x,boto,boto3
|
It's not clear from the question whether you are talking about boto or boto3. Both allow you to use environment variables to tell them where to look for credentials and configuration files, but the environment variables are different.
In boto3 you can use the environment variable AWS_SHARED_CREDENTIALS_FILE to tell boto3 where your credentials file is (by default, it is in ~/.aws/credentials). You can use AWS_CONFIG_FILE to tell it where your config file is (by default, it is in ~/.aws/config).
In boto, you can use BOTO_CONFIG to tell boto where to find its config file (by default it is in /etc/boto.cfg or ~/.boto).
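A minimal sketch of pointing boto3 at non-default files via those variables (the paths are illustrative; set them before the session is created):

```python
import os

os.environ['AWS_SHARED_CREDENTIALS_FILE'] = '/srv/secrets/aws_credentials'
os.environ['AWS_CONFIG_FILE'] = '/srv/secrets/aws_config'

import boto3  # imported after the environment is set

session = boto3.Session()  # picks up the paths from the environment
```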
| 0 | 0 | 1 | 1 |
2015-11-27T06:43:00.000
| 3 | 1.2 | true | 33,951,619 | 0 | 0 | 1 | 1 |
Is there any way to have Boto seek for the configuration files other than the default location, which is ~/.aws?
|
Using OpenCV with Django
| 33,954,610 | 1 | 3 | 4,890 | 0 |
python,django,opencv
|
Am I right that you want a Django application able to capture video from your camera? This will not work (at least not in the way you expect).
Did you check any stack traces left by your web server (the one that hosts the Django app, or the built-in server started by Django)?
I suggest you start playing with OpenCV a bit from the Python command line; if you're on Windows, use IDLE. Observe the behaviour of your calls from there.
A Django application runs inside a WSGI application server, where there are several constraints on what a module of a particular type can and cannot do. I didn't try to repeat what you've done (I don't have a camera I can access).
The proper way of handling a camera in a web application requires browser-side handling in JavaScript.
A small disclaimer at the end: I'm not saying you cannot use OpenCV at all in a Django application, but attempting to access the camera is not the way to go.
| 0 | 0 | 0 | 0 |
2015-11-27T09:41:00.000
| 2 | 0.099668 | false | 33,954,438 | 0 | 1 | 1 | 2 |
I want to use OpenCV in my Django application. As OpenCV is a library, I thought we can use it like any other library.
When I try to import it using import cv2 in the views of Django, it works fine but when I try to make library function calls in Django view like cap = cv2.VideoCapture(0) and try to run the app on my browser, nothing happens: the template does not load and no traceback in the terminal and the application remains loading forever.
I don't know why, but the cv2 function call is not executing as expected. Since there is no traceback, I am not able to understand what the problem is. Can anyone suggest what is wrong? Is this the right way to use OpenCV with Django?
|
Using OpenCV with Django
| 35,443,792 | 2 | 3 | 4,890 | 0 |
python,django,opencv
|
Use a separate thread for the cv2 function call and the app should work like a charm. From what I can figure, the infinite loading is probably because the video capture never stops, hence the code further ahead is never reached, ergo an infinitely loading page. Threads should do it.
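A minimal sketch of that idea (illustrative only; a global capture thread like this is still fragile under a multi-process WSGI server):

```python
import threading

import cv2
from django.http import HttpResponse


def capture_frames():
    cap = cv2.VideoCapture(0)  # grabs the camera once, in the background
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # ... process or store the frame ...
    finally:
        cap.release()


def camera_view(request):
    t = threading.Thread(target=capture_frames)
    t.daemon = True  # don't block process shutdown
    t.start()
    return HttpResponse('capture started')  # returns immediately
```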
| 0 | 0 | 0 | 0 |
2015-11-27T09:41:00.000
| 2 | 0.197375 | false | 33,954,438 | 0 | 1 | 1 | 2 |
I want to use OpenCV in my Django application. As OpenCV is a library, I thought we can use it like any other library.
When I try to import it using import cv2 in the views of Django, it works fine but when I try to make library function calls in Django view like cap = cv2.VideoCapture(0) and try to run the app on my browser, nothing happens: the template does not load and no traceback in the terminal and the application remains loading forever.
I don't know why, but the cv2 function call is not executing as expected. Since there is no traceback, I am not able to understand what the problem is. Can anyone suggest what is wrong? Is this the right way to use OpenCV with Django?
|
how to manually install modules in qpython
| 34,068,622 | 1 | 0 | 2,382 | 0 |
python,django,qpython
|
You can upload Django or the other dependency modules into the phone's /sdcard/com.hipipal.qpyplus/lib/python2.7/site-packages/ directory.
BTW: you can also install Django easily through QPython's pip_console.py.
| 0 | 0 | 0 | 0 |
2015-11-27T13:09:00.000
| 1 | 0.197375 | false | 33,958,132 | 1 | 0 | 1 | 1 |
I have downloaded Django and other modules for QPython, and here comes the confusion: I have no idea how to manually install Django for QPython; I thought this would happen automatically. How can Django be installed manually? Thanks
|
Can the limit of 12 connections from App Engine to Cloud SQL be changed?
| 33,978,178 | 2 | 0 | 223 | 1 |
python,google-app-engine,google-cloud-sql
|
Each single app engine instance can have no more than 12 concurrent connections to Cloud SQL -- but then, by default, an instance cannot service more than 8 concurrent requests, unless you have deliberately pushed that up by setting the max_concurrent_requests in the automatic_scaling stanza to a higher value.
If you've done that, then presumably you're also using a hefty instance_class in that module (perhaps the default module), considering also that Django is not the lightest-weight or fastest of web frameworks; an F4 class, I imagine. Even so, pushing max concurrent requests above 12 may result in latency spikes, especially if serving each and every request also requires other slow, heavy-weight operations such as MySQL ones.
So, consider instead using many more instances, each of a lower (cheaper) class, serving no more than 12 requests each (again, assuming that every request you serve will require its own private connection to Cloud SQL -- pooling those up might also be worth considering). For example, an F2 instance costs, per hour, half as much as an F4 one -- it's also about half the power, but, if serving half as many user requests, that should be OK.
I presume, here, that all you're using those connections for is to serve user requests (if not, you could dispatch other, "batch-like" uses to separate modules, perhaps ones with manual or basic scheduling -- but, that's another architectural issue).
| 0 | 1 | 0 | 0 |
2015-11-28T22:26:00.000
| 1 | 0.379949 | false | 33,977,130 | 0 | 0 | 1 | 1 |
I want to connect my App Engine project to Google Cloud SQL, but I get an error that I have exceeded the maximum of 12 connections in Python, even though my Cloud SQL D8 instance allows 1000 simultaneous connections.
How can I change this connection limit? I'm using Django and Python.
Thanks
|
Python APNs background connection
| 34,002,070 | 0 | 0 | 64 | 0 |
python,google-app-engine
|
You can use the datastore (possibly shadowed by memcache for performance) to persist all the necessary APNs (or any other) connection/protocol status/context info, such that multiple related requests can share the same connection as if your app were a long-living one.
Maybe not trivial, but definitely feasible.
Some requests may need to be postponed temporarily, depending on the shared connection status/context, that's true.
| 0 | 1 | 0 | 0 |
2015-11-30T06:59:00.000
| 3 | 0 | false | 33,993,034 | 0 | 0 | 1 | 2 |
What would be the best practice in this scenario?
I have an App Engine Python app, with multiple cron jobs. Instantiated by user requests and cron jobs, push notifications might be sent. This could easily scale up to a total of +- 100 pushes per minute.
Setting up and tearing down a connection to APNs for every batch is not what I want; neither is Apple advising to do this. So I would like to keep the connection alive, even when user requests finish or when a cron finishes, possibly with a timeout (2 minutes of no pushes, then close the connection).
Reading the GAE documentation, I couldn't figure out if there even is such a thing available. Also, I might need this to be available in different apps and/or modules.
|
Python APNs background connection
| 33,993,066 | 0 | 0 | 64 | 0 |
python,google-app-engine
|
You can put the messages in a pull task queue and have a backend instance (or a cron job) process the tasks.
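A minimal sketch of the pull-queue approach (the queue name is illustrative and must be declared as a pull queue in queue.yaml; apns_connection stands in for your persistent APNs client):

```python
from google.appengine.api import taskqueue


def enqueue_push(payload):
    # producer: called from user requests or cron handlers
    queue = taskqueue.Queue('apns-pull-queue')
    queue.add(taskqueue.Task(payload=payload, method='PULL'))


def process_pushes(apns_connection):
    # consumer: runs on a backend instance, reusing one APNs connection
    queue = taskqueue.Queue('apns-pull-queue')
    tasks = queue.lease_tasks(lease_seconds=60, max_tasks=100)
    for task in tasks:
        apns_connection.send(task.payload)
    queue.delete_tasks(tasks)
```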
| 0 | 1 | 0 | 0 |
2015-11-30T06:59:00.000
| 3 | 0 | false | 33,993,034 | 0 | 0 | 1 | 2 |
What would be the best practice in this scenario?
I have an App Engine Python app, with multiple cron jobs. Instantiated by user requests and cron jobs, push notifications might be sent. This could easily scale up to a total of +- 100 pushes per minute.
Setting up and tearing down a connection to APNs for every batch is not what I want; neither is Apple advising to do this. So I would like to keep the connection alive, even when user requests finish or when a cron finishes, possibly with a timeout (2 minutes of no pushes, then close the connection).
Reading the GAE documentation, I couldn't figure out if there even is such a thing available. Also, I might need this to be available in different apps and/or modules.
|
Django and apache on different dockers
| 33,995,927 | 4 | 5 | 807 | 0 |
python,django,apache
|
mod_wsgi would be the wrong technology if you want to do this. It runs as part of Apache itself, so there literally is nothing to run in the Django container.
A better way would be to use gunicorn to run Django in one container, and have the other running the webserver as a proxy - you could use Apache for this, although it's more common to use nginx.
| 0 | 1 | 0 | 0 |
2015-11-30T10:02:00.000
| 1 | 1.2 | true | 33,995,862 | 0 | 0 | 1 | 1 |
We have an application written in Django. We are trying a deployment scenario which will have one Docker container running Apache, a second running Django, and a third running the DB server. In most of the documentation it is mentioned that Apache and Django will sit on the same machine (Django in a virtualenv, to be precise). Is there any way we can ask Apache to talk to mod_wsgi sitting on a remote machine which has the Django application?
|
after.each_scenario hook is not working (not available) in aloe_django
| 34,083,462 | 1 | 1 | 167 | 0 |
django,python-3.x,bdd,lettuce
|
I have used the before/after.each_example() hooks available in aloe_django.
You put this piece of code into your terrain.py file:
@before.each_example
def before_each_example(scenario, outline, steps):
    call_command(...)  # your command
| 0 | 0 | 0 | 0 |
2015-11-30T11:18:00.000
| 2 | 0.099668 | false | 33,997,369 | 0 | 0 | 1 | 1 |
I wanted to do some operations (clear cookies, clear database, etc.) after each scenario in one feature, but after.each_feature is not available in aloe_django. How did you deal with this problem? Any suggestions for handling this? The following hook is not available in aloe_django:
@before.each_scenario
def setup_some_scenario(scenario):
    populate_test_database()
I need this because I want to have several scenarios in one feature. When the first scenario is completed I log out from admin and need to log in again in the next scenario (not logging out does not help), but in the next scenario it gives an error telling me that my credentials are not valid (in the first scenario they were valid).
When I put these scenarios in separate features and reset and migrate my DB, it works fine.
I think that when it jumps from one scenario to another within the feature it messes up the DB or uses a different one, so I need the after.each_scenario() hook to reset and migrate my DB.
|
Python selenium test - Facebook code generator XPATH
| 34,019,477 | 0 | 1 | 231 | 0 |
python,facebook,selenium,xpath
|
This error usually occurs when the element is no longer attached to the DOM (the page changed after the element was found).
It may also be that the element is inside an iframe.
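A stale reference often means the page re-rendered between finding and using the element; re-locating it with an explicit wait right before use usually helps. A minimal sketch with the Selenium Python bindings:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

wait = WebDriverWait(driver, 10)  # driver is the existing webdriver instance
elem = wait.until(EC.presence_of_element_located(
    (By.XPATH, "//*[@id='approvals_code']")))
elem.send_keys("numbers")
elem.send_keys(Keys.ENTER)
```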
| 0 | 0 | 1 | 0 |
2015-12-01T10:46:00.000
| 1 | 0 | false | 34,018,450 | 0 | 0 | 1 | 1 |
I'm trying to get the XPath for the Code Generator form field (Facebook) in order to fill it (before that, of course, I need to put in a code, the "numbers" below).
In Chrome console when I get the XPATH I get:
//*[@id="approvals_code"]
And then in my test I put:
elem = driver.find_element_by_xpath("//*[@id='approvals_code']")
if elem: elem.send_keys("numbers")
elem.send_keys(Keys.ENTER)
With those I get:
StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
which I took to mean a wrong field name. Does anyone know how to properly get an XPath?
|
Should I downgrade Python 3.5 to 3.4?
| 34,031,385 | 1 | 1 | 1,774 | 0 |
python,django,python-3.4,mezzanine,python-3.5
|
As of today, yes, it is probably best to downgrade to Python 3.4. With Django 1.8, the current release of Django, Python 3.5 is not officially supported.
The 1.9 release of Django will officially support Python 3.5, but that is not a guarantee that your 3rd party libraries will as well. Ensuring that will likely come down to a matter of testing, and checking the compatibility of each of your 3rd party apps.
EDIT: As noted by knbk, Django 1.8.6 did add official support for Python 3.5. However, this does not invalidate the possibility that your other libraries may not yet support Python 3.5.
| 0 | 0 | 0 | 0 |
2015-12-01T22:21:00.000
| 3 | 1.2 | true | 34,031,327 | 0 | 0 | 1 | 1 |
I just installed Python 3.5 and created a virtual environment with it. Installed Mezzanine (Django CMS) and tried to run the manage.py file and migrate and syncdb etc.
I've been getting constant errors with 3.5 and I think the reason is that 3.5 has changed some things that Mezzanine depends on.
Is it a good idea to downgrade from 3.5 to 3.4? Or will I have more problems when upgrading later if I don't adapt to the changes now? Maybe a very fuzzy question, but I come from 2.7 and I think a lot has changed.
I don't know what to do :)
|
Long running Django task with text updates
| 34,048,733 | 2 | 0 | 173 | 0 |
python,django,celery,task-queue
|
1) The user submits the data.
2) Start a Celery job with the data.
3) The Celery job posts text updates to a database.
4) The Django web app queries the database periodically and displays the text updates to the user.
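A minimal sketch of that flow, assuming a hypothetical TaskProgress model with task_id, text and percent fields:

```python
from celery import shared_task

from myapp.models import TaskProgress  # hypothetical progress model


@shared_task(bind=True)
def long_job(self, data, total_steps=10):
    for i in range(1, total_steps + 1):
        run_step(data, i)  # hypothetical unit of work
        # write a human-readable status row the web app can poll
        TaskProgress.objects.update_or_create(
            task_id=self.request.id,
            defaults={'text': 'finished step %d of %d' % (i, total_steps),
                      'percent': 100 * i // total_steps},
        )
```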
| 0 | 0 | 0 | 0 |
2015-12-02T17:03:00.000
| 1 | 1.2 | true | 34,048,666 | 0 | 0 | 1 | 1 |
With a Django web app what would be the easiest way of having a long running task run in the background but be able to provide the user with progress updates in text and percentage done/ETA?
I've looked at Celery and I couldn't see a way to do regular text updates, only a progress update with percentage.
|
How to use Django to manage GIS points easily?
| 34,397,621 | 0 | 0 | 119 | 0 |
python,django,gis,geodjango
|
Yes, it is relatively easy to manage geometry data in the Django Admin, and it's all included. You can do any of the CRUD tasks relatively simply using the Geo Model manager in much the same way as any Django model or you can use the map interface you get in the admin.
From time to time I find I want to investigate my data in more detail, and then I simply connect to my PostGIS database using QGIS and have a panoply of GIS tools at my disposal.
I would strongly recommend using PostGIS from the start. If there is any 'mission creep' towards more geo-functionality in the future then it will save you oodles of time. It sounds like the sort of project where spatial queries might be very useful at some point.
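For the admin side, a minimal sketch of a point model registered with the geo-enabled admin (Django 1.8-era GeoDjango; model and field names are illustrative):

```python
from django.contrib.gis import admin
from django.contrib.gis.db import models


class Shop(models.Model):
    name = models.CharField(max_length=100)
    location = models.PointField()

    objects = models.GeoManager()  # required for spatial lookups in 1.8


admin.site.register(Shop, admin.OSMGeoAdmin)  # map widget for add/edit/delete
```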
| 0 | 0 | 0 | 0 |
2015-12-03T09:01:00.000
| 1 | 0 | false | 34,061,651 | 0 | 0 | 1 | 1 |
I'm going to write a web system to add and manage the positions of drivers and shops. GEO searching is not required, so it would be easier to use SQLite instead of PostgreSQL.
The core question here is: is there an easy way to manage GIS points using the Django admin? I know Django has GeoModelAdmin to manage maps based on MapBox, but I could not find out how to use it just to save, delete, and update these points.
|
Run Python Code from Android
| 34,069,875 | 0 | 0 | 231 | 0 |
android,python,kivy
|
You can probably render LaTeX in Kivy fairly easily using a png exporter (such as presumably exists for web export tools, and modes like emacs' preview mode).
If you need to run python as part of a java app, probably a practical way to do it is to use kivy's python-for-android tools with your own java frontend, invoking the python interpreter via JNI. This would require some thinking and experimentation but should be possible. There are also other projects for building python for android, which might be able to do the same things.
| 1 | 0 | 0 | 0 |
2015-12-03T15:11:00.000
| 1 | 0 | false | 34,069,428 | 0 | 0 | 1 | 1 |
I'm writing Android app.
The problem is that it should execute some calculations and library for this is written in Python.
What is the best way to invoke Python from Android/Java?
I heard about Kivy and even managed to run application, but python code returns latex formulae, that can't be rendered within Kivy app.
|
Running python script on the apache server end
| 34,103,974 | 0 | 0 | 84 | 0 |
php,python,node.js,apache
|
Write your processing code as completely independent software, not tied to the web server at all.
The web server application will only add tasks into some database and return immediately. Your process will run as a service, polling the database for new tasks, executing them, and pushing in-progress updates and final results back to the database. The web server application can see that processing started and display in-progress and final results just by looking up the database, which is fast.
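A minimal sketch of such a service, assuming a simple tasks table in SQLite; run_algorithm stands in for the multi-hour computation:

```python
import sqlite3
import time


def run_algorithm(params):
    # placeholder for the multi-hour computation
    ...


def worker_loop(db_path='tasks.db'):
    conn = sqlite3.connect(db_path)
    while True:
        row = conn.execute(
            "SELECT id, params FROM tasks WHERE status = 'pending' LIMIT 1"
        ).fetchone()
        if row is None:
            time.sleep(5)  # nothing to do; poll again shortly
            continue
        task_id, params = row
        conn.execute("UPDATE tasks SET status = 'running' WHERE id = ?", (task_id,))
        conn.commit()
        result = run_algorithm(params)
        conn.execute("UPDATE tasks SET status = 'done', result = ? WHERE id = ?",
                     (result, task_id))
        conn.commit()
```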
| 0 | 0 | 0 | 1 |
2015-12-04T17:27:00.000
| 1 | 0 | false | 34,094,063 | 0 | 0 | 1 | 1 |
I am working on an algorithm in Python for a problem which takes multiple hours to finish. I want to accept some details from the user using HTML/PHP and then use those to run the algorithm in Python. Even when the user closes the browser, I want the Python script to keep running on the server side, and when the user logs in again, it should display the result. Is this possible using an Apache server and PHP? Can a server created using Node.js be a solution? Any help would be appreciated.
|
Display each section (h1, h2, h3) in a new page in Sphinx
| 39,006,565 | -3 | 1 | 1,308 | 0 |
python-sphinx
|
So, we were able to make it work by adjusting the HTML template and the globaltoc setting.
| 0 | 0 | 0 | 0 |
2015-12-04T22:30:00.000
| 2 | -0.291313 | false | 34,098,567 | 0 | 0 | 1 | 1 |
When I build HTML output using sphinx, it is possible to display h1 and h2 on separate pages, however, h3 is always displayed on the same page as h2. Does anyone know how to make sphinx display the content of h3 on a separate page? The same way traditional online help systems do this.
For example:
- Section
  - Sub-section
  - Sub-section
    - Sub-sub-section
    - Sub-sub-section
  - Sub-section
So, when I click on a sub-sub-section, I want to see only the content under that sub-sub-section, and not the content from the sub-section above or the sub-sub-section below.
Thanks in advance!
|
Configuring my web app to be used simultaneously
| 34,102,645 | 0 | 0 | 24 | 0 |
python,django,web
|
You should separate your games. I would do this by making a Game model that has its own board. If you are planning to make this a multi-player app, I would also give the model player1 and player2 attributes so that you can determine which board to show to a particular user.
I don't have a great way to keep the games in sync across multiple tabs other than to have some javascript that refreshes the board at a certain interval.
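A minimal sketch of such a Game model (field names are illustrative):

```python
from django.conf import settings
from django.db import models


class Game(models.Model):
    player1 = models.ForeignKey(settings.AUTH_USER_MODEL,
                                related_name='games_as_player1')
    player2 = models.ForeignKey(settings.AUTH_USER_MODEL,
                                related_name='games_as_player2')
    board = models.CharField(max_length=9, default=' ' * 9)  # 3x3 grid, row-major
    next_turn = models.CharField(max_length=1, default='X')
```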
| 0 | 0 | 0 | 0 |
2015-12-05T07:42:00.000
| 1 | 0 | false | 34,102,541 | 0 | 0 | 1 | 1 |
I am building a tic-tac-toe game using Django. Different users will be simultaneously playing the game in different places. So how does the server store the state of the game board for the different players?
|
Django testing method, fixture or mock?
| 34,551,957 | 0 | 1 | 653 | 0 |
python,unit-testing,integration-testing,pytest-django
|
The main difference between unit tests and integration tests are that integration testing deal with the interactions between two or more "units". As in, a unit test doesn't particularly care what happens with the code surrounding it, just as long as the code within the unit test operates as it's designed to.
As for your second question: if you feel the database and fixtures in your unit test suite are taking too long to run, mocking is a great solution.
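A minimal sketch of mocking an ORM call so the test never touches the database (the module and function under test are hypothetical):

```python
from unittest import mock

from myapp import services  # hypothetical module under test


def test_greeting_uses_username():
    with mock.patch.object(services, 'User') as fake_user_model:
        fake_user_model.objects.get.return_value = mock.Mock(username='alice')
        # greeting() is assumed to look the user up and format a message
        assert services.greeting(user_id=1) == 'Hello, alice!'
```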
| 0 | 0 | 0 | 1 |
2015-12-05T09:15:00.000
| 1 | 1.2 | true | 34,103,210 | 0 | 0 | 1 | 1 |
In my project, I use pytest to write unit test cases for my program. But later I found that there are many DB operations and ORM stuff in my program.
I know unit tests should run fast, but what is the difference between unit testing and automated integration testing, apart from speed?
Should I just use the database fixtures instead of mocking them?
|
Is it bad practice to use template tags to retrieve data in Django?
| 34,116,556 | 1 | 1 | 198 | 0 |
python,django
|
If all you're looking for is to have some hints about whether some data or some apps exist at template-render time, you could use a template context processor, as this is what they're for: loading something into every template.
I definitely wouldn't recommend implementing template tags to retrieve data; this would break the MVC rules for one, but you might also get into trouble while trying to debug slow DB queries and other things like that.
If you're doing some db queries in the context processor, bear in mind that those will be executed every time a template is rendered, even if it doesn't need that data.
To shave some time of that processing, you could use some sort of manual caching with an appropriate invalidation scheme.
An alternative route if you are using class based views is to implement a mixin that will just add the data you need to the context (in the get_context_data method). If you're doing this, make sure to call super to also get the context of the class base view you're normally extending.
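A minimal sketch of such a context processor (app and model names are illustrative; register the function in your template settings):

```python
from appA.models import Project  # the third-party app's model


def navbar_flags(request):
    # runs for every rendered template; keep it cheap or cache the result
    return {'has_projects': Project.objects.exists()}
```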
| 0 | 0 | 0 | 0 |
2015-12-05T21:00:00.000
| 1 | 0.197375 | false | 34,110,807 | 0 | 0 | 1 | 1 |
I have a third-party app (let's call it app A) that, in its views.py, uses context processors to send data to specific URLs. The data it sends is used in its templates to determine what the nav-bar looks like. For example, if there exists an A.project entry in the db, it will show <i> Projects </i> in its template.
Now I'd like to extend that app and use its nav-bar, but add an extra parameter blog to it, where the blog app is a third-party app. The problem is that now, whenever you go to the URL associated with the blog app (e.g. /blog), any items from app A in the nav-bar will be missing, because the context sent from the blog app is different and missing data from app A.
I can probably create custom template tags to check if A.project, etc exist, but I'm not sure if that's really the best way to do it.
Is there any better way of doing it?
|
Pure tones in Psychopy end with unwanted clicks
| 34,122,112 | 1 | 2 | 481 | 0 |
python,psychopy
|
Clicks in the beginning and end of sounds often occur because the sound is stopped mid-way so that the wave abruptly goes from some value to zero. This waveform can only be made using high-amplitude high-frequency waves superimposed on the signal, i.e. a click. So the solution is to make the wave stop while on zero.
Are you using an old version of psychopy? If yes, then upgrade. Newer versions add a Hamming window (fade in/out) to self-generated tones which should avoid the click.
For the .wav files, try adding (extra) silence in the end, e.g. 50 ms. It might be that psychopy stops the sound prematurely.
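If you generate the tones yourself, a minimal sketch of applying a raised-cosine on/off ramp with NumPy so the waveform starts and ends at zero:

```python
import numpy as np


def apply_ramps(samples, sample_rate, ramp_ms=10):
    n = int(sample_rate * ramp_ms / 1000)
    ramp = 0.5 * (1 - np.cos(np.linspace(0, np.pi, n)))  # rises from 0 to 1
    samples = samples.copy()
    samples[:n] *= ramp          # fade in
    samples[-n:] *= ramp[::-1]   # fade out
    return samples
```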
| 0 | 0 | 0 | 1 |
2015-12-06T03:50:00.000
| 2 | 1.2 | true | 34,113,812 | 0 | 0 | 1 | 1 |
Pure tones in Psychopy are ending with clicks. How can I remove these clicks?
Tones generated within psychopy and tones imported as .wav both have the same problem. I tried adding 0.025ms of fade out in the .wav tones that I generated using Audacity. But still while playing them in psychopy, they end with a click sound.
Now I am not sure how to go ahead with this. I need to perform a psychoacoustic experiment, and it cannot proceed with tone presentation like that.
|
Disable Jupyter Keyboard Shortcuts
| 43,498,018 | 7 | 8 | 2,360 | 0 |
html,ipython,ipython-notebook,jupyter
|
You can use Jupyter.keyboard_manager.disable() to disable the shortcuts temporarily, and use Jupyter.keyboard_manager.enable() to activate again.
| 0 | 0 | 0 | 0 |
2015-12-07T04:09:00.000
| 3 | 1.2 | true | 34,126,296 | 1 | 0 | 1 | 1 |
One of my Jupyter notebooks uses an html <input> tag that expects typed user input, but whenever I type in the text box, command mode keyboard shortcuts activate.
Is it possible to turn off keyboard shortcuts for a single cell or notebook?
|
Django-fobi on google app engine
| 34,309,480 | 0 | 0 | 139 | 0 |
django,python-2.7,google-app-engine
|
You have likely configured something wrong. Could you post your project setup somewhere?
| 0 | 0 | 0 | 0 |
2015-12-07T06:30:00.000
| 1 | 0 | false | 34,127,714 | 0 | 0 | 1 | 1 |
I am able to install django-fobi successfully in my virtual environment, but when I hit localhost:8080/admin it gives me the following error:
ImportError: No module named fobi.contrib.plugins.form_handlers.mail
I get this error when I run my Django project on Google App Engine.
|
Subclassing and overriding Django Class based views
| 34,129,711 | 0 | 0 | 138 | 0 |
python,django,django-allauth
|
You can of course subclass the views, as long as you change your URLs to point to the overridden versions. However, there is no need to do this just to use your own templates; Django's template loader is specifically written with this use case in mind. Simply create your own directory inside your templates folder to match the one allauth is using, and create your own template files inside it; Django will find yours first and use them.
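If you do want to subclass anyway, a minimal sketch (allauth's account views; remember to point the URL at your subclass):

```python
from allauth.account.views import LoginView


class MyLoginView(LoginView):
    template_name = 'myapp/login.html'  # your own template
```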
| 0 | 0 | 0 | 0 |
2015-12-07T08:11:00.000
| 1 | 0 | false | 34,129,016 | 0 | 0 | 1 | 1 |
I'm building a website using django-allauth for its authentication and social authentication functions. The forms that come bundled with the app are hardly great to look at, and hence I decided to create my own views.
The problem is: How do I create them while ensuring that the backend of Django all auth is still available to me? I've dug into the source code and found that it uses class based views for rendering and performing CRUD operations.
I want to know if I can subclass those views in my own app/views.py and just change their template_name field to my own templates. Any advice would be most helpful.
Thanks.
|
How do I build the database for my P2P rental marketplace?
| 34,133,045 | 0 | 0 | 657 | 1 |
python,mysql,ruby-on-rails,ruby,database
|
Read a good book on software development methodologies before you get into this. Then read a simple online tutorial on MySQL. After that, it will be a lot easier to do this.
| 0 | 0 | 0 | 0 |
2015-12-07T09:07:00.000
| 2 | 0 | false | 34,129,887 | 0 | 0 | 1 | 1 |
I'm self-teaching programming through the plethora of online resources to build a startup idea I've had for a while now. Currently, I'm using the SaaS platform at sharetribe.com for my business, but I'm trying to build my own platform, as Sharetribe does not cater to the many options I'd like to have available to my users.
I'm trying to set up the database at this time and I'm currently working on the architecture. I plan to use MySQL for my database.
The website will feature an online inventory management system where users can track all their items, update availability, pricing, delivery, payments, analytical tools, etc. This is so the user can easily monitor their current items, create new listings, etc. so it creates more of a "business" feel for the users.
Here is a simple explanation of the workflow. Users will create their profile, gaining access to rent or rent out their items. Once their account is created they can search listings based on category, subcategory, location, price, etc. When a rental is placed, the user will request the rental at a specified time; once approved, the rental process will begin.
My question is how should I set up the infrastructure/architecture for the database? I have this as my general workings but I know I'm missing a lot of queries and criteria to suit the application.
User queries:
-user_ID
-name
-email
-username
-encrypted_password
-location
-social_media
-age
-photo
Product queries:
-item_ID
-user_ID
-category_ID
-subcategory_ID
-price
-description
-availability
-delivery_option
As you can see, I'm new to this, but as many of the resources I've used for my research have said, the best way to learn is to do. I'm probably taking on a bigger project than I should for my beginning stages, but there will be plenty of mistakes made that will assist my learning.
Any and all recommendations and assistance are appreciated.
For general knowledge, I intend to use Rails on the server side. If you recommend Python/Django over Ruby/Rails, could you please explain why this would be more beneficial to me?
Thanks.
|
Can't retrive data from webpage for onchange fields in odoo?
| 34,148,043 | 0 | 0 | 257 | 0 |
python,xml,openerp,xml-rpc
|
Chandu,
Well, you can call the on_change method through XML-RPC, which will give you the desired data, and you can pass that data back to the server to store the correct values.
Bests
| 0 | 0 | 0 | 0 |
2015-12-07T14:30:00.000
| 1 | 0 | false | 34,135,973 | 0 | 0 | 1 | 1 |
I have used XML-RPC in my Odoo ERP, so whenever a user inputs data on an external website it comes into my ERP. Everything is working fine, i.e. I am getting the data the user inputs from the website, like personal details. But the problem is that I have some onchange selection fields in a custom model, and for those the data is not getting updated here. Got my point? I would like to know how to resolve this issue; at least I need to know someone's approach.
Thanks in advance
|
Django how to allow people to connect my file in the application
| 34,143,416 | 0 | 1 | 46 | 0 |
python,django
|
Nginx will solve this problem. Set up a static folder in Nginx.
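A minimal sketch of the settings.py side that such a setup assumes (the paths are illustrative; point Nginx at STATIC_ROOT and MEDIA_ROOT):

```python
STATIC_URL = '/static/'
STATIC_ROOT = '/var/www/myproject/static/'   # collectstatic target, served by Nginx
MEDIA_URL = '/media/'
MEDIA_ROOT = '/var/www/myproject/media/'     # uploaded/downloadable files
```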
| 0 | 0 | 0 | 0 |
2015-12-07T16:28:00.000
| 1 | 1.2 | true | 34,138,415 | 0 | 0 | 1 | 1 |
Currently, I am building the Django site locally. I developed the program to allow people to download and reconnect some files, but for now I am using a static address. If I set up Django on a live website, what should I do? Do I need to set up my base URL, and how? Do I need to set the address in views.py and urls.py?
|
Django app with logged in users
| 34,156,302 | 1 | 0 | 74 | 0 |
python,django,admin
|
The admin pages (as the name indicates) should be reserved for admins. It is designed to give access to the 'raw' data stored in the database.
For your users, you should create views, templates and forms to log in and view/change their information. This way you can choose how their info is displayed and how they are allowed to use it (validation, permissions...).
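A minimal sketch of such a user-facing view, kept outside the admin (form and template names are hypothetical):

```python
from django.contrib.auth.decorators import login_required
from django.shortcuts import redirect, render

from myapp.forms import ProfileForm  # hypothetical ModelForm for the profile


@login_required
def edit_profile(request):
    form = ProfileForm(request.POST or None, instance=request.user.profile)
    if request.method == 'POST' and form.is_valid():
        form.save()
        return redirect('profile')
    return render(request, 'myapp/edit_profile.html', {'form': form})
```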
| 0 | 0 | 0 | 0 |
2015-12-08T12:07:00.000
| 2 | 0.099668 | false | 34,155,481 | 0 | 0 | 1 | 2 |
I am building my first django application, I have set up a custom user and profile, I would like the users to be able to edit some of their own content and view their own pages of analytics data.
Currently my users are being created and logged in to the admin area; I am using a custom back end to allow them to see / edit the content.
My question: should I allow my users to log into the Django admin area, or should I build a separate login form that authenticates them and build authenticated pages? I would then end up with two admin areas: the main area where I can control users, billing, etc., and another where the customer can view and edit profile information and interact with the application.
|
Django app with logged in users
| 34,155,953 | 1 | 0 | 74 | 0 |
python,django,admin
|
Of course it's better to create another page for the users to get control from, so you set up the authentication and all of the custom permissions that you want to give them. By giving them permissions that you set explicitly, you make sure the users don't tamper with anything that you don't want them to touch. So the best thing to do is to create a custom admin panel for them: a more controlled environment for you and your users.
| 0 | 0 | 0 | 0 |
2015-12-08T12:07:00.000
| 2 | 1.2 | true | 34,155,481 | 0 | 0 | 1 | 2 |
I am building my first django application, I have set up a custom user and profile, I would like the users to be able to edit some of their own content and view their own pages of analytics data.
Currently my users are being created and logged in to the admin area; I am using a custom back end to allow them to see / edit the content.
My question: should I allow my users to log into the Django admin area, or should I build a separate login form that authenticates them and build authenticated pages? I would then end up with two admin areas: the main area where I can control users, billing, etc., and another where the customer can view and edit profile information and interact with the application.
|
Web page structure comparison using python
| 34,160,545 | 0 | 1 | 192 | 0 |
python,dom,data-science
|
First you would need to identify which elements in the page actually uniquely identify a page as being of a specific webpage-class.
Then you could use a library like BeautifulSoup to actually look through the document to see if those elements exist.
Then you would just need a series of if/elifs to determine if a page has the qualifying elements, if so classify it as the appropriate webpage-class.
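A minimal sketch with BeautifulSoup (the selectors are illustrative; you would replace them with whatever uniquely identifies your page classes):

```python
from bs4 import BeautifulSoup


def classify(html):
    soup = BeautifulSoup(html, 'html.parser')
    if soup.select_one('#add-to-cart'):
        return 'product page'
    elif soup.select_one('.related-items'):
        return 'product-related items page'
    elif soup.select_one('nav.category-index'):
        return 'index page'
    return 'unknown'
```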
| 0 | 0 | 1 | 0 |
2015-12-08T12:13:00.000
| 1 | 0 | false | 34,155,609 | 0 | 0 | 1 | 1 |
I want to classify a given set of web pages into different classes, mainly into 3 classes (product page, index page, and product-related items page). I think it can be done by analyzing their structure. I am looking at comparing the web pages based on their DOM (Document Object Model) structure. I want to know whether there is a library in Python for solving this problem.
Thanks in advance.
|
Real-time backend for IoT App
| 34,178,035 | 1 | 0 | 586 | 0 |
python,firebase,backend,iot,real-time-data
|
You're comparing apples to oranges here in your options. The first three are entirely under your control, because, well, you own the server. There are many ways to get this wrong and many ways to get this right, depending on your experience and what you're trying to build.
The last three would fall under Backend-As-A-Service (BaaS). These let you quickly build out the backend of an application without worrying about all the plumbing. Your backend is operated, maintained by a third party so you lose control when compared to your own server.
... and of course at the best price
AWS, Azure, GAE, Firebase, PubNub all have free quotas. If your application becomes popular and you need to scale, at some point, the BaaS options might end up being more expensive.
| 0 | 1 | 0 | 1 |
2015-12-09T11:02:00.000
| 2 | 0.099668 | false | 34,177,156 | 0 | 0 | 1 | 1 |
I'm working on an IoT App which will do majority of the basic IoT operations like reading and writing to "Things".
Naturally, it only makes sense to have an event-driven server than a polling server for real-time updates. I have looked into many options that are available and read many articles/discussions too but couldn't reach to a conclusion about the technology stack to use for the backend.
Here are the options that i came across:
Meteor
Python + Tornado
Node.js + Socket.io
Firebase
PubNub
Python + Channel API (Google App Engine)
I want to have as much control over the server as possible, and of course at the best price. What options do I have? Am I missing something?
Personally, i prefer having a backend in Python from my prior experience.
|
Run a Django app on PyPy on Amazon AWS
| 34,180,615 | 2 | 3 | 1,018 | 0 |
python,django,amazon-web-services,pypy,aws-cli
|
The best way to run PyPy (also on AWS) is to install it (pypy is bundled these days with the default AWS distribution) and use virtualenv to manage python dependencies.
| 0 | 0 | 0 | 0 |
2015-12-09T13:04:00.000
| 1 | 1.2 | true | 34,179,566 | 0 | 0 | 1 | 1 |
I have a Django application that does some computationally intensive tasks. To make its execution faster I run it with PyPy (the alternative Python implementation that runs scripts faster).
I have to deploy it on amazon-aws (Elastic Beanstalk). I want to deploy it, such that it runs on PyPy on aws, (and not on conventional/default Python).
|
How to use the global variables in django?
| 34,184,791 | 0 | 1 | 273 | 0 |
python,django,dictionary,global-variables
|
You can put the dictionary in your static directory and put the path in your settings.py file. Then when you try to use it, you load the dictionary in your views.py.
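A minimal sketch: load the file once at module import time and treat it as a read-only constant (the path is illustrative):

```python
import json
import os

from django.conf import settings

_WORDS_PATH = os.path.join(settings.BASE_DIR, 'static', 'words.json')

with open(_WORDS_PATH) as f:
    WORDS = json.load(f)  # import WORDS from any view; read-only, so thread-safe
```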
| 0 | 0 | 0 | 0 |
2015-12-09T17:05:00.000
| 2 | 0 | false | 34,184,735 | 1 | 0 | 1 | 1 |
I want to use an (English) word dictionary in my Django application. However, Django does not recommend using global variables because of its threading model. This dictionary does not have thread-safety issues: I want to load it at the beginning, and after that it is constant (it will be read from different Django views).
Is there any way to achieve this ?
|
Running django migrations on multiple databases simultaneously
| 34,878,492 | 1 | 1 | 510 | 1 |
python,django,django-migrations
|
First, I'd look (very hard) for a way to launch a script on the client side that does what masnun suggests.
Second, if that does not work, then I'd try the following:
Configure on your local machine all client databases in the settings variable DATABASES
Make sure you can connect to all the client databases, this may need some fiddling
Then you run the "manage.py migrate" process with the extra flag --database=mydatabase (where "mydatabase" is the handle provided in the configuration) for EACH client database
I have not tried this, but I don't see why it wouldn't work ...
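A minimal sketch of step 3 automated over every configured alias (run it from a small standalone script; it assumes DJANGO_SETTINGS_MODULE is set):

```python
import django
from django.conf import settings
from django.core.management import call_command

django.setup()
for alias in settings.DATABASES:
    call_command('migrate', database=alias, interactive=False)
```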
| 0 | 0 | 0 | 0 |
2015-12-10T08:32:00.000
| 2 | 0.099668 | false | 34,197,011 | 0 | 0 | 1 | 2 |
We are developing a b2b application with django. For each client, we launch a new virtual server machine and a database. So each client has a separate installation of our application. (We do so because by the nature of our application, one client may require high use of resources at certain times, and we do not want one client's state to affect the others)
Each of these installations is bound to a central repository. If we update the application code, when we push to the master branch all installations detect this, pull the latest version of the code, and restart the application.
If we update the database schema on the other hand, currently, we need to run migrations manually by connecting to each db instance one by one (settings.py file reads the database settings from an external file which is not in the repo, we add this file manually upon installation).
Can we automate this process? i.e. given a list of databases, is it possible to run migrations on these databases with a single command?
|
Running django migrations on multiple databases simultaneously
| 34,197,250 | 3 | 1 | 510 | 1 |
python,django,django-migrations
|
If we update the application code, when we push to the master branch,
all installations detect this, pull the latest version of the code and
restart the application.
I assume that you have some sort of automation to pull the code and restart the web server. You can just add the migration to this automation process. Each server's settings.py would read the database details from the external file and run the migration for you.
So the flow should be something like:
Pull the codes
Migrate
Collect Static
Restart the web server
| 0 | 0 | 0 | 0 |
2015-12-10T08:32:00.000
| 2 | 0.291313 | false | 34,197,011 | 0 | 0 | 1 | 2 |
We are developing a b2b application with django. For each client, we launch a new virtual server machine and a database. So each client has a separate installation of our application. (We do so because by the nature of our application, one client may require high use of resources at certain times, and we do not want one client's state to affect the others)
Each of these installations is bound to a central repository. If we update the application code, when we push to the master branch all installations detect this, pull the latest version of the code, and restart the application.
If we update the database schema on the other hand, currently, we need to run migrations manually by connecting to each db instance one by one (settings.py file reads the database settings from an external file which is not in the repo, we add this file manually upon installation).
Can we automate this process? i.e. given a list of databases, is it possible to run migrations on these databases with a single command?
|
Are settings unique to projects in PyCharm
| 34,209,473 | 0 | 1 | 71 | 0 |
python,pycharm
|
PyCharm support explained that, as of v4.5.3, there is a checkbox option in Deployment Settings for "Visible only for this project".
| 0 | 0 | 0 | 0 |
2015-12-10T14:35:00.000
| 1 | 1.2 | true | 34,204,574 | 1 | 0 | 1 | 1 |
I have two projects in PyCharm 4.5.2. When I change some deployment settings (e.g., add or delete a server) in one project, they also change in the other. Is this the way it is supposed to work? Is there a way I should be doing this so that the settings are specific to each project?
|
Django - guidance needed
| 34,215,469 | 1 | 0 | 34 | 0 |
python,django,django-forms,django-views
|
You could use either option.
Option #1: In the post method (if using Class-based-views, otherwise check for "post" as the request type), just instantiate the form with MessageForm(request.POST), and then check the form's is_valid() method. If the form is valid, save the Message object and redirect back to the same view using HttpResponseRedirect within the if form.is_valid(): code block.
If you're checking for the related Messages objects in your template, the newly created message should be there.
Option #2: Very similar to Option #1, except if the form is not valid, re-render the same template that is used for the product_view with the non-valid form instance included in the template context.
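A minimal sketch of Option #1, assuming the Product and MessageForm from the question (template and context names are illustrative):

```python
from django.http import HttpResponseRedirect
from django.shortcuts import get_object_or_404, render


def product_view(request, pk):
    product = get_object_or_404(Product, pk=pk)
    form = MessageForm(request.POST or None)
    if request.method == 'POST' and form.is_valid():
        message = form.save(commit=False)
        message.product = product
        message.save()
        return HttpResponseRedirect(request.path)  # redirect back to this view
    return render(request, 'products/detail.html', {
        'product': product,
        'form': form,
        'product_messages': product.message_set.all(),
    })
```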
| 0 | 0 | 0 | 0 |
2015-12-10T23:26:00.000
| 1 | 0.197375 | false | 34,213,644 | 0 | 0 | 1 | 1 |
I am developing a website using Django/Python. I am quite new to this technology and I want to build the site the right way.
So here is my problem:
Imagine that there is a Product entity and a product view to display the Product info (I use product_view in my views.py).
There is also a Message entity, and the Product might have multiple of them.
In the product view page I also query for the messages and display them.
Now, there should be a form to submit a new message (on the product view page).
Question #1: what action name should the form have (the Django way; I do understand I might assign whatever action I want)?
Option #1: it might be the same action, "product_view". In the product_view logic I might check the HTTP method (GET or POST) and handle the form submit or just the GET request. But it feels a bit controversial to me to submit a message to the "product_view" action.
Option #2: create an action named "product_view_message_save". (I don't want to create just "message_save", because there might be multiple ways to submit a message.) I handle the logic there and then redirect to product_view. Now the fun part: if the form is invalid, I try to put the form in the session, redirect to "product_view", get the form there, and display an error near the message field. However, the form in Django is not serializable. I can find a workaround, but it just doesn't feel right.
What would you say?
Any help/advice would be highly appreciated!
Best Regards,
Maksim
|
Linux/Python: How can I hide sensitive information in a Python file, so that developers on the environment won't be able to access it?
| 34,215,479 | 1 | 1 | 885 | 0 |
python,mysql,linux,django,passwords
|
I'd place the password file in a directory with 600 permissions owned by the Django user. The Django user would be able to do what it needed to and nobody else would even be able to look in the directory (except root and Django)
Another thing you could do is store it in a database and set it up so that the root user and the Django user in the DB have unique passwords; that way only a person with those passwords could access it, i.e. the system root is no longer the same as the DB root.
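A minimal sketch of the settings.py side, assuming the secrets live in a JSON file readable only by the Django user (the path is illustrative):

```python
import json

with open('/etc/myproject/secrets.json') as f:  # chmod 600, chown django-user
    _secrets = json.load(f)

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': _secrets['db_name'],
        'USER': _secrets['db_user'],
        'PASSWORD': _secrets['db_password'],
        'HOST': _secrets.get('db_host', 'localhost'),
    }
}
```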
| 0 | 0 | 0 | 1 |
2015-12-11T01:49:00.000
| 4 | 0.049958 | false | 34,214,908 | 0 | 0 | 1 | 1 |
I've got a Django project running on an Ubuntu server. There are other developers who have the ability to ssh into the box and look at files. I want to make it so that the mysql credentials and api keys in settings.py are maybe separated into a different file that's only viewable by the root user, but also usable by the django project to run.
My previous approach was to make the passwords file accessible only to root:root with chmod 600, but my settings.py throws an ImportError when it tries to import the password file's variables. I read about setuid, but that doesn't seem very secure at all. What's a good approach for what I'm trying to do? Thanks.
|
Unable to install a specific version of django on virtualenv
| 34,242,990 | 2 | 0 | 487 | 0 |
python,django,pip,virtualenv
|
Can you try this command:
sudo pip install django==1.8
| 0 | 0 | 0 | 0 |
2015-12-12T17:39:00.000
| 3 | 0.132549 | false | 34,242,949 | 1 | 0 | 1 | 1 |
I have been trying to install Django 1.8 in a virtualenv. I performed the following steps:
changed to my project directory
changed to the Scripts folder of the virtual environment I created
activated the virtual env
typed the command: pip install django == 1.8
Nothing worked.
I also tried pip install django and easy_install django; however, neither worked.
Could you please help me out?
|
Is there a way to only allow for images to be uploaded when using django-multiupload?
| 34,243,402 | -1 | 0 | 389 | 0 |
python,django,django-models,django-forms,django-uploads
|
Having a quick look at the source code: no, it doesn't provide support for that.
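You can still enforce it yourself by validating the uploaded files in the form's clean method; a minimal sketch (treat the MultiFileField import path as an assumption to verify against your installed version):

```python
from django import forms
from multiupload.fields import MultiFileField  # verify against your version

ALLOWED_TYPES = {'image/jpeg', 'image/png', 'image/gif'}


class ImageUploadForm(forms.Form):
    files = MultiFileField()

    def clean_files(self):
        files = self.cleaned_data['files']
        for f in files:
            # content_type is client-supplied; inspect the bytes (e.g. with
            # Pillow) if you need stronger validation
            if f.content_type not in ALLOWED_TYPES:
                raise forms.ValidationError('Only image uploads are allowed.')
        return files
```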
| 0 | 0 | 0 | 0 |
2015-12-12T18:24:00.000
| 2 | -0.099668 | false | 34,243,376 | 0 | 0 | 1 | 1 |
As the title says, is there a way to allow only images to be uploaded when using django-multiupload? At the moment my users can upload any file, but I want to limit them to images only.
Any help/advice would be much appreciated :-)
|
Running Python Script in HTML
| 34,251,582 | 0 | 4 | 281 | 0 |
python,html,python-2.7
|
It is not possible to import Python code in HTML the way you import JavaScript code. JavaScript is executed by the client's browser, and browsers don't have a built-in Python interpreter. You have to do it with JavaScript if you want to do it on the client side.
| 0 | 0 | 0 | 1 |
2015-12-13T13:24:00.000
| 3 | 0 | false | 34,251,551 | 1 | 0 | 1 | 1 |
I'm working on a school project. I've written a lot of Python scripts before and I was wondering if I could import Python in HTML, like JavaScript? How should I do it? An example is importing time: I want to show a clock in my webpage from a Python script.
|
Run Flask alongside PHP [sharing session]
| 34,272,457 | 1 | 2 | 1,770 | 1 |
php,python,session,flask
|
I'm not sure this is the answer you are looking for, but I would not try to have the Flask API access session data from PHP. Sessions and API do not go well together, a well designed API does not need sessions, it is instead 100% stateless.
What I'm going to propose assumes both PHP and Flask have access to the user database. When the user logs in to the PHP app, generate an API token for the user. This can be a random sequence of characters, a uuid, whatever you want, as long as it is unique. Write the token to the user database, along with an expiration date if you like. The login process should pass that token back to the client (use https://, of course).
When the client needs to make an API call, it has to send that token in every request. For example, you can include it in the Authorization header, or you can use a custom header as well. The Flask API gets the token and searches the user database for it. If it does not find the token, it returns 401. If the token is found, it now knows who the user is, without having to share sessions with PHP. For the API endpoints you will be looking up the user from the token for every request.
Hope this helps!
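A minimal sketch of the Flask side (find_user_by_token is a placeholder for a query against the user database shared with PHP):

```python
from functools import wraps

from flask import Flask, abort, g, request

app = Flask(__name__)


def find_user_by_token(token):
    # placeholder: look the token up in the shared user database
    ...


def require_token(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        token = request.headers.get('Authorization', '').replace('Bearer ', '', 1)
        user = find_user_by_token(token)
        if user is None:
            abort(401)
        g.current_user = user
        return view(*args, **kwargs)
    return wrapper


@app.route('/api/items')
@require_token
def list_items():
    return 'items for %s' % g.current_user
```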
| 0 | 0 | 0 | 1 |
2015-12-14T11:37:00.000
| 1 | 1.2 | true | 34,266,083 | 0 | 0 | 1 | 1 |
As the title says, I’ am trying to run Flask alongside a PHP app.
Both of them are running under Apache 2.4 on Windows platform. For Flask I’m using wsgi_module.
The Flask app is actually an API. The PHP app controls users login therefore users access to API. Keep in mind that I cannot drop the use of the PHP app because it controls much more that the logging functionality [invoicing, access logs etc].
The flow is:
User logs in via PHP app
PHP stores user data to a database [user id and a flag indicating if user is logged in]
User makes a request to Flask API
Flask checks if user data are in database: If not, redirects to PHP login page, otherwise let user use the Flask API.
I know that between steps 2 and 3, PHP has to share a session variable/cookie [user id] with Flask in order for the Flask app to check whether the user is logged in.
Whatever I try fails. I cannot pass PHP session variables to Flask.
I know that I can't pass PHP variables to Flask directly, but I'm not sure about that.
Has anyone tried something similar?
What kind of user login strategy should I implement to the above setup?
|
324 error::empty response, django
| 37,083,575 | -1 | 0 | 649 | 0 |
python,django,google-chrome
|
This could be specific to the server you are using. First try clearing your cookies, but if that does not work, that means you have a faulty server and I don't know how to fix that other than getting another one.
| 0 | 0 | 1 | 0 |
2015-12-14T12:47:00.000
| 1 | -0.197375 | false | 34,267,461 | 0 | 0 | 1 | 1 |
I have a Django API which returns content fine on my localhost, but when I run it in production it gives me a 324 error (empty response error).
I have printed the API response, which is fine. But even before the API runs to completion, the Chrome browser throws the 324 error.
When I researched a bit, it looks like the socket connection is dead on the client side. I am not sure how to fix it.
|
Error : "Django not found"
| 34,288,921 | 1 | 0 | 405 | 0 |
python,django,pydev
|
OK, I found a solution which always works:
uninstall and reinstall everything (Python, Django, PyDev) without using pip
| 0 | 0 | 0 | 0 |
2015-12-14T16:18:00.000
| 1 | 1.2 | true | 34,271,752 | 0 | 0 | 1 | 1 |
I'm working with Eclipse and suddenly I could not use Django anymore.
I tried to make a new project, but an error occurred: "Django not found".
I checked the interpreters as suggested in the forums.
I have uninstalled and installed Django multiple times, changed the PYTHONPATH a thousand times, and reinstalled PyDev; nothing has fixed the issue.
I really don't understand it: I was just typing usual code, and suddenly nothing worked anymore.
Edit: In the Python shell, I can import django.config but I cannot import django.config.admin, for example.
|
Key Error sure.AssertionBuilder object at
| 34,333,554 | 0 | 0 | 77 | 0 |
python,django,bdd,lettuce
|
I feel like a lonely person asking and answering her own question :D
The problem was an import we were not even using, so deleting this line resolved our problem. Hope it helps someone in the future:
from sure import basestring
| 0 | 0 | 0 | 0 |
2015-12-15T11:19:00.000
| 1 | 0 | false | 34,287,867 | 0 | 0 | 1 | 1 |
After pulling changes from VCS with rebase, I am getting a KeyError when trying to run my aloe_django (porting from Lettuce) tests. It was working fine before; now we cannot figure out what we did wrong.
The error is:
KeyError: <sure.AssertionBuilder object at 0x7fbf588172e8>
The error occurs in the registry.py file, in these lines:
def append_to(self, what, when, function, name=None, priority=0):
"""
Add a callback for a particular type of hook.
"""
if name is None:
name = self._function_id(function)
funcs = self[what][when].setdefault(priority, OrderedDict()) #HAPPENS HERE
funcs.pop(name, None)
funcs[name] = function
# pylint:enable=too-many-arguments
|
run celery task using a django management command
| 34,311,859 | 1 | 1 | 1,782 | 0 |
python,django,celery,celery-task
|
Executing Celery tasks from a command line utility is the same as executing them from views. If you have a task called foo, then in both cases:
Calling foo(...) executes the code of the task as if foo were just a plain Python function.
Calling foo.delay(...) executes the code of the task asynchronously, through a Celery worker.
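So a management command that queues work asynchronously just needs to call .delay(); a minimal sketch (the app, module path and task name are placeholders):
# myapp/management/commands/queue_foo.py  (hypothetical path)
from django.core.management.base import BaseCommand
from myapp.tasks import foo  # a task decorated with @app.task / @shared_task

class Command(BaseCommand):
    help = 'Queue the foo task through the Celery broker'

    def handle(self, *args, **options):
        result = foo.delay()  # goes through the broker to a worker, not run inline
        self.stdout.write('queued task %s' % result.id)
If the task still runs inline after this, check that CELERY_ALWAYS_EAGER isn't set to True in the settings the command runs under.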
| 0 | 1 | 0 | 0 |
2015-12-16T09:26:00.000
| 1 | 1.2 | true | 34,308,221 | 0 | 0 | 1 | 1 |
I'm trying to run a task, using celery 3.1, from a custom management command.
If I call my task from a view it works fine, but when starting the same task from my management command, the task only runs synchronously in the current context (not async via Celery).
I don't have djcelery installed.
What do I need to add to my management command to get async task processing on command line?
|
Issues with Multi-lingual website data caching - Python - Google App Engine
| 34,314,025 | 1 | 0 | 47 | 0 |
python,google-app-engine,caching,server-side,multilingual
|
I assume the individual product rendering in a particular language accounts for the majority (or at least a big chunk) of the rendering effort for the entire page.
You could cache server-side the rendered product results for a particular language, prior to assembling them in a complete results page and sending them to the client, using a 2D product x language lookup scheme.
You could also render individual product info offline, on a task queue, whenever products are added/modified, and store/cache them on the server ahead of time. Maybe just for the most heavily used languages?
This way you avoid individual product rendering on the critical path (in response to client requests), at the expense of added memcache/storage.
You just need to:
split your rendering in 2 stages (individual product info and complete results page assembly)
add logic for cleanup/update of the stored/cached rendered product info when products add/change/delete ops occur
(maybe) add logic for on-demand product info rendering when pre-rendered info is not yet available when the client request comes in (if not acceptable to simply not display the info)
You might want to check whether it's preferable to cache/store the rendered product info compressed (HTML compresses well), balancing memcache/storage costs vs instance runtime costs vs response-time performance (I have yet to run such an experiment).
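A minimal sketch of the 2D (product x language) cache lookup for stage 1 on App Engine; the key scheme, the TTL and the render_product function are assumptions:
from google.appengine.api import memcache

def rendered_product(product_id, locale):
    key = 'product-html:%s:%s' % (product_id, locale)  # 2D product x language key
    html = memcache.get(key)
    if html is None:
        html = render_product(product_id, locale)  # your jinja2 stage-1 rendering
        memcache.set(key, html, time=3600)  # keep the rendered snippet for an hour
    return html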
| 0 | 0 | 0 | 0 |
2015-12-16T11:23:00.000
| 1 | 0.197375 | false | 34,310,736 | 0 | 0 | 1 | 1 |
Question:
What are the most efficient approaches to multilingual data caching on a web server, given that clients want the same base set of data but in their locale format? So: 1000 data items max to be cached, then rendered on demand in a specific locale format.
My current approach is as follows:
I have a multilingual Python Google App Engine project. The multilingual part uses Babel and various language .po and .mo files for translation. This is all fine and dandy. Issues start to arise when considering caching of data. For example, let's say I have 1000 product listings that I want clients to be able to access 100 at a time. I use memcache with a datastore backup entity in case the memcache gets blasted. Again, all is fine and dandy, but not multilingual. Each product has to be mapped to match the key with the particular locale of any client: English, French, Turkish, whatever. The way I do it now is to map the products under a specific locale, say 'en_US', and render server-side using Jinja2 templates. Each bit of locale-specific data is rendered using the locale settings for date, price formatting, title etc. in the 'en_US' format and placed into the datastore and memcache, all nicely mapped out ready for rendering. However, I have an extra step for getting that multilingual data into the correct format for a client's locale, and that is by way of standard {{ }} translations and Jinja2 filters, generally for stuff like price formatting and dates. The problem is that this is slowing things down, as it all has to be rendered on the server and then passed back to the client. The initial 100 products are always server-side rendered; however, before caching I was rendering the rest client-side from JSON data via AJAX calls to the server. Now it's all server-side rendering.
I don't want to get into a marathon discussion regarding server- vs client-side rendering, but I would appreciate any insights into how others have successfully handled multilingual caching.
|
Determine if response results from redirect
| 34,317,060 | 0 | 2 | 62 | 0 |
python,django,redirect,logging,django-middleware
|
Not really, at least not in any official way. HTTP requests are independent of each other; you can't tell that one request followed another. That is why, if you need to maintain state between pages, you end up using sessions and passing session IDs around.
For your purposes, using the session ID to track pages is not reliable, since a user can have multiple pages open.
The only semi-reliable solution I can think of is appending a tracking querystring to the URL upon redirects.
For example, if some view processing a request to /foo/ returns a redirect to /bar/, your middleware changes that URL to /bar/?tracking=<something random>. The random part can be a uuid or something similar. When the user then goes to that page, you can match the random bit and hence correlate that the request came from the original page /foo/. Note that for this to work, the random bit will have to be unique across requests.
Should you use the above approach? Probably not. It is probably not very reliable and has many edge cases where it will break. Maybe you can change your requirements to reflect HTTP's nature a bit better, so you will not need hacks like this. If you do go this way, a middleware sketch follows.
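A sketch of that querystring idea as old-style (pre-Django 1.10) middleware; the parameter name is an arbitrary choice, and you'd still need somewhere to log and correlate the token:
import uuid

class RedirectTrackingMiddleware(object):
    """Tags every redirect so the follow-up request can be correlated with it."""

    def process_response(self, request, response):
        if response.status_code in (301, 302) and 'Location' in response:
            location = response['Location']
            sep = '&' if '?' in location else '?'
            response['Location'] = '%s%stracking=%s' % (location, sep, uuid.uuid4().hex)
        return response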
| 0 | 0 | 0 | 0 |
2015-12-16T15:50:00.000
| 1 | 0 | false | 34,316,369 | 0 | 0 | 1 | 1 |
I have a Django event-logging application with middleware that logs user page views. Currently, if the response code is 200, the log "User X visited page Y" is saved, but in the case of a redirect the log should be "User X has been redirected to page Y".
Is it possible to determine whether a 200 response occurred after a 302 redirect?
|
Where to implement python classes in Django?
| 34,322,974 | 1 | 6 | 1,687 | 0 |
python,django
|
It depends on the scope of the Alphabet class. If it is a utility class, then I would suggest putting it in a utils.py file, for example. But it is perfectly fine to have classes in the views.py file, mainly those dealing with UI processing. Up to you.
| 0 | 0 | 0 | 0 |
2015-12-16T21:15:00.000
| 4 | 0.049958 | false | 34,322,216 | 1 | 0 | 1 | 2 |
I'm learning Django on my own and I can't seem to figure out where to implement a regular Python class. What I mean is, I don't know where the Python classes I write should go. Do they go in a separate file and then get imported into views.py, or are the classes implemented inside the views.py file?
For example, if I want to implement a class Alphabet, should I do this in a separate file inside the module, or just implement it inside the views.py file?
|
Where to implement python classes in Django?
| 34,324,293 | 1 | 6 | 1,687 | 0 |
python,django
|
Unlike similar frameworks, you can put your Python code anywhere in your project, provided you can reference it later by its import path (model classes are partially an exception, though):
Applications are referenced by their import path (or an AppConfig import path). Although there's some magic involving test.py and models.py, most of the time the import / reference is quite explicit.
Views are referenced by urls.py files, but imported as regular python import path.
Middlewares are referenced by strings which denote an import path ending with their class name.
Other settings you normally don't configure are also full import paths.
The exception to this explicitness is:
models.py, test.py, admin.py: they have special purposes and may not exist, provided:
You will not need any model in your app, and will provide an AppConfig (instead of just the app name) in your INSTALLED_APPS.
You will not rely on autodiscovery for admin classes in your app.
You don't want to run tests on your app, or you will specify a non-default path for your app-specific test command run.
templates and static files: your project will rely on per-app loaders for your static files and for your template files, and ultimately there's a brute-force search in each of your apps: their inner static/ and templates/ directories, if they exist, are searched for those files.
Everything else is just normal Python code and, if you need to import it from any view, you just write a normal import statement for it (since view code is imported with the normal Python import mechanism).
| 0 | 0 | 0 | 0 |
2015-12-16T21:15:00.000
| 4 | 0.049958 | false | 34,322,216 | 1 | 0 | 1 | 2 |
I'm learning Django on my own and I can't seem to get a clue of where I implement a regular Python class. What I mean is, I don't know where do the Python classes I write go. Like they go in a separate file and then are imported to the views.py or are the classes implemented inside the views.py file?
Example I want to implement a Class Alphabet, should I do this in a separate file inside the module or just implement the functions inside the views.py file?
|
How to make user can only access their own records in odoo?
| 34,328,053 | 2 | 0 | 6,109 | 1 |
python,xml,openerp
|
Providing an access rule is one part of the solution. If you look at "Access Control List" in Settings > Technical > Security > Access Controls Lists, you can see that the group HR Employee has only read access to the model hr.employee. So first you have to provide write access to the model hr.employee for the group Employee as well. After you have allowed write access to the group Employee for the model hr.employee:
Create a new record rule from Settings > Technical > Security > Record Rules named User_edit_own_employee_rule (or any name you wish).
Set the domain for User_edit_own_employee_rule to [('user_id', '=', user.id)]. This domain should apply for Read and Write, i.e. check the "Apply for Read" and "Apply for Write" boolean fields.
Create another record rule named User_edit_own_employee_rule_1.
Set the domain for User_edit_own_employee_rule_1 to [('user_id', '!=', user.id)]. This domain should apply for Read only, i.e. check "Apply for Read".
Now, by creating these two record rules for the group Employee, we give the user access to read and write his/her own record but only to read other employees' records.
Detail:
Provide write access in the access control list to the model hr.employee for the group Employee. Then create two record rules:
User_edit_own_employee_rule :
Name : User_edit_own_employee_rule
Object : Employee
Apply for Read : Checked
Apply for Write : Checked
Rule Definition : [('user_id', '=', user.id)]
Groups : Human Resources / Employee
User_edit_own_employee_rule_1 :
Name : User_edit_own_employee_rule_1
Object : Employee
Apply for Read : Checked
Apply for Write : Unchecked
Rule Definition : [('user_id', '!=', user.id)]
Groups : Human Resources / Employee
I hope this will help you.
| 0 | 0 | 0 | 0 |
2015-12-17T06:03:00.000
| 2 | 0.197375 | false | 34,327,655 | 0 | 0 | 1 | 1 |
I have created groups to give access rights and everything seems fine, but I want to customize access rights for the Issue module. When a user of a particular group logs in, I want that user to only be able to create/edit their own issues and not see other users' issues. Please help me out!
Thanks
|
Upgrading Django to 1.8 produces irrelevant Sites-framework warnings
| 34,339,800 | 0 | 5 | 78 | 0 |
python,django,django-authentication,django-sites
|
I had the same issue; I created the default site entry (id=1) and never had any issue since. For example:
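A one-off way to create it (the domain and name are placeholders; the id must match SITE_ID in settings.py):
# in `manage.py shell` or a data migration
from django.contrib.sites.models import Site

Site.objects.get_or_create(
    id=1,  # must match SITE_ID
    defaults={'domain': 'example.com', 'name': 'example.com'},
)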
| 0 | 0 | 0 | 0 |
2015-12-17T16:08:00.000
| 1 | 1.2 | true | 34,339,172 | 0 | 0 | 1 | 1 |
The following warning appears twice when I run ./manage.py runserver after upgrading Django from 1.7 to 1.8.
.../django/contrib/sites/models.py:78: RemovedInDjango19Warning: Model class django.contrib.sites.models.Site doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
The project still runs fine, but I want to get rid of the warning. I'm not using the Sites framework in my project, but the warning disappeared when I added 'django.contrib.sites' to the INSTALLED_APPS list in the project's settings.py. So that took care of the warning and I was happy.
But then the project starts demanding a Site in the database at the login prompt. Now, the whole thing is that I don't want the Sites framework at all. But now I seem forced to manage a database entry and need to consider it during installation and such when I'm just trying to get rid of a warning.
It appears from the code that the login machinery in django.contrib.auth relies on it. However, in Django's documentation I found this assertion: "site_name: An alias for site.name. If you don't have the site framework installed, this will be set to the value of request.META['SERVER_NAME']. For more on sites, see The "sites" framework."
So it appears that the authors of django.contrib.auth consider the Sites framework optional, but judging from my situation, it isn't.
Hence my question. Is it possible to use Django's (presumably contributed) authentication system without using the Sites framework at all and still getting rid of that warning and everything related to the Sites framework?
|
Multitenant SAAS using Django
| 34,340,595 | 0 | 0 | 363 | 0 |
python,django,django-models,multi-tenant,saas
|
I'm not familiar with Django, but if you are going to build a SaaS, one of the main things you need to think about from the beginning is scalability, which of course suggests the 2nd option. The 1st one will be a nightmare when your SaaS is expanding.
| 0 | 0 | 0 | 0 |
2015-12-17T17:15:00.000
| 1 | 0 | false | 34,340,496 | 0 | 0 | 1 | 1 |
I have mainly focused on other frameworks like Laravel and Express.js, so I am new to Django but have built several projects. I need to build a SaaS product. So which is the best approach?
Separate database for each customer
Same DB with tenant_id mapping
Or any other solutions from SO gurus?
|
Delete the large object in ironpython, and instantly release the memory?
| 34,381,901 | 2 | 1 | 499 | 0 |
collections,ironpython,garbage
|
In general, managed environments release their memory if no reference to the object exists anymore (no path from a GC root to the object itself). To force the .NET framework to release memory, the garbage collector is your only choice. It is important to know that GC.Collect does not free the memory by itself; it only searches for objects without references and puts them in a queue of objects which will be released. If you want to free memory synchronously, you also need GC.WaitForPendingFinalizers.
One thing to know about large objects in the .NET framework is that they are stored separately, in the Large Object Heap (LOH). From my point of view, it is not bad to free those objects synchronously; you only have to know that this can cause some performance issues. That's why, in general, the GC decides on its own when to collect and free memory and when not to.
Because gc.collect is implemented in Python as well as in IronPython, you should be able to use it. If you take a look at the implementation in IronPython, gc.collect does exactly what you want: it calls GC.Collect() and GC.WaitForPendingFinalizers(). So in your case, I would use it.
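A minimal sketch (a bytearray stands in for the real mesh object, and the comments describe what IronPython does underneath):
import gc

mesh = bytearray(100 * 1024 * 1024)  # stand-in for the real ~900 MB mesh

# ... analyse the mesh here ...

del mesh      # drop the last reference first; the GC cannot free reachable objects
gc.collect()  # on IronPython this calls GC.Collect and GC.WaitForPendingFinalizers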
Hope this helps.
| 0 | 0 | 0 | 0 |
2015-12-20T00:46:00.000
| 1 | 1.2 | true | 34,376,936 | 1 | 0 | 1 | 1 |
I am creating a huge mesh object (some 900 megabytes in size).
Once I am done with analysing it, I would like to somehow delete it from the memory.
I did a bit of search on stackoverflow.com, and I found out that del will only delete the reference to mentioned mesh. Not the mesh object itself.
And that after some time, the mesh object will eventually get garbage collected.
Is gc.collect() the only way by which I could instantly release the memory, and therefore somehow remove the mentioned large mesh from memory?
I've found replies here on stackoverflow.com which state that gc.collect() should be avoided (at least when it comes to regular python, not specifically ironpython).
I've also found comments here on stackoverflow which claim that in IronPython it is not even guaranteed the memory will be released if nothing else is holding a reference.
Any comments on all these issues?
I am using ironpython 2.7 version.
Thank you for the reply.
|
Django saving models by JS rather than form submissions?
| 34,403,444 | 0 | 0 | 51 | 0 |
javascript,python,django,forms
|
There is some terminology confusion here, as SColvin points out; it's really not clear what you mean by "custom variables", and how those relate to models.
However, your main confusion seems to be around forms. There is absolutely no requirement to use them: they are just one method of updating models. It is always possible to edit the models directly in code, and the data for that can of course come from JavaScript if you want. The tutorial has good coverage of how to update a model from code without using a form.
If you're doing a lot of work via JS though, you probably want to look into the Django Rest Framework, which simplifies the process of converting Django model data to and from JSON to use in your client-side code. Again though DRF isn't doing anything you couldn't do manually in your own code, all without the use of forms.
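A minimal sketch of updating a model from a JS request without any form; the model, field and URL names are made up for illustration, and you'd add CSRF handling as appropriate:
import json
from django.http import JsonResponse
from django.views.decorators.http import require_POST
from myapp.models import Document  # hypothetical model with a `sort_data` TextField

@require_POST
def save_sort_data(request, pk):
    doc = Document.objects.get(pk=pk)
    payload = json.loads(request.body)  # JSON posted by your JS, not a form
    doc.sort_data = json.dumps(payload['order'])  # invisible to the user, JS-only
    doc.save(update_fields=['sort_data'])
    return JsonResponse({'ok': True})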
| 0 | 0 | 0 | 0 |
2015-12-21T17:07:00.000
| 1 | 0 | false | 34,400,922 | 0 | 0 | 1 | 1 |
I have a contract job editing a Django application, and Django is not my main framework, so I have a question regarding models in it.
The application I am editing has a form that each user can submit, and every single model in the application is edited directly through the form.
From this perspective, it seems every model is directly a form object; I do not see any model fields that I could use for custom variables. Meaning, instead of a "string" that I could edit with JS, I only see a TextField whose only way of being edited is by including it on a form directly.
If I wanted to have some models that were custom variables, meaning I controlled them entirely through JS rather than form submissions, how would I do that in Django?
I know I could, for example, have some "hidden" form objects that I manipulated with JS. But this solution sounds kind of hacky. Is there an intended way that I could go about this?
Thanks!
(Edit: It seems most responses do not know what I am referring to. Basically I want to allow the client to perform some special sorting functions etc, in which case I will need a few additional lists of data. But I do not want these to be visible to the user, and they will be altered exclusively by js.
Regarding the response of SColvin, I understand that the models are a representation of the database, but from how the application I am working on is designed, it looks as if the only way the models are being used is strictly through forms.
For example, every "string" is a "TextField", and lets say we made a string called "myField", the exclusive use of this field would be to use it in templates with the syntax {{ form.myField|attr:"rows:4" }}.
There are absolutely no use of this model outside of the forms. Every place you see it in the application, there is a form object. This is why I was under the impression that is the primary way to edit the data found in the models.
I did the Django tutorial prior to accepting this project but do not remember seeing any way to submit changes to models outside of the forms.
So more specifically what I would like to do in this case: Let's say I wanted to add a string to my models file, and this string will NOT be included/edited on the form. It will be invisible to the user. It will be modified browser-side by some .js functions, and I would like it to be saved along when submitting the rest of the form. What would be the intended method for going about doing this?
If anyone could please guide me to documentation or examples on how to do this, it would be greatly appreciated! )
(Edit2: No responses ever since the first edit? Not sure if this post is not appearing for anyone else. Still looking for an answer!)
|
What data interchange formats can be used for a python and a java application to talk to each other?
| 34,405,239 | 2 | 1 | 365 | 0 |
java,python,json,serialization,ipc
|
I don't think so.
You seem to be heading in the right direction when you said:
"I don't want to get into plain text processing that could potentially be buggy."
That is absolutely true, and it is why you should consider a structured format like JSON.
Unfortunately, any formatting means overhead: it increases the size of the data you are sending.
So you either need to improvise your own format that has the least amount of "extra stuff" in it, or use an available one like JSON or XML.
| 0 | 0 | 0 | 0 |
2015-12-21T22:10:00.000
| 1 | 0.379949 | false | 34,405,180 | 1 | 0 | 1 | 1 |
I have a python application that will be talking to a Java server. The python application will be sending out simple messages continuously to the java server with a handful of values [ For eg: Name, studentRollNumber, marks ]
I considered having this communication take place in JSON format, since I don't want to get into plain text processing that could potentially be buggy. However, if I use JSON I'm going to keep transferring the names of the fields [such as "name", "studentRollNumber"] etc. multiple times. Is there a better way to do this?
TL;DR
What is a good way to serialize/deserialize an object into text that works in both Java and Python without being too verbose?
|
why "models.py" is not required in django when using mongodb as backend?
| 34,410,315 | 2 | 1 | 517 | 0 |
python,django,mongodb
|
models.py is the Django ORM's way of declaring a fixed relational schema, from which it generates the relevant SQL code to initialize (or modify) the database. "ORM" stands for "Object-Relational Mapping".
Mongo is not relational, hence you don't need this type of schema.
(Of course, that can cause a lot of other problems if the needs of your project change later...)
But you don't need a relational schema since you're not using a relational DB.
| 0 | 0 | 0 | 0 |
2015-12-22T07:04:00.000
| 2 | 1.2 | true | 34,410,203 | 0 | 0 | 1 | 1 |
Recently I've seen an app powered by Django with MongoDB as the backend. The thing is, that app doesn't have a models.py file; all the data is inserted directly in views.py. I just need a little clarification about this particular thing: using Django without models.py, with MongoDB.
|
How change from ForeignKey to ManyToManyField in Django?
| 34,412,549 | 0 | 2 | 624 | 0 |
python,django
|
There are really two ways to go here (that I can think of off the top of my head):
Create a temporary field to store the current data of videos.Video.machine, remove videos.Video.machine field, add videos.Video.machine back as a m2m field, migrate the data from the temporary field into this new field, and remove the temporary field.
Create a new field, i.e. videos.Video.machines that is m2m, copy the current field videos.Video.machine into it, and then remove the videos.Video.machine field.
I would personally go with the second since it is not only easier but the naming makes more sense anyway!
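For the second approach, the copy step can be a data migration along these lines; the model and field names follow the question's error message, while the migration name in the dependency is hypothetical (whatever migration added the machines field):
from django.db import migrations

def copy_machine_to_machines(apps, schema_editor):
    Video = apps.get_model('videos', 'Video')
    for video in Video.objects.exclude(machine=None):
        video.machines.add(video.machine)

class Migration(migrations.Migration):
    dependencies = [('videos', '0002_video_machines')]  # hypothetical predecessor
    operations = [
        # noop reverse is available from Django 1.8 on
        migrations.RunPython(copy_machine_to_machines, migrations.RunPython.noop),
    ]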
| 0 | 0 | 0 | 0 |
2015-12-22T09:18:00.000
| 1 | 0 | false | 34,412,266 | 0 | 0 | 1 | 1 |
I'm trying to change a field from ForeignKey to ManyToManyField. I get the following error when I try to run the migrate command:
"ValueError: Cannot alter field videos.Video.machine into videos.Video.machine - they are not compatible types (you cannot alter to or from M2M fields, or add or remove through= on M2M fields)"
How can I solve this problem?
|
Regarding the database in Pythonanywhere
| 34,435,254 | 1 | 3 | 194 | 0 |
pythonanywhere
|
That's kind of odd; if the SQLite DB was in the git repository and was uploaded correctly, I'd expect it to work. Perhaps the database is in a different directory? On PythonAnywhere, the working directory of your running web app might be (actually, probably is) different from the one on your local machine. And if you're specifying the database using a relative path (which you probably are), that might mean the one you created locally is somewhere different from where it is on PythonAnywhere.
BTW, from my memories of the Django Girls tutorial (I coached for one session a few months ago) you're not actually expected to put the database in your Git repository. It's not how websites are normally managed. You'd normally have one database locally, for testing, where you'd be able to put random testing data, and then a completely different one on your live site, with posts for public consumption.
| 0 | 0 | 0 | 0 |
2015-12-22T09:50:00.000
| 1 | 0.197375 | false | 34,412,869 | 0 | 0 | 1 | 1 |
I am following the Django Girls tutorial, in which I added new posts to the blog via the Django admin. I created a template using Django templates to display this dynamic data. I checked it by opening 127.0.0.1:8000 in the browser and I was able to see the data. Then, to deploy this site on PythonAnywhere, I pushed the data to GitHub from my local repo using git push and did git pull on PythonAnywhere from GitHub. All the files, including the db.sqlite3 (database) file, were updated properly on PythonAnywhere, but I still could not see the data after running my web app there. Then I manually removed the db.sqlite3 file from PythonAnywhere, uploaded the same file from my local desktop, and it worked. Why did this work? And is there an alternative to this?
|
Etags used in RESTful APIs are still susceptible to race conditions
| 34,428,792 | 2 | 6 | 1,416 | 0 |
python,database,rest,concurrency,etag
|
This is really a question about how to use ORMs to do updates, not about ETags.
Imagine 2 processes transferring money into a bank account at the same time -- they both read the old balance, add some, then write the new balance. One of the transfers is lost.
When you're writing with a relational DB, the solution to these problems is to put the read + write in the same transaction, and then use SELECT FOR UPDATE to read the data and/or ensure you have an appropriate isolation level set.
The various ORM implementations all support transactions, so getting the read, check and write into the same transaction will be easy. If you set the SERIALIZABLE isolation level, then that will be enough to fix race conditions, but you may have to deal with deadlocks.
ORMs also generally support SELECT FOR UPDATE in some way. This will let you write safe code with the default READ COMMITTED isolation level. If you google SELECT FOR UPDATE and your ORM, it will probably tell you how to do it.
In both cases (serializable isolation level or select for update), the database will fix the problem by getting a lock on the row for the entity when you read it. If another request comes in and tries to read the entity before your transaction commits, it will be forced to wait.
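For example, with SQLAlchemy (named in the question) the read-check-write can be made safe roughly like this; the Article model, the session handling and the ETag scheme here are assumptions:
from datetime import datetime

def put_article(session, article_id, client_etag, new_body):
    article = (session.query(Article)
                      .filter_by(id=article_id)
                      .with_for_update()   # SELECT ... FOR UPDATE: row locked until commit
                      .one())
    if article.updated_at.isoformat() != client_etag:
        session.rollback()
        return 412                         # precondition failed: someone got there first
    article.body = new_body
    article.updated_at = datetime.utcnow()
    session.commit()                       # lock released here
    return 200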
| 0 | 1 | 0 | 0 |
2015-12-23T03:13:00.000
| 3 | 0.132549 | false | 34,428,046 | 0 | 0 | 1 | 3 |
Maybe I'm overlooking something simple and obvious here, but here goes:
So one of the features of the Etag header in an HTTP request/response is to enforce concurrency, namely so that multiple clients cannot override each other's edits of a resource (normally when doing a PUT request). I think that part is fairly well known.
The bit I'm not so sure about is how the backend/API implementation can actually implement this without having a race condition; for example:
Setup:
RESTful API sits on top of a standard relational database, using an ORM for all interactions (SQL Alchemy or Postgres for example).
Etag is based on 'last updated time' of the resource
Web framework (Flask) sits behind a multi threaded/process webserver (nginx + gunicorn) so can process multiple requests concurrently.
The problem:
Client 1 and 2 both request a resource (get request), both now have the same Etag.
Both Client 1 and 2 send a PUT request to update the resource at the same time. The API receives the requests, proceeds to use the ORM to fetch the required information from the database, then compares the request Etag with the 'last updated time' from the database... they match, so each is a valid request. Each request continues on and commits the update to the database.
Each commit is a synchronous/blocking transaction so one request will get in before the other and thus one will override the others changes.
Doesn't this break the purpose of the Etag?
The only fool-proof solution I can think of is to also make the database perform the check, in the update query for example. Am I missing something?
P.S Tagged as Python due to the frameworks used but this should be a language/framework agnostic problem.
|
Etags used in RESTful APIs are still susceptible to race conditions
| 63,120,699 | 1 | 6 | 1,416 | 0 |
python,database,rest,concurrency,etag
|
You are right that you can still get race conditions if the 'check last etag' and 'make the change' aren't in one atomic operation.
In essence, if your server itself has a race condition, sending etags to the client won't help with that.
You already mentioned a good way to achieve this atomicity:
The only fool-proof solution I can think of is to also make the database perform the check, in the update query for example.
You could do something else, like using a mutex lock. Or using an architecture where two threads cannot deal with the same data.
But the database check seems good to me. What you describe about ORM checks might be an addition for better error messages, but is not by itself sufficient as you found.
| 0 | 1 | 0 | 0 |
2015-12-23T03:13:00.000
| 3 | 0.066568 | false | 34,428,046 | 0 | 0 | 1 | 3 |
Maybe I'm overlooking something simple and obvious here, but here goes:
So one of the features of the Etag header in an HTTP request/response is to enforce concurrency, namely so that multiple clients cannot override each other's edits of a resource (normally when doing a PUT request). I think that part is fairly well known.
The bit I'm not so sure about is how the backend/API implementation can actually implement this without having a race condition; for example:
Setup:
RESTful API sits on top of a standard relational database, using an ORM for all interactions (SQL Alchemy or Postgres for example).
Etag is based on 'last updated time' of the resource
Web framework (Flask) sits behind a multi-threaded/multi-process webserver (nginx + gunicorn) so it can process multiple requests concurrently.
The problem:
Client 1 and 2 both request a resource (get request), both now have the same Etag.
Both Client 1 and 2 send a PUT request to update the resource at the same time. The API receives the requests, proceeds to use the ORM to fetch the required information from the database, then compares the request Etag with the 'last updated time' from the database... they match, so each is a valid request. Each request continues on and commits the update to the database.
Each commit is a synchronous/blocking transaction so one request will get in before the other and thus one will override the others changes.
Doesn't this break the purpose of the Etag?
The only fool-proof solution I can think of is to also make the database perform the check, in the update query for example. Am I missing something?
P.S Tagged as Python due to the frameworks used but this should be a language/framework agnostic problem.
|
Etags used in RESTful APIs are still susceptible to race conditions
| 34,428,187 | 1 | 6 | 1,416 | 0 |
python,database,rest,concurrency,etag
|
Etag can be implemented in many ways, not just last updated time. If you choose to implement the Etag purely based on last updated time, then why not just use the Last-Modified header?
If you were to encode more information into the Etag about the underlying resource, you wouldn't be susceptible to the race condition that you've outlined above.
The only fool proof solution I can think of is to also make the database perform the check, in the update query for example. Am I missing something?
That's your answer.
Another option would be to add a version to each of your resources which is incremented on each successful update. When updating a resource, specify both the ID and the version in the WHERE. Additionally, set version = version + 1. If the resource had been updated since the last request then the update would fail as no record would be found. This eliminates the need for locking.
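With the Django ORM, the version check and the write collapse into one atomic UPDATE; the Resource model and exception name below are illustrative:
from django.db.models import F

class StaleUpdate(Exception):
    """Raised when the row changed since the client last read it."""

def save_resource(resource_id, client_version, new_data):
    updated = (Resource.objects  # Resource is an assumed model with a `version` field
               .filter(pk=resource_id, version=client_version)
               .update(data=new_data, version=F('version') + 1))
    if updated == 0:
        raise StaleUpdate()  # map this to an HTTP 409/412 in the view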
| 0 | 1 | 0 | 0 |
2015-12-23T03:13:00.000
| 3 | 0.066568 | false | 34,428,046 | 0 | 0 | 1 | 3 |
Maybe I'm overlooking something simple and obvious here, but here goes:
So one of the features of the Etag header in an HTTP request/response is to enforce concurrency, namely so that multiple clients cannot override each other's edits of a resource (normally when doing a PUT request). I think that part is fairly well known.
The bit I'm not so sure about is how the backend/API implementation can actually implement this without having a race condition; for example:
Setup:
RESTful API sits on top of a standard relational database, using an ORM for all interactions (SQL Alchemy or Postgres for example).
Etag is based on 'last updated time' of the resource
Web framework (Flask) sits behind a multi-threaded/multi-process webserver (nginx + gunicorn) so it can process multiple requests concurrently.
The problem:
Client 1 and 2 both request a resource (get request), both now have the same Etag.
Both Client 1 and 2 send a PUT request to update the resource at the same time. The API receives the requests, proceeds to use the ORM to fetch the required information from the database, then compares the request Etag with the 'last updated time' from the database... they match, so each is a valid request. Each request continues on and commits the update to the database.
Each commit is a synchronous/blocking transaction so one request will get in before the other and thus one will override the others changes.
Doesn't this break the purpose of the Etag?
The only fool-proof solution I can think of is to also make the database perform the check, in the update query for example. Am I missing something?
P.S Tagged as Python due to the frameworks used but this should be a language/framework agnostic problem.
|
How do I share some session variables across Django Subdomain sessions
| 34,428,403 | 0 | 0 | 211 | 0 |
python,django
|
It depends on how you are persisting your sessions. If you are using cookies to persist your sessions, which seems likely, and you aren't willing to use a .site.com cookie domain, then you need to offload your session storage to something like Redis or some other key/value-store, server-agnostic option.
| 0 | 0 | 0 | 0 |
2015-12-23T03:50:00.000
| 1 | 0 | false | 34,428,351 | 0 | 0 | 1 | 1 |
I have a Django app that has wildcard subdomains. The user has multiple login sessions across these subdomains. For example, he goes to sd1.site.com and logs in(HTTP POST request to sd1.site.com/login/) with credentials username1 and password1. This creates a session for the user on sd1.site.com. He then goes to sd2.site.com and logs in with credentials username2 and password2. This creates a session for the user on sd2.site.com.
My end goal is to tell sd1.site.com that the user is logged in from sd2.site.com as well. My plan is to store a session variable called 'domains_logged_in' with value ['sd1','sd2']. Both sd1 and sd2 should be able to access 'domains_logged_in'.
Setting SESSION_COOKIE_DOMAIN = '.site.com' is not an option as it makes it difficult to manage multiple sessions and is not entirely secure. Am I missing something?
|
Should I take steps to ensure a Django app can scale before writing it?
| 34,440,072 | 1 | 0 | 132 | 0 |
python,angularjs,django,postgresql,python-3.x
|
I don't think you need to start worrying about the setup right away; I would discourage premature optimization. Rather, run the app in production and profile it. See what affects the performance when you hit scale; then you will know what the bottleneck is.
| 0 | 0 | 0 | 0 |
2015-12-23T16:18:00.000
| 3 | 0.066568 | false | 34,439,775 | 0 | 0 | 1 | 2 |
So, I'm looking at writing an app with python2 django(-rest-framework), postgres and angular.
I'm aware there are lots of things that can be done
multi-server setup behind load balancer
DB replication/sharding?
caching (in various ways)
swapping DRF serialiser for serpy
running on python3
running on pypy
my question is: which of these (or other things) should really be done right at the start of the project?
|
Should I take steps to ensure a Django app can scale before writing it?
| 34,440,311 | 1 | 0 | 132 | 0 |
python,angularjs,django,postgresql,python-3.x
|
The first and main things you have to get right are a clean, correct DB schema and clear, readable, correctly factored (DRY, unless it's accidental duplication) and decoupled code. If you know how to design a relational DB schema and learn to use Python and Django properly, you shouldn't have many problems so far; and if you get both of these things right, it will (well, it should) be easy to scale: by adding caching where needed (Redis, Memcached, or an intermediary NoSQL document database storing "pre-processed" versions of your frequently accessed data), adding servers, load balancing, etc., depending on your application's needs. Django is built to scale easily, and unless you do stupid things, it does scale easily.
| 0 | 0 | 0 | 0 |
2015-12-23T16:18:00.000
| 3 | 0.066568 | false | 34,439,775 | 0 | 0 | 1 | 2 |
So, I'm looking at writing an app with python2 django(-rest-framework), postgres and angular.
I'm aware there are lots of things that can be done
multi-server setup behind load balancer
DB replication/sharding?
caching (in various ways)
swapping DRF serialiser for serpy
running on python3
running on pypy
my question is: which of these (or other things) should really be done right at the start of the project?
|
How to remove users from chat in odoo8?
| 34,459,305 | 0 | 1 | 354 | 0 |
python,xml,openerp
|
You can set chat access rules via the security section in your im_chat addon folder (/openerp/addons/im_chat/security).
| 0 | 0 | 0 | 0 |
2015-12-24T10:11:00.000
| 2 | 0 | false | 34,451,066 | 0 | 0 | 1 | 1 |
I just want to ask how to remove users from instant messaging in Odoo. These are users who don't belong to any of the groups in my module. Please help me out!
Thanks in advance
|
Can celery assign task to specify worker
| 34,469,957 | -2 | 8 | 12,753 | 0 |
python,celery
|
Just to answer your second question: CELERY_TASK_RESULT_EXPIRES is the time in seconds that the result of the task is persisted. So after a task is over, its result is saved into your result backend and kept there for the amount of time specified by that parameter. That is used when a task result might be accessed by different callers.
It probably has nothing to do with your problem. As for the first question, as already stated, you have to use multiple queues. However, be aware that you cannot assign a task to a specific worker process, just to a specific worker, which will then assign it to one of its worker processes.
| 0 | 1 | 0 | 0 |
2015-12-26T02:33:00.000
| 2 | -0.197375 | false | 34,468,024 | 0 | 0 | 1 | 1 |
Celery will send tasks to idle workers.
I have a task that runs every 5 seconds, and I want this task to only be sent to one specific worker.
Other tasks can share the leftover workers.
Can Celery do this?
And I want to know what this parameter is: CELERY_TASK_RESULT_EXPIRES
Does it mean that the task will not be sent to a worker in the queue?
Or does it stop the task if it runs too long?
|
Multi-master database replication with Django webapp and MySQL
| 34,841,926 | 0 | 0 | 1,628 | 1 |
python,mysql,django,multi-master-replication
|
Your idea of the router is great! I would add that you need to automatically detect whether a database is slow or down. You can detect that by the response time and by connection/read/write errors. If this happens, you exclude this database from your round-robin list for a while, trying to connect back to it every now and then to detect whether the database is alive again.
In other words, the round-robin list grows and shrinks dynamically depending on the health status of your database machines.
Another important note is that, luckily, you don't need to maintain a round-robin list common to all the web servers. Each web server can keep its own copy of the round-robin list and its own state of inclusion and exclusion of databases in this list. This is because a database server may be reachable from one web server but not from another due to local network problems.
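A bare-bones sketch of such a router; the health checking itself is not shown, and the alias list would be rebuilt as checks pass or fail:
import itertools

class MultiMasterRouter(object):
    """Round-robins reads and writes over the masters this web server
    currently believes are healthy."""

    def __init__(self):
        self.healthy = ['master1', 'master2']  # aliases from settings.DATABASES
        self._cycle = itertools.cycle(self.healthy)  # rebuild when `healthy` changes

    def db_for_read(self, model, **hints):
        return next(self._cycle)

    def db_for_write(self, model, **hints):
        return next(self._cycle)

    def allow_relation(self, obj1, obj2, **hints):
        return True  # both databases hold the same mirrored data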
| 0 | 0 | 0 | 0 |
2015-12-26T02:34:00.000
| 1 | 0 | false | 34,468,030 | 0 | 0 | 1 | 1 |
I am working on scaling out a webapp and providing some database redundancy for protection against failures and to keep the servers up when updates are needed. The app is still in development, so I have chosen a simple multi-master redundancy with two separate database servers to try and achieve this. Each server will have the Django code and host its own database, and the databases should be as closely mirrored as possible (updated within a few seconds).
I am trying to figure out how to set up the multi-master (master-master) replication between databases with Django and MySQL. There is a lot of documentation about setting it up with MySQL only (using various configurations), but I cannot find any for making this work from the Django side of things.
From what I understand, I need to approach this by adding two database entries in the Django settings (one for each master) and then write a database router that will specify which database to read from and which to write from. In this scenario, both databases should accept both reads and writes, and writes/updates should be mirrored over to the other database. The logic in the router could simply use a round-robin technique to decide which database to use. From there on, further configuration to set up the actual replication should be done through MySQL configuration.
Does this approach sound correct, and does anyone have any experience with getting this to work?
|
Django uwsgi subprocess and permissions
| 34,545,562 | 0 | 0 | 240 | 0 |
python,django,permissions,uwsgi,cherokee
|
As I said in my comments, this issue was related to supervisord. I solved it by assigning the right path and user in the "environment" variable of supervisord's config file, along these lines:
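For reference, the relevant part of a supervisord program section looks roughly like this (all paths and the user name below are placeholders):
[program:mysite]
command=/home/deploy/env/bin/uwsgi --ini /home/deploy/mysite/uwsgi.ini
directory=/home/deploy/mysite
user=deploy
environment=PATH="/home/deploy/env/bin:/usr/bin:/bin",HOME="/home/deploy",USER="deploy"
With the environment and user set this way, files the app creates are owned by that user instead of root.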
| 0 | 1 | 0 | 1 |
2015-12-26T11:52:00.000
| 1 | 0 | false | 34,471,080 | 0 | 0 | 1 | 1 |
I'm trying to generate PDF file from Latex template. I've done it in development environment (running python manage.py straight from eclipse)... but I can't make it work into the server, which is running using cherokee and uwsgi.
We have realized that open(filename) creates a file owned by root (and the root group). This doesn't happen in the development environment... and the strangest thing about this issue is that somewhere else in our code we also create a text file (LaTeX's input is a text file too), and it is created with the user Cherokee is supposed to use, not root!
What happened? How can we fix it?
We are running this code on ubuntu linux and a virtual environment both in development and production.
We started following some instructions to do it using python's temporary file and folder creation functions, but we thought that it could be something related with them, and created them "manually" in order to try to solve this issue... but it didn't work.
|
Unable to correctly restore postgres data: I get the same error I usually get if I haven't run syncdb and migrate
| 34,480,125 | 1 | 0 | 119 | 1 |
python,django,database,postgresql,database-migration
|
Try those same steps WITHOUT running syncdb and migrate at all. So overall, your steps will be:
heroku pg:backups capture
curl -o latest.dump `heroku pg:backups public-url`
scp -P latest.dump [email protected]:/home/myuser
drop database mydb;
create database mydb;
pg_restore --verbose --clean --no-acl --no-owner -U myuser -d mydb latest.dump
| 0 | 0 | 0 | 0 |
2015-12-26T15:18:00.000
| 1 | 1.2 | true | 34,472,609 | 0 | 0 | 1 | 1 |
I have a Django app with a postgres backend hosted on Heroku. I'm now migrating it to Azure. On Azure, the Django application code and postgres backend have been divided over two separate VMs.
Everything's set up, I'm now at the stage where I'm transferring data from my live Heroku website to Azure. I downloaded a pg_dump to my local machine, transferred it to the correct Azure VM, ran syncdb and migrate, and then ran pg_restore --verbose --clean --no-acl --no-owner -U myuser -d mydb latest.dump. The data got restored (11 errors were ignored, pertaining to 2 tables that get restored, but which my code now doesn't use).
When I try to access my website, I get the kind of error I usually see if I haven't run syncdb and migrate:
Exception Type: DatabaseError
Exception Value: relation "user_sessions_session" does not exist
LINE 1: ...last_activity", "user_sessions_session"."ip" FROM "user_sess...
Exception Location: /home/myuser/.virtualenvs/myenv/local/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py in execute, line 54
Can someone who has experienced this before tell me what I need to do here? It's acting as if the database doesn't exist and I had never run syncdb. When I use psql, I can actually see the tables and the data in them. What's going on? Please advise.
|
Django determine if client hasn't made a request in X seconds
| 34,477,073 | 1 | 0 | 54 | 0 |
android,django,python-3.x
|
You can try using Django's cache mechanism (backed by either memcached or Redis) to store the timestamp of the last communication for a given Android app client, with its ID as the cache key and an expiration time of whatever you want the timeout to be.
Set up like this, you can simply check whether the cache has a record for the current Android app's ID to determine whether it has stopped sending heartbeats.
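A minimal sketch of both ends of that idea; the URL shapes, key scheme and 30-second timeout are assumptions:
from django.core.cache import cache
from django.http import HttpResponse, JsonResponse

HEARTBEAT_TIMEOUT = 30  # seconds of silence before the app counts as dead

def heartbeat(request, client_id):
    # Hit by the Android app; the entry silently expires if the app stops calling.
    cache.set('heartbeat:%s' % client_id, True, HEARTBEAT_TIMEOUT)
    return HttpResponse('ok')

def status(request, client_id):
    # Polled by the client-side page to see whether the app is still alive.
    alive = cache.get('heartbeat:%s' % client_id) is not None
    return JsonResponse({'alive': alive})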
| 0 | 0 | 0 | 0 |
2015-12-26T17:20:00.000
| 1 | 1.2 | true | 34,473,506 | 0 | 0 | 1 | 1 |
I have an Android app sending requests to a Django back-end asking whether it should perform a certain operation. These act as heartbeats. There is a client-side page that allows the user to tell the Android app to perform those operations. However, I would like the client-side page to know whether the phone app has died for some unexpected reason or has stopped sending the server heartbeats.
Is there a way in Django to add a timer to a view such that a signal is triggered if the client doesn't send a request within X seconds? Is there an Android WebSockets library for Django that would do this better?
|
Django way to modify a database table using the contents of another table
| 34,477,438 | 1 | 0 | 884 | 1 |
python,mysql,django
|
I am pretty sure there is no built-in way for something this specific. Finding single words in a text is, on its own, quite a complex task if you take into consideration misspelled words, hyphenated words, quotes, all sorts of punctuation and Unicode letters.
Your best bet would be to run a regex over each text and save the matches to the second model manually, for example:
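A rough sketch; the VocabularyWord model and its field names are made up, and the regex is deliberately naive:
import re
from myapp.models import VocabularyWord  # hypothetical "second table" model

def extract_words(text_record):
    # This pattern ignores hyphenation, Unicode letters etc.; refine as needed.
    words = set(re.findall(r"[A-Za-z']+", text_record.content.lower()))
    VocabularyWord.objects.bulk_create(
        [VocabularyWord(source=text_record, word=w) for w in sorted(words)]
    )
You can call this from your Django admin action for each selected record, so both tables are handled in the same action without a manual MySQL connection.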
| 0 | 0 | 0 | 0 |
2015-12-27T02:18:00.000
| 2 | 0.099668 | false | 34,477,062 | 0 | 0 | 1 | 1 |
Edited to clarify my meaning:
I am trying to find a method using a Django action to take data from one database table and then process it into a different form before inserting it into a second table. I am writing a kind of vocabulary dictionary which extracts data about students' vocabulary from their classroom texts. To do this I need to be able to take the individual words from the table field containing the content and then insert the words into separate rows in another table. I have already written the code to extract the individual words from the record in the first database table, I just need a method for putting it into a second database table as part of the same Django action.
I have been searching for an answer for this, but it seems Django actions are designed to handle the data for only one database table at a time. I am considering writing my own MySQL connection to inject the words into the second table in the database. I thought I would write here first though to see if anyone knows if I am missing a built-in way to do this in Django.
|
What is the difference between get_all_reserved_instances and get_all_reserved_instances_offerings?
| 34,494,713 | 2 | 0 | 178 | 0 |
python,amazon-web-services,amazon-s3,boto,boto3
|
The get_all_reserved_instance_offerings method in boto returns a list of all reserved instance types that are available for purchase. So, if you want to purchase reserved instances you would look through the list of offerings, find the instance type, etc. that you want and then you would be able to purchase that offering with the purchase_reserved_instance_offering method or via the AWS console.
So, perhaps a simple way to say it is get_all_reserved_instance_offerings tells you what you can buy and get_all_reserved_instances tells you what you have already bought.
| 0 | 0 | 1 | 0 |
2015-12-28T11:54:00.000
| 1 | 1.2 | true | 34,493,061 | 0 | 0 | 1 | 1 |
Both belong to boto.ec2. From the documentation I found that get_all_reserved_instances returns all reserved instances, but I am not clear about get_all_reserved_instances_offerings. What is meant by an "offering"?
One other thing that I want to know: what is recurring_charges?
Please clarify.
|
Celery Tasks with eta get removed from RabbitMQ
| 35,126,618 | 1 | 9 | 1,055 | 0 |
python,django,multithreading,celery
|
As far as I know, Celery does not rely on RabbitMQ's scheduled queues; it implements ETA/countdown internally.
It seems that you have enough workers that are able to fetch enough messages and schedule them internally.
Mind that you don't need 200 workers. You have the prefetch multiplier set to the default value, so you need fewer.
| 0 | 1 | 0 | 0 |
2015-12-28T14:25:00.000
| 1 | 0.197375 | false | 34,495,318 | 0 | 0 | 1 | 1 |
I'm using Django 1.6, RabbitMQ 3.5.6, celery 3.1.19.
There is a periodic task which runs every 30 seconds and creates 200 tasks with given eta parameter. After I run the celery worker, slowly the queue gets created in RabbitMQ and I see around 1200 scheduled tasks waiting to be fired. Then, I restart the celery worker and all of the waiting 1200 scheduled tasks get removed from RabbitMQ.
How I create tasks:
my_task.apply_async((arg1, arg2), eta=my_object.time_in_future)
I run the worker like this:
python manage.py celery worker -Q my_tasks_1 -A my_app -l
CELERY_ACKS_LATE is set to True in Django settings. I couldn't find any possible reason.
Should I run the worker with a different configuration/flag/parameter? Any idea?
|
Flask using Nginx?
| 52,591,986 | 1 | 1 | 159 | 0 |
python,nginx,flask
|
On a development machine, Flask can be run without a webserver (nginx, Apache, etc.) or an application container (e.g. uWSGI, Gunicorn).
Things are different when you want to handle the load on a production server. For starters, Python is relatively slow when it comes to serving static content, whereas Apache/nginx do that very well.
When the application becomes big enough to be broken into multiple separate services, or has to be horizontally scaled, the reverse-proxy capabilities of nginx come in very handy.
In the architectures I build, nginx serves as the entry point where SSL is terminated, and the rest of the application sits behind a VPN and firewall.
Does this help?
| 0 | 0 | 0 | 0 |
2015-12-28T20:57:00.000
| 3 | 0.066568 | false | 34,500,669 | 0 | 0 | 1 | 1 |
I am a .NET developer coming over to Python. I have recently started using Flask and have some quick questions about serving files.
I noticed a lot of tutorials focus on nginx and Flask together. However, I am able to run Flask without nginx. I'm just curious as to why they are used together. Is nginx only for static files?
|
Delete migrations that haven't been migrated yet
| 61,643,148 | 0 | 0 | 512 | 1 |
python,django,django-migrations,django-1.9
|
Simply delete the 0005-0008 migration files from the migrations/ folder.
Regarding database tables, you won't need to delete anything from there if the migrations weren't applied. You can check the django_migrations table entries yourself to be sure.
| 0 | 0 | 0 | 0 |
2015-12-28T23:37:00.000
| 1 | 0 | false | 34,502,379 | 0 | 0 | 1 | 1 |
I set a key that I have now realized is wrong. It is set in migration 0005. The last migration I applied was 0004, and I'm now up to 0008. I want to rebuild the migrations with the current models.py against the current database schema. Migration 0005 is no longer relevant and has been deleted from models.py. Migration 0005 also causes an IntegrityError, so it cannot be applied without deleting data that shouldn't be deleted.
How do I get past migration 0005 so I can migrate?
|
How to install beautifulsoup into python3, when default dir is python2.7?
| 70,827,357 | 1 | 27 | 73,703 | 0 |
python,python-3.x,beautifulsoup,pip
|
I had a mismatch between the Python version and Beautiful Soup. I was installing the project
Th3Jock3R/LinuxGSM-Arma3-Mod-Update
on a dedicated CentOS 8 Linux Arma 3 server. Python 3 and beautifulsoup4 need to match, so I updated Python 3, removed the Beautiful Soup files manually, and re-installed it with sudo yum install python3-beautifulsoup4 (note the number 3). Works. Then I pointed the directories in Th3Jock3R's script (A3_SERVER_FOLDER = "" and A3_SERVER_DIR = "/home/arma3server{}".format(A3_SERVER_FOLDER)), placed the script in the same folder /home/arma3server, and ran it with python3 update.py. In this folder there is also a new folder called 'modlists'. How fast the mods load now blows my mind. -Bob-
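More generally, to install a package into a specific interpreter you can invoke pip through that interpreter:
python3 -m pip install beautifulsoup4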
| 0 | 0 | 0 | 0 |
2015-12-29T08:59:00.000
| 6 | 0.033321 | false | 34,507,744 | 1 | 0 | 1 | 2 |
I have both Python 2.7 and Python 3.5 installed. When I type pip install beautifulsoup4 it tells me that it is already installed in the python2.7/site-packages directory.
But how do I install it into the python3 dir?
|
How to install beautifulsoup into python3, when default dir is python2.7?
| 63,598,946 | 0 | 27 | 73,703 | 0 |
python,python-3.x,beautifulsoup,pip
|
If you are on Windows, this works for Python 3 as well:
py -m pip install bs4
| 0 | 0 | 0 | 0 |
2015-12-29T08:59:00.000
| 6 | 0 | false | 34,507,744 | 1 | 0 | 1 | 2 |
I have both Python 2.7 and Python 3.5 installed. When I type pip install beautifulsoup4 it tells me that it is already installed in the python2.7/site-packages directory.
But how do I install it into the python3 dir?
|
How to upload files to ec2 instance through flask
| 34,514,724 | 1 | 0 | 747 | 0 |
python,amazon-web-services,amazon-ec2,flask
|
You will not be able to upload directly to the /dev/xvda/upload/hello.txt path, as /dev/xvda is a block device (a raw hard drive), not a mounted filesystem.
You will need to use a path like /upload.
It is likely you are running into permission issues with the /upload folder. As a test I would suggest using the /tmp/ folder for your uploads; that should have open file permissions. If that works, then you know it was permission issues preventing /upload from working. To make the /upload folder work, you will need to chown it to the same user that your Flask app runs as. (There are other ways to make it work, but this is probably the easiest.)
chown flask_user /upload
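A minimal upload view along these lines (a sketch; the form field name "file" is an assumption):
import os
from flask import Flask, request
from werkzeug.utils import secure_filename

app = Flask(__name__)
UPLOAD_DIR = "/tmp"  # world-writable, good for ruling out permission problems

@app.route("/upload", methods=["POST"])
def upload():
    f = request.files["file"]
    path = os.path.join(UPLOAD_DIR, secure_filename(f.filename))
    f.save(path)
    return "saved to " + path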
| 0 | 0 | 0 | 0 |
2015-12-29T15:56:00.000
| 1 | 1.2 | true | 34,514,532 | 0 | 0 | 1 | 1 |
I have a web app that has to upload a file from the local system to a Flask app on an EC2 instance. I defined the upload path, and when I access it I get an IOError saying:
IOError: [Errno 20] Not a directory: '/dev/xvda/upload/hello.txt'
I've also tried using only /upload.
Neither of them works. I created the folder on the instance using the mkdir command.
|
Completing Spotify Authorization Code Flow via desktop application without using browser
| 34,520,316 | 2 | 6 | 1,073 | 0 |
python,api,heroku,oauth-2.0,spotify
|
I once ran into a similar issue with Google's Calendar API. The app was pretty low-importance, so I cobbled a solution together by running through the auth locally in my browser, finding the response token, and manually copying it over into an environment variable on Heroku. The downside, of course, was that tokens are set to auto-expire (I believe Google Calendar's was set to 30 days), so periodically the app stopped working and I had to run through the auth flow and copy the key over again. There might be a way to automate that.
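If you do automate it, here is a sketch of refreshing a stored token without a browser, assuming spotipy's oauth2 module (the env var names are assumptions; the refresh token would be captured once via the manual browser flow):
import os
from spotipy.oauth2 import SpotifyOAuth

oauth = SpotifyOAuth(
    client_id=os.environ["SPOTIPY_CLIENT_ID"],
    client_secret=os.environ["SPOTIPY_CLIENT_SECRET"],
    redirect_uri=os.environ["SPOTIPY_REDIRECT_URI"],
    scope="playlist-modify-public",
)
token_info = oauth.refresh_access_token(os.environ["SPOTIFY_REFRESH_TOKEN"])
access_token = token_info["access_token"]  # use this for playlist calls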
Good luck!
| 0 | 1 | 1 | 0 |
2015-12-29T22:40:00.000
| 2 | 0.197375 | false | 34,520,233 | 0 | 0 | 1 | 1 |
Working on a small app that takes a Spotify track URL submitted by a user in a messaging application and adds it to a public Spotify playlist. The app is running with the help of spotipy python on a Heroku site (so I have a valid /callback) and listens for the user posting a track URL.
When I run the app through command line, I use util.prompt_for_user_token. A browser opens, I move through the auth flow successfully, and I copy-paste the provided callback URL back into terminal.
When I run this app and attempt to add a track on the messaging application, it does not open a browser for the user to authenticate, so the auth flow never completes.
Any advice on how to handle this? Can I auth once via terminal, capture the code/token and then handle the refreshing process so that the end-user never has to authenticate?
P.S. can't add the tag "spotipy" yet but surprised it was not already available
|
Python Pydub AudioSegment laggy export
| 34,548,666 | 3 | 2 | 674 | 0 |
python,amazon-s3,pydub
|
The delay is caused by the transcoding step (converting the raw data to mp3). You can avoid that by exporting WAV files.
A WAV file is essentially just the raw data with some header information at the beginning so exporting with format="wav" will avoid the need to transcode, and should be significantly faster.
However, without any compression, the files will be larger (like 40MB instead of 5MB). You'll probably lose more than 2 seconds due to transferring 5 to 10 times more data over the network.
Some codecs are slower than others, so you may want to experiment with other encodings to strike a different speed/file-size balance than mp3 and wav do (or you could try regular file compression like gzip, bz2, or a "zip" file on your WAV output).
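For example, a WAV export with pydub (a sketch; file names are placeholders):
from pydub import AudioSegment

seg = AudioSegment.from_file("input.mp3")
clip = seg[:30 * 1000]  # pydub slices in milliseconds
clip.export("clip.wav", format="wav")  # header + raw data, no transcoding step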
| 0 | 0 | 1 | 0 |
2015-12-31T11:50:00.000
| 1 | 1.2 | true | 34,546,117 | 0 | 0 | 1 | 1 |
I need to upload an AudioSegment object to S3 after doing some editing. What I'm doing is editing the audio, exporting it, and then sending it to S3.
However, exporting to mp3 takes about 2 seconds for a 2-minute song.
So, I'm just wondering if it's possible to send the file to S3 without export it. Note: I see there is raw_data, however, I need to be able to play the saved clip.
|
nested-inlines and django-suit
| 34,578,120 | 0 | 1 | 700 | 0 |
python,django,django-admin,django-suit
|
Nested inlines are something that isn't supported across the board, as they're not really part of the Django forms system (which is what the Django admin is based on). I'm sure this may change in the future, but for now the simplest thing you can do is use multiple admins. It means saving in one form, then going into another to add data that links back to what you've just saved, but you'll probably find that more functionally reliable than what might end up being a hacky way of getting nested inlines to work.
You could create your own workflow by overriding some of the model admins' view methods: if the admin user has just created a student, they'd be redirected to the admin for assigning books to that student, and so on. You can edit the change templates for each model to add extra buttons, so you could have "Save and Manage Books" alongside the standard array of "Save" buttons in the student model admin, etc.
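A sketch of the redirect idea, using ModelAdmin's response_add hook (the admin URL and query parameter are assumptions for your app):
from django.contrib import admin
from django.http import HttpResponseRedirect

class StudentAdmin(admin.ModelAdmin):
    def response_add(self, request, obj, post_url_continue=None):
        # after creating a student, go straight to adding a book for them
        return HttpResponseRedirect("/admin/myapp/book/add/?student=%d" % obj.pk)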
| 0 | 0 | 0 | 0 |
2016-01-03T12:24:00.000
| 1 | 0 | false | 34,576,501 | 0 | 0 | 1 | 1 |
I have installed django-suit for my admin; the main reason was Django Suit tabs. My model contains Students; each student can have multiple Projects and multiple Books, and each book or project has multiple specific deadlines (in the future), so I need nested inlines. I found plenty on PyPI and tested some.
I have some questions:
Why isn't there a built-in nested inline for Django? Is there a reasonable explanation?
I had problems integrating nested-inline packages with Django Suit; does anyone have experience doing that?
Is there an alternative to using nested-inline packages? (I found one that creates a link to the second-level model, but it messes up the workflow.)
The admin user wants to create a student, then add, for example, two projects and two books, then for each book add 10 reports (with deadlines). Is using inlines the only way of doing that, or are there other ways?
|
Create SVG and save it to datastore(GAE + Python)
| 34,583,572 | 4 | 3 | 260 | 0 |
python-2.7,google-app-engine,google-cloud-datastore
|
Create your "file" in memory (use e.g io.BytesIO) and then use the getvalue method of the in-memory "file" to get the blob of bytes for the datastore. Do note that a datastore entity is limited to a megabyte or so, thus it's quite possible that some SVG file might not fit in that space -- in which case, you should look into Google Cloud Storage. But, that's a different issue.
| 0 | 1 | 0 | 0 |
2016-01-04T00:51:00.000
| 1 | 1.2 | true | 34,583,385 | 0 | 0 | 1 | 1 |
I have a question: I need to create a sequence of SVG files and upload them to the datastore. I know how to create the SVGs, but my code saves them to the filesystem, and I understand that GAE cannot use that.
So I don't know how to create them and put them in the datastore.
|
How to find total number of reserved instances under particular tag?
| 34,590,510 | 2 | 4 | 986 | 0 |
python,amazon-web-services,amazon-ec2,boto,boto3
|
There isn't a straight way. More precisely, a couple of your assumptions aren't quite right.
EC2 instances aren't directly tied to Reserved Instances. It is more of a non-technical, pure billing concept: at the end of the month, AWS counts the instance hours, checks them against the reserved instance hours, and discounts the bill. This way, no instance reservation is linked or associated with the EC2 instances that are running.
Reserved Instances don't support tagging. Only the EC2 instances have tagging support.
To answer your question on the approach, the following pseudo code would help (a boto sketch follows the list):
Get the list of Reserved Instances (instance platform, size, availability zone).
Get the list of EC2 instances and filter it by tags [group or name].
With the two retrieved lists (reservations and EC2 instances), matching each record on (instance platform, size, availability zone) gives you, for each reservation, the associated EC2 instances.
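A rough boto sketch of that matching (the region and tag values are assumptions):
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")
reserved = conn.get_all_reserved_instances()
instances = [i for r in conn.get_all_instances(filters={"tag:group": "mygroup"})
             for i in r.instances]

for ri in reserved:
    # match on type and availability zone, since there is no direct link
    matched = [i for i in instances
               if i.instance_type == ri.instance_type
               and i.placement == ri.availability_zone]
    print("%s %s %s matched=%d" % (ri.id, ri.instance_type,
                                   ri.availability_zone, len(matched)))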
| 0 | 0 | 1 | 0 |
2016-01-04T11:37:00.000
| 1 | 0.379949 | false | 34,590,285 | 0 | 0 | 1 | 1 |
I am querying AWS using boto ec2 in Python. First I find all reserved instances via get_all_reserved_instances; I can also find the total count of each instance_type via instance_count. I am trying to calculate the total number of reserved instances under tags.
E.g. we have two tags, group and name. I want to show the total number of reserved instances of a particular type (e.g. i2.xlarge) under the group tag. How do I do this? I did not find this in the AWS console either.
|
Django project structure in production
| 34,608,314 | 0 | 1 | 578 | 0 |
python,django
|
The reason some people put static files in /var/www is simply that that's where the default Apache configuration puts the DocumentRoot on Debian-based systems. But I wouldn't recommend doing that; much better to add an alias pointing at the STATICFILES dir inside your project itself.
As for permissions, your suggestion of using www-data and adding your users to that group is sensible. The recommendation to put the code under /srv was probably mine in the first place, but I think it's a good idea; it's an easily identifiable location which is probably not used by anything else.
You really shouldn't be using sqlite in production. Install a proper RDBMS; preferably Postgres, but otherwise MySQL if you must. Neither is very hard to configure.
| 0 | 0 | 0 | 0 |
2016-01-05T07:17:00.000
| 2 | 1.2 | true | 34,606,134 | 0 | 0 | 1 | 2 |
I'm ready to deploy my completely revamped Django website, and I'm trying to figure out the best folder structure to use. When I had it deployed last time I did it all wrong, even though it was functional, so this time I'm trying to do it the "right" way. Unfortunately, it seems sometimes like there are too many "right" ways for any one way to make sense, so I'm trying to get some clarity on that. I'm using a Linode VPS with Ubuntu 14.04 LTS and nginx. I've been looking through walkthroughs, forum threads, and answered stackoverflow questions, and amazingly, nothing has been able to answer my questions. So here goes.
I'm planning to have the Django root (the top level project root folder generated with startproject) live at /srv/www/<site name>/djcode. What permissions should I assign this folder, what user should own it, and how should the groups be set up? Theoretically, someone else will be helping me maintain this project in the near future, so it doesn't seem to make sense that I would chown it to my personal user, even though that was suggested in several posts that I saw. Would I let www-data own the folder and all the files and then add myself and my future collaborator into the www-data group? (I'm also very weak on my understanding of how groups work in Linux, so any pointers to clear explanations of that system would also be welcome.)
Backing up a bit, does it make sense for this to be where my Django code lives? I saw only one answer in a good hour or so of searching that had a suggestion for where the code could live (the rest of the examples had the most unhelpful path /path/to/django/root in place of an actual path), so I'm not entirely sure what is the right thing to do.
Also, I've noticed that people sometimes seem to use /var/www/ for static HTML files, which doesn't make sense. Isn't the point of /var that the file sizes could be subject to change? This then begs the question, though, of why we have a separate directory for files of variable size. I assume I'll want to put my sqlite database file in there somewhere, but where exactly would it live, and what would the advantages be of putting it there?
Thank you in advance!
|
Django project structure in production
| 34,608,455 | 0 | 1 | 578 | 0 |
python,django
|
Django is not PHP or ColdFusion; you shouldn't keep project files inside /var/www as root.
Read the chapter from Two Scoops of Django: Best Practices for Django 1.8 about project structure; you should have customized settings for each environment.
It's also a very good idea to use the same SQL engine in every environment: it avoids corrupting data and makes migrations stable.
If your project doesn't require an exotic DB, choose PostgreSQL. MySQL is another option, but MySQL doesn't support running migrations inside transactions (no transactional DDL).
| 0 | 0 | 0 | 0 |
2016-01-05T07:17:00.000
| 2 | 0 | false | 34,606,134 | 0 | 0 | 1 | 2 |
I'm ready to deploy my completely revamped Django website, and I'm trying to figure out the best folder structure to use. When I had it deployed last time I did it all wrong, even though it was functional, so this time I'm trying to do it the "right" way. Unfortunately, it seems sometimes like there are too many "right" ways for any one way to make sense, so I'm trying to get some clarity on that. I'm using a Linode VPS with Ubuntu 14.04 LTS and nginx. I've been looking through walkthroughs, forum threads, and answered stackoverflow questions, and amazingly, nothing has been able to answer my questions. So here goes.
I'm planning to have the Django root (the top level project root folder generated with startproject) live at /srv/www/<site name>/djcode. What permissions should I assign this folder, what user should own it, and how should the groups be set up? Theoretically, someone else will be helping me maintain this project in the near future, so it doesn't seem to make sense that I would chown it to my personal user, even though that was suggested in several posts that I saw. Would I let www-data own the folder and all the files and then add myself and my future collaborator into the www-data group? (I'm also very weak on my understanding of how groups work in Linux, so any pointers to clear explanations of that system would also be welcome.)
Backing up a bit, does it make sense for this to be where my Django code lives? I saw only one answer in a good hour or so of searching that had a suggestion for where the code could live (the rest of the examples had the most unhelpful path /path/to/django/root in place of an actual path), so I'm not entirely sure what is the right thing to do.
Also, I've noticed that people sometimes seem to use /var/www/ for static HTML files, which doesn't make sense. Isn't the point of /var that the file sizes could be subject to change? This then begs the question, though, of why we have a separate directory for files of variable size. I assume I'll want to put my sqlite database file in there somewhere, but where exactly would it live, and what would the advantages be of putting it there?
Thank you in advance!
|
what is the better way to implement scrapy if we have multiple sites?
| 34,612,388 | 2 | 0 | 297 | 0 |
python,python-2.7,scrapy
|
Different website, same data -> different spiders in the same project; both spiders can live in one project and use the same pipeline.
Same website -> same project.
Different website, different data -> different project.
Same website, different data -> use two parse functions wired together via callbacks.
| 0 | 0 | 1 | 0 |
2016-01-05T12:36:00.000
| 2 | 0.197375 | false | 34,611,880 | 0 | 0 | 1 | 2 |
If we have multiple sites with different HTML structures, what is the better way to implement Scrapy?
Should I create multiple spiders, one per site, in a single project?
Should I create multiple projects, one per site?
Or is there another way? Please explain.
|
what is the better way to implement scrapy if we have multiple sites?
| 34,612,562 | 1 | 0 | 297 | 0 |
python,python-2.7,scrapy
|
Usually you should create multiple spiders in one project, one for each website, but it depends.
A Scrapy spider decides how to jump from page to page, then applies a parser callback; the parser callback method extracts the data from a page. Because the pages are not the same, you need a parser callback method for each kind of page.
Websites usually have different sitemaps, therefore you need multiple spiders, one for each website, each deciding how to jump from page to page and which callbacks to apply to scrape each page.
Usually you don't need to create multiple projects for multiple websites, but this depends.
If your websites share some logical characteristics, put them in one project so they can use the same Scrapy settings. It is also easier this way; you can create base spiders and inherit common methods.
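A minimal sketch of two spiders living in one project (names, URLs, and selectors are placeholders):
import scrapy

class SiteASpider(scrapy.Spider):
    name = "site_a"
    start_urls = ["http://site-a.example.com/"]

    def parse(self, response):
        # site-specific extraction; items flow into the shared pipeline
        yield {"title": response.css("h1::text").extract_first()}

class SiteBSpider(scrapy.Spider):
    name = "site_b"
    start_urls = ["http://site-b.example.com/"]

    def parse(self, response):
        yield {"title": response.xpath("//title/text()").extract_first()}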
| 0 | 0 | 1 | 0 |
2016-01-05T12:36:00.000
| 2 | 1.2 | true | 34,611,880 | 0 | 0 | 1 | 2 |
If we have multiple sites with different HTML structures, what is the better way to implement Scrapy?
Should I create multiple spiders, one per site, in a single project?
Should I create multiple projects, one per site?
Or is there another way? Please explain.
|
Django Makemigrations and Migrate are slow
| 43,460,655 | 0 | 7 | 1,805 | 0 |
python,django,migration
|
This is a known issue with Django 1.8; unfortunately, the only solution supported by Django is to upgrade.
| 0 | 0 | 0 | 0 |
2016-01-05T15:42:00.000
| 1 | 0 | false | 34,615,600 | 0 | 0 | 1 | 1 |
Ever since the project updated to Django 1.8 (1.8.7 to be precise) from Django 1.7.6, makemigrations and migrate are super slow (it takes about 15 minutes to migrate around 10 migrations).
When I run 'manage.py migrate', 90% of the time is spent on 'Rendering model states...' before giving me 'DONE'.
Does anyone know why this is happening?
|
Kivy native spinbox number widget for insert number
| 34,631,685 | 1 | 0 | 547 | 0 |
python,html,kivy,qspinbox
|
You can use a Spinner widget.
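A minimal sketch (the value range is a placeholder):
from kivy.uix.spinner import Spinner

number_picker = Spinner(text="1", values=[str(n) for n in range(1, 11)])
# number_picker.text holds the selection as a string; int(number_picker.text) converts it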
| 1 | 0 | 0 | 0 |
2016-01-06T09:04:00.000
| 1 | 0.197375 | false | 34,629,190 | 0 | 0 | 1 | 1 |
I am making an app with Kivy and Python. Is there a native way to create a field with a spinbox to select a number, like the HTML input tag with type="number"? I see Kivy has checkboxes but not this number spinbox; should I use a normal Kivy text input and get the number from that?
|
How can I use scan/scroll with pagination and sort in ElasticSearch?
| 60,524,939 | 3 | 1 | 3,478 | 0 |
python-2.7,sorting,elasticsearch,scroll,pagination
|
When using the elasticsearch.helpers.scan function, you need to pass preserve_order=True to enable sorting.
(Tested using elasticsearch==7.5.1)
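A sketch of a sorted scan (the index and field names are assumptions):
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

es = Elasticsearch()
hits = scan(
    es,
    index="history",
    query={"query": {"match_all": {}}, "sort": [{"date": "desc"}]},
    preserve_order=True,  # keeps the sort order, at some cost to scroll efficiency
)
for hit in hits:
    print(hit["_source"])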
| 0 | 0 | 0 | 0 |
2016-01-07T14:33:00.000
| 2 | 0.291313 | false | 34,657,738 | 0 | 0 | 1 | 1 |
I have an ES DB storing history records from a process I run every day. Because I want to show only 20 records per page in the history (ordered by date), I was using pagination (size + from_) combined with scroll, which worked just fine. But when I wanted to use sort in the query it didn't work; it turns out scroll and sort don't work together. Looking for an alternative, I tried the ES helper scan, which works fine for scrolling and sorting the results, but with this solution pagination doesn't seem to work, which I don't understand, since the API says that scan passes all the parameters to the underlying search function. So my question is whether there is any method to combine the three options.
Thanks,
Ruben
|
Python - On form submit send email and save record in database taking huge time
| 34,673,727 | 0 | 1 | 427 | 1 |
python,django,forms,performance,amazon-s3
|
The usual solution for tasks that take too long to handle synchronously, and that can be handled asynchronously, is to delegate them to an async queue like Celery.
In your case, saving the form's data to the DB should be quite fast, so I would not bother with that part, but moving the uploaded file to S3 and sending mails are good candidates.
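A sketch of offloading the email step to a Celery task (the task name and addresses are placeholders):
from celery import shared_task
from django.core.mail import send_mail

@shared_task
def send_confirmation(to_addr):
    send_mail("Form received", "Thanks!", "noreply@example.com", [to_addr])

# in the view, after saving the form:
# send_confirmation.delay(form.cleaned_data["email"])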
| 0 | 0 | 0 | 0 |
2016-01-08T09:28:00.000
| 2 | 0 | false | 34,673,515 | 0 | 0 | 1 | 1 |
I am writing a form submit in my application, written in Python/Django. The form has an attachment (up to 3MB) uploaded. On submit it has to save the attachment in AWS S3, save the other data in the database, and also send emails.
This form submit takes too much time and the UI hangs.
Is there any other way to do this in Python/Django?
|
Unable to do bulk indexing for large file in elasticsearch
| 36,937,080 | 2 | 3 | 2,316 | 0 |
java,python,elasticsearch
|
You have to increase the maximum HTTP content length, which defaults to 100MB.
Go to elasticsearch.yml in the config folder and add/update:
http.max_content_length: 300M
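Alternatively, splitting the payload into smaller batches avoids the limit entirely; a sketch with the elasticsearch-py helpers (the index/type names and the docs iterable are placeholders):
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch()
actions = ({"_index": "myindex", "_type": "doc", "_source": doc} for doc in docs)
bulk(es, actions, chunk_size=500)  # many small requests instead of one huge one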
| 0 | 0 | 0 | 0 |
2016-01-08T21:11:00.000
| 1 | 1.2 | true | 34,686,119 | 0 | 1 | 1 | 1 |
I am trying to do bulk indexing in Elasticsearch using Python for a big file (~800MB). However, every time I try, I get:
[2016-01-08 15:06:49,354][WARN ][http.netty ] [Marvel Man] Caught exception while handling client http traffic, closing connection [id: 0x2d26baec, /0:0:0:0:0:0:0:1:58923 => /0:0:0:0:0:0:0:1:9200]
org.jboss.netty.handler.codec.frame.TooLongFrameException: HTTP content length exceeded 104857600 bytes.
at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:169)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:135)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Can anyone please help me understand what is happening here, and how I can solve this issue?
|