Dataset schema (column, type, min, max):

Column                              Type            Min        Max
Title                               stringlengths   11         150
A_Id                                int64           518        72.5M
Users Score                         int64           -42        283
Q_Score                             int64           0          1.39k
ViewCount                           int64           17         1.71M
Database and SQL                    int64           0          1
Tags                                stringlengths   6          105
Answer                              stringlengths   14         4.78k
GUI and Desktop Applications        int64           0          1
System Administration and DevOps    int64           0          1
Networking and APIs                 int64           0          1
Other                               int64           0          1
CreationDate                        stringlengths   23         23
AnswerCount                         int64           1          55
Score                               float64         -1         1.2
is_accepted                         bool            2 classes
Q_Id                                int64           469        42.4M
Python Basics and Environment       int64           0          1
Data Science and Machine Learning   int64           0          1
Web Development                     int64           1          1
Available Count                     int64           1          15
Question                            stringlengths   17         21k
Django Check if Users are Logged In
21,296,452
0
0
339
0
python,django
You need to make sure their sessions expire after they log out. Then query the Django session model: to see who's online, match each friend against the active sessions in that model. Sorry, not much help code-wise.
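A minimal sketch of what that session-model query could look like; it assumes Django's database-backed sessions, where get_decoded() and the '_auth_user_id' key are standard:

```python
from django.contrib.sessions.models import Session
from django.utils import timezone

def get_online_user_ids():
    # only sessions that have not yet expired
    sessions = Session.objects.filter(expire_date__gte=timezone.now())
    user_ids = []
    for session in sessions:
        data = session.get_decoded()
        if '_auth_user_id' in data:
            user_ids.append(data['_auth_user_id'])
    return user_ids

# To show which friends are online, intersect this list with the
# current user's friend ids.
```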
0
0
0
0
2014-01-22T22:49:00.000
1
0
false
21,295,705
0
0
1
1
To be clear, I'm not trying to check if the user is authenticated. On my app, I want users to be able to see whether other users they are friends with are currently logged in or not. Can someone point me in the right direction?
How to avoid that a user has two open issues in Jira?
28,475,805
0
1
53
0
jira,python-jira
Technically, you can do that with the Behaviours plugin. It needs a Groovy script that looks up the reporter's other issues and transitions them to a paused status. However, I don't advise doing this, because you'll also need a carefully crafted workflow that supports your "Pause" transition in all statuses, a working Groovy script (which needs programming experience and intimate knowledge of how the JIRA API works), and another script that reopens the previous issue when the newest one is closed, etc. There are a lot of pitfalls.
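The lookup-and-pause half could also be prototyped from Python with the jira library the question tags mention; this is only a rough sketch, and the server URL, credentials, JQL user, and the "Pause" transition name are all placeholders:

```python
from jira import JIRA

jira = JIRA(server='https://jira.example.com', basic_auth=('bot', 'secret'))

# oldest first, so everything except the last (newest) issue gets paused
open_issues = jira.search_issues(
    'reporter = someuser AND status = Open ORDER BY created ASC')
for issue in open_issues[:-1]:
    for t in jira.transitions(issue):
        if t['name'] == 'Pause':
            jira.transition_issue(issue, t['id'])
```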
0
0
0
0
2014-01-23T20:48:00.000
1
0
false
21,319,006
0
0
1
1
How to avoid that a user has two open issues in Jira? Is it possible for Jira to treat issue management in this way: when a user already has an open issue and opens another one, the first one is automatically paused?
downloading or displaying BlobProperties from Google App Engine
21,323,348
0
0
97
0
python,google-app-engine
It entirely depends on what you are currently storing in the BlobProperty. Since it is typically used to store data with an upper size limit of 1 MB, I am assuming that you are using it for images or files under that limit. In all probability, you will want to either provide a link in your web application for the user to download the document, or, if it is an image, render it yourself in the web application (e.g. as a user's avatar or something).
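A minimal sketch of serving a BlobProperty entry back to the browser on App Engine; the Photo model, the route, and the 'image/png' content type are assumptions:

```python
import webapp2
from google.appengine.ext import ndb

class Photo(ndb.Model):
    data = ndb.BlobProperty()  # the raw image bytes, under 1 MB

class ServePhoto(webapp2.RequestHandler):
    def get(self, photo_id):
        photo = Photo.get_by_id(int(photo_id))
        self.response.headers['Content-Type'] = 'image/png'
        self.response.out.write(photo.data)

app = webapp2.WSGIApplication([(r'/photo/(\d+)', ServePhoto)])
```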
0
1
0
0
2014-01-24T01:28:00.000
2
0
false
21,322,741
0
0
1
1
How would I go about selecting and downloading or displaying individual entries from the Datastore, specifically if those entries contain a BlobProperty?
Bitcoinrpc connection to remote server
21,972,946
-1
0
1,399
0
python,django,heroku,rpc,bitcoin
You can use SSL with RPC to hide the password: set rpcssl=1 in bitcoin.conf.
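A hedged sketch of the remote connection, assuming the bitcoinrpc library's connect_to_remote accepts host, port, and use_https keyword arguments (as in jgarzik's python-bitcoinrpc); the host IP and credentials are placeholders, and use_https pairs with the rpcssl=1 server setting:

```python
import bitcoinrpc

conn = bitcoinrpc.connect_to_remote(
    'account', 'password',
    host='203.0.113.5',   # placeholder: public IP of the machine running bitcoind
    port=8332,
    use_https=True)       # matches rpcssl=1 in bitcoin.conf
print(conn.getbalance())
```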
0
0
0
0
2014-01-24T03:48:00.000
2
-0.099668
false
21,324,050
0
0
1
1
Hey, I was wondering if anyone knew how to connect to a bitcoin wallet located on another server with bitcoinrpc. I am running a web program made in Django and using a Python library called bitcoinrpc to make connections. When testing locally, I can use bitcoinrpc.connect_to_local(), or even bitcoinrpc.connect_to_remote('account','password'), and this works as long as the account and password match the values specified in my 'bitcoin.conf' file. I can then use the connection object to get values and do some tasks in my Django site. The third parameter in connect_to_local defaults to localhost. I was wondering: A) What do I specify for this third parameter in order to connect from my webserver to the wallet stored on my home comp (is it my IP address?) B) Because the wallet is on my PC and not some dedicated server, does that mean that my IP will change and I won't be able to access the wallet? C) The connection string is in the Django app, which is hosted on Heroku. Heroku apps are launched by pushing with git, but I believe it is to a private repository. Still, if anyone could see the first few lines of my 'view' they would have all they need to take my BTC (or, more accurately, mBTC). Anyone know how bad this is, or any ways to do BTC payments/movements in a more secure way? Thanks a lot.
Ironworker job done notification
21,348,604
-1
1
209
0
python,heroku,notifications,worker
Easiest way: push a message from the worker to your API containing the log output or anything else you need to have in your app.
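A minimal sketch of that push-back approach: the worker calls an endpoint on the web app when it finishes. The URL and payload are placeholders:

```python
import requests

def notify_app(job_id, result):
    # runs inside the IronWorker task, after the heavy work is done
    requests.post(
        'https://myapp.herokuapp.com/api/jobs/%s/done' % job_id,
        json={'result': result},
        timeout=10)
```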
0
0
0
1
2014-01-24T16:59:00.000
2
-0.099668
false
21,338,216
0
0
1
1
I'm writing a Python app which is currently hosted on Heroku. It is in an early development stage, so I'm using a free account with one web dyno. Still, I want my heavier tasks to be done asynchronously, so I'm using the IronWorker add-on. I have it all set up and it does the simplest jobs, like sending emails or anything that doesn't require any data being sent back to the application. The question is: how do I send the worker output back to my application from IronWorker? Or even better, how do I notify my app that the worker is done with the job? I looked at other Iron solutions like cache and message queue, but the only thing I can find is that I can explicitly ask for the worker state. Obviously I don't want my web service to poll the worker, because that kind of defeats the original purpose of moving the tasks to the background. What am I missing here?
Avoiding a 100 ms HTTP request slowing down the REST API it's called from
21,350,980
1
0
1,501
0
android,python,web,google-cloud-messaging,bottle
The problem is that your clients are waiting for your server to send the GCM push notifications, and there is no reason for that. You need to change your server-side code to process the API request, close the connection to the client, and only then send the push notifications.
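One way to do this in pure Python, as the question hoped: answer the API request immediately and hand the GCM call to a background worker thread. This is only a sketch; send_gcm_notification is a placeholder for the existing sending code.

```python
import threading
try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2

gcm_queue = queue.Queue()

def send_gcm_notification(payload):
    # placeholder for the existing ~100 ms call to Google's GCM server
    pass

def gcm_worker():
    while True:
        payload = gcm_queue.get()
        try:
            send_gcm_notification(payload)  # off the request path now
        finally:
            gcm_queue.task_done()

t = threading.Thread(target=gcm_worker)
t.daemon = True
t.start()

# Inside the Bottle route, instead of calling GCM directly:
# gcm_queue.put(payload)
```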
0
0
1
0
2014-01-25T10:39:00.000
3
0.066568
false
21,349,462
0
0
1
2
I'm developing a multiplayer Android game with push notifications using Google GCM. My web server has a REST API. Most of the requests sent to this API trigger a request to the Google GCM server to send a notification to the opponent. The thing is, on average a call to my API takes ~140 ms, and ~100 ms of that is due to the HTTP request sent to the Google server. What can I do to speed this up? I was thinking (I have full control of my server; my stack is Bottle/gunicorn/nginx) of creating an independent process with a database that works through a queue of GCM requests, but maybe there's a much simpler way to do it directly in Bottle or in pure Python.
Avoiding a 100 ms HTTP request slowing down the REST API it's called from
21,349,485
0
0
1,501
0
android,python,web,google-cloud-messaging,bottle
The best thing you can do is make all networking asynchronous, if you don't do this already. The issue is that there will always be users with a slow internet connection, and there isn't a generic way to bring them fast internet :/. Other than that, some ideas: send only a few small packets, or one big packet instead of many small ones (that's faster); use UDP over TCP, UDP being connectionless and naturally faster.
0
0
1
0
2014-01-25T10:39:00.000
3
0
false
21,349,462
0
0
1
2
I'm developing a multiplayer Android game with push notifications using Google GCM. My web server has a REST API. Most of the requests sent to this API trigger a request to the Google GCM server to send a notification to the opponent. The thing is, on average a call to my API takes ~140 ms, and ~100 ms of that is due to the HTTP request sent to the Google server. What can I do to speed this up? I was thinking (I have full control of my server; my stack is Bottle/gunicorn/nginx) of creating an independent process with a database that works through a queue of GCM requests, but maybe there's a much simpler way to do it directly in Bottle or in pure Python.
Best way to use ZeroRPC on Heroku server
35,026,440
0
5
464
0
python,heroku,flask
According to the Heroku spec you are supposed to listen on the single PORT given to your app in an environment variable. If your application needs only a single port (for the ZeroRPC), you might be in luck, but you should expect your ZeroRPC service to be exposed on port 80. Possible problems: I am not sure if Heroku allows protocols other than HTTP; it will try to connect to your application after it starts to test that it is up and running, and that check probably makes an HTTP request, which is likely to fail against a ZeroRPC service. Also, what about authentication of users? You would have to build some security into ZeroRPC itself or accept providing the service publicly to anonymous clients. Proposed steps: try providing the ZeroRPC service on the port Heroku gives you; and rather than setting up an HTTP proxy in front of ZeroRPC, check PyPI for "RPC" — there are a bunch of libraries that already serve RPC over HTTP.
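A sketch of the first proposed step, binding a ZeroRPC server to the port Heroku assigns; MyService is a placeholder:

```python
import os
import zerorpc

class MyService(object):
    def hello(self, name):
        return 'Hello, %s' % name

server = zerorpc.Server(MyService())
# Heroku passes the assigned port in the PORT environment variable
server.bind('tcp://0.0.0.0:%s' % os.environ.get('PORT', '4242'))
server.run()
```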
0
0
0
1
2014-01-26T04:03:00.000
1
0
false
21,359,542
0
0
1
1
We're using Heroku for historical reasons, and I have this awesome ZeroRPC-based server that I'd love to put up on the Heroku service. I'm a bit naive about exactly the constraints imposed by these 'cloud' platforms, but most do not allow opening an arbitrary socket. So I will either have to do some port-forwarding trick or place a web front-end (like Flask) in front to receive the requests and forward them to the ZeroRPC backend. The reason I haven't just done Flask/ZeroRPC is that it feels awkward (my front-end experience is basically zero), but I'm assuming I would set up RESTful routes and then just forward stuff to ZeroRPC... head scratch... Perhaps asking the question in a more open-ended way: I'm looking for suggestions on how best to deploy a ZeroRPC-based service on Heroku (btw, I know dotCloud/Docker uses ZeroRPC internally, but I'm also not sure if I can deploy my own ZeroRPC server on it).
Django / Python, continuously add data to database via command line
21,368,318
1
0
408
0
python,django,streaming
You have a couple of options I can see, and for various projects I've used both of them without any significant problems (but not at the same time). 1. Create a custom management command (as you mentioned). The only issue I've run into was that I had a log file that was by default owned by apache (since that was running Django through WSGI), but if someone else ran the manage.py command (e.g. root through crontab), I'd occasionally have an issue where the log file got rotated and the new owner would be root; the workaround was to add a chown of the log file as part of the crontab command, or else run everything as the same user. Otherwise, this has been working like a champ. 2. Have Django define your models, then write generic Python (or whatever language you prefer) to write to the database, and use Django only as a front end. You do need to be slightly careful not to break the Django model links (e.g. if you have a many-to-many relationship and add something to one table, also update the corresponding other tables).
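A skeleton of option 1, a custom management command; the app name, command name, and import logic are placeholders. Saved as yourapp/management/commands/import_videos.py, it runs with ./manage.py import_videos:

```python
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = 'Scan the data dir for new video files and register them'

    def handle(self, *args, **options):
        # look for new files here and create the corresponding model
        # rows so the HTML views can pick them up
        self.stdout.write('done')
```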
0
0
0
0
2014-01-26T16:40:00.000
1
0.197375
false
21,366,217
0
0
1
1
I'm in the process of writing a new application. I was thinking of using Django for the HTTP side, but I was wondering about the best way to handle the data. My problem is that I need to acquire continuous data from different processes, save it as files, and insert the related info into the database. The main scope is to have a surveillance camera recording video, splitting it on a per-hour basis, and saving it in the data dir. A script then takes every new file and adds its data to the database so the HTML view can show the new files. My great doubt is that handling files with something like ./manage.py do_something_with_new_data could be a pain. I googled a lot for other ways of doing it, but I didn't find anything. Does someone here have the same problem? How did you solve it?
custom _id fields Django MongoDB MongoEngine
21,498,341
6
0
2,413
1
python,django,mongodb,mongoengine
You can set the parameter primary_key=True on a Field. This will make the target Field your _id Field.
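A minimal sketch of that in MongoEngine: marking a field with primary_key=True makes it the _id, so string ids are accepted. The Article model and slug field are placeholders:

```python
from mongoengine import Document, StringField

class Article(Document):
    slug = StringField(primary_key=True)  # stored as _id

Article(slug='my-custom-id').save()
```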
0
0
0
0
2014-01-26T23:51:00.000
1
1.2
true
21,370,889
0
0
1
1
Is it possible to use custom _id fields with Django and MongoEngine? The problem is, if I try to save a string to the _id field it throws an Invalid ObjectId error. What I want to do is use my own IDs. This was never a problem without Django, because I caught the DuplicateKeyError on creation if a given id already existed (which was even necessary to tell the program that this ID is already taken). Now it seems as if Django/MongoEngine won't even let me create a custom _id field :-/ Is there any way to work around this without creating a second field for the ID and letting the _id field create itself? Greetings Codehai
Writing a Java server for queueing incoming HTTP Requests and processing them a little while later?
21,372,962
5
3
361
0
python,scala,clojure,netty,java-server
I did something similar with Scala Akka actors. Instead of HTTP requests, I had an unlimited number of job requests come in and get added to a queue (a regular Queue). A Worker Manager would manage that queue and dispatch work to worker actors whenever they were done processing previous tasks. Workers would notify the Worker Manager that a task was complete, and it would send them a new one from the queue. So in this case there is no busy waiting or looping; everything happens on message reception. You can do the same with your HTTP requests. Akka can be used from Scala or Java, and the process I described is easier to implement than it sounds. As a web server you could use anything, really: Jetty, a servlet container like Tomcat, or even spray-can. All it needs to do is receive a request and send a message to the Worker Manager. The whole system would be asynchronous and non-blocking.
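Since the question is also open to Python, here is a rough Python analog of that worker-manager pattern, limiting processing to 5 concurrent requests with a queue and worker threads; handle_request is a placeholder for the heavy processing:

```python
import threading
try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2

requests_q = queue.Queue()

def handle_request(req):
    # placeholder for the memory/compute intensive task
    pass

def worker():
    while True:
        req = requests_q.get()   # blocks until a request is queued
        handle_request(req)      # at most 5 of these run at once
        requests_q.task_done()

for _ in range(5):
    t = threading.Thread(target=worker)
    t.daemon = True
    t.start()

# The HTTP front end just does: requests_q.put(incoming_request)
```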
0
0
0
0
2014-01-27T04:12:00.000
1
1.2
true
21,372,906
0
0
1
1
I want to write a Java server, maybe using Netty or anything else suggested. The whole purpose is that I want to queue incoming HTTP requests for a while, because the systems I'm targeting are doing super memory- and compute-intensive tasks, so if they are burdened with heavy load they eventually tend to crash. I want to have a queue in place that allows at most 5 requests to be passed to the destination at any given time and holds the rest of the requests in the queue. Can this be achieved using Netty in Java? I'm equally open to an implementation in Scala, Python or Clojure.
Django from virtualenv: multiple processes running
21,577,197
0
0
311
0
python,django,bash,process,virtualenv
I've solved the mystery: Django was trying to send emails, but it could not because of improper configuration, so it was hanging there forever trying to send them. Most probably (I'm not sure here) Django calls an OS function or a subprocess to do so. The point is that the main process was forking itself and giving the job to a subprocess or thread, or whatever; I'm not an expert in this. It turns out that when your Python process is forked and you kill the father, the children can apparently keep on living after it. Correct me if I'm wrong.
0
0
0
0
2014-01-27T10:36:00.000
1
1.2
true
21,378,559
0
0
1
1
I'm running a local Django development server together with virtualenv, and for a couple of days it has been behaving in a weird way: sometimes I don't see any logs in the console, sometimes I do. A couple of times I've tried to quit the process and restart it and I've got the 'port already taken' error, so I inspected the running processes and there was still an instance of Django running. Other SO answers said that this is due to the autoreload feature, but then why do I sometimes have no problem at all and sometimes I do? Anyway, out of curiosity I ran ps aux | grep python, and the result is always TWO running processes, one from python and one from my activated virtualenv's python: /Users/me/.virtualenvs/myvirtualenv/bin/python manage.py runserver and python manage.py runserver. Is this supposed to be normal?
business logic in Django
21,378,863
1
7
7,262
0
python,django
If the functionality fits well as a method of some model instance, put it there. After all, models are just classes. Otherwise, just write a Python module (some .py file) and put the code there, just like in any other Python library. Don't put it in the views. Views should be the only part of your code that is aware of HTTP, and they should stay as small as possible.
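A minimal sketch of the two placements this answer suggests; the Invoice model, its fields, and the billing.py module are placeholders:

```python
from datetime import date
from decimal import Decimal

from django.db import models

class Invoice(models.Model):
    total = models.DecimalField(max_digits=10, decimal_places=2)
    due = models.DateField()
    closed = models.BooleanField(default=False)

    def total_with_tax(self):
        # logic tied to a single instance belongs on the model
        return self.total * Decimal('1.20')

# Logic spanning many objects can live in a plain module, e.g.
# billing.py, and be imported from views:
def close_overdue_invoices():
    return Invoice.objects.filter(due__lt=date.today()).update(closed=True)
```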
0
0
0
0
2014-01-27T10:40:00.000
4
0.049958
false
21,378,653
0
0
1
3
I'd like to know where to put code that doesn't belong in a view; I mean, the logic. I've been reading a few similar posts but couldn't come to a conclusion. What I could understand is: a view is like a controller, and a lot of logic should not be put in the controller. Models should not have a lot of logic either. So where is all the logic-based stuff supposed to be? I'm coming from Groovy/Grails, where, for example, if we need to access the DB or have complex logic, we use services, and those services are injected into the controllers. Is it good practice to have .py files containing things other than views and models in Django? PS: I've read that some people use a services.py, but then other people say this is bad practice, so I'm a little confused...
business logic in Django
24,603,223
7
7
7,262
0
python,django
Having a Java background, I can relate to this question. I have been working with Python for quite some time. Even though I do my best to treat Java as Java and Python as Python, sometimes I mix them both so that I can get a good deal out of both. In short: put all model-related stuff in the models app; it could be anything from simple model definitions to custom save and pre-save hooks. Put any request/response-related stuff in views, along with some logic like verifying JSON schemas, validating request bodies, handling exceptions, and so on. Put your business logic in a separate folder/app or in a module per views directory/app, i.e. have a separate middle module between your models and views. There isn't a strict rule for organising your code as long as you are consistent. Project: ci. Models: ci/model/device.py. Views: ci/views/list_device.py. Business logic: (1) ci/business_logic/discover_device.py or (2) ci/views/discover_device.py
0
0
0
0
2014-01-27T10:40:00.000
4
1
false
21,378,653
0
0
1
3
I'd like to know where to put code that doesn't belong in a view; I mean, the logic. I've been reading a few similar posts but couldn't come to a conclusion. What I could understand is: a view is like a controller, and a lot of logic should not be put in the controller. Models should not have a lot of logic either. So where is all the logic-based stuff supposed to be? I'm coming from Groovy/Grails, where, for example, if we need to access the DB or have complex logic, we use services, and those services are injected into the controllers. Is it good practice to have .py files containing things other than views and models in Django? PS: I've read that some people use a services.py, but then other people say this is bad practice, so I'm a little confused...
business logic in Django
21,378,901
7
7
7,262
0
python,django
I don't know why you say we can't put a lot of logic in the controller, or that we cannot have models with a lot of logic: you can certainly put logic in either of those places. It depends to a great extent on what that logic is. If it's specifically related to a single model class, it should go in the model. If, however, it's more related to a specific page, it can go in a view. Alternatively, if it's more general logic that's used in multiple views, you could put it in a separate utility module. Or you could use class-based views with a superclass that defines the logic and subclasses that inherit from it.
0
0
0
0
2014-01-27T10:40:00.000
4
1
false
21,378,653
0
0
1
3
I'd like to know where to put code that doesn't belong in a view; I mean, the logic. I've been reading a few similar posts but couldn't come to a conclusion. What I could understand is: a view is like a controller, and a lot of logic should not be put in the controller. Models should not have a lot of logic either. So where is all the logic-based stuff supposed to be? I'm coming from Groovy/Grails, where, for example, if we need to access the DB or have complex logic, we use services, and those services are injected into the controllers. Is it good practice to have .py files containing things other than views and models in Django? PS: I've read that some people use a services.py, but then other people say this is bad practice, so I'm a little confused...
django replace old image uploaded by ImageField
21,394,103
1
0
1,118
0
python,django
This can get tricky, and a lot depends on what your constraints are. 1. Write your own save method for the model, delete the old image there, and replace it with the new one, e.g. with os.remove(info.photo.path). 2. Write a cron job to purge all the unreferenced files. If disk space is an issue, you might want to do the first one, at the cost of some delay in the page load.
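A sketch of option 1: override save() on the model and delete the old file from storage when the image changes. The Info model and photo field names follow the answer; the upload path is a placeholder:

```python
from django.db import models

class Info(models.Model):
    photo = models.ImageField(upload_to='photos/')

    def save(self, *args, **kwargs):
        if self.pk:  # only on updates, not on first save
            old = Info.objects.filter(pk=self.pk).first()
            if old and old.photo and old.photo != self.photo:
                # remove the old file from storage without touching the row
                old.photo.delete(save=False)
        super(Info, self).save(*args, **kwargs)
```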
0
0
0
0
2014-01-27T23:09:00.000
2
0.099668
false
21,393,655
0
0
1
1
How can I replace an image uploaded via an ImageField (in my models) with a new one selected through the same ImageField? And can I delete all images uploaded via an ImageField when I delete the model object (bulk delete)?
Why is it taking more time when I upgrade a module in OpenERP?
21,398,386
0
0
155
0
python-2.7,openerp,base
Why are you going into Installed Modules, searching for the base module, and updating it? You only have to update the module whose XML files you have changed, not even for .py file changes. If you have changed a module's XML files, update only that module. If you update the base module, it will update every module installed in your database, because every module depends on base; you could call base the kernel of all the modules. So if you have made changes in sale, search for sale and update only the sale module; do not update base. Regards,
0
0
0
1
2014-01-28T05:30:00.000
2
0
false
21,397,605
0
0
1
2
I'm new to OpenERP. I have modified the base module, and when I go to Installed Modules, search for the BASE module and click the upgrade button, it takes nearly 5 minutes. Can anyone please tell me how I can reduce the time taken to upgrade an existing module? Note: I have the Messaging, Sales, Invoicing, Human Resource, Tools and Reporting modules installed; is it because I have more modules installed? Thanks in advance.
Why is it taking more time when I upgrade a module in OpenERP?
21,398,452
1
0
155
0
python-2.7,openerp,base
As you have said that you are new to OpenERP, let me tell you something that will be very helpful: never make changes in the standard modules, and especially not in base. If you want to add or remove any functionality of any module, do it by creating a customized module in which you inherit the object you want and make the changes as per your requirement. Now, regarding the time spent when upgrading the base module: this is because when you update base it automatically updates all the other installed modules (in your case Sales, Invoicing, Human Resource, Tools and Reporting), as base is the main module on which all the other modules depend. So it is better to make your changes in a customized module and upgrade only that particular module, not base. Hope this helps.
0
0
0
1
2014-01-28T05:30:00.000
2
1.2
true
21,397,605
0
0
1
2
I'm new to OpenERP. I have modified the base module, and when I go to Installed Modules, search for the BASE module and click the upgrade button, it takes nearly 5 minutes. Can anyone please tell me how I can reduce the time taken to upgrade an existing module? Note: I have the Messaging, Sales, Invoicing, Human Resource, Tools and Reporting modules installed; is it because I have more modules installed? Thanks in advance.
Any python web framework with the following features?
21,403,116
-1
0
343
0
python,mongodb,deployment,architecture
Try Django. I'm not sure, but it's worth trying.
0
0
0
0
2014-01-28T10:25:00.000
3
-0.066568
false
21,402,952
0
0
1
1
As the basic framework core: Request Lifecycle, REST Routing, Requests & Input, Views & Responses, Controllers, Errors & Logging. And these features would make our dev easy and fast: Authentication, Cache, Core Extension, Events, Facades, Forms & HTML, IoC Container, Mail, Pagination, Queues, Security, Session, SSH, Templates, Unit Testing, Validation. And really good support for MongoDB. Is there any such framework?
Do I need to use schemamigration and migrate commands after changing a field name or a field data type in a model?
21,417,983
2
2
54
0
python,django
Yes. Django won't recognize the field if you change its name; it will say that the "field does not exist". So YES, you have to run South's schemamigration / migrate as you asked. For a data type change, YES as well: Django may seem okay at first if you only change the field type, but you may run into problems later depending on what you have in that field.
0
0
0
0
2014-01-28T21:49:00.000
2
1.2
true
21,417,637
0
0
1
2
The problem is pretty self-explanatory in the title. Do I need to do that, or do I just need to edit the existing migration file?
Do I need to use schemamigration and migrate commands after changing a field name or a field data type in a model?
21,418,000
1
2
54
0
python,django
You need to do a schemamigration every time you change your models. On every call of the python manage.py migrate command, South records the number of the latest migration applied in the database's migrationhistory table. So if you just change an existing migration, it won't be applied, because South will think it is already applied. You can migrate backward, fix the next migration, or even delete it and make a new one, and only then migrate forward.
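For reference, the typical South workflow after a model change, assuming an app named myapp:

```
./manage.py schemamigration myapp --auto   # record the model change as a migration
./manage.py migrate myapp                  # apply it to the database
```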
0
0
0
0
2014-01-28T21:49:00.000
2
0.099668
false
21,417,637
0
0
1
2
The problem is pretty self-explanatory in the title. Do I need to do that, or do I just need to edit the existing migration file?
One login for multiple products
21,426,172
0
1
386
0
python,django,authorization,server-side
The authentication method in every application connects to the same web service for authentication.
0
0
1
0
2014-01-29T08:44:00.000
3
0
false
21,426,024
0
0
1
2
There are multiple mobile apps. I want people using one app to be able to log in with the same credentials in all the other apps. What is the best approach to implement this? I'm thinking of creating a separate authorization server that will issue tokens/secrets on registration and login. It will have a validation API that the mobile app servers will use to validate requests.
One login for multiple products
21,427,549
1
1
386
0
python,django,authorization,server-side
First check whether OAuth could be adapted to this; that would save you a lot of work. Of course, all the services and apps would have to talk to some backend network server to sync the tokens issued to apps. A half-secure/maybe-abusable solution: have a symmetric-cipher-encrypted cookie that web pages (and apps?) hold and use for authorization with the different network services (which in turn verify the cookie with the authorization service that knows the passphrase used to encrypt it). I've used approach #2 on internal systems, but I am not sure if it is advisable to use it in the wild; it may pose some security risks.
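A bare-bones sketch of the shared-token idea, using a signed (rather than encrypted) token with only the standard library; the shared SECRET is a placeholder that would live on the auth service and on each app server:

```python
import hashlib
import hmac

SECRET = 'shared-secret'  # placeholder; keep this out of source control

def make_token(user_id):
    sig = hmac.new(SECRET, user_id, hashlib.sha256).hexdigest()
    return '%s:%s' % (user_id, sig)

def verify_token(token):
    user_id, sig = token.split(':', 1)
    expected = hmac.new(SECRET, user_id, hashlib.sha256).hexdigest()
    # constant-time comparison to avoid timing attacks
    return user_id if hmac.compare_digest(sig, expected) else None
```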
0
0
1
0
2014-01-29T08:44:00.000
3
1.2
true
21,426,024
0
0
1
2
There are multiple mobile apps. I want people using one app to be able to log in with the same credentials in all the other apps. What is the best approach to implement this? I'm thinking of creating a separate authorization server that will issue tokens/secrets on registration and login. It will have a validation API that the mobile app servers will use to validate requests.
How to send an array from Python (client) to Java (Server)?
21,436,939
1
0
129
0
java,python,sockets
A meta-answer would be to use JSON, since JSON generators and parsers can be found for every major programming language.
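A minimal sketch of the Python client side of that: serialize the floats as JSON and send them over the socket, newline-terminated so the Java side can read one message per line. The address is a placeholder:

```python
import json
import socket

coords = [120.5, 88.0, 14.25]  # x, y, radius

sock = socket.create_connection(('192.168.1.10', 5000))  # placeholder address
sock.sendall(json.dumps(coords) + '\n')  # add .encode() on Python 3
sock.close()
```

On the Java side, any JSON parser (e.g. Gson or Jackson) can turn the line back into a float array.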
0
0
0
1
2014-01-29T16:24:00.000
2
1.2
true
21,436,787
0
0
1
2
I am using OpenCV on my Raspberry Pi to track circular objects. I then want to send the coordinate and radius values, in an array of floats, across the LAN to the Java program. I can send strings fine with the code I have, but I'm having trouble trying to send numerical datatypes. What is the correct process for this?
How to send an array from Python (client) to Java (Server)?
21,437,109
1
0
129
0
java,python,sockets
Have you looked at BSON? It's like JSON, but optimised for a little more speed.
0
0
0
1
2014-01-29T16:24:00.000
2
0.099668
false
21,436,787
0
0
1
2
I am using OpenCV on my Raspberry Pi to track circular objects. I then want to send the coordinate and radius values, in an array of floats, across the LAN to the Java program. I can send strings fine with the code I have, but I'm having trouble trying to send numerical datatypes. What is the correct process for this?
Django admin project distribution and management
21,442,464
0
0
82
0
python,django,django-admin
What would be best is to write your core project on one side and store it in a repository (version control). Apart from that, write your sub-applications as requirements and store/track them in separate version control. Then, to keep everything working together and integrated, write the shared functionality in the same app, but add special settings that change the app's behavior per customer. These settings can be stored in an independent settings_local.py that is imported at the end of your settings file, making them installation-independent; you keep in settings.py the settings that are common to all installations.
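The usual pattern for that settings split, placed at the very end of settings.py:

```python
# customer-specific overrides, kept out of the core repository
try:
    from settings_local import *
except ImportError:
    pass
```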
0
0
0
0
2014-01-29T19:42:00.000
1
0
false
21,440,987
0
0
1
1
I have a core Django admin project that, when my customers purchase it, needs to be modified both in configuration and in functionality. Each customer will have their own instance of the project installed on a different server. Currently I am using Django apps to separate out the differences between clients and using settings.py to load the correct app for the correct customer. So my question: is there an industry standard / best practice / framework for customizing configuration and functionality in Django admin projects and distributing them?
Creating a web service for Qualtrics written in Python on Google App Engine
21,449,385
0
0
909
0
python,web-services,google-app-engine,rest,qualtrics
I am familiar with Qualtrics, but I will answer (b) first. You can write a Python web service in a variety of ways, depending on your choice: you could write a simple GET handler, use Google Cloud Endpoints, or use one of several Python web service libraries. Having said that, a quick glance at Qualtrics indicated that it requires an RSS feed as the result format (I could be wrong). So what you will need to take care of while doing (b) is to ensure that the output is in a format Qualtrics understands and can parse. For example, if you have to return RSS, you could write your Python web service to return that data. Optionally, it can also take one or more parameters to fine-tune the results.
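A minimal sketch of the first option, a plain GET handler on App Engine; the route, parameter, and response body are placeholders for whatever format Qualtrics expects:

```python
import webapp2

class SurveyData(webapp2.RequestHandler):
    def get(self):
        name = self.request.get('name')  # parameter passed by Qualtrics
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.write('score for %s' % name)

app = webapp2.WSGIApplication([('/surveydata', SurveyData)])
```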
0
1
0
0
2014-01-30T00:55:00.000
1
1.2
true
21,445,897
0
0
1
1
Has anyone out there created a) a web service for Qualtrics or b) a Python web service on Google App Engine? I need to build some functionality into a Qualtrics survey that it seems only a web service (in the Qualtrics Survey Flow) could provide, like passing parameters to a web service and then getting a response back. I've looked at GAE Protocol RPC, but I'm not quite sure if that's the right path. Qualtrics gave me a PHP code example, but I don't know how to begin translating it to Python and/or GAE.
Best Folder Structure For Abstract Models
21,447,135
2
1
590
0
python,django
Well, there's an amount of subjectivity in this choice. It depends on whether all of these different documents are pretty much the same thing (relatively minor variations, but still more or less similar), or whether they have a large amount of very specific functionality. I guess yours is probably the first: they might have different functionality, but they share a lot. Then you can go ahead and create them as a single app: documents/__init__.py, documents/models.py, documents/views.py. If, say, you have many models of different kinds — three different types of text files, four different types of audio — you could use a folder structure like this: documents/__init__.py, documents/models/__init__.py, documents/models/base.py, documents/models/text.py, documents/models/audio.py, ..., documents/views.py. In this case you'd have your base abstract model in base.py, and in the other files several models, properly classified, inheriting from your base abstract model. Then to use these classes you'd do: from documents.models.audio import FancyAudio; from documents.models.text import BigText, SmallText
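A sketch of that layout, tied back to the question's invoices/quotes; the fields are placeholders. The base lives in documents/models/base.py and a concrete subclass in its own file:

```python
from django.db import models

class Document(models.Model):
    title = models.CharField(max_length=255)
    created = models.DateTimeField(auto_now_add=True)

    class Meta:
        abstract = True
        # app_label is needed in older Django when models live
        # outside models.py
        app_label = 'documents'

class Invoice(Document):  # e.g. documents/models/invoice.py
    amount = models.DecimalField(max_digits=10, decimal_places=2)

    class Meta:
        app_label = 'documents'
```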
0
0
0
0
2014-01-30T02:19:00.000
1
1.2
true
21,446,649
0
0
1
1
I have an abstract model called documents. Types of documents like "invoices" and "quotes" inherit from this class. This is my first Django project and I'm a bit unclear of the best folder structure. I was planning to make each type of document get its own app. So, there would be an app for "invoices" and an app for "quotes" and each would have their own folder. Is this a reasonable approach? My second question is where should the documents model be located? Should documents be an app on its own? Should "invoices" and "quotes" sit within "documents"?
Does OpenERP convert False value to NULL when affected to integer?
27,442,462
4
1
1,668
0
python,orm,openerp,odoo
Yes: when issuing an ORM .search query, pass Python False to represent database NULLs. The ORM has no 'is' operator, so you must query for ('column', '=', False). Similarly, when .browse returns rows, database NULLs are converted into Python False. I believe the reason for this is that XML-RPC (which is used to send these queries to your OpenERP server) has no way to represent None/null values in its out-of-the-box configuration. XML-RPC can be configured to allow nulls, but OpenERP does not use this. The obvious question this raises is what happens for nullable boolean fields, where False is a valid value. Based on personal experiments: when creating a row with a nullable boolean field, if you pass Python False, the database ends up containing false, not null as for other datatypes. I don't think there is any way to set such fields to null: it doesn't work to pass Python None, nor to set the default value to None and not pass the column at all. Where is this documented? I can't find it. It's all crazy. Welcome to Odoo!
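A sketch of that NULL query in the old OpenERP 7 API, written as it would appear inside a model method where cr and uid are in scope; 'partner_id' is a placeholder column name:

```python
# rows whose partner_id column is NULL in the database
ids_with_null = self.search(cr, uid, [('partner_id', '=', False)])
```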
0
0
0
0
2014-01-30T10:37:00.000
1
0.664037
false
21,454,046
0
0
1
1
Within OpenERP, when you define a new char field in python and set False as default value, the field will be set to NULL in the db, the ORM taking care of the conversion. Does the ORM also convert integer type the same way, or should I be careful that it may convert it to 0 instead of NULL? Where could I find the on the fly value conversion rules of OSV (OpenERP ORM)?
Pelican restarting your server
50,008,098
0
3
990
0
python,pelican
When you press Ctrl+C or Ctrl+Z, the HTTP server is not actually stopped: it keeps running in the background, and that is the exact reason why you are getting that error message. To see that the server is still alive after pressing either key combination, try to edit and save any file: you will see in the terminal that the regeneration of your pages kicks in again. You can start the HTTP server with make devserver and stop it with ./develop_server.sh stop.
0
0
0
0
2014-01-30T11:50:00.000
3
0
false
21,455,719
0
0
1
2
Hi, I just started working with Pelican and it really suits my needs; I had tried to build blogs in Flask and other frameworks, but I really just wanted something simple so I can post about math, and Pelican just works. My question: when I am testing on my machine I start the server; however, when I stop the server to make some edits to my test blog and then try to reload it, I get a 'socket already in use' error. I am stopping my server with Ctrl+Z; am I doing this correctly?
Pelican restarting your server
27,992,709
2
3
990
0
python,pelican
For your development server, you can also use the script ./develop_server.sh that comes with recent versions of Pelican (at least 3.5.0). Build the blog and load a server with ./develop_server.sh start: it rebuilds each time you edit your blog (except the settings). Simply stop it with ./develop_server.sh stop when you're finished.
0
0
0
0
2014-01-30T11:50:00.000
3
0.132549
false
21,455,719
0
0
1
2
Hi, I just started working with Pelican and it really suits my needs; I had tried to build blogs in Flask and other frameworks, but I really just wanted something simple so I can post about math, and Pelican just works. My question: when I am testing on my machine I start the server; however, when I stop the server to make some edits to my test blog and then try to reload it, I get a 'socket already in use' error. I am stopping my server with Ctrl+Z; am I doing this correctly?
How can I change Django admin language?
32,872,393
28
20
18,987
0
python,django,django-i18n
In your settings.py just add 'django.middleware.locale.LocaleMiddleware' to your MIDDLEWARE_CLASSES setting, making sure it appears after 'django.contrib.sessions.middleware.SessionMiddleware'.
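For reference, the relevant ordering in a Django 1.6 settings.py:

```python
MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.locale.LocaleMiddleware',   # after SessionMiddleware
    'django.middleware.common.CommonMiddleware',
    # ... the rest of the stack ...
)
```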
0
0
0
0
2014-01-30T23:14:00.000
3
1
false
21,469,470
0
0
1
1
I have a django 1.6 site with i18n working. I can change the frontend language with a select box in the top of the template, but I don't know if there is a django app or trick to change the admin language, because it seems to store somewhere in session variable, and it keeps the first language I have used in the frontend.
sending image on client side takes too much time in django
21,473,553
0
0
83
0
python,django,image
Storing your images in the database and serving them on a page is not a good idea. Upload them to a CDN and store only the URLs in your database.
0
0
0
0
2014-01-31T06:06:00.000
1
1.2
true
21,473,532
0
0
1
1
I am storing images as base64 strings in the database (MySQL); when they are requested from the client side (plain HTML web pages), I send the base64 strings of the images along with some other data from the tables and embed them in img tags on the client side. These tags are built dynamically with JavaScript. But this approach takes a lot of time, even for 2 images, whereas I see that Django serves static images quite fast. So what would be a good approach to reduce the time?
Eclipse, PyDev, Jython, external JAR
21,758,703
1
0
483
0
python,eclipse,jar,pydev,jython
Yes, PyDev is not able to read an 'aggregating' jar, so, you really have to add those other jars manually.
0
0
0
0
2014-01-31T12:52:00.000
1
1.2
true
21,480,554
1
0
1
1
I'm using Eclipse, PyDev and Jython together on a project. It works well, but I have a problem with an external jar. This jar is special: it has no .class files inside, just a manifest which uses the Class-Path attribute to list additional jars (it just aggregates a bunch of jars). It seems like adding this as an external jar ignores the attribute, and because it has no .class files, it doesn't add anything visible to the project. Do I have to add all the jars by hand now?
How to run a system command from a django web application?
21,489,049
1
0
148
0
python,linux,django,celery
Here is one approach: in your Django web application, write a message to a queue (e.g., RabbitMQ) containing the information that you need. In a separate system, read the message from the queue and perform any file actions. You can indeed use Celery for setting up this system.
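A sketch of that queue-based setup with Celery; the host and path arguments are placeholders, and the ssh invocation stands in for "delete some files on a remote machine":

```python
# tasks.py -- assumes a configured Celery app in the project
import subprocess

from celery import shared_task

@shared_task
def delete_remote_files(host, path):
    # runs in the worker process, with whatever permissions and SSH
    # keys that process was given -- not in the web process
    subprocess.check_call(['ssh', host, 'rm', '--', path])
```

The Django view then only enqueues: delete_remote_files.delay('files.example.com', '/tmp/old').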
0
1
0
0
2014-01-31T17:29:00.000
1
1.2
true
21,486,362
0
0
1
1
Does anyone know of a proven and simple way of running a system command from a Django application? Maybe using Celery? From my research it seems a problematic task, since it involves permissions and insecure approaches to the problem. Am I right? EDIT: Use case: delete some files on a remote machine. Thanks.
translating strings from database flask-babel
22,099,629
2
9
1,789
1
python,flask,python-babel,flask-babel
It's not possible to use Babel for database translations, as database content is dynamic and Babel translations are static (they don't change). If you read the strings from the database, you must save the translations in the database too. You can create a translation table, something like (locale, source, destination), and fetch the translated values with a query.
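A sketch of that translation-table idea with Flask-SQLAlchemy; the model and column names are placeholders:

```python
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class Translation(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    locale = db.Column(db.String(5))
    source = db.Column(db.Text)
    destination = db.Column(db.Text)

def translate(source, locale):
    row = Translation.query.filter_by(source=source, locale=locale).first()
    return row.destination if row else source  # fall back to the original
```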
0
0
0
0
2014-02-01T11:31:00.000
2
0.197375
false
21,497,489
0
0
1
1
I'm using Flask-Babel for translating string. In some templates I'm reading the strings from the database(postgresql). How can I translate the strings from the database using Flask-Babel?
How do you return a Partial response in app engine python endpoints?
23,165,174
0
0
418
0
google-app-engine,python-2.7,google-cloud-endpoints
From what I gather, Google has enabled partial response for their own APIs, but has not yet explained how to enable it for custom APIs. I'm assuming that if they do let us know, it might entail annotations and possibly overriding a method or two. I've been looking as well, to no avail. I've been looking into this due to a related question, where I'd like to know how to force the JSON object in the response from my Google Endpoints API to include even the members of the class that are null-valued. I was trying to see whether anything would be returned if I used a partial response with a field indicated that was null: would the response at least have the property, or would it still not exist as a property? Anyway, this led me into the same research, and I do not believe we can enable partial responses in our own APIs yet.
0
1
1
0
2014-02-02T21:10:00.000
2
1.2
true
21,516,287
0
0
1
1
I am learning endpoints and saw that other Google APIs have this "fields" query attribute. Also it appears in the api explorer. I would like to get a partial response for my api also, but when using the fields selector from the api explorer it is simply ignored by the server. Do I need to implement something in the server side? Haven't found anything in the docs. Any help is welcome.
How should I set up my dev environment for a django app so that I can pull static s3 files?
21,518,701
1
0
41
1
python,django,mongodb,postgresql,amazon-s3
It's good to have different settings for production and dev. You can create a settings folder with separate modules, say prod.py and dev.py. This lets you use different apps per environment; for example, you don't actually need the debug toolbar in prod. Regarding the files, I feel you don't have to worry about the structure as such; you can always refer to the ETag to identify the file (the MD5 hash of the object).
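A sketch of that settings split; the bucket names are placeholders, and AWS_STORAGE_BUCKET_NAME assumes django-storages is handling the S3 uploads:

```python
# settings/dev.py
from settings.base import *  # the shared defaults

DEBUG = True
AWS_STORAGE_BUCKET_NAME = 'myapp-dev'   # a test copy of the bucket

# settings/prod.py would instead set:
# DEBUG = False
# AWS_STORAGE_BUCKET_NAME = 'myapp-prod'
```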
0
0
0
0
2014-02-03T00:53:00.000
1
1.2
true
21,518,268
0
0
1
1
I'm having trouble establishing an ideal setup where I can distinguish between production and test environments for my Django app. I'm using a PostgreSQL database that stores a relative file path to an S3 bucket after I upload an image. Am I supposed to make a production copy of all the files in the S3 bucket and connect my current development code to this static directory for testing? I certainly don't want to connect to production... What's best practice in this situation? Also, I may be doing things wrong here by having the file path in a PostgreSQL database. Would it be more ideal to have some foreign key to a MongoDB table which then holds the file path for AWS S3? Another best-practice question: how should the file path be organized? Should I organize it like ~somebucket/{userName}/{date}/{fileName}, or ~somebucket/{userName}/{fileName}, or ~somebucket/{fileName}, or ~somebucket/{date}/{userName}/{fileName}, or ~somebucket/{fileName} = u1234d20140101funnypic.png? This is really confusing for me on how to build an ideal way to store static files for development and production. Any recommendations would be greatly appreciated. Thanks for your time :)
How to get Sphinx working with Jython on an unnetworked Windows 7 computer?
21,824,353
1
1
376
0
windows,jython,python-sphinx,jython-2.5
I have managed to get it working. The problem was that the manual installation and the use of Jython meant that certain environment variables that were expected were not in place. Also, the use of Windows 7 (and I believe MS Windows in general) means that Python scripts without an extension cannot be run without calling them explicitly through Jython (Windows doesn't check for shebangs). Finally, file associations had not been set up (as happens automatically with CPython installation, but has not happened with Jython). For anyone else with similar problems the following setup works for me: Locations: Java Runtime: C:\Java\jre7 Jython: C:\Jython\jython2.5.2 User Environment Variables: JRE_HOME: C:\Java\jre7 JAVA_HOME: %JRE_HOME% CLASSPATH: . JYTHON_HOME: C:\Jython\jython2.5.2 PATH: %JRE_HOME%\bin;%JYTHON_HOME%\bin File Associations: At the command prompt type assoc .py=Python.File to associate 'Python.File' with the '.py' extension. At the command prompt type ftype Python.File=C:\Jython\jython2.5.2\jython.bat "%1" %* to associate the Jython command with files of type 'Python.File'. Append '.py' (;.PY) to the PATHEXT system environment variable. This will make it possible to execute Python files without having to provide their '.py' extension. (N.B. This does not make it possible to run Python files that do not have a '.py' extension.) File Extensions: Rename the four Sphinx commands to include '.py' extensions. This is remarkably difficult with vanilla Windows 7 as it does everything it can to distance the user from such 'low level' details as file extensions, however the rename command at the command prompt does the job: type ren sphinx* sphinx*.py when in the Jython bin directory. It should now be possible to call sphinx-apidoc or similar from anywhere. Once this is complete the command make html, when called from the documentation directory, should work as expected.
0
0
0
0
2014-02-03T12:11:00.000
1
1.2
true
21,527,115
1
0
1
1
Once sphinx-apidoc has been run the command C:\path\to\doc\make html produces an error beginning: The 'sphinx-build' command was not found [snip] However the command does exist and the relevant environment variables are set. More detail: 1 - Trying to run sphinx_apidoc: 'C:\path\to\jython\bin\sphinx-apidoc' is not recognised as an internal or external command 2 - Called using Jython works: jython C:\path\to\jython\bin\sphinx-apidoc with sensible options produces the documentation *.rst files, conf.py, etc files. 3 - make html then produces the following error: The 'sphinx-build' command was not found [snip] It then recommends setting the SPHINXBUILD environment variable, and even the PATH. I already have these two environment variables set, proven to myself by calling echo %PATH% and echo %SPHINXBUILD%. This is where I get stuck. It appears that the files that Sphinx uses (sphinx-apidoc and sphinx-build in this case), which are in the C:\path\to\jython\bin\ directory, do not have any file suffixes. When called directly from Jython they work as expected (see point 2 above), however when called as part of another process (e.g. make html) they are not recognised and the execution fails (see points 1 and 3 above). Does anyone know the what, why and most importantly 'how to fix' of this problem? My setup process is on an unnetworked Windows 7 computer. Jython (2.5.2) was installed using the Jython installer. Then each of the following packages (except setuptools) was installed by extracting it locally and then running jython setup.py install in its extracted directory: setuptools: by calling jython ez_setup.py with setuptools-1.4.2.tar.gz in the same directory (so there is no attempt to download it) Jinja2 (2.5) docutils (0.11) Pygments (1.6) Sphinx (1.2.1) numpydoc (0.4) - Only mentioned because it is also isntalled on the machine.
Add plugin on Django-CMS page programmatically in unit test
21,529,356
1
0
710
0
python,django,unit-testing,plugins,django-cms
Create a page with cms.api, then get the right placeholder from page.placeholders.all() and call add_plugin() with it.
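A sketch of that flow in a test; the template name, slot name, and plugin type are placeholders that must match your CMS_TEMPLATES and installed plugins:

```python
from cms.api import add_plugin, create_page

page = create_page('Test page', template='base.html', language='en')
placeholder = page.placeholders.get(slot='content')
add_plugin(placeholder, 'TextPlugin', 'en', body='Hello from the test')
```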
0
0
0
0
2014-02-03T13:55:00.000
1
1.2
true
21,529,226
0
0
1
1
My Django app needs a test that follows the following scenario: create a page, edit it by adding a new plugin, then save it. So far, I am stuck at adding the plugin to the page. How can I do this programmatically in a test? I looked over add_plugin() from cms.api, but it needs a placeholder, and I have no idea how to link one to an existing page and/or template.
Running unittest Test Cases and Robot Framework Test Cases Together
26,558,782
2
6
7,792
0
python,robotframework,python-unittest
Robot is not at all based on xUnit technologies. Personally I think it makes a great unit testing framework for Python code, since you can create keywords that directly import your modules. I use this technique on some projects I work on. With Robot, you can tag your unit tests or put them in a separate hierarchy so that you can run them separately from acceptance tests if you like, or combine them and get statistics broken out separately.
0
0
0
1
2014-02-03T18:37:00.000
3
0.132549
false
21,535,028
0
0
1
2
Our group is evaluating Robot Framework for our QA group, not just for BDD but also to possibly cover a lot of our regular functionality testing needs. It certainly is a compelling project. To what extent, if any, is Robot Framework based on the xunit (unittest) architecture? I see that unittest asserts can be used, but I don't see that the RF test cases themselves are based on unittest.TestCase. Ideally, our organization would like to be able to write Robot Framework tests as well as Python unittest test cases, run them together from one runner and get integrated results, and reuse RF's Selenium2Library 'keywords' as functions in our regular unittest test cases in order to share a common SE code base. Is this a solved problem? Does anybody do this kind of thing?
Running unittest Test Cases and Robot Framework Test Cases Together
21,565,221
10
6
7,792
0
python,robotframework,python-unittest
RobotFramework is not the right tool for unit testing. Unit tests should be written in the same language as the units (modules, classes, etc.). The ability to describe scenarios in natural language (one of the strongest features of systems like RF) is worthless in unit tests; at this level of testing, scenarios are simply "for input x you get output y". RF is best suited to acceptance testing and integration testing, the top-grained verification of your system. Nevertheless, you can integrate RF and xunit together in your QA system and merge the reports from RF and the unit tests.
0
0
0
1
2014-02-03T18:37:00.000
3
1
false
21,535,028
0
0
1
2
Our group is evaluating Robot Framework for our QA group, not just for BDD but also to possibly cover a lot of our regular functionality testing needs. It certainly is a compelling project. To what extent, if any, is Robot Framework based on the xunit (unittest) architecture? I see that unittest asserts can be used, but I don't see that the RF test cases themselves are based on unittest.TestCase. Ideally, our organization would like to be able to write Robot Framework tests as well as Python unittest test cases, run them together from one runner and get integrated results, and reuse RF's Selenium2Library 'keywords' as functions in our regular unittest test cases in order to share a common SE code base. Is this a solved problem? Does anybody do this kind of thing?
Pycharm: set environment variable for run manage.py Task
22,899,916
39
45
74,590
0
python,django,pycharm
To set your environment variables in PyCharm, do the following: open the 'File' menu, click 'Settings', click the '+' sign next to 'Console', click 'Python Console', click the '...' button next to environment variables, then click the '+' to add environment variables.
0
0
0
0
2014-02-03T22:07:00.000
16
1
false
21,538,859
1
0
1
6
I have moved my SECRET_KEY value out of my settings file, and it gets set when I load my virtualenv. I can confirm the value is present from python manage.py shell. When I run the Django Console, SECRET_KEY is missing, as it should. So in preferences, I go to Console>Django Console and load SECRET_KEY and the appropriate value. I go back into the Django Console, and SECRET_KEY is there. As expected, I cannot yet run a manage.py Task because it has yet to find the SECRET_KEY. So I go into Run>Edit Configurations to add SECRET_KEY into Django server and Django Tests, and into the project server. Restart Pycharm, confirm keys. When I run a manage.py Task, such as runserver, I still get KeyError: 'SECRET_KEY'. Where do I put this key?
Pycharm: set environment variable for run manage.py Task
21,603,624
15
45
74,590
0
python,django,pycharm
Same here; for some reason PyCharm can't see exported env vars. For now I set SECRET_KEY in PyCharm's Run/Debug Configurations -> "Environment variables".
0
0
0
0
2014-02-03T22:07:00.000
16
1
false
21,538,859
1
0
1
6
I have moved my SECRET_KEY value out of my settings file, and it gets set when I load my virtualenv. I can confirm the value is present from python manage.py shell. When I run the Django Console, SECRET_KEY is missing, as it should. So in preferences, I go to Console>Django Console and load SECRET_KEY and the appropriate value. I go back into the Django Console, and SECRET_KEY is there. As expected, I cannot yet run a manage.py Task because it has yet to find the SECRET_KEY. So I go into Run>Edit Configurations to add SECRET_KEY into Django server and Django Tests, and into the project server. Restart Pycharm, confirm keys. When I run a manage.py Task, such as runserver, I still get KeyError: 'SECRET_KEY'. Where do I put this key?
Pycharm: set environment variable for run manage.py Task
30,374,246
22
45
74,590
0
python,django,pycharm
Another option that's worked for me: open a terminal, activate the virtualenv of the project (which causes the hooks to run and set the environment variables), then launch PyCharm from that command line. PyCharm will then have access to the environment variables, likely because the PyCharm process is a child of the shell.
0
0
0
0
2014-02-03T22:07:00.000
16
1
false
21,538,859
1
0
1
6
I have moved my SECRET_KEY value out of my settings file, and it gets set when I load my virtualenv. I can confirm the value is present from python manage.py shell. When I run the Django Console, SECRET_KEY is missing, as it should. So in preferences, I go to Console>Django Console and load SECRET_KEY and the appropriate value. I go back into the Django Console, and SECRET_KEY is there. As expected, I cannot yet run a manage.py Task because it has yet to find the SECRET_KEY. So I go into Run>Edit Configurations to add SECRET_KEY into Django server and Django Tests, and into the project server. Restart Pycharm, confirm keys. When I run a manage.py Task, such as runserver, I still get KeyError: 'SECRET_KEY'. Where do I put this key?
Pycharm: set environment variable for run manage.py Task
28,723,630
20
45
74,590
0
python,django,pycharm
You can set the manage.py task environment variables via Preferences | Languages & Frameworks | Django | Manage.py tasks. Setting the env vars via the run/debug/console configuration won't affect PyCharm's built-in manage.py tasks.
0
0
0
0
2014-02-03T22:07:00.000
16
1
false
21,538,859
1
0
1
6
I have moved my SECRET_KEY value out of my settings file, and it gets set when I load my virtualenv. I can confirm the value is present from python manage.py shell. When I run the Django Console, SECRET_KEY is missing, as it should. So in preferences, I go to Console>Django Console and load SECRET_KEY and the appropriate value. I go back into the Django Console, and SECRET_KEY is there. As expected, I cannot yet run a manage.py Task because it has yet to find the SECRET_KEY. So I go into Run>Edit Configurations to add SECRET_KEY into Django server and Django Tests, and into the project server. Restart Pycharm, confirm keys. When I run a manage.py Task, such as runserver, I still get KeyError: 'SECRET_KEY'. Where do I put this key?
Pycharm: set environment variable for run manage.py Task
41,200,535
0
45
74,590
0
python,django,pycharm
Please note that the answer mentioned by "nu everest" works, but the Console tab is not available unless you create a project in PyCharm. Since individual files can be run without creating a project, some people might be confused. Note also that settings for run configurations that do not belong to a project are lost when PyCharm is closed.
0
0
0
0
2014-02-03T22:07:00.000
16
0
false
21,538,859
1
0
1
6
I have moved my SECRET_KEY value out of my settings file, and it gets set when I load my virtualenv. I can confirm the value is present from python manage.py shell. When I run the Django Console, SECRET_KEY is missing, as it should. So in preferences, I go to Console>Django Console and load SECRET_KEY and the appropriate value. I go back into the Django Console, and SECRET_KEY is there. As expected, I cannot yet run a manage.py Task because it has yet to find the SECRET_KEY. So I go into Run>Edit Configurations to add SECRET_KEY into Django server and Django Tests, and into the project server. Restart Pycharm, confirm keys. When I run a manage.py Task, such as runserver, I still get KeyError: 'SECRET_KEY'. Where do I put this key?
Pycharm: set environment variable for run manage.py Task
65,267,215
4
45
74,590
0
python,django,pycharm
When using PyCharm along with Django, there are multiple sections where you can set environment variables (EVs): File > Settings > Languages and Frameworks > Django (there's no purpose in setting EVs here); File > Settings > Build, Execution, Deployment > Console > Python Console (no purpose in setting EVs here either); File > Settings > Build, Execution, Deployment > Console > Django Console (set EVs here and they'll be accessible when using the PyCharm Python Console, which in a Django project opens a Django shell); File > Settings > Tools > Terminal (set EVs here and they'll be accessible when using the PyCharm Terminal, i.e. when running python manage.py commands in the terminal); Run > Edit configurations > [Your Django run configuration] (set EVs here and they'll be accessible when using the PyCharm Run button). Tested on PyCharm 2020.2 using a virtual environment.
0
0
0
0
2014-02-03T22:07:00.000
16
0.049958
false
21,538,859
1
0
1
6
I have moved my SECRET_KEY value out of my settings file, and it gets set when I load my virtualenv. I can confirm the value is present from python manage.py shell. When I run the Django Console, SECRET_KEY is missing, as it should. So in preferences, I go to Console>Django Console and load SECRET_KEY and the appropriate value. I go back into the Django Console, and SECRET_KEY is there. As expected, I cannot yet run a manage.py Task because it has yet to find the SECRET_KEY. So I go into Run>Edit Configurations to add SECRET_KEY into Django server and Django Tests, and into the project server. Restart Pycharm, confirm keys. When I run a manage.py Task, such as runserver, I still get KeyError: 'SECRET_KEY'. Where do I put this key?
Pagination in Google App Engine Search API
37,264,173
1
6
575
0
python,google-app-engine
Sorry to revive this old question, but I have a solution for this issue, given a few constraints with possible workarounds. Basically, the cursors for previous pages can be stored and reused for revisiting those pages (a sketch follows this entry). Constraints: This requires that pagination is done dynamically (e.g. with Javascript) so that older cursors are not lost; a workaround if pagination is done across HTML pages is to pass the cursors along. Users would not be able to arbitrarily select a forward page, and would only be given next/back buttons (though any previously visited page can easily be jumped to); a workaround could be to internally iterate and discard entries while generating cursors at pagination points until finally reaching the desired results, then return the list of previous-page cursors as well. All of this requires a lot of extra bookkeeping and complexity, which almost makes the solution purely academic, but I suppose that depends on how much more efficient cursors are than simple limit/offset. This could be a worthwhile endeavor if your data is such that you don't expect your users to want to jump ahead more than one page at a time (which includes most types of searches).
0
1
0
0
2014-02-04T06:40:00.000
2
0.099668
false
21,545,635
0
0
1
1
I want to do pagination in the Google App Engine Search API using cursors (not offset). The forward pagination is straightforward; the problem is how to implement the backward pagination.
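A hedged sketch of the cursor-stack idea from the answer above, using the Python Search API; the index name and query string are illustrative assumptions, not taken from the question:

from google.appengine.api import search

index = search.Index(name='items')  # assumed index name

def fetch_page(cursor_string=None):
    # An empty Cursor() starts at the beginning; a web-safe string resumes.
    cursor = (search.Cursor(web_safe_string=cursor_string)
              if cursor_string else search.Cursor())
    opts = search.QueryOptions(limit=10, cursor=cursor)
    results = index.search(search.Query('category: shoes', options=opts))
    next_cursor = results.cursor.web_safe_string if results.cursor else None
    return [doc for doc in results], next_cursor

# The client keeps a stack of the web-safe cursor strings it has seen:
# "next" pushes the cursor returned here, "back" pops and re-fetches.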
how do I re-write/redirect URLs in Gunicorn web server configuration?
21,556,311
1
0
1,587
0
python,https,gunicorn,url-pattern
The protocol has nothing to do with Django. That part is handled by your HTTP server.
0
0
0
0
2014-02-04T15:17:00.000
1
0.197375
false
21,556,278
0
0
1
1
I'm building a Django-based app, and I need it to use secure requests. Secure requests on my site are enabled, and manually writing the URL gets it through fine. As I have quite a lot of URLs I don't want to do it manually, but instead do something so Django always sends secure requests. How can I make it always send HTTPS?
Redirect user when the worker is done
21,561,671
1
2
191
0
python,redirect,flask,worker
You can do it as follows: When the user presses the button the server starts the task, and then sends a response to the client, possibly a "please wait..." type page. Along with the response the server must include a task id that references the task, accessible to Javascript. The client uses the task id to poll the server regarding task completion status through ajax. Let's say this is route /status/<taskid>. This route returns true or false as JSON (a sketch follows this entry). It can also return a completion percentage that you can use to render a progress bar widget. When the server reports that the task is complete the client can issue the redirect to the completion page. If the client needs to be told what the redirect URL is, the status route can include it in the JSON response. I hope this helps!
0
0
0
0
2014-02-04T17:12:00.000
1
1.2
true
21,558,984
0
0
1
1
I have the app in python, using flask and iron worker. I'm looking to implement the following scenario: User presses the button on the site The task is queued for the worker Worker processes the task Worker finishes the task, notifies my app My app redirects the user to the new endpoint I'm currently stuck in the middle of point 5, I have the worker successfully finishing the job and sending a POST request to the specific endpoint in my app. Now, I'd like to somehow identify which user invoked the task and redirect that user to the new endpoint in my application. How can I achieve this? I can pass all kind of data in the worker payload and then return it with the POST, the question is how do I invoke the redirect for the specific user visiting my page?
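A compressed sketch of the polling flow described in the answer; TASKS is a stand-in for whatever store (Redis, a DB table) maps task ids to completion state, and the URLs are assumptions:

from flask import Flask, jsonify

app = Flask(__name__)
TASKS = {}  # task_id -> redirect URL, filled in when the worker reports back

@app.route('/worker-callback/<task_id>', methods=['POST'])
def worker_callback(task_id):
    # The worker POSTs here on completion; remember where the user should go.
    TASKS[task_id] = '/results/%s' % task_id
    return 'ok'

@app.route('/status/<task_id>')
def status(task_id):
    done = task_id in TASKS
    return jsonify(done=done, redirect_to=TASKS.get(task_id))

The page then polls /status/<task_id> with ajax and sets window.location to redirect_to once done is true.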
Keeping live audio stream synced
27,043,950
0
0
661
0
python,streaming,audio-streaming,ntp,gstreamer
Sorry for bringing up an old question, but this is something that I am looking into. I believe you need to look at the Real Time Streaming Protocol (RTSP), a network control protocol designed for use in entertainment and communications systems to control streaming media servers. The protocol is used for establishing and controlling media sessions between end points. This is a much better way of keeping things synchronized. As for sending it to multiple devices, look into multicast addresses. Hope this helps.
0
1
0
0
2014-02-04T19:18:00.000
2
0
false
21,561,405
0
0
1
2
I'm working on an application to provide multi-room audio to devices and have succeeded in keeping audio playing from a file (e.g. mp3) synced using GST and manually using NTP but I can't seem to get a live audio stream to sync. Essentially I want to be able to stream audio from one device to one or more other devices but rather than them buffering and getting out of sync I want them to all play at around the same time (close enough for any delay to not be noticeable anyway). Has anyone got any suggestions on ways that this can be achieved or can provide any material discussing the matter? (Search hasn't turned up much) It's worth noting that this application will be coded in Python.
Keeping live audio stream synced
21,566,534
1
0
661
0
python,streaming,audio-streaming,ntp,gstreamer
Unfortunately, delay as low as 10 milliseconds is noticeable to most folks. Musicians tend to appreciate even lower delay than that. And if you have any of the speakers from different devices within earshot of each other, you're going to run into phase issues at even the slightest unpredictable delay (which is inevitable on a computer). Basically, it is impossible to have a delay that isn't noticeable. Even if you do succeed in synchronizing the start times exactly, each device has a different sample clock on it, and they will drift apart over time. What is 44.1kHz to one device might be 44.103kHz on the other. If you have a more realistic expectation of synchronization... around 50-100ms, then this becomes more feasible. I would have one master device doing the decoding and then sending PCM samples out to the other devices for playback. Keep track of your audio device buffers and make sure they aren't getting too big (indicating that your device is behind) or underrunning (indicating a network problem or that your device is ahead). Have all the devices with the same buffer sizes and maybe even use broadcast packets to send the audio, since all devices are on the same network anyway.
0
1
0
0
2014-02-04T19:18:00.000
2
0.099668
false
21,561,405
0
0
1
2
I'm working on an application to provide multi-room audio to devices and have succeeded in keeping audio playing from a file (e.g. mp3) synced using GST and manually using NTP but I can't seem to get a live audio stream to sync. Essentially I want to be able to stream audio from one device to one or more other devices but rather than them buffering and getting out of sync I want them to all play at around the same time (close enough for any delay to not be noticeable anyway). Has anyone got any suggestions on ways that this can be achieved or can provide any material discussing the matter? (Search hasn't turned up much) It's worth noting that this application will be coded in Python.
Reorder model objects in django admin panel
21,579,803
4
13
5,031
0
python,django,django-models,django-admin
I don't see an obvious solution to this: the models are sorted by their _meta.verbose_name_plural, and this happens inside the AdminSite.index view, with no obvious place to hook custom code, short of subclassing the AdminSite class and providing your own index method, which is however a huge monolithic method, very inheritance-unfriendly. (A hedged sketch against newer Django versions follows this entry.)
0
0
0
0
2014-02-05T13:14:00.000
5
1.2
true
21,578,382
0
0
1
1
I have several configuration objects in the Django admin panel. They are listed in the following order: Email config, General config, Network config. Each object can be configured separately, but all of them are included in General config. Since you will mostly need General config, I want to move it to the top. I know how to order fields in a model itself, but how do I reorder models?
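A hedged sketch of the subclassing route: newer Django versions expose AdminSite.get_app_list, which is much friendlier to override than the old monolithic index method the answer refers to. The display names in ORDER are assumptions based on the question:

from django.contrib import admin

ORDER = ['General configs', 'Email configs', 'Network configs']

class OrderedAdminSite(admin.AdminSite):
    def get_app_list(self, request):
        app_list = super(OrderedAdminSite, self).get_app_list(request)
        for app in app_list:
            # 'name' holds the verbose_name_plural shown on the index page.
            app['models'].sort(key=lambda m: ORDER.index(m['name'])
                               if m['name'] in ORDER else len(ORDER))
        return app_list

site = OrderedAdminSite()  # register your ModelAdmins against this site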
Pycharm - Pyramid debugging request
21,599,084
3
1
399
0
python-3.x,pyramid,pycharm
Finally I have figured it out. It was my fault. The project gets deployed as an egg, so that's where I was supposed to place my breakpoints. Thanks a lot for your time and consideration.
0
0
0
0
2014-02-05T13:16:00.000
1
0.53705
false
21,578,415
0
0
1
1
I am using PyCharm as the IDE for one of my projects. The framework of choice is Pyramid, and here comes my issue: I am not able to debug requests using PyCharm even though I start the application in debug mode. When a request is made from the browser, the breakpoints in views.py are not hit; this does not apply to the breakpoints set in the application start-up (init.py and initializedb.py). Please note that I am new to Pyramid. Any idea how to solve this would be much appreciated. EDIT I apologize for not mentioning the details. I am using PyCharm 3.02 Pro and Pyramid 1.4.5, with the scaffolding provided by PyCharm.
How to automate Google Wallet order export data?
21,741,357
0
1
433
0
python,ruby,android-pay
analyticsPierce, I've asked the same question and have not received any answers. Here was my question, maybe we can work out a solution somehow. I've just about given up. "HttpWebRequest with Username/Password" on StackOverflow. Trey
0
0
1
0
2014-02-06T18:53:00.000
1
0
false
21,611,503
0
0
1
1
I am working to automate retrieving the Order data from the Google Wallet Merchant Center. This data is on the Orders screen and the export is through a button right above the data. Google has said this data is not available to export to a Google Cloud bucket like payments are and this data is not available through a Google API. I'm wondering if anyone has been successful in automating retrieval of this data using an unofficial method such as scraping the site or a separate gem or library? I have done tons of searching and have not seen any solutions.
Algorithm for traversing website including forms
31,978,451
0
2
77
0
python,algorithm,web,hyperlink,traversal
The links in a set of web pages can be seen as a graph, and hence you can use tree traversal algorithms like depth-first and breadth-first search to find all links. The links and related form data can be saved in a queue or stack depending on which traversal algorithm you are using (see the sketch after this entry).
0
0
1
0
2014-02-06T19:32:00.000
1
0
false
21,612,246
0
0
1
1
I have successfully recorded all the links of the website but missed some links which are only visible after posting a form (for example login). What I did was record all the links without logging in, and took the form values. Then I posted the data and recorded the new links, but I missed the other forms and links which are not among those posted links. Please suggest an efficient algorithm so that I can grab all the links by posting form data. Thanks in advance.
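A breadth-first sketch of that idea in Python 3, assuming the requests and beautifulsoup4 packages are installed; form submission is the part the questioner would extend, so it is left as a plain GET crawl here:

from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(start_url, max_pages=100):
    seen = {start_url}
    queue = deque([start_url])  # swap for a stack (list) to get DFS
    found = []
    while queue and len(found) < max_pages:
        url = queue.popleft()
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        found.append(url)
        for a in BeautifulSoup(html, 'html.parser').find_all('a', href=True):
            link = urljoin(url, a['href'])
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return found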
CKAN extension deployment not working
21,623,741
0
1
217
0
python,ckan
Do you need to clear your browser's cache? Are there any other settings (e.g. extra_public_paths) that are different between your dev and production machines?
0
0
0
1
2014-02-07T00:15:00.000
1
0
false
21,616,883
0
0
1
1
I have developed a couple of extensions and never had any problem deploying to the production server. I tried to install a new extension today on my production server; it works on my dev machine but doesn't work on production. I am supposed to see a new menu option as part of this new extension and I don't see it. To test, I changed the extension name in the production.ini and I got the expected error (PlugInNotFoundError). I have restarted Apache and nginx. I am running CKAN 2.1. I ran the following command on the production server: python setup.py develop. I got the message that the plugin was successfully installed. I also included this new plugin in the production.ini file settings and restarted both the apache2 and nginx servers. I'm still not seeing a new menu option to access the functionality provided by this newly installed extension. Any help to sort this out would be appreciated. Thanks, PK
Where should I save the Amazon Manifest json file on an app hosted at PythonAnywhere?
21,627,045
2
1
155
0
json,pythonanywhere
You can get to /var/www/static in the File browser. Just click on the '/' in the path at the top of the page and then follow the links. You can also just copy things there from a Bash console. You may need to create the static folder in /var/www if it's not there already.
0
1
0
0
2014-02-07T01:25:00.000
2
0.197375
false
21,617,616
1
0
1
1
I am trying to have my app on the Amazon appstore. In order to do this Amazon needs to park a small json file (web-app-manifest.json). If I upload it to the root of my web site (as suggested), the Amazon bot says it cannot access the file. Amazon support mentioned I should save it to /var/www/static, but either I don't know how to get there or I don't have access to this part of the server. Any ideas?
Accept permission request in chrome using selenium
21,686,531
8
9
21,421
0
google-chrome,python-2.7,selenium-webdriver
@ExperimentsWithCode Thank you for your answer again. I have spent almost the whole day today trying to figure out how to do this, and I've also tried your suggestion of adding the --disable-user-media-security flag to Chrome; unfortunately it didn't work for me. However, I thought of a really simple solution: to automatically click on Allow, all I have to do is press the TAB key three times and then press Enter. And so I have written the program to do that automatically, and it WORKS!!! The first TAB pressed when my HTML page opens directs me to my input box, the second to the address bar and the third to the ALLOW button; then the Enter key is pressed. The Python program uses Selenium as well as the PyWin32 bindings (a sketch follows this entry). Thank you for taking your time and trying to help me, it is much appreciated.
0
0
1
0
2014-02-07T13:23:00.000
8
1.2
true
21,628,904
0
0
1
1
I have an HTML/Javascript file with Google's web speech API and I'm testing it using Selenium; however, every time I enter the site the browser requests permission to use my microphone and I have to click on 'ALLOW'. How do I make Selenium click on ALLOW automatically?
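A rough sketch of the TAB/TAB/TAB/ENTER trick using PyWin32's WScript.Shell, since plain Selenium key events may not reach the browser's native permission bar; it assumes Windows, that the page URL is a placeholder, and that the Chrome window opened by Selenium currently has focus:

import time

import win32com.client
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('http://localhost/speech.html')  # hypothetical test page

time.sleep(2)  # give the permission bar time to appear
shell = win32com.client.Dispatch('WScript.Shell')
shell.SendKeys('{TAB}{TAB}{TAB}{ENTER}')  # walk focus onto ALLOW and accept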
Get form data in pre_save
21,727,785
0
1
56
0
python,django,django-rest-framework
For anyone who has this problem this is my solution: I didn't use pre_save in the end and worked from validate, where you can access all the attributes.
0
0
0
0
2014-02-07T14:47:00.000
1
1.2
true
21,630,646
0
0
1
1
How can I access the form data in pre_save? More exactly, I have a ManyToManyField (called user_list) in my models and I want to access the list from pre_save(self, obj). I've tried self.object.user_list and even obj.user_list, but I keep getting an error. Thanks
Google App Engine (Python): Allow entity 'previewing before 'submit'
21,631,790
4
1
54
0
python,google-app-engine,google-cloud-datastore,app-engine-ndb
It ain't that difficult. Abstract: [User] posts [Data] to the [EntityCreatorPreviewHandler]. [EntityCreatorPreviewHandler] receives the data and creates the entity, e.g. book = Book(title='Test'). [EntityCreatorPreviewHandler] templates the HTML and basically shows the entity with all its attributes etc. [EntityCreatorPreviewHandler] also hides the initial [Data] in a hidden POST form. [User] accepts save after the preview, and as soon as the save button is pressed the hidden form is submitted to an [EntitySaveHandler]. [EntitySaveHandler] saves the data (a sketch follows this entry).
0
1
0
0
2014-02-07T15:25:00.000
1
1.2
true
21,631,528
0
0
1
1
I'd like users to create an entity, and preview it, before saving it in the datastore. For example: User completes entity form, then clicks "preview". Forwarded to an entity 'preview' page which allows the user to "submit" and save the entity in the datastore, or "go back" to edit the entity. How can I achieve this?
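A bare-bones sketch of that flow with webapp2 and NDB; Book, the field name and the URL routes are illustrative assumptions (and a real app would HTML-escape the echoed value):

import webapp2
from google.appengine.ext import ndb

class Book(ndb.Model):
    title = ndb.StringProperty()

class PreviewHandler(webapp2.RequestHandler):
    def post(self):
        title = self.request.get('title')
        # Show the preview and echo the submitted data in a hidden form.
        self.response.write(
            '<h1>Preview: %s</h1>'
            '<form method="post" action="/save">'
            '<input type="hidden" name="title" value="%s">'
            '<button>Submit</button> <a href="/edit">Go back</a>'
            '</form>' % (title, title))

class SaveHandler(webapp2.RequestHandler):
    def post(self):
        Book(title=self.request.get('title')).put()
        self.response.write('Saved.')

app = webapp2.WSGIApplication([('/preview', PreviewHandler),
                               ('/save', SaveHandler)])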
Getting connection object in generic model class
21,651,170
1
0
76
1
python,database-connection
Here's how I would do it: Use a connection pool with a queue interface. You don't have to choose a connection object, you just pick the next one in line. This can be done whenever you need a transaction, and the connection is put back afterwards (a sketch follows this entry). Unless you have some very specific needs, I would use a Singleton class for the database connection, so there is no need to pass parameters to the constructor every time. For testing, you just put a mocked database connection on the Singleton class. Edit: About the connection pool questions (I could be wrong here, but it would be my first try): Keep all connections open. Pop one when you need it, put it back when you don't need it anymore, just like a regular queue. This queue could be exposed from the Singleton. You start with a fixed, default number of connections (like 20). You could override the pop method so that when the queue is empty you block (waiting for another connection to free up, if the program is multi-threaded) or create a new connection on the fly. Destroying connections is more subtle. You need to keep track of how many connections the program is using, and how likely it is that you have too many connections. Take care, because destroying a connection that will be needed later slows the program down. In the end, it's a heuristic problem that changes the performance characteristics.
0
0
0
0
2014-02-08T19:39:00.000
1
1.2
true
21,650,889
0
0
1
1
I have a Model class which is part of my self-crafted ORM. It has all kinds of methods like save(), create() and so on. Now, the thing is that all these methods require a connection object to act properly, and I have no clue on the best approach to feed a Model object with a connection object. What I thought of so far: provide a connection object in a Model's __init__(); this will work, by setting an instance variable and using it throughout the methods, but it will kind of break the API; users shouldn't always feed a connection object when they create a Model object. Create the connection object separately, store it somewhere (where?), and on Model's __init__() get the connection from where it has been stored and put it in an instance variable (this is what I thought to be the best approach, but I have no idea of the best spot to store that connection object). Create a connection pool which will be fed with the connection object, then on Model's __init__() fetch the connection from the connection pool (how do I know which connection to fetch from the pool?). If there are any other approaches, please do tell. Also, I would like to know the proper way to do this.
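A minimal sketch of the queue-backed pool behind a module-level singleton, as suggested in the answer; sqlite3 is only a stand-in for whatever driver the ORM actually wraps:

import sqlite3
from queue import Queue

class ConnectionPool(object):
    # Queue-backed pool: acquire() blocks when every connection is in use,
    # so callers never have to decide *which* connection to take.
    def __init__(self, size=5, dsn='app.db'):
        self._pool = Queue()
        for _ in range(size):
            self._pool.put(sqlite3.connect(dsn, check_same_thread=False))

    def acquire(self):
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)

# The module-level instance plays the singleton role; Model methods can do
# conn = pool.acquire() ... pool.release(conn) without any constructor args.
pool = ConnectionPool()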
Simple explanation of Google App Engine NDB Datastore
21,658,988
13
17
7,423
1
python,google-app-engine,app-engine-ndb
I think you're overcomplicating things in your mind. When you create an entity, you can either give it a named key that you've chosen yourself, or leave that out and let the datastore choose a numeric ID. Either way, when you call put, the datastore will return the key, which is stored in the form [<entity_kind>, <id_or_name>] (actually this also includes the application ID and any namespace, but I'll leave that out for clarity). You can make entities members of an entity group by giving them an ancestor. That ancestor doesn't actually have to refer to an existing entity, although it usually does. All that happens with an ancestor is that the entity's key includes the key of the ancestor: so it now looks like [<parent_entity_kind>, <parent_id_or_name>, <entity_kind>, <id_or_name>]. You can now only get the entity by including its parent key. So, in your example, the Shoe entity could be a child of the Person, whether or not that Person has previously been created: it's the child that knows about the ancestor, not the other way round. (Note that the ancestry path can be extended arbitrarily: the child entity can itself be an ancestor, and so on. In this case, the group is determined by the entity at the top of the tree.) Saving entities as part of a group has advantages in terms of consistency, in that a query inside an entity group is always guaranteed to be fully consistent, whereas outside it the query is only eventually consistent. However, there are also disadvantages, in that the write rate of an entity group is limited to 1 per second for the whole group. (A tiny key/ancestor sketch follows this entry.)
0
1
0
0
2014-02-09T05:53:00.000
2
1.2
true
21,655,862
0
0
1
1
I'm creating a Google App Engine application (python) and I'm learning about the general framework. I've been looking at the tutorial and documentation for the NDB datastore, and I'm having some difficulty wrapping my head around the concepts. I have a large background with SQL databases and I've never worked with any other type of data storage system, so I'm thinking that's where I'm running into trouble. My current understanding is this: The NDB datastore is a collection of entities (analogous to DB records) that have properties (analogous to DB fields/columns). Entities are created using a Model (analogous to a DB schema). Every entity has a key that is generated for it when it is stored. This is where I run into trouble because these keys do not seem to have an analogy to anything in SQL DB concepts. They seem similar to primary keys for tables, but those are more tightly bound to records, and in fact are fields themselves. These NDB keys are not properties of entities, but are considered separate objects from entities. If an entity is stored in the datastore, you can retrieve that entity using its key. One of my big questions is where do you get the keys for this? Some of the documentation I saw showed examples in which keys were simply created. I don't understand this. It seemed that when entities are stored, the put() method returns a key that can be used later. So how can you just create keys and define ids if the original keys are generated by the datastore? Another thing that I seem to be struggling with is the concept of ancestry with keys. You can define parent keys of whatever kind you want. Is there a predefined schema for this? For example, if I had a model subclass called 'Person', and I created a key of kind 'Person', can I use that key as a parent of any other type? Like if I wanted a 'Shoe' key to be a child of a 'Person' key, could I also then declare a 'Car' key to be a child of that same 'Person' key? Or will I be unable to after adding the 'Shoe' key? I'd really just like a simple explanation of the NDB datastore and its API for someone coming from a primarily SQL background.
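A tiny NDB sketch of the points above: a named key, two different kinds sharing one Person ancestor, and a fully consistent ancestor query. The kinds and values are illustrative:

from google.appengine.ext import ndb

class Person(ndb.Model):
    name = ndb.StringProperty()

class Shoe(ndb.Model):
    size = ndb.IntegerProperty()

class Car(ndb.Model):
    plate = ndb.StringProperty()

alice_key = ndb.Key('Person', 'alice')        # key with a chosen name
Person(key=alice_key, name='Alice').put()

# Both a Shoe and a Car can hang off the same Person ancestor.
shoe_key = Shoe(parent=alice_key, size=7).put()
car_key = Car(parent=alice_key, plate='XYZ').put()

# Ancestor query: fully consistent within the entity group.
shoes = Shoe.query(ancestor=alice_key).fetch()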
Python cgi or wsgi
21,656,118
0
0
227
0
python,python-3.x,cgi,mod-wsgi,wsgi
Go ahead with your work. It will be fine if you are using the Common Gateway Interface, but you should keep an eye on network traffic.
0
0
1
0
2014-02-09T06:17:00.000
1
0
false
21,656,058
0
0
1
1
Hi everyone, I'm working on a social network project. I'm using Python's Common Gateway Interface to write everything and to handle the database and ajax. I have a question: I heard that the Web Server Gateway Interface is better than the Common Gateway Interface and can handle more users and higher traffic, but I have already finished more than half of the project. What should I do now? I don't have much time to go back either. Is Python's Common Gateway Interface that bad for a large-scale project?
Using Groups in Django To Map Organizations
21,709,938
0
1
333
0
python,django
Sounds like the proper use case for auth groups to me: some sort of department-head role that can edit the name of a group they belong to (but not the permissions), while whoever IS creating the actual groups and setting permissions is a superuser. Even if you create your own model you can't escape the fact that authority must flow from somewhere. Auth groups have a lot of built-in features around permissions that should save you time.
0
0
0
0
2014-02-09T21:40:00.000
1
1.2
true
21,665,412
0
0
1
1
I am creating a Django app where there will be organizations. Organizations will contain departments –– like human resources/sales –– that will each have their own permissions. The name and roles of groups must be set by the organization itself and won't be known in advance. There will also be different permissions granted within groups –– a sales manager can do more than a salesperson. I am unsure to what extent I should use Django's inbuilt groups to handle permissions. Would it be appropriate to make an organization a group? Should a salesperson be a member of two groups –– a departmental group (sales) and a role-based group (salesperson)?
Google Drive API + App Engine = time out
21,677,121
0
0
588
0
python,google-app-engine,oauth-2.0,google-drive-api,httplib2
You mention "AppEngine's oauth2 library", but then you say "Drive API calls time out". So modifying the Oauth http library won't affect Drive. Are you using the Google library for your Drive calls, or making direct REST HTTP calls? If the former, try ... HttpRequest.setConnectTimeout(55000) , if the latter just ... request.getFetchOptions().setDeadline(55d) NB. Drive is having a brain fart today, so one would hope the underlying problem will go away of its own accord.
0
1
0
0
2014-02-10T10:53:00.000
1
1.2
true
21,675,209
0
0
1
1
So I built an app on App Engine that takes users files and move them to certain folders in the same domain. I made REST api that calls Drive API to list files, rename files, and change permissions etc. On app load, it fires 4 ajax calls to the server to get name and id of folders and checking if certain folder exists. The problem is front end ajax calls time out all the time in production. App engine url fetch has 60 sec limit. I used App engine's oauth2 library which uses a different httplib2. So I modified httplib2 source deadline to max 60 sec but it seems to time out after 30 sec. As a result, Drive API calls time out almost every time and app just doesn't work. I have read the guideline on optimizing drive api calls with partial response and implemented it but didn't see noticeable difference. It's driving me crazy.... please help
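For the Python side of the same idea, a hedged sketch of raising the fetch deadline on App Engine; 55 seconds mirrors the answer's suggestion rather than any documented limit:

from google.appengine.api import urlfetch

# Raise the default URL Fetch deadline that httplib2/the Google API client
# use under the hood on App Engine.
urlfetch.set_default_fetch_deadline(55)

# Or per-Http-object, since httplib2.Http accepts a timeout in seconds:
import httplib2
http = httplib2.Http(timeout=55)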
Private Python API key when using Angular for frontend
21,684,701
0
0
82
0
python,security,angularjs,authentication
As long as you send the sensitive data outside, you are at risk. You can obfuscate your code so that first grade malicious users have a hard time finding the key, but basically breaking your security is just a matter of time as an attacker will have all the elements to analyse your protocol and exchanged data and design a malicious software that will mimic your original client. One possible (although not unbreakable) solution would be to authenticate the users themselves so that you keep a little control over who is accessing the data and revoke infected accounts.
0
0
0
1
2014-02-10T17:50:00.000
1
0
false
21,684,420
0
0
1
1
I have a server implementing a python API. I am calling functions from a frontend that uses Angular.js. Is there any way to add an authentication key to my calls so that random people cannot see the key through the Angular exposed code? Maybe file structure? I am not really sure.
Real-time timeline function like tweetdeck?
22,314,409
0
0
424
0
python,tweepy
Tweepy is a Python library, so there's really no way to use it directly on a website. You could send the data from the stream to a browser using WebSockets or AJAX long polling, however. There's no way to have the Userstream endpoint send tweets to you any faster - all of that happens on Twitter's end. I'm not sure why Tweetdeck would receive them faster, but there's nothing that can be done except to contact Twitter, if you're seeing a slow speed.
0
0
0
0
2014-02-10T20:26:00.000
1
0
false
21,687,216
0
0
1
1
I'm creating an app using Python/Tweepy. I'm able to use the StreamListener to get a real-time "timeline" when indicating a topic or #, $, etc. Is there a way to have a non-stop real-time timeline function, similar to TweetDeck or an embedded widget for a website, for a user ID? When using api.user_timeline I only receive the 20 most recent tweets. Any thoughts?
How do I find the postgres version running on my heroku app?
21,710,704
2
2
37
0
python-2.7,heroku-postgres
$ heroku pg:info --app yourapp
0
0
0
0
2014-02-11T11:39:00.000
1
1.2
true
21,700,792
0
0
1
1
I have a Python/Django app on Heroku. How do I find the Postgres version running on my app?
Python backend and html/ajax front end, need suggestions for my application
21,705,056
0
1
1,253
0
python,ajax
You should use the Django framework for your app. You can integrate your scripts into Django views, and you can also use the loaddata system to insert your YAML data into the database.
0
0
1
0
2014-02-11T14:44:00.000
2
0
false
21,704,977
0
0
1
1
Currently I have written a Python script that extracts data from the Flickr site and dumps it as a Python class object and a YAML file. I was planning to turn this into a website: a simple HTML front page that can send a request to the backend, triggering the Python scripts running there; the response will be parsed and rendered as a table on the HTML page. As I am very new to Python, I am not sure how to plan my project. Are there any suggestions for building this application? Any framework or technologies I should use? For example, I guess I should use AJAX on the front end; how about the backend? Thanks in advance for your suggestions!
Django queue function calls
21,718,657
0
0
297
0
python,django,post,dhtmlx
It seems like you should build the queue in Django. If the rows need to be processed serially on the backend, then insert the change data into a queue and process the queue like an event handler. You could build a send queue using dhtmlx's event handlers and the ajax callback handler, but why? The network is already slow; slowing it down further is the wrong approach.
0
0
0
0
2014-02-11T14:45:00.000
1
0
false
21,704,996
0
0
1
1
I have a small problem with the nature of the data processing and Django. For starters, I have a webpage with an advanced dhtmlx table. When rows are added to the table, DHTMLX automatically sends POST data to my Django backend, where it is processed, and the returned XML data is sent to the webpage. All of it works just fine when adding 1 row at a time, but when adding several rows at a time some problems start to occur. I have checked the order of the data sent to the backend and it's proper (let's say rows with IDs 1, 2, 3, 4 are sent in that order). The problem is that the backend processes each query as it arrives; they usually arrive in the same order (despite the randomness of the Internet), but Django fires the same function for each of them instantly, and it's a complex function that takes some time to compute before sending the response. Every time the function is called there is a change in the database, and one of the variables depends on how big the database table we are altering is. When the same data table is altered in the wrong order (because different threads run at different speeds), the resulting data is rubbish. Is there an automatic solution to queue calls to one web-called function, so that every call goes into the queue and waits for the previous one to complete? I want to make such a queue for this function only.
Python - how to send file from filesystem with a unicode filename?
21,716,642
0
1
887
0
python,unicode,flask
OK, after wrestling with it under the hood for a while I fixed it, but not in a very elegant way, I had to modify the source of some werkzeug things. In "http.py", I replaced str(value) with unicode(value), and replaced every instance of "latin-1" with "utf-8" in both http.py and datastructures.py. It fixed the problem, file gets downloaded fine in both the latest Firefox and Chrome. As I said before, I would rather not have to modify the source of the libraries I am using because this is a pain when deploying/testing on different systems, so if anyone has a better fix for this please share. I've seen some people recommend just making the filename part of the URL but I cannot do this as I need to keep my URLs simple and clean.
0
0
0
1
2014-02-11T23:07:00.000
1
1.2
true
21,715,132
0
0
1
1
So I am using Flask to serve some files. I recently downgraded the project from Python 3 to Python 2.7 so it would work with more extensions, and ran into a problem I did not have before. I am trying to serve a file from the filesystem with a Japanese filename, and when I try return send_from_directory(new_folder_path, filename, as_attachment=True) I get UnicodeEncodeError: 'ascii' codec can't encode characters in position 15-20: ordinal not in range(128) in quote_header_value = str(value) (that is a werkzeug thing). I have the template set to display the filename on the page by just having {{filename}} in the HTML, and it displays just fine, so I'm assuming it is somehow reading the name from the filesystem? Only when I try send_from_directory so the user can download it does it throw this error. I tried a bunch of combinations of .encode('utf-8') and .decode('utf-8'), none of which worked at all, and I'm getting very frustrated with this. In Python 3 everything just worked seamlessly because everything was treated as unicode, and searching for a way to solve this brought up results that it seems I would need a compsci degree to wrap my head around. Does anyone have a fix for this? Thanks.
Can serialized objects be accessed simultaneously by different processes, and how do they behave if so?
21,718,777
1
0
2,682
0
python,serialization,pickle,shelve
Without trying it out I'm fairly sure the answer is: They can both be served at once; however, if one user is reading while the other is writing, the reading user may get strange results. Probably not. Once the tree has been read from the file into memory, the other user will not see the edits of the first user. If the tree hasn't been read from the file yet, then the change will still be picked up. Both changes will be made simultaneously and the file will likely be corrupted. Also, you mentioned shelve. From the shelve documentation: The shelve module does not support concurrent read/write access to shelved objects. (Multiple simultaneous read accesses are safe.) When a program has a shelf open for writing, no other program should have it open for reading or writing. Unix file locking can be used to solve this, but this differs across Unix versions and requires knowledge about the database implementation used. (A minimal advisory-lock sketch follows this entry.) Personally, at this point, you may want to look into using a simple key-value store like Redis with some kind of optimistic locking.
0
0
0
0
2014-02-12T01:36:00.000
2
1.2
true
21,716,890
1
0
1
1
I have data that is best represented by a tree. Serializing the structure makes the most sense, because I don't want to sort it every time, and it would allow me to make persistent modifications to the data. On the other hand, this tree is going to be accessed from different processes on different machines, so I'm worried about the details of reading and writing. Basic searches didn't yield very much on the topic. If two users simultaneously attempt to revive the tree and read from it, can they both be served at once, or does one arbitrarily happen first? If two users have the tree open (assuming they can) and one makes an edit, does the other see the change implemented? (I assume they don't because they each received what amounts to a copy of the original data.) If two users alter the object and close it at the same time, again, does one come first, or is an attempt made to make both changes simultaneously? I was thinking of making a queue of changes to be applied to the tree, and then having the tree execute them in the order of submission. I thought I would ask what my problems are before trying to solve any of them.
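A minimal advisory-lock sketch of the Unix file-locking idea mentioned in the answer: it serializes access to a shelve file and assumes a POSIX system (fcntl is not available on Windows):

import fcntl
import shelve

def with_locked_shelf(path, fn):
    # One lock file guards the shelf; LOCK_EX blocks until we hold the lock.
    with open(path + '.lock', 'w') as lockfile:
        fcntl.flock(lockfile, fcntl.LOCK_EX)
        try:
            db = shelve.open(path)
            try:
                return fn(db)
            finally:
                db.close()
        finally:
            fcntl.flock(lockfile, fcntl.LOCK_UN)

# Usage: with_locked_shelf('tree.db', lambda db: db.update(root=new_tree))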
Google App Engine, Illegal string in dataset id when uploading to local datastore
21,805,320
0
0
141
0
python,google-app-engine
I managed to discover the problem on my own. The issue was with adding s~ before the app_id in the app.yaml file. Despite the Google App Engine documentation stating that s~ should be before the app_id for applications using the High Replication Datastore, this apparently causes an error when uploading to the development server.
0
1
0
0
2014-02-12T05:30:00.000
1
1.2
true
21,719,461
0
0
1
1
Using the bulk loader, I've downloaded the live datastore, and am now trying to upload it to the development server. When running the upload_data command to upload the datastore to the dev server I get the following error: BadRequestError: Illegal string "dev~s~app_id" in dataset id. The command I'm using to upload the data is appcfg.py upload_data --url=://localhost:8080/_ah/remote_api --filename=datastore_2-11-14 The command I used to download the data is appcfg.py download_data --url=://app_id.appspot.com/_ah/remote_api --filename=datastore_2-11-14
In Django, is it correct to create many models in models.py?
21,720,470
-2
0
181
0
python,django,django-models,django-admin
All tables are created in models.py, and all queries can be written in views.py.
0
0
0
0
2014-02-12T05:50:00.000
2
-0.197375
false
21,719,726
0
0
1
1
I am creating a new web application with Django but I have one question: if I need to create 30 or 40 database tables, do I need to put all the models into models.py? I think that would be very complicated to maintain, because the file may grow a lot. Is that the optimal way to do it?
Generating entity id with easily distinguished types/groups
21,740,467
0
1
115
0
javascript,python,postgresql,redis,uniqueidentifier
UUID version 4 is basically 122 random bits + 4 bits for the UUID version + 2 bits reserved. Its uniqueness relies on the low probability of generating the same 122 bits. UUID version 5 is basically 122 hash bits + 4 bits for the UUID version + 2 bits reserved. Its uniqueness relies on the low probability of collision for a 122-bit truncated SHA1 hash. When you replace N bits of a UUID (as long as they are not "version" or "reserved" bits), you make a tradeoff: the probability of collision becomes 2^N times higher. For example, if you use UUID4, the probability of collision is negligible, on the order of 2^-122. At the same time, if you have up to 8 entity types and use UUID4 with 8 bits replaced, the probability of collision becomes on the order of 2^-114, which is bigger, though still negligible. So using UUID4 with N bits replaced may be a safe option without taking extra care to guarantee uniqueness (a sketch follows this entry).
0
0
0
0
2014-02-12T14:35:00.000
2
0
false
21,730,906
0
0
1
1
I need to generate unique ids in a distributed environment. The catch is that each id has to carry group/type information that can be examined by a simple script. Details: I have some fixed number of entity types (let's call them: message, resource, user, session, etc). I need to generate unique ids in a form such that I can know where to direct a request based only on the id - without a db, list, or anything. I have considered uuid in version 3 or 5, but as far as I can see it is impossible to recover the "namespace" provided for generating the id. I have also considered just replacing the first x characters of a uuid with fixed values, but then I would lose uniqueness. I have also considered Twitter's Snowflake or Instagram's way of generating ids, but I don't know the number of nodes in each group and I cannot assume anything. I will be using them in JS, Python, Redis and Postgresql, so portability of code (and representation - big integer representation is full of bugs in JavaScript) is required. So either a pure "number" or a string that can be formatted as a uuid (binary representation) for the database. edit: I will generate them in Python or in Postgresql and only pass them to JavaScript and Redis.
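A small Python 3 sketch of the bit-replacement tradeoff discussed above: spend the first byte of a UUID4 on a type tag. The type codes are made up, and byte 0 is safe to overwrite because the version/variant bits live in bytes 6 and 8:

import uuid

TYPE_CODES = {'user': 0x01, 'message': 0x02, 'session': 0x03}

def typed_uuid(entity_type):
    raw = bytearray(uuid.uuid4().bytes)
    raw[0] = TYPE_CODES[entity_type]   # 8 random bits traded for the tag
    return uuid.UUID(bytes=bytes(raw))

def type_of(u):
    reverse = {v: k for k, v in TYPE_CODES.items()}
    return reverse.get(bytearray(u.bytes)[0])

# type_of(typed_uuid('user')) == 'user', and the value still round-trips
# through a Postgres uuid column, Redis and JS strings as an ordinary UUID.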
Django http connection timeout
47,601,617
3
7
14,680
0
python,django,apache,mod-wsgi
I solved this problem with: python manage.py runserver --http_timeout 120
0
0
0
0
2014-02-12T18:27:00.000
2
0.291313
false
21,736,489
0
0
1
1
I have Django + mod_wsgi + Apache server. I need to change default HTTP connection timeout. There is Timeout directive in apache config but it's not working. How can I set this up?
Find user's Facebook friends who are also registered users on my site in Django
21,740,472
0
1
220
0
python,django,facebook,facebook-graph-api,facebook-fql
The find-friends button is indeed inefficient; I would search the database just once, at user registration.
0
0
0
0
2014-02-12T18:39:00.000
1
0
false
21,736,742
0
0
1
1
I'm developing a Django site that has as a feature its own social network. Essentially, the site stores connections between users. I want users to be able to import their pre-existing Facebook connections (friends) to my site, so that they will automatically be connected to their existing Facebook friends who are users on my site. The way I envision doing this is by allowing users to login with Facebook (probably with something like django-socialauth), and store the user's Facebook ID in the database. Then, each time a user clicks the "find friends from Facebook" button, I could query the Facebook API to see if any of my existing users are their friends. What's the best way to do this? I could use FQL and get a list of their friend's Facebook IDs, and then check that against my users', but that seems really inefficient at scale. Is there any way to do this without running through each of my users, one by one, and checking whether their Facebook ID is in the user's friends list? Thanks.
Need to read XML files as a stream using BeautifulSoup in Python
21,740,512
2
3
1,799
0
python,xml
Beautiful Soup has no streaming API that I know of. You have, however, alternatives. The classic approach for parsing large XML streams is using an event-oriented parser, namely SAX; in Python, xml.sax.xmlreader. It will not choke on malformed XML: you can avoid erroneous portions of the file and extract information from the rest. SAX, however, is low-level and a bit rough around the edges; in the context of Python, it feels terrible. The xml.etree.cElementTree implementation, on the other hand, has a much nicer interface, is pretty fast, and can handle streaming through the iterparse() method (a sketch follows this entry). ElementTree is superior, if you can find a way to manage the errors.
0
0
1
0
2014-02-12T21:44:00.000
1
1.2
true
21,740,376
0
0
1
1
I have a dilemma. I need to read very large XML files from all kinds of sources, so the files are often invalid XML or malformed XML. I still must be able to read the files and extract some info from them. I do need to get tag information, so I need an XML parser. Is it possible to use Beautiful Soup to read the data as a stream instead of reading the whole file into memory? I tried to use ElementTree, but I cannot because it chokes on any malformed XML. If Python is not the best language to use for this project, please add your recommendations.
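A short iterparse sketch of the streaming approach; 'record' is an assumed tag name, and the except clause shows one way to "manage the errors" by keeping everything parsed before the malformed spot:

import xml.etree.ElementTree as ET  # cElementTree is the faster C drop-in on Python 2

def stream_records(path, tag='record'):
    try:
        for event, elem in ET.iterparse(path, events=('end',)):
            if elem.tag == tag:
                yield dict(elem.attrib)
                elem.clear()  # free the subtree we have already consumed
    except ET.ParseError as exc:
        # Malformed input: everything yielded before this point is usable.
        print('stopped at parse error: %s' % exc)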
Best format to pack data for correlation determination?
21,740,743
1
1
73
0
python,csv,scipy,correlation
Each dataset is a column, and all the datasets combine to make a CSV. It gets read as a 2D array by numpy.genfromtxt(), and then you call numpy.corrcoef() to get the correlation coefficients. Note: you should also consider the same data layout, but using pandas: read the CSV into a dataframe with pandas.read_csv() and get the correlation coefficients with .corr() (a sketch of both follows this entry).
0
0
0
0
2014-02-12T21:50:00.000
1
1.2
true
21,740,498
0
1
1
1
I'm using a Java program to extract some data points, and am planning on using scipy to determine the correlation coefficients. I plan on extracting the data into a csv-style file. How should I format each corresponding dataset, so that I can easily read it into scipy?
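A minimal sketch of both routes, assuming a 'data.csv' whose first row holds column names:

import numpy as np
import pandas as pd

# NumPy route: columns are the variables, so pass rowvar=False to corrcoef.
arr = np.genfromtxt('data.csv', delimiter=',', skip_header=1)
print(np.corrcoef(arr, rowvar=False))

# pandas route: one call, with labelled rows/columns in the output.
df = pd.read_csv('data.csv')
print(df.corr())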
How to know whether login using social account or website login in Django
21,747,354
0
0
313
0
python,django,django-socialauth
A simple solution (when you don't know the actual implementation) can be: create a new table with user as a foreign key and one more column that will work as a flag for the type of authentication. The flag can be 1 for a Django user and 2 for a social-auth user. While creating the user in your system, populate this table accordingly, then hide the change-password option on the basis of the same table.
0
0
0
0
2014-02-13T06:12:00.000
2
1.2
true
21,746,564
0
0
1
2
I'm developing a site using Django and am using the django-social-auth API for social login authentication. On my website there is no need to display the change-password option when a user logs in with a social account, so how do I hide that option for social logins? Alternatively, is there a way to know whether the login used a social account or the website's own login? Kindly let me know if you have an idea to solve this issue. Thank you.
How to know whether login using social account or website login in Django
21,769,885
0
0
313
0
python,django,django-socialauth
Check the session value social_auth_last_login_backend: if it's set, it will contain the last social backend used to log in; if it's not set, it means that the user logged in with non-social auth.
0
0
0
0
2014-02-13T06:12:00.000
2
0
false
21,746,564
0
0
1
2
I'm developing a site using Django and am using the django-social-auth API for social login authentication. On my website there is no need to display the change-password option when a user logs in with a social account, so how do I hide that option for social logins? Alternatively, is there a way to know whether the login used a social account or the website's own login? Kindly let me know if you have an idea to solve this issue. Thank you.
Django 1.4 - django.db.models.FileField.save(filename, file, save=True) produces error with non-ascii filename
21,777,287
0
1
1,378
0
python,django,unicode,encoding,utf-8
In your FileField definition the 'upload_to' argument might be like os.path.join(u'uploaded', 'files', '%Y', '%m', '%d') (note the first element, u'uploaded', starts with u), so all the strings will be of type unicode; this may help you.
0
0
0
0
2014-02-13T07:26:00.000
2
0
false
21,747,765
0
0
1
1
I'm making a file-upload feature using django.db.models.FileField of Django 1.4. When I try to upload a file whose name includes non-ascii characters, it produces the error below: 'ascii' codec can't encode characters in position 109-115: ordinal not in range(128). The actual code is like this: file = models.FileField(_("file"), max_length=512, upload_to=os.path.join('uploaded', 'files', '%Y', '%m', '%d')) and then file.save(filename, file, save=True) <- this line produces the error above if 'filename' includes non-ascii characters. If I try to use unicode(filename, 'utf-8') instead of filename, it produces the error TypeError: decoding Unicode is not supported. How can I upload a file whose name has non-ascii characters? Info about my environment: sys.getdefaultencoding(): 'ascii'; sys.getfilesystemencoding(): 'UTF-8'; using Django-1.4.10-py2.7.egg.
Cronjob at given interval
21,785,587
0
0
52
0
python,cron,raspberry-pi
You can always add the unix command sleep xx to the cronjob before executing your command. Example: */15 * * * * (sleep 20; /root/crontabjob.sh) Now the job will run every 15 minutes and 20 seconds (00:15:20, 00:30:20), 00:45:20 ....)
0
0
0
1
2014-02-13T17:25:00.000
2
0
false
21,761,216
0
0
1
1
I have a python script that loads up a webpage on a Raspberry Pi. This script MUST run at startup, and then every 15 minutes. In the future there will be many of these, maybe 1000 or even more. Currently I am doing this with a cronjob, but the problem with that is that all 1000 Raspberry Pis will connect to the webpage at the very same time (plus or minus a few seconds, given that they take the precise clock from the web). It would be good to execute the command 15 minutes after the last run, regardless of the time. I like the cronjob solution because I have nothing running in the background, so it simply does its job and then it's over. On the other hand, cron takes care only of the minutes, and not the seconds, so even if I scatter the 1000 Pis over these 15 minutes I will still end up having about 80 simultaneous requests to the webpage every single minute. Is there a nice solution to this?
Reload single module in cherrypy?
21,762,864
5
0
468
0
python,cherrypy
Reloading modules is very, very hard to do in a sane way. It leads to the potential of stale objects in your code with impossible-to-interrogate state and subtle bugs. It's not something you want to do. What real web applications tend to do is to have a server that stays alive in front of their application, such as Apache with mod_proxy, to serve as a reverse proxy. You start your new app server, change your reverse proxy's routing, and only then kill the old app server. No downtime. No insane, undebuggable code.
0
0
0
0
2014-02-13T18:29:00.000
1
1.2
true
21,762,574
0
0
1
1
Is it possible to use the python reload command (or similar) on a single module in a standalone cherrypy web app? I have a CherryPy based web application that often is under continual usage. From time to time I'll make an "important" change that only affects one module. I would like to be able to reload just that module immediately, without affecting the rest of the web application. A full restart is, admittedly, fast, however there are still several seconds of downtime that I would prefer to avoid if possible.
Simple way to send emails asynchronously
21,764,727
0
0
345
0
django,python-multithreading,django-commands
If you don't want to implement celery (which in my opinion isn't terribly difficult to set up), then your best bet is probably implementing a very simple queue using your database. It would probably work along the lines of this: the system determines that an email needs to be sent and creates a row in the database with a status of 'created' or 'queued'. On the other side there will be a process that scans your "queue" periodically. If it finds anything to send (in this case, any rows with status 'created'/'queued'), it will update the status to 'sending'. The process will then proceed to send the email and finally update the status to 'sent' (a sketch follows this entry). This will take care of both asynchronously sending the emails and keeping track of the statuses of all emails should things go awry. You could potentially go with a Redis backend for your queue if the additional updates are too taxing on your database as well.
0
1
0
0
2014-02-13T19:38:00.000
2
0
false
21,763,924
0
0
1
1
I'm running a django app and when some event occurs I'd like to send email to a list of recipients. I know that using Celery would be an intelligent choice, but I'd like to know if there's another, most simple way to do it without having to install a broker server, supervisor to handle the daemon process running in the background... I'd like to find a more simple way to do it and change it to celery when needed. I'm not in charge of the production server and I know the guy who's running it will have big troubles setting all the configuration to work. I was thinking about firing a django command which opens several processes using multiprocessing library or something like that.
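A rough sketch of the scanning process as a Django management command, runnable from cron; EmailJob is a hypothetical model with status, recipient, subject and body fields:

from django.core.mail import send_mail
from django.core.management.base import BaseCommand

from myapp.models import EmailJob  # hypothetical app and model

class Command(BaseCommand):
    help = 'Send queued emails'

    def handle(self, *args, **options):
        for job in EmailJob.objects.filter(status='queued'):
            job.status = 'sending'
            job.save(update_fields=['status'])
            send_mail(job.subject, job.body,
                      'noreply@example.com', [job.recipient])
            job.status = 'sent'
            job.save(update_fields=['status'])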
Python in the browser (Skulpt/Codeskulptor) + tests?
21,801,491
0
0
324
0
python,codeskulptor
This is probably an insecure solution, but it works for my current purposes. My PHP script calls shell_exec('python3 -c "X"') where X is the user-supplied code appended with the code I use for testing, e.g. calling their created functions etc.
0
0
0
0
2014-02-13T19:51:00.000
1
1.2
true
21,764,170
1
0
1
1
For a school project I want to make a small Codecademy-like site that teaches programming for beginners. I want the site to teach Python as it has a syntax that is suitable for beginners to learn, and for this reason I found Skulpt to be useful as it has both browser text and drawing output capabilities. My question now though is, is there some way to integrate testing with the code the user writes, so the site can mark the code as correct or incorrect? E.g. a task could be to write a function that returns the nth fibonacci number, and the site runs the user-provided code and checks for instance that their fib(5) returns 8. How does CodingBat do it?
Does Django Block When Celery Queue Fills?
21,765,816
1
0
400
0
python,django,multithreading,rabbitmq,celery
It's impossible to really answer your question without an in-depth analysis of your actual code AND benchmark protocol, and while I have some working experience with Python, Django and Celery I wouldn't be able to do such an in-depth analysis. Now there are a couple of very obvious points: if your workers are running on the same computer as your Django instance, they will compete with the Django process(es) for CPU, RAM and IO. If the benchmark "client" is also running on the same computer then you have a "heisenbench" case - bombing a server with hundreds of HTTP requests per second also uses a serious amount of resources... To make a long story short: concurrent/parallel programming won't give you more processing power, it will only allow you to (more or less) easily scale horizontally.
0
1
0
0
2014-02-13T20:54:00.000
2
1.2
true
21,765,266
0
0
1
2
I'm doing some metric analysis on my web app, which makes extensive use of celery. I have one metric which measures the full trip from a post_save signal through a celery task (which itself calls a number of different celery tasks) to the end of that task. I've been hitting the server with up to 100 requests in 5 seconds. What I find interesting is that when I hit the server with hundreds of requests (which entails thousands of celery worker processes being queued), the time it takes for the trip from post_save to the end of the main celery task increases significantly, even though I never do any additional database calls, and none of the celery tasks should be blocking the main task. Could the fact that there are so many celery tasks in the queue when I make a bunch of requests really quickly be slowing down the logic in my post_save function and main celery task? That is, could the processing associated with getting the sub-tasks that the main celery task creates onto a crowded queue be having a significant impact on the time it takes to reach the end of the main celery task?
Does Django Block When Celery Queue Fills?
34,550,948
0
0
400
0
python,django,multithreading,rabbitmq,celery
I'm not sure about slowing down, but it can cause your application to hang. I've had this problem where one application would back up several other queues that had no workers. My application could then no longer queue messages. If you open up a django shell, try to queue a task, and then hit ctrl+c, you should get a stack trace. I can't quite remember exactly what it should look like, but if you post it here I can confirm whether you're hitting the same problem.
0
1
0
0
2014-02-13T20:54:00.000
2
0
false
21,765,266
0
0
1
2
I'm doing some metric analysis on my web app, which makes extensive use of celery. I have one metric which measures the full trip from a post_save signal through a celery task (which itself calls a number of different celery tasks) to the end of that task. I've been hitting the server with up to 100 requests in 5 seconds. What I find interesting is that when I hit the server with hundreds of requests (which entails thousands of celery tasks being queued), the time it takes for the trip from post_save to the end of the main celery task increases significantly, even though I never do any additional database calls, and none of the celery tasks should be blocking the main task. Could the fact that there are so many celery tasks in the queue when I make a bunch of requests really quickly be slowing down the logic in my post_save function and main celery task? That is, could the processing associated with getting the sub-tasks that the main celery task creates onto a crowded queue be having a significant impact on the time it takes to reach the end of the main celery task?
Selenium webdriver, Python - target partial CSS selector?
22,903,875
0
2
2,815
0
python,selenium-webdriver
Simple CSS selectors you can use: css=span.error for Error, css=span.warning for Warning, and css=span.critical for Critical Error.
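A short sketch of how this looks from Python, using the older Selenium bindings of that era (find_elements_by_css_selector). The point is that a CSS selector can match on a subset of classes, so you never need the auto-generated id: input.error matches every <input> whose class list contains error. The URL is a placeholder.

```python
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com/form")  # placeholder URL

# Matches input#dp156435476435.textinput...error and any other <input>
# carrying the "error" class, regardless of its other classes or its id.
error_fields = driver.find_elements_by_css_selector("input.error")
for field in error_fields:
    print(field.get_attribute("id"))

driver.quit()
```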
0
0
1
0
2014-02-13T21:00:00.000
3
0
false
21,765,396
0
0
1
1
I am trying to target specific CSS elements on a page, but the problem is that they have varying selector names. For instance, input#dp156435476435.textinput.wihtinnextyear.datepicker.hasDatepicker.error. I need to target the element by CSS because I am specifically looking for the .error class at the end, and that only appears in the CSS (I'm testing error validation for fields on a website). I know that if I were targeting class/name/href/id/etc., I could use xpath, but I'm not aware of a partial CSS selector in selenium webdriver. Any help would be appreciated, thanks!
Django and external sqlite db driven by python script
21,768,188
1
0
1,298
1
python,django,sqlite
If you care about taking control over every single aspect of how you render your data in HTML and serve it to others, then Django is certainly a great tool to solve your problem. Django's ORM models make it easier for you to read and write to your database, and they're database-agnostic, which means you can reuse the same code with a different database (like MySQL) in the future. So, to wrap up: if you're planning to do more development in the future, use Django; if you only care about generating these HTML pages once and for all, then don't. PS: With Django, you can easily integrate these scripts into your Django project as management commands, run them with cronjobs, and integrate everything you develop together with a unified data access layer.
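A minimal sketch of the management-command idea from the PS. The app name (reports), command name (update_db), and model are all hypothetical; the point is that the external script's logic can live inside Django and reuse the ORM, then be run from cron with python manage.py update_db.

```python
# reports/management/commands/update_db.py (hypothetical path)
from django.core.management.base import BaseCommand

from reports.models import Measurement  # hypothetical model mirroring your table


class Command(BaseCommand):
    help = "Import rows produced by the external script into the Django database."

    def handle(self, *args, **options):
        # Replace this stub with your script's real data source.
        for name, value in [("sample", "42")]:
            Measurement.objects.update_or_create(name=name, defaults={"value": value})
        self.stdout.write("done")
```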
0
0
0
0
2014-02-13T22:48:00.000
2
1.2
true
21,767,229
0
0
1
1
I am just beginning to learn Django and working through the tutorial, so sorry if this is very obvious. I already have a set of Python scripts whose ultimate result is an sqlite3 db that gets constantly updated; is Django the right tool for turning this sqlite db into something like a pretty HTML table for a website? I can see that Django uses an sqlite db for managing groups/users and data from its apps (like the polls app in the tutorial), but I'm not yet sure where my external sqlite db, driven by my other scripts, fits into the grand scheme of things. Would I have to modify my external python scripts to write out to a table in the Django db (db.sqlite3 in the Django project dir, at least in the tutorial), then make a Django model based on my database structure and fields? Basically, I think my question boils down to: 1) Do I need to create a Django model based on my db, then access the one and only Django "project db", and have my external script write into it? 2) Or can Django somehow utilise a separate db driven by another script? 3) Finally, is Django the right tool for such a task, before I invest weeks of reading...
Does NDB still index with default=None or properties set to None?
21,767,994
6
2
363
0
google-app-engine,python-2.7
Explicitly setting a property to None is defining a value, and yes, defaults work and the property will be indexed. This assumes None is a valid value for the particular property type. Some issues will arise though: as you pointed out, None is often used as a sentinel value, so how do you tell the difference between no value provided and an explicit None?
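A minimal sketch of the behaviour described above, under the assumption that None is legal for the property type (it is for StringProperty). With default=None the property is still written and indexed as None, so the equality query can match it:

```python
from google.appengine.ext import ndb


class MyModel(ndb.Model):
    some_property = ndb.StringProperty(default=None)


# An entity saved without touching some_property stores an indexed None...
MyModel().put()

# ...so this query can find it (the == None comparison is how ndb spells it):
results = MyModel.query(MyModel.some_property == None).fetch()
```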
0
0
0
0
2014-02-13T23:35:00.000
1
1
false
21,767,951
0
0
1
1
I'd like to be able to run a query like: MyModel.query(MyModel.some_property == None) and get results. I know that if I don't put a default=<some default> in a property, I won't be able to query for it, but if I set default=None, will it index it? Similarly, does setting values to None cause properties to be indexed in ndb.Model? What if you pass some_keyword_arg=None to the constructor? I know that doing something like ndb.StringProperty(default='') means you can query on it; I'm just not clear on the semantics of using None.
How to add path for the file today.html in views.py and settings.py in django
21,795,994
0
0
1,522
0
python,django
First, copy the file today.html into your project's templates folder. Then add that folder's path to settings.py, i.e. your-project-path/your-templates-folder (containing today.html). Next, create a view function in views.py that renders today.html. Finally, add the URL in urls.py, i.e. url(r'^today/$', 'your-project.views.today', name='today'), and that's it.
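A minimal sketch of those steps for the Django 1.x era that matches the url() call above. All names (mysite, the templates directory) are placeholders:

```python
# settings.py - point Django at the folder you copied today.html into
import os

BASE_DIR = os.path.dirname(os.path.dirname(__file__))
TEMPLATE_DIRS = (os.path.join(BASE_DIR, "templates"),)

# views.py
from django.shortcuts import render

def today(request):
    return render(request, "today.html")

# urls.py
from django.conf.urls import url
from mysite import views

urlpatterns = [
    url(r"^today/$", views.today, name="today"),
]
```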
0
0
0
0
2014-02-14T06:53:00.000
3
0
false
21,772,673
0
0
1
2
I have an HTML file named today.html in my home directory. I have to load that file on a click of a link using django. How do I add the file path in the views.py and settings.py files?
How to add path for the file today.html in views.py and settings.py in django
21,772,731
0
0
1,522
0
python,django
You could add your home directory path to the TEMPLATE_DIRS setting in your project's settings.py file. Then when you try to render the template in your view, Django will be able to find it.
0
0
0
0
2014-02-14T06:53:00.000
3
0
false
21,772,673
0
0
1
2
I have an HTML file named today.html in my home directory. I have to load that file on a click of a link using django. How do I add the file path in the views.py and settings.py files?
Django Middleware, single action after login
21,784,110
0
0
795
0
django,python-2.7,login,django-middleware
Maybe you should try using the user_logged_in signal instead of middleware? You can also check the user object from the request with is_anonymous; maybe that helps.
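A minimal sketch of the signal-based approach. The receiver fires exactly once per login, so there is no need to compare last_login timestamps; the session flag name is hypothetical:

```python
from django.contrib.auth.signals import user_logged_in
from django.dispatch import receiver


@receiver(user_logged_in)
def mark_fresh_login(sender, request, user, **kwargs):
    # A later view (or middleware) can pop this flag and redirect exactly once.
    request.session["just_logged_in"] = True
```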
0
0
0
0
2014-02-14T15:11:00.000
2
0
false
21,782,897
0
0
1
1
Let's say I created a middleware which should redirect the user after login to a view with a "next" parameter taken from LOGIN_REDIRECT_URL. But it should do this only once, directly after logging in, not with every request to LOGIN_REDIRECT_URL. At the moment I check User.last_login and compare it with datetime.datetime.now(), but that doesn't seem like a reasonable solution. Any better ideas?
GAE Request Timeout when user uploads csv file and receives new csv file as response
21,802,072
2
0
79
0
python,google-app-engine
You have many options: use a timer in your client to check periodically (e.g. every 15 seconds) whether the file is ready - this is the simplest option and requires only a few lines of code; use the Channel API - it's elegant, but overkill unless you face similar problems frequently; or email the results to the user.
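A minimal sketch of the server side of the first option (webapp2 on App Engine, all names hypothetical). The task writes its result and flips a done flag; the client's JavaScript timer polls this endpoint until done is true, then fetches the file:

```python
import json

import webapp2
from google.appengine.ext import ndb


class Job(ndb.Model):
    done = ndb.BooleanProperty(default=False)
    result_url = ndb.StringProperty()


class StatusHandler(webapp2.RequestHandler):
    def get(self):
        # The client passes back the job key it received when the task was queued.
        job = ndb.Key(urlsafe=self.request.get("job")).get()
        self.response.headers["Content-Type"] = "application/json"
        self.response.write(json.dumps(
            {"done": job.done, "url": job.result_url if job.done else None}))


app = webapp2.WSGIApplication([("/status", StatusHandler)])
```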
0
1
0
0
2014-02-15T17:05:00.000
2
0.197375
false
21,800,806
0
0
1
1
I have an app on GAE that takes csv input from a web form and stores it to a blob, does some stuff to obtain new information using input from the csv file, then uses csv.writer on self.response.out to write a new csv file and prompt the user to download it. It works well, but my problem is that if it takes over 60 seconds, it times out. I've tried to set up the "do some stuff" part as a task in the task queue, and it would work, except that I can't make the user wait while it's running, there's no way of automatically calling the post that writes out the new csv file when the task completes, and having the user periodically push a button to see if it is done is less than optimal. Is there a better solution to a problem like this than using the task queue and making the user manually push a button periodically to check whether the task is complete?
Is there a way to tell a browser to download a file as a different name than as it exists on disk?
21,817,783
0
0
143
1
python,amazon-s3,flask
I'm stupid. Right in the Flask API docs it says you can include the parameter attachment_filename in send_from_directory if it differs from the filename in the filesystem.
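A minimal sketch of that answer (Flask of that era, where the keyword was attachment_filename; newer Flask renamed it download_name). The storage directory and the database helper are hypothetical:

```python
from flask import Flask, send_from_directory

app = Flask(__name__)
UPLOAD_DIR = "/srv/uploads"  # placeholder: where the UUID-named files live


@app.route("/download/<file_id>")
def download(file_id):
    # On disk the file is stored under a random UUID, but the browser saves
    # it under the original name looked up from the database.
    original_name = lookup_original_name(file_id)  # hypothetical DB lookup
    return send_from_directory(UPLOAD_DIR, file_id,
                               as_attachment=True,
                               attachment_filename=original_name)
```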
0
0
0
0
2014-02-16T03:48:00.000
1
1.2
true
21,807,032
0
0
1
1
I am trying to serve up some user uploaded files with Flask, and have an odd problem, or at least one that I couldn't turn up any solutions for by searching. I need the files to retain their original filenames after being uploaded, so they will have the same name when the user downloads them. Originally I did not want to deal with databases at all, and solved the problem of filename conflicts by storing each file in a randomly named folder, and just pointing to that location for the download. However, stuff came up later that required me to use a database to store some info about the files, but I still kept my old method of handling filename conflicts. I have a model for my files now, and storing the name would be as simple as just adding another field, so that shouldn't be a big problem. I decided, pretty foolishly after I had written the implementation, on using Amazon S3 to store the files. Apparently S3 does not deal with folders the way a traditional filesystem does, and I do not want to deal with the surely convoluted task of figuring out how to create folders programmatically on S3; in retrospect, this was a stupid way of dealing with the problem in the first place, when stuff like SQLAlchemy exists that makes databases easy as pie. Anyway, I need a way to store multiple files with the same name on S3, without using folders. I thought of just renaming the files with a random UUID after they are uploaded, and then when they are downloaded (the user visits a page and presses a download button, so I need not have the filename in the URL), telling the browser to save the file as its original name retrieved from the database. Is there a way to implement this in Python w/Flask? When it is deployed I am planning on having the web server handle the serving of files; will it be possible to do something like this with the server? Or is there a smarter solution?
What is the difference between mod_wsgi and uwsgi?
21,814,847
3
6
5,775
0
python,apache,nginx,wsgi,uwsgi
They are just two different ways of running WSGI applications. As for a mod_wsgi for Nginx: have you tried googling for mod_wsgi nginx? Any WSGI-compliant server offers that application(environ, start_response) entry point; that's what the WSGI specification requires. And yes, uwsgi is also a separate protocol, but that's only how uwsgi communicates with Nginx. With mod_wsgi the Python part runs inside the web server process; with uwsgi you run a separate app.
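A minimal sketch of that shared entry point: the same callable can be served by mod_wsgi or uwsgi unchanged; only the deployment configuration differs. The file name and port are placeholders.

```python
# app.py - the WSGI entry point required by the spec (PEP 3333).
def application(environ, start_response):
    body = b"Hello, WSGI!"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Served standalone by uwsgi (hypothetical invocation):
#   uwsgi --http :8000 --wsgi-file app.py
```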
0
1
0
1
2014-02-16T17:14:00.000
1
1.2
true
21,814,585
0
0
1
1
There seems to be a mod_wsgi module for Apache and a uwsgi module for Nginx. And there also seem to be a wsgi protocol and a uwsgi protocol. I have the following questions. Are mod_wsgi and uwsgi just different implementations that provide WSGI capabilities to the Python web developer? Is there a mod_wsgi for Nginx? Does uwsgi also offer the application(environ, start_response) entry point to developers? Is uwsgi also a separate protocol apart from wsgi? In that case, how is the uwsgi protocol different from the wsgi protocol?
Installing RIDE(Robot Framework)
51,078,298
0
3
21,961
0
automated-tests,wxpython,robotframework,python-2.6
You probably have mismatched versions of wxPython and Python on your machine. Always make sure the wxPython build you install matches your Python version, e.g. a wxPython build for Python 2.7 only works with Python 2.7.
0
0
0
0
2014-02-17T09:23:00.000
3
0
false
21,825,122
0
0
1
1
For automated testing with RIDE (Robot Framework), I had already installed Python 2.6 and wxPython 3.0, and the PATH had already been updated in the environment variables. When I jumped to the last phase, i.e. installing RIDE (version "robotframework-ride-1.3.win32.exe") through the Windows installer, the application was installed, but when I try to launch it with "Run as Administrator", the IDE fails to open. How can I resolve this issue?