Title (string, lengths 11 to 150) | A_Id (int64, 518 to 72.5M) | Users Score (int64, -42 to 283) | Q_Score (int64, 0 to 1.39k) | ViewCount (int64, 17 to 1.71M) | Database and SQL (int64, 0 to 1) | Tags (string, lengths 6 to 105) | Answer (string, lengths 14 to 4.78k) | GUI and Desktop Applications (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | Other (int64, 0 to 1) | CreationDate (string, lengths 23 to 23) | AnswerCount (int64, 1 to 55) | Score (float64, -1 to 1.2) | is_accepted (bool, 2 classes) | Q_Id (int64, 469 to 42.4M) | Python Basics and Environment (int64, 0 to 1) | Data Science and Machine Learning (int64, 0 to 1) | Web Development (int64, 1 to 1) | Available Count (int64, 1 to 15) | Question (string, lengths 17 to 21k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Does the order of decorators matter on a Flask view? | 28,204,335 | 23 | 15 | 3,907 | 0 | python,flask,python-decorators,flask-login | While there probably won't be any problem in this case no matter what the order, you probably want login_required to execute first so that you don't make queries and paginate results that will just get thrown away.
Decorators wrap the original function bottom to top, so when the function is called the wrapper added by each decorator executes top to bottom. @login_required should be below any other decorators that assume the user is logged in so that its condition is evaluated before those others.
@app.route() must always be the top, outermost decorator. Otherwise the route will be registered for a function that does not represent all the decorators.
The broader answer is that it depends on what each of the decorators are doing. You need to think about the flow of your program and whether it would make logical sense for one to come before the other. | 0 | 0 | 0 | 0 | 2015-01-28T23:01:00.000 | 4 | 1.2 | true | 28,204,071 | 0 | 0 | 1 | 1 | I'm using the login_required decorator and another decorator which paginates output data. Is it important which one comes first? |
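Both answers can be illustrated with a minimal sketch in plain Python (no Flask required); the `tracer` helper below is purely illustrative. Decorators wrap bottom to top, so the wrapper added by the topmost decorator runs first at call time:

```python
# Decorators wrap the original function bottom to top, so the wrapper
# added by the topmost decorator executes first when the function is called.
calls = []

def tracer(name):
    def decorator(func):
        def wrapper(*args, **kwargs):
            calls.append(name)  # record which wrapper ran, and in what order
            return func(*args, **kwargs)
        return wrapper
    return decorator

@tracer("route")           # outermost, like @app.route
@tracer("login_required")  # runs before the view body, like @login_required
def view():
    return "page"

view()
print(calls)  # ['route', 'login_required']
```

This is why putting `login_required` closest to the view body (but below the route decorator) makes its check run before any pagination work is done.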
Run python script every 5 minutes under Windows | 28,215,366 | 3 | 3 | 11,625 | 0 | python | If you are using Windows, I would suggest using the Windows Task Scheduler; it's quite simple thanks to the UI, and from there you can trigger your Python code.
For a server environment like Linux, you could set up a cron job. | 0 | 0 | 1 | 0 | 2015-01-29T12:46:00.000 | 2 | 0.291313 | false | 28,215,153 | 0 | 0 | 1 | 1 | I have a simple Python script that scrapes some data from an HTML page and writes the results out to a CSV file. How can I automate the scraping, i.e. kick it off every five minutes, under Windows?
Thanks
Peter |
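If neither Task Scheduler nor cron is available, a crude fallback is to loop inside Python itself. This is a hedged sketch, not what the answer recommends; for a five-minute interval you would pass `interval_seconds=300`, and the lambda stands in for the scraping job:

```python
import time

def run_periodically(job, interval_seconds, iterations):
    """Call `job` repeatedly, sleeping `interval_seconds` between runs."""
    results = []
    for i in range(iterations):
        results.append(job())
        if i < iterations - 1:
            time.sleep(interval_seconds)
    return results

# Stand-in for the scrape-and-write-CSV job; use interval_seconds=300 for 5 min.
print(run_periodically(lambda: "scraped", interval_seconds=0, iterations=3))
```

The downside of this approach is that the script must stay running; a scheduler survives reboots and crashes, which is why the answer prefers it.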
Running different stacks on same cloud instance | 28,217,518 | 0 | 0 | 31 | 0 | php,python,wordpress,amazon-web-services | If you can run it on the same physical server, you can run it on an EC2 server the same way. There is no difference as far as that is concerned. | 0 | 0 | 0 | 1 | 2015-01-29T13:46:00.000 | 2 | 0 | false | 28,216,315 | 1 | 0 | 1 | 1 | I'm running a Python-based application on an AWS instance but now I want to install a Wordpress (PHP) blog as a sub-domain or as a sub-folder as an addition in the application. Is it technically possible to run two different stack applications on a single cloud instance? Currently getting an inscrutable error installing the Wordpress package with the Yum installer. |
Django + Angular deployment on Heroku | 28,218,300 | 1 | 5 | 2,836 | 0 | python,django,angularjs,heroku | There's nothing particularly special about this setup. Angular code is just static files, and can be served from whatever point you want; then the Ajax calls to the REST backend go to the endpoint you determined. | 0 | 0 | 0 | 0 | 2015-01-29T13:58:00.000 | 2 | 0.099668 | false | 28,216,549 | 0 | 0 | 1 | 1 | I have a project consisting of the Django Rest Backend and AngularJS frontend. The project root directory contains two folders: backend and frontend, in the first one there is placed the whole Django app and in the second one the Angular frontend app.
Is it possible to deploy such a structure to Heroku on one subdomain? To be precise, I want to have URLs like this:
myapp.heroku.com - which will load the whole Angular project frontend
myapp.heroku.com/backend - which will be the Rest API endpoint
How to deploy both apps on Heroku to obtain such a solution? Or maybe you have any other suggestions concerning the project structure and deployment? |
Python - BeautifulSoup - German characters in html | 42,434,749 | 0 | 0 | 580 | 0 | python,html | While reading, assign the text to a variable and then decode it; for example, if your text is stored in the variable Var, use Var.decode("utf-8"). | 0 | 0 | 0 | 0 | 2015-01-30T11:11:00.000 | 1 | 0 | false | 28,234,634 | 0 | 0 | 1 | 1 | Dear friendly python experts,
I am using BeautifulSoup to scrape some html text from a site. This site contains German words, such as "Groß" or "Bär". When I print the html text, these characters come out badly mangled, which makes it too hard to search the html text for those words.
How can I replace ß with ss, ä with ae, ü with ue, and ö with oe in the html text?
I have looked everywhere for a solution to this, but it got me nowhere except confusion.
As this is a project, help is very much appreciated! |
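Once the text is decoded, the replacement step the question asks about can be sketched like this; the exact mapping (including the capitalized variants) is an assumption about which substitutions are wanted:

```python
# Transliterate common German characters so plain-ASCII searches work.
GERMAN_MAP = str.maketrans({
    "ß": "ss", "ä": "ae", "ö": "oe", "ü": "ue",
    "Ä": "Ae", "Ö": "Oe", "Ü": "Ue",
})

def transliterate(text):
    return text.translate(GERMAN_MAP)

print(transliterate("Groß oder Bär"))  # Gross oder Baer
```

`str.translate` with a table from `str.maketrans` handles every character in one pass, which is simpler and faster than chaining `.replace()` calls.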
Django/Postgres: FATAL: remaining connection slots are reserved for non-replication superuser connections | 28,395,905 | 4 | 2 | 3,434 | 1 | python,django,postgresql,heroku,django-queryset | I realized that I was using the Django development server in my Procfile. I had accidentally commented it out and committed that to Heroku instead of using gunicorn. Once I switched to gunicorn on the same Heroku plan, the issue was resolved.
Using a production-level application server really makes a big difference. Also, don't code at crazy hours of the day when you're prone to errors. | 0 | 0 | 0 | 0 | 2015-01-30T14:35:00.000 | 1 | 1.2 | true | 28,238,144 | 0 | 0 | 1 | 1 | Recently I've been receiving this error regarding what appears to be an insufficiency in connection slots, along with many of these Heroku errors:
H18 - Request Interrupted
H19 - Backend connection timeout
H13 - Connection closed without response
H12 - Request timeout
Error
django.db.utils.OperationalError in /
FATAL: remaining connection slots are reserved for non-replication superuser connections
Current Application setup:
Django 1.7.4
Postgres
Heroku (2x 2 dynos, Standard-2) 5ms response time, 13rpm Throughput
Are there general good practices for where one should or should not perform querysets in a Django application, or when to close a database connection?
I've never experienced this error before. I have increased my dynos on heroku and allocated significantly more RAM and I am still experiencing the same issue.
I've found similar questions on Stack Overflow but I haven't been able to figure out what might be causing the issue exactly.
I have querysets in Model methods, views, decorator views, context processors.
My first inclination would be that there is an inefficient queryset being performed somewhere causing connections to remain open that eventually crashes the application with enough people accessing the website.
Any help is appreciated. Thanks. |
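For reference, the fix the answer describes amounts to a Procfile line like the one below; the WSGI module path `myproject.wsgi` is an assumption, not taken from the question:

```
web: gunicorn myproject.wsgi
```

With the development server removed from the Procfile, gunicorn serves requests with a managed worker pool instead of the single-threaded development server.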
Flask isn't recognising connections from other clients | 28,257,714 | 0 | 0 | 48 | 0 | python,flask | You need to configure the firewall on your server/workstation to allow connections on port 5000. Setting the IP to 0.0.0.0 allows connections to your machine, but only if you have the port open. Also, you will need to connect via the IP of your machine and not localhost, since localhost will only work from the machine where the server is running. | 0 | 0 | 0 | 0 | 2015-01-31T16:29:00.000 | 1 | 0 | false | 28,253,855 | 0 | 0 | 1 | 1 | I have an apache server setup on a Pi, and I'm trying to learn Flask. I set it up so that the view for the index '/' returns "hello world", then I ran my main program. Nothing happens from the browser on the PC I'm SSH'ing from; I just get an error, but when I used the Pi directly and went to http://localhost:5000/ I got a response. I read about setting Host to '0.0.0.0' but that didn't help. How can I get my Flask app to accept all connections? Does it make a difference that I have an 'index.html' in '/'?
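A runnable sketch of the setup the answer describes: binding to `0.0.0.0` makes the app accept connections from other machines, but the firewall must still allow inbound TCP on port 5000. The route body mirrors the question's "hello world" view:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello world"

if __name__ == "__main__":
    # host="0.0.0.0" listens on all interfaces; the default binding only
    # accepts connections made from the same machine (localhost).
    app.run(host="0.0.0.0", port=5000)
```

From another machine you would then browse to the Pi's LAN IP, e.g. `http://<pi-ip>:5000/`, not to localhost.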
Handling user database in flask web app without ORM like SQLAlchemy | 28,280,443 | 0 | 2 | 970 | 1 | python,sql,orm,flask,sqlalchemy | No, an ORM is not required, just incredibly convenient. SQLAlchemy will manage connections, pooling, sessions/transactions, and a wide variety of other things for you. It abstracts away the differences between database engines. It tracks relationships between tables in convenient collections. It generally makes working with complex data much easier.
If you're concerned about performance, SQLAlchemy has two layers, the orm and the core. Dropping down to the core sacrifices some convenience for better performance. It won't be as fast as using the database driver directly, but it will be fast enough for most use cases.
But no, you don't have to use it. | 0 | 0 | 0 | 0 | 2015-02-02T05:39:00.000 | 1 | 0 | false | 28,271,711 | 0 | 0 | 1 | 1 | Most of the Flask tutorials and examples I see use an ORM such as SQLAlchemy to handle interfacing with the user database. If you have a general working knowledge of SQL, is this extra level of abstraction, heavy with features, necessary? I am tempted to write a lightweight interface/ORM of my own so I better understand exactly what's going on and have full control over the queries, inserts, etc. But are there pitfalls to this approach that I am not considering that may crop up as the project gets more complex, making me wish I used a heavier ORM like SQLAlchemy? |
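To get a feel for the trade-off, here is a hedged sketch of the kind of hand-rolled, lightweight interface the question contemplates, using only the stdlib `sqlite3` module; table and column names are illustrative. Everything SQLAlchemy adds on top (pooling, sessions, relationship tracking, dialect abstraction) would have to be reinvented here as the project grows:

```python
import sqlite3

class SimpleDB:
    """A tiny hand-rolled data-access layer (illustrative, not production)."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)

    def execute(self, sql, params=()):
        # Parameterized execution plus commit; no pooling, no sessions,
        # no relationship handling -- that is the hidden cost of rolling your own.
        cur = self.conn.execute(sql, params)
        self.conn.commit()
        return cur.fetchall()

db = SimpleDB()
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
print(db.execute("SELECT name FROM users"))  # [('alice',)]
```

Something this small is fine for a toy app, which is exactly the point: the ORM's weight buys you the features this sketch omits.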
Google App Engine Python Search Api's Location-based queries (Geosearch) Issues | 29,008,682 | 0 | 0 | 248 | 0 | python,google-app-engine,google-search-api,google-app-engine-python | There can be two reasons for this:
1 - miles being used instead of km
2 - number conversion issues (for example, is 35,322.2 meant as 35322.2? In km? In miles?)
I suggest checking exactly which numbers are processed when executing the distance function; you can programmatically output this data to some logs.
Hope it helps | 0 | 1 | 0 | 0 | 2015-02-02T06:30:00.000 | 1 | 0 | false | 28,272,226 | 0 | 0 | 1 | 1 | I have implemented GAE's Python Search Api and am trying to query based on distance from given geopoint.
My query string is: "distance(location, geopoint(XXX, YYY)) < ZZZ". However, for some reason on the production server, this query string is returning items where the distance is greater than the ZZZ parameter.
Below are actual numbers (production) demonstrating the inaccuracies:
Actual Distance: 343.9m
Query Distance that still gets the result: 325m
Actual Distance: 18,950.3
Query Distance that still gets the result: 13,499m
Actual Distance: 55,979.0
Query Distance that still gets the result: 44,615m
Actual Distance: 559,443.6
Query Distance that still gets the result: 451,167m
Actual Distance: 53.4
Query Distance that still gets the result: 46m
Actual Distance: 35,322.2
Query Distance that still gets the result: 30,808m
Actual Distance: 190.2
Query Distance that still gets the result: 143m
On my development server, these inaccuracies do not exist. I am able to query down to the exact meter and get the expected results.
What could cause this and how to fix it so that I get accurate query results in production? Is anyone else getting the same issue? |
Share object between django users | 28,293,569 | 0 | 3 | 1,011 | 0 | python,django | You don't need any magic to make a singleton-like object in Python. Just write a module, for example shared.py, inside your Django project. Put your dictionary initialization there and import it from anywhere. | 0 | 0 | 0 | 0 | 2015-02-03T06:15:00.000 | 3 | 0 | false | 28,292,538 | 0 | 0 | 1 | 2 | I have an application and a database. The application initially was written in Python without Django. What seems to be the problem is that it makes too many connections with the database and that slows it down. What I want to do is load whatever data is going to be used into a Python dictionary and then share that object with everybody (something like a singleton object). What Django seems to do is create a new instance of the application each time a new request is made. How can I make it share the same loaded data?
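A sketch of the shared.py idea from the answer; the data and the loader are placeholders. Because Python imports a module only once per process, every request served by that process gets back the very same dictionary object:

```python
# shared.py (illustrative): module-level state is created once per process.
_CACHE = {}

def load_data():
    # Placeholder for the expensive database load the question describes.
    return {"answer": 42}

def get_data():
    if not _CACHE:            # first caller in this process pays the cost
        _CACHE.update(load_data())
    return _CACHE             # every later caller gets the same object

print(get_data() is get_data())  # True
```

Note that, as the second answer points out, this is shared per process, not across the whole site; each worker process keeps its own copy.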
Share object between django users | 28,294,419 | 1 | 3 | 1,011 | 0 | python,django | Contrary to your assertion, Django does not reinitialize on every request. Actually, Django processes last for many requests, and anything defined at module level will be shared across all requests handled by that process. This can often be a cause of thread safety bugs, but in your case is exactly what you want.
Of course a web site normally runs several processes concurrently, and it is impossible to share objects between them. Also, it is up to your server to determine how many processes to spawn and when to recycle them. But one object instantiation per process is better than one per request for your use case. | 0 | 0 | 0 | 0 | 2015-02-03T06:15:00.000 | 3 | 1.2 | true | 28,292,538 | 0 | 0 | 1 | 2 | I have an application and a database. The application initially was written in python without django. What seems to be the problem is that it makes too many connections with the database and that slows it down. What I want to do is load whatever data is going to be used in python dictionary and then share that object with everybody(something like singletone object). What django seems to do is create a new instance of application each time a new request is made. How can I make it share the same loaded data? |
get first object in mongoengine | 28,293,187 | 6 | 3 | 2,821 | 0 | python,mongodb,flask,mongoengine | It's simple, just use:
Request.objects.first() | 0 | 0 | 0 | 0 | 2015-02-03T06:59:00.000 | 1 | 1.2 | true | 28,293,096 | 0 | 0 | 1 | 1 | I have a class with name of Request and I want to get first object of it in mongoengine
I think I can use this:
first get all the objects, like this: visitors = Request.objects.all() and then ss = visitors[0].ip
and then call an attribute of the object. |
Empty request.FILES in Django | 28,295,685 | 3 | 2 | 3,002 | 0 | python,django,forms,post | Should I use exactly multipart/form-data content-type?
Django supports only multipart/form-data, so you must use that content-type.
Where can I specify enctype? (headers, parameters, etc)
In normal HTML, just put enctype="multipart/form-data" as one of the attributes of your form element. In HttpRequester it's more complicated, because I think it lacks support for multipart/form-data by default. http://www.w3.org/TR/html4/interact/forms.html#h-17.13.4.2 has more details about multipart/form-data; it should be possible to do it in HttpRequester by hand.
Why is the file's data contained in request.body while request.FILES is empty?
You've already answered that:
Note that FILES will only contain data if the request method was POST and the <form> that posted to the request had enctype="multipart/form-data". Otherwise, FILES will be a blank dictionary-like object. | 0 | 0 | 1 | 0 | 2015-02-03T09:08:00.000 | 1 | 0.53705 | false | 28,295,059 | 0 | 0 | 1 | 1 | I'm trying to send a file via a POST request to a server on localhost. I'm using HttpRequester in Firefox (I also tried Postman in Chrome and Tasker on Android) to submit the request.
The problem is that request.FILES is always empty. But when I try to print request.body, it shows some non-human-readable data which in particular includes the data from the file I want to upload (it's a database). So it makes sense to me that the file somehow arrives at the server.
From Django docs:
Note that FILES will only contain data if the request method was POST
and the <form> that posted to the request had
enctype="multipart/form-data". Otherwise, FILES will be a blank
dictionary-like object.
There was an error 'Invalid boundary in multipart: None' when I tried to set the Content-Type of the request to 'multipart/form-data'. The error disappeared when I added ';boundary=frontier' to the Content-Type.
Another approach was to set enctype="multipart/form-data".
Therefore I have several questions:
Should I use exactly multipart/form-data content-type?
Where can I specify enctype? (headers, parameters, etc)
Why is the file's data contained in request.body while request.FILES is empty?
Thanks |
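To make the "multipart/form-data with a valid boundary" requirement concrete, here is a hedged stdlib sketch that builds such a body by hand; the field and file names are made up. Django fills request.FILES only when the Content-Type header declares this format:

```python
import uuid

def encode_multipart(field_name, filename, payload):
    """Return (content_type, body) for a single-file multipart upload."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; '
        f'filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    # The boundary in the header must match the one used in the body,
    # which is why 'multipart/form-data' alone produced the boundary error.
    return f"multipart/form-data; boundary={boundary}", head + payload + tail

ctype, body = encode_multipart("db_file", "data.db", b"\x00\x01")
print(ctype)
```

In practice an HTTP client that supports file fields generates exactly this structure for you; the sketch only shows what the server-side parser expects to find.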
how to run Apache with mod_wsgi and django in one process only? | 28,320,868 | 0 | 1 | 444 | 0 | django,python-2.7,apache2,mod-wsgi,pyinotify | You shouldn't prevent spawning multiple processes, because it's a good thing, especially in a production environment. You should consider using some external tool, separate from Django, or add a check for whether the folder listener is already running (for example, monitor the persistence of a PID file and its content). | 0 | 0 | 0 | 0 | 2015-02-04T09:42:00.000 | 2 | 0 | false | 28,318,105 | 0 | 0 | 1 | 2 | I'm running apache with django and mod_wsgi enabled in 2 different processes.
I read that the second process is an on-change listener for reloading code on change, but for some reason the ready() function of my AppConfig class is being executed twice. This function should only run once.
I understood that running django runserver with the --noreload flag will resolve the problem in development mode, but I cannot find a solution for this in production mode on my apache webserver.
I have two questions:
How can I run with only one process in production, or at least make only one process run the ready() function?
Is there a way to make the ready() function run not in a lazy mode? By this, I mean execute only on server startup, not on first request.
For further explanation, I am experiencing a scenario as follows:
The ready() function creates a folder listener such as pyinotify. That listener will listen on a folder on my server and enqueue a task on any changes.
I am seeing this listener executed twice on any changes to a single file in the monitored directory. This leads me to believe that both processes are running my listener. |
how to run Apache with mod_wsgi and django in one process only? | 28,321,203 | 2 | 1 | 444 | 0 | django,python-2.7,apache2,mod-wsgi,pyinotify | No, the second process is not an onchange listener - I don't know where you read that. That happens with the dev server, not with mod_wsgi.
You should not try to prevent Apache from serving multiple processes. If you do, the speed of your site will be massively reduced: it will only be able to serve a single request at a time, with others queued until the first finishes. That's no good for anything other than a toy site.
Instead, you should fix your AppConfig. Rather than blindly spawning a listener, you should check to see if it has already been created before starting a new one. | 0 | 0 | 0 | 0 | 2015-02-04T09:42:00.000 | 2 | 0.197375 | false | 28,318,105 | 0 | 0 | 1 | 2 | I'm running apache with django and mod_wsgi enabled in 2 different processes.
I read that the second process is an on-change listener for reloading code on change, but for some reason the ready() function of my AppConfig class is being executed twice. This function should only run once.
I understood that running django runserver with the --noreload flag will resolve the problem in development mode, but I cannot find a solution for this in production mode on my apache webserver.
I have two questions:
How can I run with only one process in production, or at least make only one process run the ready() function?
Is there a way to make the ready() function run not in a lazy mode? By this, I mean execute only on server startup, not on first request.
For further explanation, I am experiencing a scenario as follows:
The ready() function creates a folder listener such as pyinotify. That listener will listen on a folder on my server and enqueue a task on any changes.
I am seeing this listener executed twice on any changes to a single file in the monitored directory. This leads me to believe that both processes are running my listener. |
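One way to implement the "check whether the listener is already running" idea from both answers is an atomically created lock file, so that exactly one process in the Apache pool spawns the pyinotify listener. This is a hedged sketch; the lock path and the listener start-up call are assumptions:

```python
import os

def try_acquire_lock(path):
    """Return True in exactly one process: O_EXCL makes creation atomic."""
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.write(fd, str(os.getpid()).encode())  # record the owner PID
        os.close(fd)
        return True
    except FileExistsError:
        return False

# Inside AppConfig.ready(), hypothetically:
# if try_acquire_lock("/tmp/folder_listener.lock"):
#     start_pyinotify_listener()
```

A real deployment would also need to remove stale lock files after crashes (for example by checking whether the recorded PID is still alive), which is why the answers lean toward running the listener as an external tool.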
Opening pages asynchronously in headless browser (PhantomJS) | 28,319,699 | 2 | 1 | 691 | 0 | python,selenium-webdriver,phantomjs,headless-browser,ghostdriver | If you want to bypass ghostdriver, then you can directly write your PhantomJS scripts in JavaScript or CoffeeScript. As far as I know there is no way of doing this with the selenium webdriver except with different threads in the language of your choice (python).
If you are not happy with it, there is CasperJS which has more freedom in writing scripts than with selenium, but you will only be able to use PhantomJS or SlimerJS. | 0 | 0 | 1 | 0 | 2015-02-04T10:55:00.000 | 2 | 1.2 | true | 28,319,579 | 0 | 0 | 1 | 1 | I am using PhantomJS via Python through Selenium+Ghostdriver.
I want to load several pages simultaneously, and to do so I am looking for an async method to load pages.
From my research, PhantomJS already lives in a separate thread and supports multiple tabs, so I believe the only missing piece of the puzzle is a method to load pages in a non-blocking way.
Any solution would be welcome, be it a simple Ghostdriver method I overlooked, bypassing Ghostdriver and interfacing directly with PhantomJS or a different headless browser.
Thanks for the help and suggestions.
Yuval |
IQtNetwork.QHttp request credential issue | 28,327,369 | 1 | 1 | 49 | 0 | python,qt,pyqt,twisted,twisted.web | So I figured out that Qt isn't sending the TWISTED_SESSION cookie back with subsequent requests.
All I did was send the cookie along with subsequent requests, and it worked fine.
I had to switch to Python's requests library to ease things | 0 | 0 | 1 | 0 | 2015-02-04T14:56:00.000 | 1 | 0.197375 | false | 28,324,393 | 0 | 0 | 1 | 1 | I am currently authenticating via a RESTful http api that generates a token which is then used for subsequent requests.
The api server is written with python twisted and works great
the auth token generation works fine in browsers
When requesting from software written in pyqt
the first request hands over a token to the pyqt app
while subsequent requests from the pyqt app fail because the remote twisted server believes it is another browser entirely.
javascript ajax does this too but is solvable by sending xhrFields: {withCredentials: true} along with the request.
How do I resolve this in PyQt? |
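The `requests` side the answer ended up on can be sketched as follows; the URL and cookie value are made up, and no network I/O is performed here. A `Session` stores cookies from earlier responses and attaches them to every later request automatically:

```python
import requests

session = requests.Session()
# Simulate the cookie the Twisted server would set on the first auth response.
session.cookies.set("TWISTED_SESSION", "abc123")

# Any request prepared from this session carries the stored cookie along.
prepared = session.prepare_request(
    requests.Request("GET", "http://api.example.com/data")
)
print(prepared.headers.get("Cookie"))
```

This is the behaviour Qt's network layer was not providing by default, which is why resending the cookie manually (or switching libraries) fixed the problem.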
GoogleAppEngine error directory not found | 28,340,241 | 1 | 0 | 95 | 0 | google-app-engine,python-2.7,google-app-engine-python | Your full project path contains two space characters and needs to be quoted, also, a trailing slash might be required i.e.:
C:\Python27\python.exe appcfg.py update "C:\Users\alastair\Desktop\School Files\Proxy Files\mirrorrr-master\mirrorrr-master\" assuming that's where you have your app.yaml file.
In your case it's thinking you are pointing to a "C:\Users\alastair\Desktop\School" file, which does not exist, and thus showing the error. | 0 | 1 | 0 | 0 | 2015-02-05T08:57:00.000 | 1 | 0.197375 | false | 28,339,778 | 0 | 0 | 1 | 1 | I've been working on getting a proxy working for when I'm at school, to access sites that I use a lot for work but that my school doesn't like. This is the error it comes up with when I try to upload the files to Google's App Engine:
C:\Program Files (x86)\Google\google_appengine>"C:\Python27\python.exe" appcfg.py update C:\Users\alastair\Desktop\School Files\Proxy Files\mirrorrr-master\mirrorrr-master
09:44 PM Host: appengine.google.com
Usage: appcfg.py [options] update | [file, ...]
appcfg.py: error: Directory does not contain an School.yaml configuration file
So I'm very confused about why it is asking for a "School.yaml". I made one anyway, but even though it's been made, it still displays this error. So if anyone can help, please!
How to proceed after Implementing django tenant schemas | 28,382,483 | 1 | 0 | 594 | 0 | python,django,postgresql | If you are using Linux, you just have to add that domain name in /etc/hosts and access it like it is a real domain name. Another solution is to make that domain name point to 127.0.0.1 while you don't push the changes to production. I'd go with the first idea, though. | 0 | 0 | 0 | 0 | 2015-02-06T05:53:00.000 | 1 | 1.2 | true | 28,359,414 | 0 | 0 | 1 | 1 | I have successfully implemented django-tenant-schema in my project. It also creates a separate schema for each user after they get registered. Suppose a customer named 'customer1' is successfully logged in; then he will be redirected to "customer1.domainname.com". So please suggest a solution to test whether this is working in my local system ahead of putting it in the production environment.
Thanks in advance... |
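For example, the /etc/hosts entry implied by the answer might look like the line below; the tenant domain is taken from the question and is only illustrative:

```
127.0.0.1    customer1.domainname.com
```

After adding it, the browser resolves customer1.domainname.com to the local machine, so the tenant-routing code sees the same host name it would see in production.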
Webpage ping when active | 28,397,610 | 1 | 0 | 47 | 0 | python,google-app-engine,web | The best way is to ping the server while the user is online. Using other methods such as the Channel API with GAE proves to be unreliable, since you are not constantly sending a ping message but rather just sending a disconnect message. If the browser crashes, no disconnect message is sent. | 0 | 1 | 0 | 0 | 2015-02-08T03:29:00.000 | 1 | 1.2 | true | 28,390,253 | 0 | 0 | 1 | 1 | I want a way for users to open this webpage, and whenever they are on that page, it updates the server that they are on the page. It should only work when the user is actually looking at the webpage (not inactive, like from switching tabs). One way to do this, which I have implemented, is to keep pinging the server saying that I am alive.
This however causes a lot of load on the server and client side. I am using Google App Engine and webapp2, and was wondering if anyone knows a better way to do this. |
In a Django web application, would large files or many unnecessary import statements slow down my server? | 28,399,223 | 1 | 0 | 62 | 0 | python,django | No, code speed is not affected by the size of your modules.
Additional imports only affect the memory footprint (a little more memory is needed to hold the extra code objects) and startup speed (more files are loaded from disk when your Django server starts).
However, this doesn't really affect code running speeds; Python does not have to do extra work to run your code. | 0 | 0 | 0 | 0 | 2015-02-08T20:39:00.000 | 2 | 1.2 | true | 28,399,120 | 0 | 0 | 1 | 2 | In my Django web app, I have pretty much one large file that contains all my views. This has a ton of imported python libraries that are only used for certain views.
Does this slow my code? Like, in Python, does importing things like the Python Natural Language Toolkit (nltk) and threading libraries slow down the code when it's not needed?
I know its not great for a maintainability/style standpoint to have one big file like this, but I am asking purely from a performance standpoint. |
In a Django web application, would large files or many unnecessary import statements slow down my server? | 28,399,156 | 0 | 0 | 62 | 0 | python,django | Views are loaded only once, at the moment your code starts | 0 | 0 | 0 | 0 | 2015-02-08T20:39:00.000 | 2 | 0 | false | 28,399,120 | 0 | 0 | 1 | 2 | In my Django web app, I have pretty much one large file that contains all my views. This has a ton of imported python libraries that are only used for certain views.
Does this slow my code? Like, in Python, does importing things like the Python Natural Language Toolkit (nltk) and threading libraries slow down the code when it's not needed?
I know its not great for a maintainability/style standpoint to have one big file like this, but I am asking purely from a performance standpoint. |
Robot Framework can't find Python | 41,825,897 | 1 | 1 | 1,818 | 0 | python,robotframework | Try adding the following path to your environment variables as well:
"C:\Python27\Lib\site-packages"
This path contains all the third-party modules installed on your PC; also verify that the robotframework library is present in this folder. | 0 | 0 | 0 | 1 | 2015-02-09T15:44:00.000 | 2 | 0.099668 | false | 28,413,567 | 1 | 0 | 1 | 2 | I'm trying to install Robot Framework, but it keeps giving me an error message during setup that "No Python installation found in the registry." I've tried running the installer as administrator, I've made sure that Python is installed (I've tried both 2.7.2 and 2.7.9), and both C:\Python27 and C:\Python27\Scripts are added to the PATH (with and without slashes on the end, if that matters).
I have no idea why it can't find Python. What do I need to do? |
Robot Framework can't find Python | 28,443,946 | 0 | 1 | 1,818 | 0 | python,robotframework | I faced the same issue.
Install a different bit version of Robot Framework. In my case, I was first trying to install the 64-bit version, but it said "No Python installation found in the registry."
Then I tried to install the 32-bit version of Robot Framework and it worked.
So there is nothing wrong with your Python version. | 0 | 0 | 0 | 1 | 2015-02-09T15:44:00.000 | 2 | 0 | false | 28,413,567 | 1 | 0 | 1 | 2 | I'm trying to install Robot Framework, but it keeps giving me an error message during setup that "No Python installation found in the registry." I've tried running the installer as administrator, I've made sure that Python is installed (I've tried both 2.7.2 and 2.7.9), and both C:\Python27 and C:\Python27\Scripts are added to the PATH (with and without slashes on the end, if that matters).
I have no idea why it can't find Python. What do I need to do? |
Mac OSX Trouble Running pip commands | 35,575,253 | 0 | 0 | 797 | 0 | python,django,macos,pip | Try adding sudo. sudo pip install Django | 0 | 1 | 0 | 0 | 2015-02-10T01:19:00.000 | 2 | 0 | false | 28,422,520 | 0 | 0 | 1 | 2 | I recently installed Python 3.4 on my Mac and now want to install Django using pip. I tried running pip install Django==1.7.4 from the command line and received the following error:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/basecommand.py", line 232, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/commands/install.py", line 347, in run
root=options.root_path,
File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/req/req_set.py", line 549, in install
**kwargs
File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/req/req_install.py", line 754, in install
self.move_wheel_files(self.source_dir, root=root)
File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/req/req_install.py", line 963, in move_wheel_files
isolated=self.isolated,
File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/wheel.py", line 234, in move_wheel_files
clobber(source, lib_dir, True)
File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/wheel.py", line 205, in clobber
os.makedirs(destdir)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/django'
Obviously my path is pointing to the old version of Python that came preinstalled on my computer, but I don't know how to run the pip on the new version of Python. I am also worried that if I change my file path, it will mess up other programs on my computer. Is there a way to point to version 3.4 without changing the file path? If not how do I update my file path to 3.4? |
Mac OSX Trouble Running pip commands | 70,606,374 | 0 | 0 | 797 | 0 | python,django,macos,pip | Try to create a virtual environment. This can be achieved by using Python modules like venv or virtualenv. There you can change your Python path without affecting any other programs on your machine. If the error is still that you do not have permission to read files, try sudo pip install, but only as a last resort, since pip recommends not using it as root. | 0 | 1 | 0 | 0 | 2015-02-10T01:19:00.000 | 2 | 0 | false | 28,422,520 | 0 | 0 | 1 | 2 | I recently installed Python 3.4 on my Mac and now want to install Django using pip. I tried running pip install Django==1.7.4 from the command line and received the following error:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/basecommand.py", line 232, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/commands/install.py", line 347, in run
root=options.root_path,
File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/req/req_set.py", line 549, in install
**kwargs
File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/req/req_install.py", line 754, in install
self.move_wheel_files(self.source_dir, root=root)
File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/req/req_install.py", line 963, in move_wheel_files
isolated=self.isolated,
File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/wheel.py", line 234, in move_wheel_files
clobber(source, lib_dir, True)
File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/wheel.py", line 205, in clobber
os.makedirs(destdir)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/django'
Obviously my path is pointing to the old version of Python that came preinstalled on my computer, but I don't know how to run the pip on the new version of Python. I am also worried that if I change my file path, it will mess up other programs on my computer. Is there a way to point to version 3.4 without changing the file path? If not how do I update my file path to 3.4? |
PHP-Python - killing python process leads memory leak? | 28,453,222 | 1 | 0 | 71 | 0 | php,python,memory,memory-management | No. When a process is killed, the operating system releases all operating system resources (memory, sockets, file handles, …) previously acquired by that process. | 0 | 0 | 0 | 1 | 2015-02-11T11:28:00.000 | 1 | 1.2 | true | 28,453,194 | 0 | 0 | 1 | 1 | I am writing PHP-Python web application for a webApp scanner where web application is managed by PHP and scanning service is managed by python. My question is if I kill a running python process with PHP, does it cause any memory leak or any other trouble (functionality-wise I handled it already) |
Is CapeDwarf compatible with Python GAE? | 28,474,564 | 1 | 1 | 105 | 0 | python,google-app-engine,capedwarf | Yes, as Alex posted - CapeDwarf is Java only. | 0 | 1 | 0 | 0 | 2015-02-12T01:08:00.000 | 2 | 0.099668 | false | 28,467,558 | 0 | 0 | 1 | 1 | I'm trying to deploy my GAE application - written with Python - on CapeDwarf (WildFly_2.0.0.CR5).
But all the documentation talks only about Java applications.
So can CapeDwarf deploy a Python application?
If it can, how do I do it?
If not, is there any other application server that can?
Sublime Text 3 - Plugin Profiles | 40,161,595 | 0 | 6 | 1,302 | 0 | java,python,sublimetext,sublimetext3,sublime-text-plugin | In windows 10 if you delete this folder --> C:\Users\USER\AppData\Roaming\Sublime Text 3
then sublime text will default back to its original state. Maybe you could set up a batch file to keep different versions of this folder, for example:
"./Sublime Text 3_Java" or
"./Sublime Text 3_Python" or
"./Sublime Text 3_C++"
and then when you want to work on java have a batch file to rename "./Sublime Text 3_Java" to "./Sublime Text 3" and restart sublime. If you really want to get fancy use a symlink to represent "./Sublime Text 3" then have the batch file to modify(or delete and recreate) where the symlink points to. | 0 | 0 | 0 | 1 | 2015-02-12T02:36:00.000 | 2 | 0 | false | 28,468,321 | 0 | 0 | 1 | 2 | I am currently using Sublime Text 3 for programming in Python, Java, C++ and HTML. So, for each language I have a different set of plugins. I would like to know if there is a way for changing between "profiles", with each profile containing plugins of the respective language. My PC is not all that powerful, so it starts to hang if I have too many active plugins. So when one profile is running, all other plugins should be disabled.
TL;DR : Is there a way to change between "profiles" containing a different set of plugins in Sublime Text? |
Sublime Text 3 - Plugin Profiles | 28,504,749 | 2 | 6 | 1,302 | 0 | java,python,sublimetext,sublimetext3,sublime-text-plugin | The easiest way I can think of doing this on Windows is to have multiple portable installs, each one set up for your programming language and plugin set of your choice. You can then set up multiple icons on your desktop/taskbar/start menu/whatever, each one pointing to a different installation. This way you don't have to mess around with writing batch files to rename things.
Alternatively, you could just spring for a new computer :) | 0 | 0 | 0 | 1 | 2015-02-12T02:36:00.000 | 2 | 0.197375 | false | 28,468,321 | 0 | 0 | 1 | 2 | I am currently using Sublime Text 3 for programming in Python, Java, C++ and HTML. So, for each language I have a different set of plugins. I would like to know if there is a way for changing between "profiles", with each profile containing plugins of the respective language. My PC is not all that powerful, so it starts to hang if I have too many active plugins. So when one profile is running, all other plugins should be disabled.
TL;DR : Is there a way to change between "profiles" containing a different set of plugins in Sublime Text? |
Checking file type with django form: 'application/octet-stream' issue | 28,474,553 | 0 | 0 | 3,271 | 0 | python,django,django-forms,mime-types | You should not rely on the MIME type provided, but rather the MIME type discovered from the first few bytes of the file itself.
This will help eliminate the generic MIME type issue.
The problem with this approach is that it will usually rely on some third party tool (for example the file command commonly found on Linux systems is great; use it with -b --mime - and pass in the first few bytes of your file to have it give you the mime type).
The other option you have is to accept the file, and try to validate it by opening it with a library.
So if pypdf cannot open the file, and the built-in zip module cannot open the file, and rarfile cannot open the file - it's most likely something that you don't want to accept. | 0 | 0 | 0 | 0 | 2015-02-12T09:20:00.000 | 3 | 0 | false | 28,473,613 | 0 | 0 | 1 | 1 | I'm using django validators and python-magic to check the mime type of uploaded documents and accept only pdf, zip and rar files.
Accepted mime-types are:
'application/pdf',
'application/zip', 'multipart/x-zip', 'application/x-zip-compressed', 'application/x-compressed',
'application/rar', 'application/x-rar', 'application/x-rar-compressed', 'compressed/rar',
The problem is that sometimes pdf files seem to have 'application/octet-stream' as mime-type.
'application/octet-stream' means a generic binary file, so I can't simply add that mime type to the list of accepted files, because then other files such as Excel files would also be accepted, and I don't want that to happen.
What can I do in this case?
Thanks in advance. |
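The byte-sniffing approach from the answer can be sketched with only the standard library — `sniff_upload` is a hypothetical helper name, and the magic-byte prefixes cover only the formats this question cares about:

```python
import io
import zipfile

def sniff_upload(data: bytes) -> str:
    """Guess a coarse content type from the file's leading bytes
    instead of trusting the browser-supplied MIME type."""
    if data.startswith(b"%PDF-"):
        return "application/pdf"
    if data.startswith(b"Rar!\x1a\x07"):       # RAR4 and RAR5 share this prefix
        return "application/x-rar-compressed"
    if zipfile.is_zipfile(io.BytesIO(data)):   # actually tries to parse the archive
        return "application/zip"
    return "application/octet-stream"
```

One caveat: modern Office files (.docx, .xlsx) are themselves zip containers, so byte-sniffing alone reports them as zip — the answer's suggestion to actually open the file with pypdf/zipfile/rarfile is the stronger validation.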
What's the right way to handle an initial database migration in Django? | 28,486,071 | 2 | 0 | 129 | 0 | python,django | You're going wrong at the very end -- yes, you do need to call manage.py makemigrations <appname> for each of your apps once. It's not automatically done for all apps.
Presumably that is because Django has no way of knowing if that is what you want to do (especially if some apps were downloaded from PyPI, etc). And a single command per app can't really be an extreme amount of work, right? | 0 | 0 | 0 | 0 | 2015-02-12T19:03:00.000 | 1 | 1.2 | true | 28,485,685 | 0 | 0 | 1 | 1 | I'm in the process of preparing a Django application for its initial production release, and I have deployed development instances of it in a few different environments. One thing that I can't quite get happening as smoothly as I'd like is the initial database migration. Given a fresh installation of Django, a deployment of my application from version control, and a clean database, manage.py migrate will handle the initial creation of all tables (both Django's and my models'). That's great, but it doesn't actually create the initial migration files for my apps. This leads to a problem down the road when I need to deploy code changes that require a new database migration, because there's no basis for Django to compute the deltas.
I've tried running manage.py makemigrations as the first step in the deployment, in the hopes that it would create the migration files, but it reports that there are no changes to migrate. The only way I've found to get the baseline state that I need is to run manage.py makemigrations [appname] for each of my apps. Shouldn't makemigrations, called without a specific app name, pick up all the installed apps and create their migrations? Where am I going wrong? |
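A tiny helper makes the per-app step less tedious to script — the app labels below are placeholders, not anything named in the question:

```python
def initial_migration_commands(app_labels):
    """Build one `makemigrations <appname>` invocation per app,
    mirroring the manual step the answer describes."""
    return [["python", "manage.py", "makemigrations", label] for label in app_labels]

for cmd in initial_migration_commands(["accounts", "blog"]):
    print(" ".join(cmd))
```

Each command list can then be handed to `subprocess.run` from a deploy script.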
Can I connect a flask app to a database using MySQLdb-python vertica_python? | 28,502,445 | 0 | 0 | 224 | 1 | python,flask,flask-sqlalchemy | Yes, it is possible. I was having difficulties debugging because of the opacity of the error, but ran it with app.run(debug=True), and managed to troubleshoot my problem. | 0 | 0 | 0 | 0 | 2015-02-12T23:21:00.000 | 1 | 0 | false | 28,489,779 | 0 | 0 | 1 | 1 | Is it possible to connect a Flask app to a database using MySQLdb-python and vertica_python? It seems that the recommended Flask library for accessing databases is Flask-SQLAlchemy. I have an app that connects to MySQL and Vertica databases, and have written a GUI wrapper for it in Flask using flask-wtforms, but am just getting an error when I try to test a Vertica or MySQL connection through the flask app. Is there a reason that I cannot use the prior libraries that I was using within my app? |
what tools can be used to debug python / django code on a remote hosts without graphical UI | 28,491,949 | 1 | 0 | 158 | 0 | python,django,vim | I think the best practice is developing locally, and use Pycharm "sync" to deploy the code from local to remote. If you prefer to code without GUI or in the console mode, you could try "emacs+jedi", it works well in the console mode.
For debug, the best choice is pydev/pdb. | 0 | 0 | 0 | 0 | 2015-02-13T03:20:00.000 | 3 | 0.066568 | false | 28,491,923 | 1 | 0 | 1 | 1 | I started develop python / django code recently.
When I do it on my local machine (Ubuntu desktop), I use PyCharm, which I like a lot, because I can set up break points, and, when an exception occurs, it shows me the line in the code to which that exception relates.
Are there any tools available for python / django development in the environment without a graphic UI, which would give me break point and debugging features?
I know that VIM with the right settings is an excellent coding tool, but I have no idea if it could be used for interactive debugging.
django modelform display blank for null | 28,510,476 | 0 | 0 | 432 | 0 | python,django | Instead I ended up using forms.HiddenInput() and did not display the field with null, which fits my case perfectly well! | 0 | 0 | 0 | 0 | 2015-02-13T07:36:00.000 | 2 | 0 | false | 28,494,629 | 0 | 0 | 1 | 1 | In django model form, how to display blank for null values i.e prevent display of (None) on the form .Using postgresql, django 1.6.5. I do not wish to add space in the model instance for the allowed null values. |
Dependencies between two Django projects | 28,505,992 | 0 | 0 | 79 | 0 | python,django,import,reference,dependencies | PyCharm adds only source folders automatically to the python path. So I marked app1 as a source folder and everything works. | 0 | 0 | 0 | 0 | 2015-02-13T17:04:00.000 | 1 | 1.2 | true | 28,504,861 | 0 | 0 | 1 | 1 | I have a question concerning dependencies between two Django projects.
I have two Django Projects, P1 and P2. I want to import a form model from P2 > apps > app1 > forms > form.py. I marked P1 as depending on P2 (using Pycharm) and tried to use from app1.forms import form.py inside my model.py file of an app in P1. PyCharm now says that app1 is an unresolved reference. Is there any step I missed?
Thank you for your help. |
Google NDB: Adding an entity with non existing parent | 28,513,820 | 4 | 1 | 168 | 0 | python,google-app-engine,google-cloud-datastore | You can create a key for any entity whether this entity exists or not. This is because a key is simply an encoding of an entity kind and either an id or name (and ancestor keys, if any).
This means that you can store a child entity before a parent entity is saved, as long as you know the parent's id or name. You cannot reassign a child from one parent to another, though. | 0 | 1 | 0 | 0 | 2015-02-14T08:18:00.000 | 2 | 0.379949 | false | 28,513,774 | 0 | 0 | 1 | 1 | I am working on a web application based on Google App Engine (Python / Webapp2) and Google NDB Datastore.
I assumed that if I tried to add a new entity using as parent key the key of a no longer existing entity an exception was thrown. I have instead found the entity is actually created.
Am I doing something wrong?
I may check beforehand whether the parent still exists through a keys_only query. Does that consume GAE read quotas?
Why java String.length gives different result than python len() for the same string | 28,524,245 | 1 | 4 | 673 | 0 | java,python,string | If you use u"" before the string, which means unicode in python2.x, then you would possibly get the same result with Java | 0 | 0 | 0 | 0 | 2015-02-15T08:17:00.000 | 3 | 0.066568 | false | 28,524,215 | 0 | 0 | 1 | 1 | I have a string like the following
("استنفار" OR "الأستنفار" OR "الاستنفار" OR "الإستنفار" OR "واستنفار" OR "باستنفار" OR "لستنفار" OR "فاستنفار" OR "والأستنفار" OR "بالأستنفار" OR "للأستنفار" OR "فالأستنفار" OR "والاستنفار" OR "بالاستنفار" OR "فالاستنفار" OR "والإستنفار" OR "بالإستنفار" OR "للإستنفار" OR "فالإستنفار" OR "إستنفار" OR "أستنفار" OR "إلأستنفار" OR "ألأستنفار" OR "إلاستنفار" OR "ألاستنفار" OR "إلإستنفار" OR "ألإستنفار") (("قوات سعودية" OR "قوات سعوديه" OR "القوات سعودية" OR "القوات سعوديه") OR ("القواتالسعودية" OR "القواتالسعوديه" OR "إلقواتالسعودية" OR "ألقواتالسعودية" OR "إلقواتالسعوديه" OR "ألقواتالسعوديه")("القوات السعودية" OR "إلقوات السعودية" OR "ألقوات السعودية" OR "والقوات السعودية" OR "بالقوات السعودية" OR "للقوات السعودية" OR "فالقوات السعودية" OR "وإلقوات السعودية" OR "بإلقوات السعودية" OR "لإلقوات السعودية" OR "فإلقوات السعودية" OR "وألقوات السعودية" OR "بألقوات السعودية" OR "لألقوات السعودية" OR "فألقوات السعودية") OR )
If I used java string variable and count the number of characters it gives me 923 but if I used the len function of python it gives me 1514
What is the difference here ? |
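The mismatch comes down to code points versus bytes: a plain Python 2 string literal is a byte string, so `len()` counts UTF-8 bytes, while Java's `String.length()` counts UTF-16 code units. Python 3 makes the two views explicit:

```python
s = "استنفار"                     # 7 Arabic letters

print(len(s))                     # 7 code points — roughly what Java reports for BMP text
print(len(s.encode("utf-8")))     # 14 UTF-8 bytes — what a Python 2 byte-string len() counts
```

Java's 923 versus Python's 1514 is consistent with the same text being measured in UTF-16 code units versus UTF-8 bytes; as the answer says, a `u"..."` literal makes Python 2 count code points too.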
Django __gt filter return the duplicate item | 28,538,381 | 0 | 1 | 163 | 0 | python,django | Django stores time in the database with more precision than seconds. So if your time_at holds the exact time 12:45:44, the database query will also retrieve fields whose time is later by mere milliseconds: a field fetched from the database can hold 12:45:44.231 and it will still be retrieved. | 0 | 0 | 0 | 0 | 2015-02-15T20:38:00.000 | 2 | 0 | false | 28,530,885 | 0 | 0 | 1 | 1 | I want to fetch items from my model by using time_created as criteria.
If the latest item I fetched was posted at 12:45:44, I store it in request.session['time_at'] = 12:45:44 and use it to fetch items that are later than the last one fetched.
new_notice = Notify.objects.filter(time_created__gt = request.session['time_at'])
This is supposed to return items with times from 12:45:45 onward, but it still returns the ones with 12:45:44, which gives me duplicates of items I have already fetched.
How do I deal with this the right way? |
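The precision trap is easy to reproduce with plain datetime objects (the timestamps below are made up for illustration): keep the full-precision value of the last fetched item as the session cursor instead of a value truncated to seconds:

```python
from datetime import datetime

stored = datetime(2015, 2, 15, 12, 45, 44, 231000)  # what the DB actually keeps
cursor = stored.replace(microsecond=0)              # what "12:45:44" round-trips to

assert stored > cursor        # __gt against the truncated cursor re-fetches the same row
assert not (stored > stored)  # __gt against the full-precision value does not
```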
How to create a unique random username for django? | 28,533,371 | 2 | 3 | 2,119 | 0 | python,django | No, django doesn't have such a function. So you have to check for the existence of the generated username in the loop. | 0 | 0 | 0 | 0 | 2015-02-16T01:33:00.000 | 2 | 1.2 | true | 28,533,354 | 0 | 0 | 1 | 1 | I want visitors of my page to be able to create entries without registering first. When they register later I want everything they have created within that session to belong to their user account.
To achieve that I just want to create blank users with random usernames when entries from non users are made.
What is the most elegant way to create a unique username randomly avoiding any collision?
Should I just make a while loop that generates usernames and tries to save them to the db with a break upon success or is there a better way?
Most scripts I've seen just create a random string, but that has the danger of a collision. Is there any Django function that creates random usernames based on which usernames are already taken? |
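The loop-until-unique idea from the answer, with the database lookup stood in for by a plain set — in Django itself you would instead attempt the save inside the loop and break on success:

```python
import secrets

def unique_username(taken, max_tries=20):
    """Generate a random username that is not already in `taken`.

    `taken` stands in for the membership check you'd do against the
    User table; collisions on an 8-hex-digit suffix are rare, so the
    loop almost always succeeds on the first try.
    """
    for _ in range(max_tries):
        candidate = "user_" + secrets.token_hex(4)
        if candidate not in taken:
            return candidate
    raise RuntimeError("could not generate a unique username")
```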
unregister or register models conditionally in django admin | 28,538,705 | 0 | 4 | 1,619 | 0 | python,django,django-models,django-admin,django-admin-tools | I've tried a couple of approaches locally, including overriding an AdminSite, but given the fact that all admin-related code is loaded when the app is initialized, the simplest approach would be to rely on permissions (and not give everyone superuser access). | 0 | 0 | 0 | 0 | 2015-02-16T09:43:00.000 | 2 | 0 | false | 28,538,461 | 0 | 0 | 1 | 1 | Is it possible to conditionally register or unregister models in django admin?
I want some models to appear in django admin only if the request satisfies certain conditions. In my specific case I only need to check whether the logged-in user belongs to a particular group, and hide the model if the user (even a superuser) is not in the group. I cannot use permissions here because superusers cannot be ruled out using permissions.
Or, is there a way to revoke permission on a model even from superusers?
Django Rest frameworks: request.Post vs request.data? | 28,545,657 | 29 | 23 | 8,970 | 0 | python,django,rest,django-rest-framework | The docs cover this:
request.data returns the parsed content of the request body. This is similar to the standard request.POST and request.FILES attributes except that:
It includes all parsed content, including file and non-file inputs.
It supports parsing the content of HTTP methods other than POST, meaning that you can access the content of PUT and PATCH requests.
It supports REST framework's flexible request parsing, rather than just supporting form data. For example you can handle incoming JSON data in the same way that you handle incoming form data.
The last two are the important ones. By using request.data throughout instead of request.POST, you're supporting both JSON and Form-encoded inputs (or whatever set of parsers you have configured), and you'll be accepting request content on PUT and PATCH requests, as well as for POST.
Is one more flexible?
Yes. request.data is more flexible. | 0 | 0 | 0 | 0 | 2015-02-16T16:06:00.000 | 2 | 1.2 | true | 28,545,553 | 0 | 0 | 1 | 1 | The Django Rest Frameworks has this to say about POST, quoting a Django dev
Requests
If you're doing REST-based web service stuff ... you should ignore request.POST.
— Malcom Tredinnick, Django developers group
As a not-so-experienced web developer, why is request.POST (standard) discouraged over request.DATA (non-standard)? Is one more flexible?
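What request.data buys you can be mimicked outside DRF with a small content-type dispatch — a toy sketch, not DRF's actual parser machinery:

```python
import json
from urllib.parse import parse_qs

def parse_body(content_type: str, body: bytes):
    """Return the request payload as a dict regardless of how it was encoded."""
    if content_type == "application/json":
        return json.loads(body)
    if content_type == "application/x-www-form-urlencoded":
        return {k: v[0] for k, v in parse_qs(body.decode()).items()}
    raise ValueError(f"unsupported content type: {content_type}")

print(parse_body("application/json", b'{"name": "ada"}'))            # {'name': 'ada'}
print(parse_body("application/x-www-form-urlencoded", b"name=ada"))  # {'name': 'ada'}
```

request.POST only covers the form-encoded branch (and only for POST requests); request.data covers both, on PUT and PATCH as well.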
Auto image resizing/compressing using python/django | 28,550,992 | 0 | 0 | 162 | 0 | python,django,image,web | Have multiple copies of the same image at different resolutions on the server, and serve the correct one according to screen size using CSS media queries | 0 | 0 | 0 | 0 | 2015-02-16T19:21:00.000 | 1 | 1.2 | true | 28,548,813 | 0 | 0 | 1 | 1 | I'm working on a responsive website that uses Django and a lot of the content is static. Most of this content consists of high-resolution photos, and because of this, it takes too much time to load.
Is there any way (using python/django) to make it so that to load such an image, a server request is made and then a function automatically resizes the image to the size that it needs to be (bigger for a desktop, smaller for a smartphone, etc) or compresses the image so that it doesn't take so much time to load? |
Django - how can I have a field in my model consisting of tuple of integers? | 28,551,333 | 0 | 0 | 339 | 0 | python,django | Depends on how you intend to use them after storing in the database; 2 methods I can think of are:
Option 1)
models.IntegerField(unique=True)
now the trick is loading data and parsing it: you would have to concatenate the numbers and then have a way to split them back out.
A faster option would be
Option 2)
models.CommaSeparatedIntegerField(max_length=1024, unique=True)
not sure how it handles unique values; likely '20,40' is not equal to '40,20', so those two sets would be unique.
or just implement it yourself in a custom field/functions in the model. | 0 | 0 | 0 | 0 | 2015-02-16T21:58:00.000 | 1 | 0 | false | 28,551,063 | 0 | 0 | 1 | 1 | I am defining the models for my Django app, and I would like to have a field for a model consisting of a tuple of two (positive) integers. How can I do this? I'm looking at the Django Models API reference but I can't see any way of doing this. |
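Whichever field is used, Option 2 boils down to packing and unpacking a pair — a plain-Python sketch of the round trip (the Django side would store pack_pair's output in the text column):

```python
def pack_pair(a: int, b: int) -> str:
    """Serialize two positive integers into a single comma-separated column value."""
    if a < 0 or b < 0:
        raise ValueError("both integers must be positive")
    return f"{a},{b}"

def unpack_pair(raw: str) -> tuple:
    """Parse the stored value back into an (int, int) tuple."""
    a, b = (int(part) for part in raw.split(","))
    return (a, b)
```

As the answer notes, '20,40' and '40,20' are distinct strings, so a unique constraint treats them as different pairs.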
Which HTTP method should I use to update a session attribute? | 28,561,881 | 3 | 3 | 131 | 0 | python,django,http,standards | I personally don't ever consider it safe to use GET for a request that will have side effects.
I try to always follow the practice of POST + redirection to another page.
This solves all kinds of problems, such as F5 refreshing the action, the user bookmarking a URL which has a side effect and so on.
In your case, the update is harmless, but using POST at least conveys the fact that it's an update and may be useful for tools such as caching software (POST requests usually aren't cached).
On top of that, applications often change, and you may at some point need to modify the app in such a way that the update isn't so harmless anymore.
It's also generally difficult to guarantee that anything is completely secure in web development, so I prefer to stay on the safe side. | 0 | 0 | 0 | 0 | 2015-02-17T12:32:00.000 | 1 | 1.2 | true | 28,561,598 | 0 | 0 | 1 | 1 | If I have a view which the only purpose is to update a value in my session, is it safe to use it over GET or should I use POST and CSRF protection?
The value modified in the session is only used to change the user's context and if somebody manage to change the user's context in his own browser that should be harmless. |
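The convention from the answer can be condensed into a framework-agnostic sketch — plain status codes stand in for Django's HttpResponse types here:

```python
def switch_context(session: dict, method: str, new_context: str, csrf_ok: bool = True) -> int:
    """Handle the context-switch endpoint: only POST may mutate state."""
    if method != "POST":
        return 405        # a GET with side effects invites prefetchers and caches to trigger it
    if not csrf_ok:
        return 403        # the CSRF check guards the state change
    session["context"] = new_context
    return 302            # redirect after POST so a refresh can't replay the action
```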
Is there a point to setting __all__ and then using leading underscores anyway? | 28,572,917 | 5 | 5 | 234 | 0 | python,cpython | Aside from the "private-by-convention" functions with _leading_underscores, there are:
Quite a few imported names;
Four class names;
Three function names without leading underscores;
Two string "constants"; and
One local variable (nobody).
If __all__ wasn't defined to cover only the classes, all of these would also be added to your namespace by a wildcard from server import *.
Yes, you could just use one method or the other, but I think the leading underscore is a stronger sign than the exclusion from __all__; the latter says "you probably won't need this often", the former says "keep out unless you know what you're doing". They both have their place. | 0 | 0 | 0 | 0 | 2015-02-17T22:55:00.000 | 3 | 1.2 | true | 28,572,764 | 0 | 0 | 1 | 2 | I've been reading through the source for the cpython HTTP package for fun and profit, and noticed that in server.py they have the __all__ variable set but also use a leading underscore for the function _quote_html(html).
Isn't this redundant? Don't both serve to limit what's imported by from HTTP import *?
Why do they do both? |
Is there a point to setting __all__ and then using leading underscores anyway? | 28,572,885 | 5 | 5 | 234 | 0 | python,cpython | __all__ indeed serves as a limit when doing from HTTP import *; prefixing _ to the name of a function or method is a convention for informing the user that that item should be considered private and thus used at his/her own risk. | 0 | 0 | 0 | 0 | 2015-02-17T22:55:00.000 | 3 | 0.321513 | false | 28,572,764 | 0 | 0 | 1 | 2 | I've been reading through the source for the cpython HTTP package for fun and profit, and noticed that in server.py they have the __all__ variable set but also use a leading underscore for the function _quote_html(html).
Isn't this redundant? Don't both serve to limit what's imported by from HTTP import *?
Why do they do both? |
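Both mechanisms are easy to see in a throwaway module — built dynamically here so the example is self-contained:

```python
import sys
import types

# build a module equivalent to a stripped-down server.py
mod = types.ModuleType("demo")
exec(
    "__all__ = ['Public']\n"        # explicit export list
    "class Public: pass\n"
    "class Helper: pass\n"          # excluded by __all__
    "def _private(): pass\n",       # excluded by convention either way
    mod.__dict__,
)
sys.modules["demo"] = mod

ns = {}
exec("from demo import *", ns)
print(sorted(n for n in ns if not n.startswith("__")))  # ['Public']
```

Without `__all__`, the wildcard would still skip `_private` (the underscore convention) but would drag in `Helper` and every imported name — which is why the stdlib uses both.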
How to use the Python getpass.getpass in PyCharm | 32,341,555 | 6 | 33 | 22,724 | 0 | python,pycharm | I've run into this running Pycharm CE 4.5 on Windows. The workaround I use is to run your program in debug mode, then you get a console tab where you can enter your password when using getpass. | 0 | 0 | 0 | 0 | 2015-02-18T08:56:00.000 | 6 | 1 | false | 28,579,468 | 1 | 0 | 1 | 1 | I have found getpass does not work in PyCharm. It just hangs.
In fact it seems msvcrt.getch and raw_input also don't work, so perhaps the issue is not with getpass but with the 'i' bit of PyCharm's stdio handling.
The problem is, I can't put my personal password into code as it would end up in SVN which would be visible to other people. So I use getpass to get my password each time.
On searching, all I can find is that "Pycharm does some hacking to get Django working with getpass" but no hint as to what that hack is....
I've looked at getpass and it uses msvcrt on Windows (so this problem might only be on Windows)
My question is: Is there a workaround for this issue?
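Besides running under the debugger, a common workaround is to fall back to plain (echoing) input when stdin isn't a real terminal — acceptable for a dev-only prompt, not for shared machines; `ask_password` is a hypothetical helper:

```python
import getpass
import sys

def ask_password(prompt: str = "Password: ") -> str:
    """Use getpass on a real terminal; fall back to echoing input in IDE consoles."""
    if sys.stdin.isatty():
        return getpass.getpass(prompt)
    return input(prompt)  # the console shows what you type, but at least it doesn't hang
```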
Django app - deploy using UWSGI or Phusion Passenger | 28,599,998 | 2 | 1 | 760 | 0 | python,django,deployment,passenger,uwsgi | Production performance is pretty the same, so I wouldn't worry about that. uWSGI has some advanced builtin features like clustering and a cron API while Phusion Passenger is more minimalist, but Phusion Passenger provides more friendly tools for administration and inspection (e.g. passenger-status, passenger-memory-stats, passenger-config system-metrics). | 0 | 1 | 0 | 0 | 2015-02-19T01:27:00.000 | 1 | 0.379949 | false | 28,597,205 | 0 | 0 | 1 | 1 | Which way of deploying Django app is better (or maybe the better question would be what are pros and cons):
using UWSGI,
using Phusion Passenger?
In my particular case the most important advantage of using Passenger is ease of use (on my hosting I need to place a single file in the project directory and it's done), but what about performance, etc.?
What do you think? |
Django's "call_command" hangs the application | 28,642,118 | 1 | 1 | 347 | 0 | python,django,parallel-processing,celery,mongoengine | When you make the synchronous calls to external systems it will tie up a thread in the application server, so depending on application server you choose and how many concurrent threads/users you have will determine whether doing it that way will work for you.
Usually when you have long running requests like that it is a good idea to use a background processing system such as celery, like you suggest. | 0 | 1 | 0 | 0 | 2015-02-19T11:50:00.000 | 1 | 1.2 | true | 28,605,646 | 0 | 0 | 1 | 1 | I'm working on a project that uses Django and mongoengine. When a user presses a button, a trigger to a call_command (django.core.management - just calls a script it seems to me) is made which sshs to multiple servers in parallel, copies some files, parses them and stores them in the database.
The problem is that when the button is pressed and the above process is running, if any other user tries to use the website, it doesn't load.
Is this because of mongo's lock? This happens as soon as the button is pressed (so when the connections to other servers are still made, not yet writing to the DB) so I was thinking that it's not a mongo issue.
So is it a Django issue calling the command synchronously? Do I need to use Celery for this task? |
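The fix the answer points at — get the long-running work off the request thread — can be illustrated with nothing but the standard library; in production, Celery plays the role of this worker/queue pair:

```python
import queue
import threading

jobs = queue.Queue()
results = []

def worker():
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut the worker down
            break
        job()                    # ssh/copy/parse work happens here, off the request thread
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

# the "button press" handler just enqueues and returns immediately:
jobs.put(lambda: results.append("files copied and parsed"))
jobs.join()                      # only for the demo; a web view would not block here
```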
How do I design an API such that code updates don't cause interruptions? | 28,629,932 | 1 | 0 | 33 | 0 | python,api,high-availability | actually there is no silver bullet. you mention two different things. one is availability. it depends on how many nines you want to have in your 9,999... availability. second thing is api change. so:
availability:
some technologies allow you to do hot changes/deployments, which means pending requests go down the old path and new requests go down the new path. if your technology doesn't support it or you can't use it for other reasons, there are other options
in small scale intranet applications you simply don't care. you stop the world: stop the application, upload the new version and start. on application stop many web frameworks stop accepting new connections and wait until all pending requests are finished. if yours doesn't support it you have 2 options:
ignore it (db will rollback current transaction, user will get error)
implement it yourself (may be challenging).
and you do your best to shorten the inactivity period.
if you can't afford to stop everything then you can do:
clustering. restart services one by one, so some server is available all the time. that's not always possible because sometimes you have to change your database and you can't always do it on a working system, or you can't afford to lose any data in case of update failure
microservices. if you split your application into many independent components connected with persistent queues then you turn off only some parts of your system (graceful degradation). for example you can disable the component that writes changes to the database but still allow reads. if you have the infrastructure to do it quickly then the update may go unnoticed - requests will be put into queues and picked up by the new version
api change:
you version your api. each request says which version it requires.
if you control all your clients / small scale / decide not to support old versions: you don't care. everyone has to update their client.
if not, then again microservices may help. you split your public api from your internal api. and you keep all your public api services running and announce that some of them are deprecated. you monitor their usage. when you decide that usage of some version is low enough you announce end-of-life and later you shut down that specific version
that's best i've got for the moment | 0 | 0 | 1 | 0 | 2015-02-20T09:23:00.000 | 1 | 1.2 | true | 28,625,474 | 0 | 0 | 1 | 1 | I'm planning to deliver an API as a web-service. When I update my API's (interpreted) code-base, however, I anticipate possibly needing to restart the API service or even just have a period where the code is being overwritten. This introduces the possibility that incoming API requests may be dropped or, even worse, that processes triggered by the API may be interrupted.
The flask library for python appears to offer something of a solution; by enabling debug mode it will check the modified flag on all python files and, for each request, it will reload any modules that have changed. It's not the performance penalty that puts me off this approach - it's the idea that it looks slightly jilted.
Surely there is an elegant, high-availability approach to what must be a common issue?
Edit: As @piotrek answered below, "there is no silver bullet". One briefly visible comment suggested using DNS to switch to a new API server after an update. |
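The api-change half of the answer — every request names its version, old versions get a deprecation period, then end-of-life — fits in a toy dispatch table (handlers and payloads are invented for illustration):

```python
HANDLERS = {
    1: lambda payload: {"status": "ok", "deprecated": True, **payload},
    2: lambda payload: {"status": "ok", "deprecated": False, **payload},
}

def handle(version: int, payload: dict) -> dict:
    """Route a request to the handler for the version it asked for."""
    try:
        return HANDLERS[version](payload)
    except KeyError:
        return {"status": "gone", "error": f"api v{version} has reached end-of-life"}
```

Monitoring usage per version tells you when it is safe to delete an entry from the table.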
column "is_superuser" is of type integer but expression is of type boolean DJANGO Error | 28,636,553 | 0 | 0 | 2,600 | 1 | django,python-3.x,heroku,heroku-postgres | It seems to me that you are using raw SQL queries instead of Django ORM calls and this causes portability issues when you switch database engines. I'd strongly suggest to use ORM if it's possible in your case. If not, then I'd say that you need to detect database engine on your own and construct queries depending on current engine.
In this case you could try to use 0 instead of false, I guess this should work both on SQLite and Postgres. | 0 | 0 | 0 | 0 | 2015-02-20T18:50:00.000 | 3 | 0 | false | 28,636,141 | 0 | 0 | 1 | 2 | I could use some help. My python 3.4 Django 1.7.4 site worked fine using sqlite. Now I've moved it to Heroku which uses Postgres. And when I try to create a user / password i get this error:
column "is_superuser" is of type integer but expression is of type boolean
LINE 1: ...15-02-08 19:23:26.965870+00:00', "is_superuser" = false, "us...
^
HINT: You will need to rewrite or cast the expression.
The last function call in the stack trace is:
/app/.heroku/python/lib/python3.4/site-packages/django/db/backends/utils.py in execute
return self.cursor.execute(sql, params) ...
I don't have access to the base django code, just the code on my app. So any help getting this to work would be really helpful. |
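If the raw SQL truly can't be avoided, the engine-detection idea from the answer can be sketched like this (the helper name and engine strings are illustrative, not Django API; a real Django app would read the engine from settings.DATABASES):

```python
# Hedged sketch: choose a boolean literal that the current backend accepts.
# SQLite stores booleans as integers (0/1); PostgreSQL has a real boolean type.
def bool_literal(value, engine_name):
    if "sqlite" in engine_name:
        return "1" if value else "0"
    return "true" if value else "false"

# In Django the engine name would come from settings.DATABASES["default"]["ENGINE"]
sql = "UPDATE auth_user SET is_superuser = %s" % bool_literal(
    False, "django.db.backends.postgresql_psycopg2")
```

That said, the ORM route suggested in the answer avoids the whole problem, since Django emits the right literal for each backend.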
column "is_superuser" is of type integer but expression is of type boolean DJANGO Error | 28,638,965 | -1 | 0 | 2,600 | 1 | django,python-3.x,heroku,heroku-postgres | The problem is caused by a variable trying to change data types (i.e. from a char field to date-time) in the migration files. A database like PostgreSQL might not know how to change the variable type.
So, make sure the variable has the same type in all migrations. | 0 | 0 | 0 | 0 | 2015-02-20T18:50:00.000 | 3 | -0.066568 | false | 28,636,141 | 0 | 0 | 1 | 2 | I could use some help. My python 3.4 Django 1.7.4 site worked fine using sqlite. Now I've moved it to Heroku which uses Postgres. And when I try to create a user / password i get this error:
column "is_superuser" is of type integer but expression is of type boolean
LINE 1: ...15-02-08 19:23:26.965870+00:00', "is_superuser" = false, "us...
^
HINT: You will need to rewrite or cast the expression.
The last function call in the stack trace is:
/app/.heroku/python/lib/python3.4/site-packages/django/db/backends/utils.py in execute
return self.cursor.execute(sql, params) ...
I don't have access to the base django code, just the code on my app. So any help getting this to work would be really helpful. |
openFile with pandoc 1.13.2 - Windows 8.1 | 28,641,007 | 0 | 0 | 173 | 0 | windows,ipython-notebook,pandoc | I finally solved my problem by adding the full paths to my files. (In the end I used wkhtmltopdf, which is simpler to use and gives a good result.) | 0 | 0 | 0 | 0 | 2015-02-20T23:53:00.000 | 1 | 0 | false | 28,640,234 | 1 | 0 | 1 | 1 | Sorry for my English in my post (it is my first on this forum, and my question is perhaps stupid).
I am encountering a problem converting an HTML file to a PDF file with pandoc.
Here is my code in the console
set Path=%Path%;C:\Users\nicolas\AppData\Local\Pandoc
(adding the Pandoc directory to the path)
followed by
pandoc --data-dir=C:\Users\nicolas\Desktop essai.html -o essai.pdf
As indicated, my file is on the Desktop, but I got the following error:
pandoc: essai.html: openFile: does not exist (No such file or directory)
I get the same error if I do (with the file essai.html in the same folder as pandoc.exe):
pandoc essai.html -o essai.pdf
Do you have any idea of the cause of my problem? (Note that the name of the file I want to convert is correct.)
Remark: my original problem was to create a PDF faithful to the beautiful HTML file generated by IPython Notebook via pandoc, but I encounter the same kind of problem when I want to convert a .ipynb file to PDF with nbconvert.
How can i add junit test for a java program using python for continuous integration in jenkins | 31,576,925 | 0 | 0 | 516 | 0 | java,python,junit,jenkins,continuous-integration | Jenkins allows you to run any external command, so you can just call your Python script afterwards. | 0 | 0 | 0 | 1 | 2015-02-22T05:09:00.000 | 1 | 0 | false | 28,654,775 | 0 | 0 | 1 | 1 | Hello everyone, I am a grader for a programming language class, and I am trying to use Jenkins continuous integration to run some JUnit tests on the code that students push to GitHub.
I was able to get all the committed jobs into Jenkins, but can I run a Python file in order to push a testing class into their code, then build their projects and get an XML report about their programs' tests?
Is it possible to write on s3 key using boto? | 28,658,712 | 1 | 0 | 210 | 0 | python,amazon-s3,boto,librsync | There is no way to append to or modify an existing object in S3. You can overwrite it completely with new content and you can have versioning enabled on the bucket so the previous versions of the object are still accessible but modifying an existing object is just not supported by the S3 service or API. | 0 | 0 | 0 | 0 | 2015-02-22T07:31:00.000 | 1 | 0.197375 | false | 28,655,576 | 0 | 0 | 1 | 1 | I have an app built with boto that syncs files locally using librsync (wrapped in a Python module). I was wondering if it is possible to write to S3 keys so that I could use librsync remotely; for example, I would sync a local file with a file in S3 by taking signatures and deltas and patching the result. In the boto documentation it says open_write is not implemented yet. But I do know that folks like Dropbox use S3 and librsync too, so there must be a way... Thanks.
Is it (un)safe to let users edit email Django templates in your app for emails? | 28,676,062 | 0 | 3 | 183 | 0 | python,django,email,templates | I believe there are many answers on here already regarding this; but to summarize what I've found: It is "safe" to do so, but take care what variables/objects you expose to the user (i.e. include in the context of the template to be rendered).
render_to_string('template_name.txt', {'user': Users}) would be really bad :) | 0 | 0 | 0 | 0 | 2015-02-23T14:12:00.000 | 2 | 1.2 | true | 28,675,722 | 0 | 0 | 1 | 2 | I'm creating a small SaaS app in Django. It gathers data from the webservers of different organizations. Once in a while it automatically needs to send out notification mails to their customers (domain owners).
I would like to let our users (the web hosters) change the email templates to their liking/needs before sending them out. The email templates are plain Django templates, including a number of available variables. So, I created a model for the email templates, which can be edited by the users through a form. They have access to a limited number of template variables per email template.
Are there any security issues/risks that I need to be aware of? Or is this approach recommended?
My approach is currently aimed at server side rendering of the emails. I also checked out some solutions for client side rendering, like Dust.js, but I'm not yet convinced that it will help me. |
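The "take care what you expose" advice from the answers can be illustrated with a stdlib sketch. It uses string.Template rather than real Django templates, and the whitelist below is an assumption, but the idea carries over: only explicitly whitelisted values ever reach a user-editable template.

```python
from string import Template

# Assumed whitelist of variables that are safe to expose in user-editable templates
ALLOWED_VARS = {"customer_name", "domain", "expiry_date"}

def render_user_template(template_text, context):
    # Drop anything not on the whitelist before substituting
    safe_context = {k: v for k, v in context.items() if k in ALLOWED_VARS}
    # safe_substitute leaves unknown $placeholders intact instead of raising
    return Template(template_text).safe_substitute(safe_context)

msg = render_user_template(
    "Hello $customer_name, $domain expires on $expiry_date. $password",
    {"customer_name": "Ann", "domain": "example.org",
     "expiry_date": "2015-03-01", "password": "should-never-leak"},
)
```

Here the $password placeholder survives untouched instead of leaking a value, because safe_substitute only ever sees the whitelisted keys.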
Is it (un)safe to let users edit email Django templates in your app for emails? | 28,676,356 | -1 | 3 | 183 | 0 | python,django,email,templates | It all depends on the context in which the template will be evaluated, just make sure that no variable is passed that should be considered private.
Also, should a security bug be discovered in Django's templating system, your web application would be at risk. You would have to validate the input, but you can't really do that, because the input does not have any particular structure.
So try and sandbox the process from the rest of the application, if you can. Or simply ask yourself if this feature is really necessary and if you can't just let the user specify what to include in the message by using a checklist or anything similar. At that point, validating the input becomes trivial and you don't have to expose the full template to the user. | 0 | 0 | 0 | 0 | 2015-02-23T14:12:00.000 | 2 | -0.099668 | false | 28,675,722 | 0 | 0 | 1 | 2 | I'm creating a small SaaS app in Django. It gathers data from the webservers of different organizations. Once in a while it automatically needs to send out notification mails to their customers (domain owners).
I would like to let our users (the web hosters) change the email templates to their liking/needs before sending them out. The email templates are plain Django templates, including a number of available variables. So, I created a model for the email templates, which can be edited by the users through a form. They have access to a limited number of template variables per email template.
Are there any security issues/risks that I need to be aware of? Or is this approach recommended?
My approach is currently aimed at server side rendering of the emails. I also checked out some solutions for client side rendering, like Dust.js, but I'm not yet convinced that it will help me. |
Nodejs Server, get JSON data from Python in html client with Ajax | 28,687,176 | 0 | 1 | 932 | 0 | python,ajax,json,node.js | If your Node.js application needs to process the results of requests made to the Python server, then you need to call the Python server from the Node.js app (e.g. with the request library) and then process the result. Otherwise, you should simply call the Python server's resources through client-side AJAX requests.
Thanks | 0 | 0 | 1 | 0 | 2015-02-23T17:06:00.000 | 2 | 0 | false | 28,679,250 | 0 | 0 | 1 | 1 | I need to get data (JSON) into my HTML page with the help of AJAX. I have a Node.js server serving requests.
I have to get the JSON from the server; Python code processes the data and produces the JSON as output.
So should I save the JSON in a DB and access it? (Seems complicated just for one single use.)
Should I run a Python server to serve the requests with JSON as the result (calling it directly from HTML via AJAX)?
Should I serve requests with Node.js alone, by calling a Python method from Node.js? If so, how do I call the Python method?
If calling Python requires running a server, which one is preferable (zerorpc, or some kind of web framework)?
Which is the best solution, or which is preferred over the others in what scenario and based on what factors?
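The "run a Python server that answers with JSON" option can be sketched with nothing but the stdlib; the route and payload here are made up for illustration, and a real app would likely use a framework instead:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class JSONHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any Python-side processing would happen here; the payload is illustrative
        body = json.dumps({"status": "ok", "values": [1, 2, 3]}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # keep the console quiet
        pass

def make_server(port=0):
    # port=0 lets the OS pick a free port
    return HTTPServer(("127.0.0.1", port), JSONHandler)
```

An AJAX call (or Node's request library) would then simply GET this URL and parse the JSON body.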
"Unknown command syncdb" running "python manage.py syncdb" | 35,020,640 | 8 | 32 | 73,141 | 1 | django,sqlite,python-3.x,django-1.9 | The new Django 1.9 has removed "syncdb";
run "python manage.py migrate".
If you are trying to create a superuser, run "python manage.py createsuperuser".
python manage.py syncdb
but when I execute the command I receive the following error:
Unknown command: 'syncdb'
Type 'manage.py help' for usage.
But when I run
manage.py help
I don't see any command that looks like a substitute for
python manage.py syncdb
Version of Python I use: 3.4.2 Version of Django I use:1.9
I would be very grateful if somebody could help me to solve this issue.
Regards and thanks in advance |
"Unknown command syncdb" running "python manage.py syncdb" | 34,814,438 | 0 | 32 | 73,141 | 1 | django,sqlite,python-3.x,django-1.9 | You can run the command from the project folder as: "python.exe manage.py migrate", from a commandline or in a batch-file.
You could also downgrade Django to an older version (before 1.9) if you really need syncdb.
For people trying to run Syncdb from Visual Studio 2015:
The option syncdb was removed in Django 1.9 (deprecated since 1.7), but this option is currently not updated in the context menu of VS2015.
Also, in case you didn't get asked to create a superuser you should manually run this command to create one: python.exe manage.py createsuperuser | 0 | 0 | 0 | 0 | 2015-02-24T00:01:00.000 | 10 | 0 | false | 28,685,931 | 0 | 0 | 1 | 6 | I want to create the tables of one database called "database1.sqlite", so I run the command:
python manage.py syncdb
but when I execute the command I receive the following error:
Unknown command: 'syncdb'
Type 'manage.py help' for usage.
But when I run
manage.py help
I don't see any command that looks like a substitute for
python manage.py syncdb
Version of Python I use: 3.4.2 Version of Django I use:1.9
I would be very grateful if somebody could help me to solve this issue.
Regards and thanks in advance |
"Unknown command syncdb" running "python manage.py syncdb" | 36,004,441 | 0 | 32 | 73,141 | 1 | django,sqlite,python-3.x,django-1.9 | Run the command python manage.py makemigrations, and then python manage.py migrate to sync.
python manage.py syncdb
but when I execute the command I receive the following error:
Unknown command: 'syncdb'
Type 'manage.py help' for usage.
But when I run
manage.py help
I don't see any command that looks like a substitute for
python manage.py syncdb
Version of Python I use: 3.4.2 Version of Django I use:1.9
I would be very grateful if somebody could help me to solve this issue.
Regards and thanks in advance |
"Unknown command syncdb" running "python manage.py syncdb" | 42,688,208 | 1 | 32 | 73,141 | 1 | django,sqlite,python-3.x,django-1.9 | Django has removed the python manage.py syncdb command; now you can simply use python manage.py makemigrations followed by python manage.py migrate. The database will sync automatically.
python manage.py syncdb
but when I execute the command I receive the following error:
Unknown command: 'syncdb'
Type 'manage.py help' for usage.
But when I run
manage.py help
I don't see any command that looks like a substitute for
python manage.py syncdb
Version of Python I use: 3.4.2 Version of Django I use:1.9
I would be very grateful if somebody could help me to solve this issue.
Regards and thanks in advance |
"Unknown command syncdb" running "python manage.py syncdb" | 42,795,652 | 2 | 32 | 73,141 | 1 | django,sqlite,python-3.x,django-1.9 | From Django 1.9 onwards the syncdb command is removed. So instead of using it, you can use the migrate command, e.g.: python manage.py migrate. Then you can run your server with the python manage.py runserver command.
python manage.py syncdb
but when I execute the command I receive the following error:
Unknown command: 'syncdb'
Type 'manage.py help' for usage.
But when I run
manage.py help
I don't see any command that looks like a substitute for
python manage.py syncdb
Version of Python I use: 3.4.2 Version of Django I use:1.9
I would be very grateful if somebody could help me to solve this issue.
Regards and thanks in advance |
"Unknown command syncdb" running "python manage.py syncdb" | 43,525,717 | 0 | 32 | 73,141 | 1 | django,sqlite,python-3.x,django-1.9 | Alternate way:
Uninstall the Django module from the environment
Edit requirements.txt and type Django<1.9
Run the Install from Requirements option in the environment
Try syncdb again
This worked for me.
python manage.py syncdb
but when I execute the command I receive the following error:
Unknown command: 'syncdb'
Type 'manage.py help' for usage.
But when I run
manage.py help
I don't see any command that looks like a substitute for
python manage.py syncdb
Version of Python I use: 3.4.2 Version of Django I use:1.9
I would be very grateful if somebody could help me to solve this issue.
Regards and thanks in advance |
does the function in a python program written using sched impact/block the other functionality in the program? | 29,127,707 | 0 | 0 | 39 | 0 | python,module,scheduler | For now, I changed the function call for the tidy-up feature to use the background scheduler implementation from Python's APScheduler module.
This does not impact the function serving HTTP requests and has solved my problem for now | 0 | 0 | 0 | 0 | 2015-02-24T15:20:00.000 | 1 | 1.2 | true | 28,699,580 | 0 | 0 | 1 | 1 | I am new to the programming world and trying out something with Python.
My requirement is to have an HTTP web server (built using BaseHTTPServer) that runs forever, which takes an input binary file through an HTML form based on user selection and returns a set of HTML files back to the web client.
As part of this, when the user selects his specific input file, a set of folders is created on the server with HTML files written inside them, so I thought of putting in a tidy-up functionality for these folders on the server, so that every day the tidy-up would clean up the folders automatically based on a configuration.
I could build both these modules in my script (HTTP web service & tidy-up on server); specifically, the tidy-up part is achieved using Python's sched module
Both of these functionalities are working independently, i.e.
when I comment out the function for tidy-up, I can access the server URL in the browser and the index.html page shows up correctly, and the rest works too (it accepts the binary, parsing happens and the output HTML files are returned)
when I comment out the function for the HTTP server, based on the configuration set, I am able to verify that the tidy-up functionality is working
But when I have both these functions in place, I see that the tidy-up function works/is invoked correctly at the scheduled time, but the index.html page is not loaded when I request the server in the browser
I researched the sched module enough to understand that it just schedules multiple events on the system by setting time delays and priorities
I am not able to get both functionalities working together
Questions:
Is this a correct approach, using sched to achieve the tidy up?
If yes, what could be the reason that the HTTP service functionality is blocked and only the tidy-up is working?
Any advice would be helpful. Thanks |
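For reference, the symptom in the question is consistent with sched.scheduler.run() blocking the thread it is called on, which would starve the HTTP server. The background-scheduler idea from the answer can be sketched with only the stdlib by running the scheduler on a daemon thread (the interval and the tidy-up task below are illustrative):

```python
import sched
import threading
import time

def run_periodically(task, interval, scheduler):
    """Run task every `interval` seconds without blocking the caller."""
    def step():
        task()
        scheduler.enter(interval, 1, step)  # re-arm for the next run
    scheduler.enter(interval, 1, step)
    # scheduler.run() blocks, so it lives on a daemon thread;
    # the main thread stays free for e.g. HTTPServer.serve_forever()
    threading.Thread(target=scheduler.run, daemon=True).start()

runs = []  # stand-in for the real tidy-up work, so the effect is observable
run_periodically(lambda: runs.append(time.time()), 0.05,
                 sched.scheduler(time.time, time.sleep))
```

With the scheduler isolated on its own thread, the main thread remains free to run the HTTP server's serve_forever() loop.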
Checking username availability - Handling of AJAX requests (Google App Engine) | 28,702,936 | 2 | 1 | 170 | 0 | python,ajax,google-app-engine,memcached,google-cloud-datastore | I would recommend the blur event of the username field, combined with some sort of inline error/warning display.
I would also suggest maintaining a memcache of registered usernames, to reduce DB hits and improve user experience - although probably not populating this with a warm-up, but instead only as requests are made. This is sometimes called the "Repository" pattern.
BUT, you can only populate the cache with USED usernames - you should not store the "available" usernames here (or if you do, use a much lower timeout).
You should always check directly against the DB/Datastore when actually performing the registration. And ideally in some sort of transactional method so that you don't have race conditions with multiple people registering.
BUT, all of this work is dependent on several things, including how busy your app is and what data storage tech you are using! | 0 | 0 | 0 | 0 | 2015-02-24T17:31:00.000 | 2 | 0.197375 | false | 28,702,423 | 0 | 0 | 1 | 1 | I want to add the 'check username available' functionality on my signup page using AJAX. I have a few doubts about the way I should implement it.
With which event should I register my AJAX requests? We can send the requests when the user focuses out of the 'username' input field (blur event) or as he types (keyup event). Which provides a better user experience?
On the server side, a simple way of dealing with requests would be to query my main 'Accounts' database. But this could lead to a lot of requests hitting my database (even more if we POST on the keyup event). Should I maintain a separate model for registered usernames only and use that to get better results?
Is it possible to use Memcache in this case? Initializing the cache with every username as a key and updating it as we register users, and using a random key to check if the cache is actually initialized, or passing the queries directly to the db.
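The caching advice from the answer - cache only USED usernames, and let misses fall through to the database - can be sketched like this (the in-memory set stands in for memcache, and db_lookup is an assumed callable):

```python
class UsernameRepository:
    """Caches usernames known to be taken; cache misses always hit the db."""

    def __init__(self, db_lookup):
        self._db_lookup = db_lookup      # callable: name -> bool (taken?)
        self._taken = set()              # stands in for memcache

    def is_available(self, name):
        if name in self._taken:          # cache hit: definitely taken
            return False
        taken = self._db_lookup(name)    # cache miss: ask the database
        if taken:
            self._taken.add(name)        # only cache USED names
        return not taken

fake_db = {"alice", "bob"}               # pretend registered-users table
repo = UsernameRepository(lambda n: n in fake_db)
```

Because only taken names are cached, a name reported available is never served from a stale cache entry - and, per the answer, actual registration must still check the database transactionally.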
BrowserMob only partially loading page on initial load; fine afterwards | 29,100,263 | 1 | 1 | 90 | 0 | python-2.7,selenium-webdriver,browsermob | When you configure the WebDriver in your test code, set the proxy address not as localhost:8080 but as 127.0.0.1:8080. I think that Firefox has some problems resolving the proxy localhost:8080 that it does not have with the explicit form 127.0.0.1:8080. | 0 | 0 | 1 | 0 | 2015-02-24T19:24:00.000 | 1 | 0.197375 | false | 28,704,465 | 0 | 0 | 1 | 1 | I'm trying to use BrowserMob to proxy pages with Selenium WebDriver. When the initial page request is made, many elements of the page fail to load (e.g., css, jquery includes). If I manually refresh the page everything loads as expected.
Has anyone else seen this behavior? Is there a solution?
Thanks! |
For submiting HTML form data should I use the variable, "id", or "name" | 28,707,296 | 2 | 1 | 39 | 0 | python,html,forms,python-3.x | You should use the "name" attribute.
For example, with radio buttons, each button will have the same name but a different id. When submitted, only the one with a value (the selected one) will be submitted. | 0 | 0 | 0 | 0 | 2015-02-24T22:03:00.000 | 1 | 1.2 | true | 28,707,240 | 1 | 0 | 1 | 1 | I am trying to submit a form via Python and I need to know: should I use the "id" value or the "name" value? They are both different.
Bokeh Server Files: load_from_config = False | 28,711,218 | 0 | 0 | 132 | 0 | python,python-2.7,bokeh | These files are to store data and plots persistently on a bokeh-server, for instance if you want to publish a plot so that it will always be available. If you are just using the server locally and always want a "clean slate" you can run with --backend=memory to use the in-memory data store for Bokeh objects. | 0 | 0 | 0 | 0 | 2015-02-25T00:15:00.000 | 1 | 0 | false | 28,708,890 | 0 | 0 | 1 | 1 | on Bokeh 0.7.1
I've noticed that when I run the bokeh-server, files appear in the directory that look like bokeh.data, bokeh.server, and bokeh.sets, if I use the default backend, or redis.db if I'm using redis. I'd like to run my server from a clean start each time, because I've found that if the files exist, over time, my performance can be severely impacted.
While looking through the API, I found the option to turn "load_from_config" from True to False. However, tinkering around with this didn't seem to resolve the situation (it seems to only control log-in information, on 0.7.1?). Is there a good way to resolve this and eliminate the need for me to manually remove these files each time? What is the advantage of having these files in the first place? |
Can I use a port-redirect in Apache as a security layer? | 28,710,800 | 1 | 0 | 74 | 0 | apache,security,ipython | The apache proxy seems a viable solution (if that meets your needs for login security). You could probably use iptables to do a port forward from that server port (using localhost probably?) to port 80 on apache. This way nobody will be able to access it directly. | 0 | 0 | 0 | 0 | 2015-02-25T03:33:00.000 | 1 | 0.197375 | false | 28,710,649 | 0 | 0 | 1 | 1 | I'm not an expert web app developer. I have an IPython Notebook server running on some port. The in-built security is not great -- one global password can be set, no support for multiple users or for integrating with (e.g.) active directory or OpenID.
I believe I can use an Apache port redirect to control access. e.g. put a firewall up over the port so external users can't go straight to the notebook, but rather they have to go via port 80 as served by Apache.
Is there some way to write a login page which provides multi-user authentication, then only pass authorised users through to the notebook server?
I apologise in advance if I have used the wrong terminology or glossed over important details. |
Python bottle: iterate through folder in app's route or in template? | 28,722,748 | 1 | 1 | 105 | 0 | python,bottle | In general, a best practice is to do the work in the app, and do (only) presentation in the template. This keeps your so-called business logic as separate as possible from your rendering.
Even if it wasn't a bad idea, I don't even know how you could walk through a directory of files from within a template. The subset of Python that's available to you in a template is pretty constrained.
Hope that helps! | 0 | 1 | 0 | 0 | 2015-02-25T08:17:00.000 | 1 | 0.197375 | false | 28,714,197 | 0 | 0 | 1 | 1 | I'm beginning to work on a Python 3.4 app to serve a little website (mostly media galleries) with the bottle framework. I'm using bottle's 'simple template engine'
I have a YAML file pointing to a folder which contains images and other YAML files (with metadata for videos).
The app or the template should then grab all the files and treat them according to their type.
I'm now at the point where I have to decide whether I should iterate through the folder within the app (in the function behind the @app.route decorator) or in the template.
Is there a difference in performance / caching between these two approaches?
Where should I place my iteration loops for the best performance and the most "pythonic" way? |
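Doing the iteration in the route handler, as the answer recommends, might look like the following sketch (the file-type rules are assumptions based on the question's description of the folder contents):

```python
from pathlib import Path

def collect_gallery_items(folder):
    """Walk `folder` once in the route handler; the template only renders."""
    items = []
    for path in sorted(Path(folder).iterdir()):
        suffix = path.suffix.lower()
        if suffix in {".jpg", ".jpeg", ".png", ".gif"}:
            items.append({"type": "image", "name": path.name})
        elif suffix in {".yml", ".yaml"}:
            # YAML files hold video metadata per the question
            items.append({"type": "video", "name": path.name})
    return items

# In a bottle route this would be used roughly like (sketch):
# @app.route("/gallery/<name>")
# def gallery(name):
#     return template("gallery", items=collect_gallery_items(MEDIA_ROOT / name))
```

The template then only loops over a ready-made list, keeping the business logic out of the rendering layer.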
Share choices across Django apps | 28,719,139 | 0 | 5 | 1,335 | 0 | python,django | This problem has several solutions:
Put your choices in the settings file and read those values in your apps.
Set a CONSTANT in a designated app or model and access the choices via the owner's name.
Define the choice items with a one-to-many relation; you can then access the choices through that model.
...
Also, you can mix these approaches :) | 0 | 0 | 0 | 0 | 2015-02-25T12:27:00.000 | 3 | 0 | false | 28,719,031 | 0 | 0 | 1 | 2 | In my models I'm using the choices option in some of my fields. But I'm using the same choices in multiple apps in my Django project.
Where should I place my choices and how can I load these choices in all my apps? |
Share choices across Django apps | 28,719,284 | 4 | 5 | 1,335 | 0 | python,django | We usually have quite a few project-specific apps per project here, and to try and keep dependencies reasonably clean we usually have two more apps:
"core" in which we put things shared by other apps (any app can depend on "core", "core" doesn't depend on any app),
and "main" in which tie things together ("main" can depend on any app, no app is allowed to depend on "main").
In your case, these shared choices would belong to core.models. | 0 | 0 | 0 | 0 | 2015-02-25T12:27:00.000 | 3 | 0.26052 | false | 28,719,031 | 0 | 0 | 1 | 2 | In my models I'm using the choices option in some of my fields. But I'm using the same choices in multiple apps in my Django project.
Where should I place my choices and how can I load these choices in all my apps? |
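The "core" app idea from the second answer boils down to defining the choices exactly once in a shared module; the module and constant names below (core/choices.py, STATUS_CHOICES) are illustrative, not prescribed:

```python
# core/choices.py -- the only place the choices are defined
STATUS_CHOICES = (
    ("draft", "Draft"),
    ("review", "In review"),
    ("published", "Published"),
)

# Any app's models.py would then do (sketch):
# from core.choices import STATUS_CHOICES
# class Article(models.Model):
#     status = models.CharField(max_length=16, choices=STATUS_CHOICES)

def display_name(value, choices=STATUS_CHOICES):
    """Tiny helper mirroring Django's get_FOO_display() for plain Python code."""
    return dict(choices)[value]
```

Since "core" depends on nothing, any app can import from it without creating circular dependencies.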
Better logs and tracking bottlenecks in heroku | 28,755,134 | 1 | 0 | 40 | 0 | python,django,logging,heroku | The best tool I found is newrelic.com. It hooks nicely into Django apps and Heroku. It can even show you the bottlenecks due to queries and functions inside your views. | 0 | 0 | 0 | 0 | 2015-02-25T18:24:00.000 | 1 | 1.2 | true | 28,726,763 | 0 | 0 | 1 | 1 | I have a backend server running on heroku. Right now for going through logs all I have been using is the 'heroku logs' command. I have been using that command also to track how long different requests to each endpoint are taking.
Is there a better way to see a list of how long requests to different endpoints are taking, and a good way to track the bottlenecks that are slowing down these endpoints? Also, are there any good add-ons for Heroku that can point out bad responses that do not have status=200?
I am using python with django if that is relevant. |
Let website user access api flask | 28,750,364 | 0 | 0 | 143 | 0 | python,api,authentication,flask,flask-login | The way I ended up going was combining both approaches. user_logged_in fires whenever a user logs in. I used that method to generate an api-token and store it in the user object at login. Then, when the user wants to make an api call, the token is simply retrieved from the user object.
I'm not sure if this is best practice, but it seems to be working fine. | 0 | 0 | 0 | 0 | 2015-02-25T23:58:00.000 | 1 | 1.2 | true | 28,732,095 | 0 | 0 | 1 | 1 | I have two web applications. One is a website. The other is an API. Both are built using flask. They use different methods of authentication.
The website uses the flask-login library. Specifically, it uses login_user if user.check_password supplied by a form is true.
The api uses a cryptographically signed token. The api is used by mobile applications (ios for example). These applications make a call to /api/login and POST the same username and password that you would expect on the website. The api then returns a token which the app stores and uses for authentication in the future. The token is generated using the itsdangerous library. Specifically, it is created using TimedJSONWebSignatureSerializer.
I am experiencing a confusing problem, now, where one of our website pages needs to access our api. Of course the api won't allow access, because the user doesn't have a properly generated auth token. I have control over every part of the code, but I'm not sure what the most elegant solution is in this case. Should I stop using one of the authentication mechanisms? Should I somehow store the api auth token for the website user?
Any advice would be appreciated.
UPDATE
As I think about this problem, it occurs to me that I could change the token generation process employed by login_user. If login_user used the same token as the api, then presumably I could get the token from the session whenever the user needed to make an api request via the website. Not yet clear if this is insane. |
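A stdlib sketch of the approach from the answer - mint a token when the user logs in and stash it on the user, so the website can later present it to the API. The dict-based "user object" and the function names here are assumptions for illustration, not Flask-Login API:

```python
import secrets

def on_user_logged_in(user):
    """Hypothetical handler wired to Flask-Login's user_logged_in signal."""
    # 32 random bytes, URL-safe; stored so both the website and the API see it
    user["api_token"] = secrets.token_urlsafe(32)
    return user["api_token"]

def token_for_api_call(user):
    """Later, the website retrieves the stored token to call the API."""
    return user["api_token"]

user = {"name": "ann"}      # stands in for the real user model
issued = on_user_logged_in(user)
```

In the real app the token would be persisted on the user model (or signed with itsdangerous, as the API already does) rather than kept in a plain dict.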
Does Boto retry on failed md5 checks? | 28,745,850 | 1 | 1 | 329 | 0 | python,amazon-s3,boto | When boto uploads a file to S3 it calculates the MD5 checksum locally, sends that checksum to S3 as the Content-MD5 header and then checks the value of the ETag header returned by the S3 service against the previously computed MD5 checksum. If the ETag header does not match the MD5 it raises an S3DataError exception. This exception is a subclass of ClientError and client errors are not retried by boto.
It is also possible for the S3 service to return a BadDigest error if the Content-MD5 header we provide does not match the MD5 checksum computed by the service. This is a 400 response from S3 and is also considered a client error and would not be retried. | 0 | 0 | 1 | 0 | 2015-02-26T01:06:00.000 | 1 | 1.2 | true | 28,732,751 | 0 | 0 | 1 | 1 | The boto config has a num_retries parameter for uploads.
num_retries
The number of times to retry failed requests to an AWS server. If boto
receives an error from AWS, it will attempt to recover and retry the
request. The default number of retries is 5 but you can change the
default with this option.
My understanding is that this parameter governs how many times to retry on commands like set_content_from_string. According to the documentation, the same command will fail if the md5 checksum does not match upon upload. My question is, will boto also retry upon checksum failure, or does num_retry apply to a separate class of failures? |
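For reference, the local half of that check is just an MD5 over the object bytes; for a simple (non-multipart) PUT, the ETag S3 returns is that same hex digest, e.g.:

```python
import hashlib

def local_md5_hex(data):
    """The MD5 hex digest boto computes locally over the upload bytes."""
    return hashlib.md5(data).hexdigest()

def upload_looks_intact(data, etag_from_s3):
    # S3 wraps the ETag value in double quotes; strip before comparing.
    return local_md5_hex(data) == etag_from_s3.strip('"')
```

If the comparison fails, boto raises S3DataError as described above - a client error, so it is not covered by num_retries.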
how to handle tcp connections on django | 28,760,572 | 1 | 1 | 143 | 0 | python,django | I would recommend using Celery processes in the background and perhaps use the awesome Twisted library for handling your network requirements. | 0 | 0 | 0 | 0 | 2015-02-27T04:49:00.000 | 1 | 1.2 | true | 28,757,736 | 0 | 0 | 1 | 1 | I have a Django app that receives a user form. This input is used to create a TCP connection with a backend process and send and receive JSON on it.
How do I handle several TCP connections using the Django framework?
How to fetch data from a website using Python that is being populated by Javascript? | 28,852,463 | 1 | 0 | 272 | 0 | javascript,python,html,web-scraping,beautifulsoup | The Python binding for Selenium and phantomjs (if you want to use a headless browser as backend) are the appropriate tools for this job. | 0 | 0 | 1 | 0 | 2015-02-27T12:42:00.000 | 2 | 0.099668 | false | 28,765,398 | 0 | 0 | 1 | 2 | I want to fetch few data/values from a website. I have used beautifulsoup for this and the fields are blank when I try to fetch them from my Python script, whereas when I am inspecting elements of the webpage I can clearly see the values are available in the table row data.
When I looked at the HTML source, I noticed it is blank there too.
I came up with a reason: the website is using JavaScript to populate the values in the corresponding fields from its own database. If so, then how can I fetch them using Python?
How to fetch data from a website using Python that is being populated by Javascript? | 28,850,142 | 0 | 0 | 272 | 0 | javascript,python,html,web-scraping,beautifulsoup | Yes, you can scrape JS data; it just takes a bit more hacking. Anything a browser can do, Python can do.
If you're using Firebug, look at the network tab to see which particular request your data is coming from. In Chrome's element inspector, you can find this information in a tab named Network, too. Just hit Ctrl-F to search the response content of the requests.
If you find the right request, the data might be embedded in JS code, in which case you'll have some regex parsing to do. If you're lucky, the format is XML or JSON, in which case you can just use the associated built-in parser. | 0 | 0 | 1 | 0 | 2015-02-27T12:42:00.000 | 2 | 0 | false | 28,765,398 | 0 | 0 | 1 | 2 | I want to fetch a few data values from a website. I have used BeautifulSoup for this, but the fields are blank when I try to fetch them from my Python script, whereas when I inspect elements of the webpage I can clearly see the values in the table row data.
When I view the HTML source, I notice it is blank there too.
I came up with a reason: the website is using JavaScript to populate the values in their corresponding fields from its own database. If so, how can I fetch them using Python?
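If the values turn out to be embedded in a script tag rather than fetched by a separate request, the regex-plus-JSON approach from the answer above might look like this; the HTML and the chartData variable name are invented for illustration:

```python
import json
import re

html = """
<html><body>
<script>
var chartData = {"gold_price": 1204.5, "updated": "2015-02-27"};
renderChart(chartData);
</script>
</body></html>
"""

# Grab the JSON literal assigned to the variable, then hand it to the
# real JSON parser instead of trying to pick values out with regexes.
match = re.search(r"var\s+chartData\s*=\s*(\{.*?\});", html, re.DOTALL)
data = json.loads(match.group(1))
print(data["gold_price"])  # 1204.5
```

If the data instead comes from a separate XHR endpoint, you can usually skip the page entirely and request that endpoint directly, parsing its JSON or XML response with the standard library.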
How to make a webpage display dynamic data in real time using Python? | 28,787,299 | 0 | 1 | 6,330 | 0 | python,html,css,python-3.x | To achieve your goal, you need good knowledge of JavaScript, the language for dynamic web pages. You should be familiar with dynamic web techniques: AJAX, the DOM, JSON. So the main part is on the browser side; practically any Python web server fits. To "bridge the gap", the keyword is templates. There are quite a few templating engines for Python, so you can choose whichever suits you best. Some frameworks like Django bring their own templating engine.
And for your second question: when a web site doesn't offer an API, perhaps the owner of the site does not want his data to be used by others. | 0 | 0 | 0 | 0 | 2015-02-28T21:09:00.000 | 2 | 0 | false | 28,786,932 | 0 | 0 | 1 | 1 | I am working on making a GUI front end for a Python program using HTML and CSS (sort of similar to how a router is configured using a web browser). The program assigns values given by the user to variables and performs calculations with those variables, outputting the results. I have a few snags to work out:
How do I design the application so that the data is constantly updated, in real time? I.e. the user does not need to hit a "calculate" button nor refresh the page to get the results. If the user changes the value in a field, all other fields are simultaneously updated, et cetera.
How can a value for a variable be fetched from an arbitrary location on the internet, and also constantly updated if/when it is updated at the source? A few of the variables in the program are based on current market conditions (e.g. the current price of gold). I would like to make those values be automatically entered after retrieving them from certain sources that, let us assume, do not have APIs.
How do I build the application to display via HTML and CSS? Considering Python cannot be implemented like PHP, I am seeking a way to "bridge the gap" between HTML and Python, without such a "heavy" framework like Django, so that I can run Python code on the server-side for the webpage.
I have been looking into this for a quite some time now, and have found a wide range of what seem to be solutions, or that are nearly solutions but not quite. I am having trouble picking what I need specifically for my application. These are my best findings:
Werkzeug - I believe Werkzeug is the best lead I have found for putting the application together with HTML and Python. I would just like to be reassured that I understand it correctly and that it is, in fact, a solution for what I am trying to do.
WebSockets - For displaying the live data from an arbitrary website, I believe I could use this protocol. But I am not sure how I would implement this in practice. I.e. I do not understand how to target the value, and then continuously send it to my application. I believe this to be called scraping? |
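As a rough sketch of the server side of the template/AJAX split suggested in the answer above, a minimal stdlib-only WSGI endpoint (Werkzeug wraps this same WSGI interface) could serve the values for the page to poll; the route and field names are invented for illustration:

```python
import json
from wsgiref.simple_server import make_server

# Values the page polls for; in a real app these would be recomputed or
# fetched on each request (e.g. a current gold price).
STATE = {"gold_price": 1204.5, "total": 0.0}

def app(environ, start_response):
    """Tiny WSGI app: GET /values returns the current state as JSON."""
    if environ["PATH_INFO"] == "/values":
        body = json.dumps(STATE).encode("utf-8")
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

# To serve for real: make_server("", 8000, app).serve_forever()
# The page then polls /values with setInterval + XMLHttpRequest (or a
# WebSocket for true push) and rewrites the DOM with the new numbers.
```

The "constantly updated" behaviour lives in the browser: JavaScript polls (or subscribes) and patches the page, while Python only ever serves data.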
Let the SQL engine do the constraint check or execute a query to check the constraint beforehand | 28,787,981 | 1 | 1 | 61 | 1 | python,mysql,sql | The latter you need to do and handle in any case, so I do not see much value in querying for duplicates, except to show the user information beforehand - e.g. reporting "This username has already been taken, please choose another" while the user is still filling in the form. | 0 | 0 | 0 | 0 | 2015-02-28T22:40:00.000 | 2 | 0.099668 | false | 28,787,814 | 0 | 0 | 1 | 2 | Python application, standard web app.
If a particular request gets executed twice by error the second request will try to insert a row with an already existing primary key.
What is the most sensible way to deal with it?
a) Execute a query to check if the primary key already exists and do the checking and error handling in the python app
b) Let the SQL engine reject the insertion with a constraint failure and use exception handling to handle it back in the app
From a speed perspective it might seem that a failed request will take the same amount of time as a successful one, making (b) faster because it's only one request and not two.
However, when you take into account things like read-only DB slaves and table write-locks, things get fuzzy in my experience with scaling standard SQL databases.
Let the SQL engine do the constraint check or execute a query to check the constraint beforehand | 28,788,000 | 2 | 1 | 61 | 1 | python,mysql,sql | The best option is (b), from almost any perspective. As mentioned in a comment, there is a multi-threading issue. That means that option (a) doesn't even protect data integrity. And that is a primary reason why you want data integrity checks inside the database, not outside it.
There are other reasons. Consider performance. Passing data into and out of the database takes effort. There are multiple levels of protocol and data preparation, not to mention round trip, sequential communication from the database server. One call has one such unit of overhead. Two calls have two such units.
It is true that under some circumstances, a failed query can have a long clean-up period. However, constraint checking for unique values is a single lookup in an index, which is both fast and has minimal overhead for cleaning up. The extra overhead for handling the error should be tiny in comparison to the overhead for running the queries from the application -- both are small, one is much smaller.
If you had a query load where the inserts were really rare with respect to the comparison, then you might consider doing the check in the application. It is probably a tiny bit faster to check to see if something exists using a SELECT rather than using INSERT. However, unless your query load is many such checks for each actual insert, I would go with checking in the database and move on to other issues. | 0 | 0 | 0 | 0 | 2015-02-28T22:40:00.000 | 2 | 1.2 | true | 28,787,814 | 0 | 0 | 1 | 2 | Python application, standard web app.
If a particular request gets executed twice by error the second request will try to insert a row with an already existing primary key.
What is the most sensible way to deal with it?
a) Execute a query to check if the primary key already exists and do the checking and error handling in the python app
b) Let the SQL engine reject the insertion with a constraint failure and use exception handling to handle it back in the app
From a speed perspective it might seem that a failed request will take the same amount of time as a successful one, making (b) faster because it's only one request and not two.
However, when you take into account things like read-only DB slaves and table write-locks, things get fuzzy in my experience with scaling standard SQL databases.
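Option (b) can be sketched with sqlite3 standing in for MySQL; the pattern is the same with MySQLdb, which raises its own IntegrityError:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, payload TEXT)")

def insert_order(order_id, payload):
    """Return True if inserted, False if the primary key already existed."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute("INSERT INTO orders (id, payload) VALUES (?, ?)",
                         (order_id, payload))
        return True
    except sqlite3.IntegrityError:
        # Duplicate request: the engine rejected it atomically, so no
        # check-then-insert race between threads/processes is possible.
        return False

print(insert_order(1, "first"))   # True
print(insert_order(1, "again"))   # False
```

This is exactly why (a) alone is unsafe: between the SELECT and the INSERT another worker can slip in, so the constraint check must ultimately live in the database anyway.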
How to share memcache items across 2 apps locally using google app engine sdk | 28,804,617 | 2 | 1 | 63 | 0 | python,google-app-engine | Actually two different App Engine apps cannot see the same items in memcache. Their memcache spaces are totally isolated from each other.
However two different modules of the same app use the same memcache space and can read and write the same items. Modules act like sub-apps. Is that what you meant?
It is also possible to have different versions of an app (or module) running at the same time (for example to do A/B testing), and these also use the same memcache space. | 0 | 1 | 0 | 0 | 2015-03-01T13:04:00.000 | 1 | 1.2 | true | 28,793,857 | 0 | 0 | 1 | 1 | I have 2 Google App Engine applications which share memcache items: one app writes the items and the other app reads them. This works in production; however, locally using the SDK, items written by one app are not available to the other. Is there a way to make this work?
Django get_signed_cookie on manually signed cookie | 28,808,418 | -1 | 2 | 820 | 0 | python,django,cookies,django-rest-framework,django-testing | I have found my solution. It was rather straightforward actually.
I tried to set my cookie on my request for the test, which made sense since the cookie should be sent in the request. A better way of testing it is to just GET a resource first (which sets the cookie, as would happen in a normal web situation) and then POST with the set cookie. This worked just fine. | 0 | 0 | 0 | 0 | 2015-03-01T19:32:00.000 | 1 | 1.2 | true | 28,798,079 | 0 | 0 | 1 | 1 | I'm using get_signed_cookie in one of my views, and it works. The problem is in testing it: I'm manually signing my cookie in my test using Django's django.core.signing module with the same salt as my actual code (I'm using a salt that's in my settings).
When I do non-automated tests, all is well. I've compared the cookie that is set in my test with the cookie that is set by Django itself, and they have the same structure:
In my test (I'm manually signing the string 'test_cookie_value' in my test):
{'my_cookie': 'test_cookie_value:s5SA5skGmc4As-Weyia6Tlwq_V8'}
Django:
{'my_cookie': 'zTmwe4ATUEtEEAVs:1YRl0a:Lg3KHfpL93HoUijkZlNphZNURu4'}
But when my view tries to get_signed_cookie("my_cookie", False, salt="my_salt"), False is returned.
I think this is strange, and while I'm trying to obey the testing goat (first-time TDD'er here) I'm tempted to skip testing this.
I hope someone can give me some guidance; I'm stuck ignoring the goat noises in the background for now.
Does django adminplus affect performance? | 33,037,343 | 0 | 1 | 83 | 0 | python,django,apache | The problem with the slow response was the custom views, which did not have pagination. After implementing this feature, the response became fast :) | 0 | 0 | 0 | 0 | 2015-03-02T07:44:00.000 | 1 | 1.2 | true | 28,804,802 | 0 | 0 | 1 | 1 | I have used django-adminplus to register custom views with django-filters in the admin site. This is slowing down the performance of my django project. I am using Apache as my http webserver and also as my static file renderer, as I 'HAVE' to do so. I cannot use nginx nor gunicorn. The views render several 100k records. I have about 30-40 custom views registered with a search/filter option on the admin site. How do I improve the performance? I use linux debian, 64 GB RAM, django 1.6.5. Thx in advance
API for fetching movie details from IMDB or similar sites? | 28,812,266 | 0 | 0 | 1,396 | 0 | python,django,movie,imdb | If you have a server which is frequently used, you could think about caching the information locally. I suppose it's very likely that most queries will be clustered around a reduced set of movies, so that would speed up your search. | 0 | 0 | 0 | 0 | 2015-03-02T14:02:00.000 | 1 | 0 | false | 28,811,688 | 0 | 0 | 1 | 1 | I've tried using OMDb; it gives all the things I need and is very simple to work with too, but it is very, very slow. I looped about 200 queries and it took about 3 minutes to complete.
Is there any faster API out there that can give me similar results when querying a movie? The details I'm looking for in particular are genre, actors and director.
My base code is written in Python (for my Django server), so an API with a wrapper would just make my day.
HG - Find current running revision of a code file | 28,817,753 | 0 | 0 | 119 | 0 | python,django,mercurial | You can use hg id -i to see the currently checked out revision on your server, and hg status to check if the file has been modified relative to that revision. | 0 | 0 | 0 | 1 | 2015-03-02T19:07:00.000 | 2 | 0 | false | 28,817,558 | 0 | 0 | 1 | 1 | I'm using mercurial and I think I may have an older revision of a python file running on my server. How can I tell which revision, of a particular file, is currently being ran on in my web application?
I suspect an older revision because I currently have an error being thrown in sentry that refers to a line of code that no longer exists in my latest code file. I can log directly onto the server and see my latest file, but still my web app runs this old version
I have ruled out browser caching and server caching
Thanks in advance. |
Using group by in DynamoDB | 28,890,074 | 2 | 0 | 3,106 | 1 | python,amazon-web-services,amazon-dynamodb,boto | You can create 2 GSIs: 1 with date as hashKey, 1 with month as hashKey.
Those GSIs will point you to the rows of that month / of that day.
Then you can just query the GSI, get all the rows of that month/day, and do the aggregation on your own.
Does that work for you?
Thanks!
Erben | 0 | 0 | 0 | 0 | 2015-03-02T19:56:00.000 | 1 | 1.2 | true | 28,818,394 | 0 | 0 | 1 | 1 | For a project I have to use DynamoDB(aws) and python(with boto).
I have items with a date and I need to display the count grouped by date or by month.
Something like
by date of the month [1/2: 5, 2/2: 10, 3/2: 7, 4/2: 30, 5/2: 25, ...]
or
by month of the year [January: 5, February: 10, March: 7, ...] |
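The do-the-aggregation-on-your-own step from the answer above is plain Python once the GSI query has returned the rows; the item shape here is invented for illustration:

```python
from collections import Counter

# Rows as they might come back from a GSI query (ISO date strings).
items = [
    {"date": "2015-02-01"}, {"date": "2015-02-01"}, {"date": "2015-02-02"},
    {"date": "2015-03-01"},
]

# Counting by full date, and by the "YYYY-MM" month prefix.
by_date = Counter(item["date"] for item in items)
by_month = Counter(item["date"][:7] for item in items)

print(by_date["2015-02-01"])  # 2
print(by_month["2015-02"])    # 3
```

Storing dates in a sortable ISO format also lets the GSI range key do the month/day filtering server-side, so only the counting happens in the client.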
joining urls with urljoin in python | 30,655,631 | 0 | 0 | 399 | 0 | python,urllib,urlparse | Two dots (..) mean going up one level in the hierarchy; change the second link to ./v2/meila07a/meila07a.pdf and it should work fine.
Or you can change the base URL to http://www.jmlr.org/proceedings/papers/v2/ (note the trailing slash); without it, the trailing v2 is treated as a file name rather than a directory, so it is dropped during resolution. | 0 | 0 | 1 | 0 | 2015-03-02T20:06:00.000 | 1 | 0 | false | 28,818,559 | 0 | 0 | 1 | 1 | I am trying to do some web scraping, but I have some problems joining relative and root URLs.
for example the root url is: http://www.jmlr.org/proceedings/papers/v2
and the relative url is: ../v2/meila07a/meila07a.pdf
When I use urljoin from urlparse, the result is odd:
http://www.jmlr.org/proceedings/v2/meila07a/meila07a.pdf
Which is not a valid link. Can anybody help me with that? |
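The resolution rules can be checked directly with the stdlib (urllib.parse in Python 3, urlparse in Python 2); without a trailing slash, v2 is treated as a document name and discarded before .. is applied:

```python
from urllib.parse import urljoin  # from urlparse import urljoin on Python 2

base = "http://www.jmlr.org/proceedings/papers/v2"

# 'v2' is taken as a document, so '..' climbs out of /papers/:
print(urljoin(base, "../v2/meila07a/meila07a.pdf"))
# http://www.jmlr.org/proceedings/v2/meila07a/meila07a.pdf

# Either fix from the answer resolves to the intended link:
print(urljoin(base, "./v2/meila07a/meila07a.pdf"))
print(urljoin(base + "/", "../v2/meila07a/meila07a.pdf"))
# http://www.jmlr.org/proceedings/papers/v2/meila07a/meila07a.pdf (both)
```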
Running aws-lambda function in an existing bucket with files | 56,451,012 | 1 | 4 | 2,230 | 0 | java,python,amazon-s3,thumbnails,aws-lambda | You can add an SQS queue as an event source/trigger for the Lambda, make slight changes in the Lambda to correctly process an SQS event as opposed to an S3 event, and then, using a local script, loop through a list of all objects in the S3 bucket (with pagination given the 2MM files) and add them as messages into SQS. Then when you're done, just remove the SQS event source and queue.
This doesn't get around writing a script to list the objects and enqueue them, but the script is really short. And while this way does require setting up a queue, you won't be able to process the 2MM files with direct calls anyway, due to Lambda concurrency limits.
Example:
Set up SQS queue and add as event source to Lambda.
The syntax for reading a SQS message and an S3 event should be pretty similar
Paginate through list_objects_v2 on the S3 bucket in a for-loop
Create messages using send_message_batch
Suggestion:
Depending on the throughput of new files landing in your bucket, you may want to switch to S3 -> SQS -> Lambda processing anyway instead of direct S3 -> Lambda calls. For example, if you have large bursts of traffic then you may hit your Lambda concurrency limit, or an error may occur and you want to keep the message (which can be resolved by configuring a DLQ for your Lambda). | 0 | 0 | 0 | 1 | 2015-03-03T02:40:00.000 | 2 | 0.099668 | false | 28,823,172 | 0 | 0 | 1 | 1 | I'm planning to migrate existing image processing logic to AWS Lambda. The Lambda thumbnail generator is better than my previous code, so I want to re-process all the files in an existing bucket using Lambda.
Lambda seems to be only event driven; this means that my Lambda function will only be called via a PUT event. Since the files are already in the bucket, this will not trigger any events.
I've considered creating a new bucket and moving the files from my existing bucket to it. This would trigger new PUT events, but my bucket has 2MM files, so I refuse to consider this hack a viable option.
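One detail of the backfill script sketched in the answer: send_message_batch accepts at most 10 entries per call, so the object keys need chunking. The boto3 calls themselves are elided here and the key names are invented:

```python
def chunk(seq, size=10):
    """Yield successive size-sized slices (the SQS batch limit is 10)."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def to_entries(keys):
    """Shape S3 object keys into send_message_batch entries."""
    return [{"Id": str(i), "MessageBody": key} for i, key in enumerate(keys)]

keys = ["img/%04d.jpg" % n for n in range(25)]
batches = [to_entries(batch) for batch in chunk(keys)]
# For each batch: sqs.send_message_batch(QueueUrl=..., Entries=batch)
print(len(batches), len(batches[0]), len(batches[-1]))  # 3 10 5
```

The keys themselves would come from paginating list_objects_v2, as the answer's step list describes.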
GAE blobstore upload fails with CSRF token missing | 28,857,208 | 0 | 0 | 228 | 0 | python,google-app-engine,flask,blobstore,flask-wtforms | Okay, so the real problem was that I was giving an absolute URL as the success_path argument (i.e. the first one) of blobstore.create_upload_url(), which caused the request notifying about the success to trigger a CSRF error when loading the root path (/). I changed it to a path relative to the root, and now just using @csrf.exempt as normal works fine. | 0 | 1 | 0 | 0 | 2015-03-03T22:27:00.000 | 2 | 0 | false | 28,843,234 | 0 | 0 | 1 | 1 | I'm running Flask on App Engine. I need to let users upload some files. For security reasons I have csrf = CsrfProtect(app) on the whole app, with specific URLs exempted using the @csrf.exempt decorator in flask_wtf. (Better to implicitly deny than to implicitly allow.)
Getting an upload url from blobstore with blobstore.create_upload_url works fine, but the upload itself fails with a 400; CSRF token missing or incorrect.
This problem is on the development server. I have not tested it on the real server, since it is in production.
How do I exempt the /_ah/ path so the uploads work? |
Task queue: Allow only one task at a time per user | 28,849,410 | 1 | 2 | 406 | 0 | python,google-app-engine,task-queue | You can specify as many queues as you like in queue.yaml rather than just using the default push queue. If you feel that no more than, say, five users at once are likely to contest for simultaneous use of them then simply define five queues. Have a global counter that increases by one and wraps back to 1 when it exceeds five. Use it to assign which queue a given user gets to push his or her tasks to at the time of the request. With this method, when you have six or more users concurrently adding tasks, you are no worse off than you currently are (in fact, likely much better off).
If you find the server overloading, turn down the default "rate: 5/s" to a lower value for some or all of the queues if you have to, but first try lowering the bucket size, because turning down the rate is going to slow things down when there are not multiple users. Personally, I would first try only turning down the four added queues and leave the first queue fast to solve this if you have performance issues that you can't resolve by tuning the bucket sizes. | 0 | 1 | 0 | 0 | 2015-03-04T07:27:00.000 | 1 | 0.197375 | false | 28,848,740 | 0 | 0 | 1 | 1 | In my application, I need to allow only one task at a time per user. I have seen that we can set max_concurrent_requests: 1 in queue.yaml, but this allows only one task at a time in the whole queue.
When a user clicks a button, a task is initiated and it adds 50 tasks to the queue. If 2 users click the button at almost the same time, the total task count will be 100. If I give max_concurrent_requests: 1, it will run only one task at a time, regardless of which of these users it belongs to.
How do I handle this situation?
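The wrap-around counter from the answer above, mapping each incoming request to one of the five queues, could look like this (the queue names are invented):

```python
import itertools

QUEUES = ["user-queue-%d" % n for n in range(1, 6)]  # defined in queue.yaml
_next_queue = itertools.cycle(QUEUES).__next__       # .next on Python 2

def queue_for_request():
    """Pick the queue the next user's 50 tasks should all be pushed to."""
    return _next_queue()

picks = [queue_for_request() for _ in range(7)]
print(picks[0], picks[4], picks[5])  # user-queue-1 user-queue-5 user-queue-1
```

With max_concurrent_requests: 1 per queue, each of the five queues then serializes its own user's tasks while still letting five users run concurrently.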
how to generate custom 403 HTTP responses in Plone (prevent redirect) | 28,862,411 | 0 | 1 | 172 | 0 | python,plone | You could override Products/CMFPlone/skins/plone_login/require_login.py and add any logic that you want there to give a custom response.
Or bypass require_login completely and use your own browser view to handle this. In your Plone Site go to acl_users/credentials_cookie_auth/manage_propertiesForm and for the Login Form property replace require_login with your browser view name. | 0 | 0 | 0 | 0 | 2015-03-04T09:20:00.000 | 2 | 0 | false | 28,850,653 | 0 | 0 | 1 | 1 | Upon unauthorized access, Plone by default provides a redirect to the login form. I need to prevent this for certain subpaths (i.e. not globally), and instead return 403 Forbidden, with a custom (short) HTML page, after the (otherwise) normal Plone authentication & authorization has taken place.
I looked into ITraversable but using that takes place too early in the request processing chain - ie. before auth etc.
One possible yet unexplored method is having a custom view injected to URL path that performs auth checks on the object that maps to the remaining subpath. So there could be a request with URL something like http://myplone/somepath/customview/withcustom403, where:
The customview view implements IPublishTraverse, and its publishTraverse() returns itself
The same view then validates, in its __call__ method, the access to the withcustom403 object, i.e. by calling getSecurityManager().validate()
If validation fails, the view sets response to 403 and returns the custom HTML
Would this work? Or is there some event triggered after auth takes place, but before Plone calls response.Unauthorized() (triggering redirect), that would provide a cleaner solution?
This is for current Plone 4.3.4. |
Django Tests run faster with no internet connection | 28,864,271 | 2 | 2 | 523 | 0 | python,django,testing,python-unittest | This most likely means you've got some component installed which is trying to make network connections. Possibly something that does monitoring or statistics gathering?
The simplest way to figure out the cause is to use tcpdump to capture your network traffic and see what's going on. To do that:
Run tcpdump -i any (or tcpdump -i en1 if you're on a mac; the airport is usually en1, but you can double check with ifconfig)
Watch the traffic to get some idea what's normal
Run your test suite
Watch the traffic printed by tcpdump to see if anything obviously jumps out at you | 0 | 0 | 0 | 1 | 2015-03-04T20:22:00.000 | 1 | 1.2 | true | 28,864,152 | 0 | 0 | 1 | 1 | I have a Django test suite that builds a DB from a 400-line fixture file. Unfortunately it runs slowly - several seconds per test.
I was on the train yesterday developing without internet access, with my wifi turned off, and I noticed my tests ran literally 10x faster without internet. And they are definitely running correctly.
Everything is local, it all runs fine without an internet connection. The tests themselves do not hit any APIs or make any other connections, so it seems it must be something else. |
Django keeps calling another package to paginate -- how? | 28,939,075 | 0 | 0 | 36 | 0 | python,django,pagination | You've all been really helpful. However, my confusion came from the fact that paginator was being added to the context, yet there was a statement {% load paginator %} at the top of the template. I thought they were the same, but no: the paginator from the context was unused, and the load statement pulled in the bad paginator, which was registered with the templating engine.
The fix is obvious: remove the load statement, include the context paginator, and use that one. | 0 | 0 | 0 | 0 | 2015-03-05T12:05:00.000 | 2 | 1.2 | true | 28,877,499 | 0 | 0 | 1 | 1 | My code imports Paginator from django.core.paginator. (Django 1.6.7)
However, when run, somehow it is calling a custom paginator. I don't want it to do this, as the custom paginator template has been removed in an upgrade. I just want Django's Paginator to be used, but I can't work out where it's overriding Django's Paginator with our broken one.
This may not be so much of a Django question and more of a generic Python question. All the usual things like grepping the code, inserting ipdb's, judicious use of find etc yield no help. |
create input form for 540 fields in django. Best approach? | 28,894,444 | 1 | 0 | 50 | 0 | python,django,django-forms,django-formwizard | One approach that you could consider is splitting up your model into semantic slices, each one being a model on its own with a more digestible number of fields.
Then map these "slice-models" back to your main object using a one-to-one relationship (implemented by OneToOneField).
In your wizard you could start a transaction at the beginning and commit only if everything ran through nicely. | 0 | 0 | 0 | 0 | 2015-03-06T07:53:00.000 | 2 | 0.099668 | false | 28,894,339 | 0 | 0 | 1 | 1 | I'm new to django and would like to hear your opinion on how to create a form for
a table with 540 fields. What would be the best approach? Is it best to split the modelform into multiple components or create a template that collects the input in parts (multiple inputs per page) and proceeds through all fields? It would be great if you could point me to some information and/or examples.
Thanks. |
How to create multiple entities dynamically in google app engine using google data storage(python) | 29,608,781 | 1 | 0 | 165 | 0 | python,google-app-engine,google-cloud-datastore | It's not recommended to dynamically create a new table. You need to redesign your database relation structure.
For example, in a user messaging app, instead of making a new table for every new message [which contains the message and user name], you should rather create a User table and a Messages table separately and implement a many-to-one relation between the two tables. | 0 | 1 | 0 | 0 | 2015-03-06T12:29:00.000 | 1 | 1.2 | true | 28,898,827 | 0 | 0 | 1 | 1 | I wish to implement this:
There should be an entity A with column 1 having values a,b,c...[dynamically increases by user's input]
There should be another entity B for each value of a, b, c...
How should I approach this problem?
Should I dynamically generate other entities as the user creates more [a, b, c, d...]?
If yes, how?
Is there any other way of implementing the same thing?
What can go wrong if I use SimpleCache in my Flask app | 50,258,834 | 1 | 8 | 3,322 | 0 | python,python-2.7,caching,flask,gunicorn | For your use case with gunicorn, there is no multi-threading issue since each service runs single-threaded in its own process. But a potential problem would be a "dirty" read of the data.
Think about the following case:
process1 reads from the db and populates its own cache, cache1
process2 reads from the same table using the same query and populates its own cache, cache2
process2 updates the table with new data and invalidates its old cache2
process1 executes the same query again, reading from cache1 with the outdated data! This is when the problem happens, because process1/cache1 is not aware of the database update. | 0 | 0 | 0 | 0 | 2015-03-06T13:43:00.000 | 2 | 0.099668 | false | 28,900,091 | 0 | 0 | 1 | 1 | We are using the following setup: NGINX+Gunicorn+Flask. We need to add just a little bit of caching, no more than 5Mb per Flask worker. SimpleCache seems to be the simplest possible solution - it uses memory locally, inside the Python process itself.
Unfortunately, the documentation states the following:
"Simple memory cache for single process environments. This class
exists mainly for the development server and is not 100% thread safe."
However, I fail to see where thread safety would matter at all in our setup. I think that Gunicorn keeps several Flask workers running, and each worker has its own small cache. What can possibly go wrong? |
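The stale-read sequence in the answer above is easy to reproduce with two plain dicts standing in for two per-process SimpleCache instances:

```python
# Two workers, each with its own in-process cache, sharing one "database".
db = {"gold_price": 1200}
cache1, cache2 = {}, {}

def read(cache, key):
    """Populate the worker's local cache from the db on a miss."""
    if key not in cache:
        cache[key] = db[key]
    return cache[key]

read(cache1, "gold_price")          # worker 1 caches 1200
read(cache2, "gold_price")          # worker 2 caches 1200
db["gold_price"] = 1250             # worker 2 writes new data...
cache2.pop("gold_price")            # ...and invalidates only its own cache

print(read(cache2, "gold_price"))   # 1250 (fresh)
print(read(cache1, "gold_price"))   # 1200 (stale!)
```

So the risk with several gunicorn workers is not thread safety but cross-worker staleness; short timeouts on cached items, or a shared backend like memcached, bound how long worker 1 can serve the old value.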
Django REST Backend for mobile app with Facebook login | 38,586,945 | 0 | 1 | 779 | 0 | django,facebook,rest,django-rest-framework,python-social-auth | I didn't understand the first case. When you are using Facebook login, Facebook does the authentication and we register the user with the access token it provides. Whenever the user logs in, we are not worried about the password; authentication is not done on our end. So whenever the user tries to log in, the app contacts Facebook; if everything goes well there, Facebook gives back a token with which the user can log in. | 0 | 0 | 0 | 0 | 2015-03-06T16:28:00.000 | 1 | 0 | false | 28,903,187 | 0 | 0 | 1 | 1 | I have to implement a REST backend for mobile applications.
I will have to use Django REST Framework.
Among the features that I need to implement there will be user registration and login.
Through the mobile application the user can create an account using ONLY the Facebook login.
Then, the application will take the information from Facebook using the Facebook token and send this data to my server.
I tried using python_social for Facebook authentication and user registration with the Facebook token.
At this point I have doubts:
I think there could be two choices:
1:
The mobile application uses the Facebook login to retrieve user data and will send a request to my server to create a new user with the Facebook user data, passing the Facebook token.
In this case, on the server side, python_social will not be integrated, and the Facebook token is a simple profile field.
Doubts: how can you implement the next login (which password is necessary to use?)
2:
The second possibility is to use python_social. In this way there are no problems for subsequent logins. The Facebook token will be used to retrieve the data (and validate the user) by calling do_auth.
But in this case, for each user, the server will have to make a request to Facebook (which is actually possible to avoid: the mobile application has already recovered all the data).
Which is the best approach? What do you usually use for a REST authentication backend with Facebook?