Title stringlengths 11 to 150 | A_Id int64 518 to 72.5M | Users Score int64 -42 to 283 | Q_Score int64 0 to 1.39k | ViewCount int64 17 to 1.71M | Database and SQL int64 0 to 1 | Tags stringlengths 6 to 105 | Answer stringlengths 14 to 4.78k | GUI and Desktop Applications int64 0 to 1 | System Administration and DevOps int64 0 to 1 | Networking and APIs int64 0 to 1 | Other int64 0 to 1 | CreationDate stringlengths 23 | AnswerCount int64 1 to 55 | Score float64 -1 to 1.2 | is_accepted bool 2 classes | Q_Id int64 469 to 42.4M | Python Basics and Environment int64 0 to 1 | Data Science and Machine Learning int64 0 to 1 | Web Development int64 1 to 1 | Available Count int64 1 to 15 | Question stringlengths 17 to 21k |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
What is the difference between Django and Python?
| 17,052,782 | 6 | 5 | 9,241 | 0 |
python,django
|
Python is a programming language. Django is a web framework built using Python, designed to simplify the creation of websites. It provides a set of common functionality to reduce the amount of trivial code that you need to write.
Django provides:
An administration panel
A database modeling layer
A templating system
Form generation and validation.
and other common functionality.
| 0 | 0 | 0 | 0 |
2013-06-11T20:02:00.000
| 2 | 1 | false | 17,052,725 | 1 | 0 | 1 | 1 |
I'm looking at a job possibility that has a need for both Django and Python. I have some experience with Python but none with Django, nor do I know precisely what Django is. Can someone please explain the difference between Django and Python, how they are related and what they are used for?
Thanks in advance for all your help.
|
Google App Engine Launcher not starting
| 17,086,966 | 1 | 1 | 291 | 0 |
python,windows,google-app-engine
|
I was having the same problem with Google App Engine 1.8.0. I installed the latest release, 1.8.1, and the issue was fixed!
| 0 | 1 | 0 | 0 |
2013-06-13T04:38:00.000
| 1 | 0.197375 | false | 17,079,358 | 0 | 0 | 1 | 1 |
I installed Google App Engine on my laptop, and when I click the Google App Engine Launcher icon, the mouse changes to a loading icon and then nothing runs, nothing is displayed, and no error is reported; just nothing.
My laptop is running Windows 7 64-bit with Python 2.7 installed.
Please help.
|
django 1.5 update ALLOWED_HOSTS failing SuspiciousOperation
| 17,105,379 | 0 | 5 | 4,184 | 0 |
python,django,django-1.5
|
Resolved. I had deployment settings in a different file overriding ALLOWED_HOSTS in settings.py. Apologies for missing this before posting, and thanks for the responses received.
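For reference, a minimal sketch of the relevant setting (the values here are illustrative, not the poster's actual configuration); note that ALLOWED_HOSTS entries match only the host name, so the port is not part of the entry:

```python
# settings.py -- illustrative values only
DEBUG = False

# Host names this site is allowed to serve; Django strips the port before matching,
# so 'localhost' is correct and 'localhost:8000' is not needed.
ALLOWED_HOSTS = ['localhost', '127.0.0.1', 'www.example.com']

# Beware of a later, deployment-specific settings module (e.g. a settings_production.py
# imported at the end) silently overriding this value, as happened in this case.
```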
| 0 | 0 | 0 | 0 |
2013-06-13T17:00:00.000
| 1 | 0 | false | 17,092,893 | 0 | 0 | 1 | 1 |
I have updated to Django 1.5 and am getting the following message:
SuspiciousOperation: Invalid HTTP_HOST header (you may need to set ALLOWED_HOSTS): localhost:8000
I have tried localhost, 127.0.0.1 and localhost:8000 in ALLOWED_HOSTS. I have also tried ['*'], all without success.
Does anybody have any ideas where I am going wrong? It works as expected with DEBUG=False.
|
Get raw query string in flask
| 21,792,010 | 5 | 6 | 4,634 | 0 |
python,flask,query-string
|
request.query_string also seems to work.
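A small sketch showing both approaches side by side (the route and parameter handling are made up for illustration):

```python
from flask import Flask, request

app = Flask(__name__)

@app.route('/search')
def search():
    raw = request.query_string        # everything after '?', unparsed
    parsed = request.args             # MultiDict of parsed key/value pairs
    return 'raw=%r, parsed=%r' % (raw, parsed.to_dict(flat=False))
```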
| 0 | 0 | 0 | 0 |
2013-06-13T17:27:00.000
| 2 | 0.462117 | false | 17,093,372 | 0 | 0 | 1 | 1 |
Is there a way to get the raw query string or a list of query string parameters in Flask?
I know how to get query string parameters with request.args.get('key'), but I would like to be able to take in variable query strings and process them myself. Is this possible?
|
how to access my 127.0.0.1:8000 from android tablet
| 17,116,746 | 1 | 31 | 33,438 | 0 |
android,python,django,web,localhost
|
If both are connected to the same network, all you need to do is provide the IP address of your server (in your network) in your Android app.
| 0 | 1 | 0 | 0 |
2013-06-14T20:24:00.000
| 9 | 0.022219 | false | 17,116,718 | 0 | 0 | 1 | 6 |
I am developing a web page in Django (on my PC with Windows 7) and now I need to test some pages on tablet PCs. It occurred to me to wonder whether I can access my Windows localhost from the Android tablet. Is that possible? Both devices are on the same Wi-Fi connection at home.
I have read a lot of questions and answers regarding this issue on Stack Overflow and elsewhere, but none of them gave me a concrete solution.
I have a Samsung Tab 2 10.1 with Android 4.0.4.
I appreciate any help, thanks.
|
how to access my 127.0.0.1:8000 from android tablet
| 61,816,349 | 0 | 31 | 33,438 | 0 |
android,python,django,web,localhost
|
Try this: connect both the tablet and the PC to the same Wi-Fi, then run the Django dev server on your machine's LAN address, e.g.:
python manage.py runserver 192.168.0.100:8000
On the tablet, type that URL (192.168.0.100:8000) into the browser's address bar.
| 0 | 1 | 0 | 0 |
2013-06-14T20:24:00.000
| 9 | 0 | false | 17,116,718 | 0 | 0 | 1 | 6 |
I am developing a web page in Django (on my PC with Windows 7) and now I need to test some pages on tablet PCs. It occurred to me to wonder whether I can access my Windows localhost from the Android tablet. Is that possible? Both devices are on the same Wi-Fi connection at home.
I have read a lot of questions and answers regarding this issue on Stack Overflow and elsewhere, but none of them gave me a concrete solution.
I have a Samsung Tab 2 10.1 with Android 4.0.4.
I appreciate any help, thanks.
|
how to access my 127.0.0.1:8000 from android tablet
| 17,116,796 | 1 | 31 | 33,438 | 0 |
android,python,django,web,localhost
|
You need to know the IP address of your machine.
Make sure both machines (tablet and computer) are connected to the same network.
Say your machine's address is 192.168.0.22.
Do this:
browse to 192.168.0.22:8000 from your tablet.
That is it!
| 0 | 1 | 0 | 0 |
2013-06-14T20:24:00.000
| 9 | 0.022219 | false | 17,116,718 | 0 | 0 | 1 | 6 |
I am developing a web page in Django (on my PC with Windows 7) and now I need to test some pages on tablet PCs. It occurred to me to wonder whether I can access my Windows localhost from the Android tablet. Is that possible? Both devices are on the same Wi-Fi connection at home.
I have read a lot of questions and answers regarding this issue on Stack Overflow and elsewhere, but none of them gave me a concrete solution.
I have a Samsung Tab 2 10.1 with Android 4.0.4.
I appreciate any help, thanks.
|
how to access my 127.0.0.1:8000 from android tablet
| 17,116,791 | 16 | 31 | 33,438 | 0 |
android,python,django,web,localhost
|
You can find out what the ip address of your PC is with the ipconfig command in a Windows command prompt. Since you mentioned them being connected over WiFi look for the IP address of the wireless adapter.
Since the tablet is also in this same WiFi network, you can just type that address into your tablet's browser, with the :8000 appended to it and it should pull up the page.
| 0 | 1 | 0 | 0 |
2013-06-14T20:24:00.000
| 9 | 1.2 | true | 17,116,718 | 0 | 0 | 1 | 6 |
I am developing a web page in Django (on my PC with Windows 7) and now I need to test some pages on tablet PCs. It occurred to me to wonder whether I can access my Windows localhost from the Android tablet. Is that possible? Both devices are on the same Wi-Fi connection at home.
I have read a lot of questions and answers regarding this issue on Stack Overflow and elsewhere, but none of them gave me a concrete solution.
I have a Samsung Tab 2 10.1 with Android 4.0.4.
I appreciate any help, thanks.
|
how to access my 127.0.0.1:8000 from android tablet
| 17,116,785 | 6 | 31 | 33,438 | 0 |
android,python,django,web,localhost
|
127.0.0.1 is a loopback address that means, roughly, "this device"; your PC and your Android tablet are separate devices, so each of them has its own 127.0.0.1. In other words, if you try to go to 127.0.0.1 on your Android tab, it's trying to connect to a webserver on the Android device, which is not what you want.
However, you should be able to connect over the wifi. On your Windows box, open a command prompt and execute ipconfig. Somewhere in the output should be your Windows box's address, probably 192.168.1.100 or something similar. Your tablet should be able to see the Django server at that address, provided the dev server was started listening on that address (or on 0.0.0.0) rather than the default 127.0.0.1, e.g. python manage.py runserver 0.0.0.0:8000.
| 0 | 1 | 0 | 0 |
2013-06-14T20:24:00.000
| 9 | 1 | false | 17,116,718 | 0 | 0 | 1 | 6 |
I am developing a web page in Django (on my PC with Windows 7) and now I need to test some pages on tablet PCs. It occurred to me to wonder whether I can access my Windows localhost from the Android tablet. Is that possible? Both devices are on the same Wi-Fi connection at home.
I have read a lot of questions and answers regarding this issue on Stack Overflow and elsewhere, but none of them gave me a concrete solution.
I have a Samsung Tab 2 10.1 with Android 4.0.4.
I appreciate any help, thanks.
|
how to access my 127.0.0.1:8000 from android tablet
| 48,594,665 | 23 | 31 | 33,438 | 0 |
android,python,django,web,localhost
|
Though this thread was active quite a long time ago, this is what worked for me on Windows 10. Posting it in detail; it might be helpful for newbies like me.
Add ALLOWED_HOSTS = ['*'] in the Django settings.py file.
Run the Django server with python manage.py runserver 0.0.0.0:YOUR_PORT. I used 9595 as my port.
Allow access on that port through the firewall:
Navigate to Control Panel -> System and Security -> Windows Defender Firewall.
Open Advanced Settings, select Inbound Rules, then right click on it and select New Rule.
Select Port, hit next, input the port you used (in my case 9595), hit next, select allow the connections,
hit next again, then give it a name and hit next for the last time.
Now find the IP address of your PC.
Open Command Prompt as administrator and run the ipconfig command.
You may find more than one IP address. As I'm connected through Wi-Fi, I took the one under Wireless LAN adapter WiFi. In my case it was 192.168.0.100.
Note that this IP may change when you reconnect to the network, so you will need to check it again then.
Now from another device (PC, mobile, tablet etc.) connected to the same network, go to ip_address:YOUR_PORT (in my case 192.168.0.100:9595).
Hopefully you'll be good to go!
| 0 | 1 | 0 | 0 |
2013-06-14T20:24:00.000
| 9 | 1 | false | 17,116,718 | 0 | 0 | 1 | 6 |
I am developing a web page in Django (on my PC with Windows 7) and now I need to test some pages on tablet PCs. It occurred to me to wonder whether I can access my Windows localhost from the Android tablet. Is that possible? Both devices are on the same Wi-Fi connection at home.
I have read a lot of questions and answers regarding this issue on Stack Overflow and elsewhere, but none of them gave me a concrete solution.
I have a Samsung Tab 2 10.1 with Android 4.0.4.
I appreciate any help, thanks.
|
Embedding a TinyMCE editor in PyQT QWebkit
| 17,288,641 | 0 | 0 | 239 | 0 |
javascript,python,tinymce,pyqt,qwebkit
|
evaluateJavaScript can make JavaScript function calls or embed a whole JavaScript file. The following details the attempts to solve the problem:
The approach of first reading the tinyMCE.js file and then passing it to an evaluateJavaScript call embeds the JavaScript somewhere, but it can't be sniffed out in a WebKit console. When loading files via the evaluateJavaScript method, any dependencies, such as the ones TinyMCE requires, are not loaded. I think it's because such JavaScript calls are "attached" to the WebKit view but not embedded in the frame's DOM itself.
The second approach consists of creating a WebKit page and loading an HTML file. The HTML file itself embeds the JavaScript, so the component works like a "browser". In TinyMCE's configuration, toolbars and unnecessary parts were hidden. TinyMCE version 3 worked well with PyQt4. When version 4 was embedded in an HTML page, however, textareas were not being converted to TinyMCE editors. The console itself shows 'undefined' error messages, leading to the assumption that TinyMCE 4 uses different JavaScript syntax and a different compiler.
And so ends my quest to write a stand-alone webkit editor. :)
| 1 | 0 | 0 | 0 |
2013-06-15T01:29:00.000
| 1 | 0 | false | 17,119,388 | 0 | 0 | 1 | 1 |
As the question states, I wish to embed a TinyMCE editor in a PyQt WebKit component.
As far as I understand, evaluateJavaScript allows JS functions to be called.
However, when I try loading tinymce.min.js, the editor does not display anything at all. As suspected, when evaluating a JavaScript file that 'loads' other JavaScript files, they don't actually get loaded.
At this point, I feel lost. I will try to manually load the 'plugins' specified in TinyMCE's init function and will update this.
Till that time, any help would be really appreciated.
|
Python Request Module - Google App Engine
| 35,530,496 | 0 | 4 | 4,323 | 0 |
python,google-app-engine
|
You need to add the requests/requests sub-folder to your project. From your script's location (.), you should see a file at ./requests/__init__.py.
This applies to all modules you include for Google App Engine. If it doesn't have an __init__.py directly under that location, it will not work.
You do not need to add the module to app.yaml.
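As an illustration of the layout described above (directory and file names are hypothetical, not taken from the question):

```python
# Expected layout after copying the inner "requests" package (the folder that
# actually contains __init__.py) next to your handler code:
#
#   myapp/
#     app.yaml
#     main.py
#     requests/
#       __init__.py
#       api.py
#       ...
#
# main.py can then import it as usual; App Engine resolves it from the local copy.
import requests
```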
| 0 | 1 | 0 | 0 |
2013-06-15T21:36:00.000
| 2 | 0 | false | 17,128,130 | 0 | 0 | 1 | 1 |
I'm trying to import the requests module for my app which I want to view locally on Google App Engine. I am getting a log console error telling me that "no such module exists".
I've installed it in the command line (using pip) and even tried to install it in my project directory. When I do that the shell tells me:
"Requirement already satisfied (use --upgrade to upgrade): requests in /Library/Python/2.7/site-packages".
App Engine is telling me that the module doesn't exist, and the shell says it's already installed.
I don't know if this is a path problem. If so, the only App Engine-related application I can find on my Mac is the launcher.
|
South skip broken migrations
| 17,128,801 | 0 | 2 | 1,534 | 0 |
python,django,django-south
|
I usually make a temporary modification to the migration script that fails. Comment out or modify the parts that are not needed, run the migrations, then restore everything to the way it was before.
It's not ideal, and it involves some duplication of work - you have to do the same steps both on the dev machine and on the server - but it lets you preserve South support and work around the failing migration.
| 0 | 0 | 0 | 0 |
2013-06-15T23:07:00.000
| 2 | 0 | false | 17,128,740 | 0 | 0 | 1 | 1 |
I am using a 3rd party app inside my Django application, and the older versions of it had a dependency on the Django auth model, but the newer version supports the custom auth model of Django 1.5.
The problem I am having is that when I install the app and migrate it, it breaks on migration 002, because that migration references a table that the final version of the app doesn't need and that I therefore don't have.
If I turn off South and just do a syncdb, everything works fine. But then I will have to do fake migrations for all my other apps. Is there an easy way to have South either skip these errors and keep proceeding with the migrations, or just use models.py to create the schema and then let me do a fake migration for that one app?
Thanks for your help :)
|
Simultanous AJAX requests and MySQL database data visibility in Django
| 17,132,550 | 0 | 0 | 323 | 0 |
python,mysql,database,django
|
I have solved this issue by wrapping my view in the @transaction.autocommit decorator and executing transaction.commit() immediately before checking in the database if an answer with a particular client_id exists. This accomplishes the "refresh" I was aiming for.
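A rough sketch of that fix, using the pre-1.6 Django transaction API (Answer and the view name stand in for the poster's actual model and view):

```python
from django.db import transaction

from myapp.models import Answer  # hypothetical app/model


@transaction.autocommit
def save_answer(request):
    client_id = request.POST['client_id']
    # Start a fresh transaction so this query sees rows committed by a concurrent
    # request (MySQL's default REPEATABLE READ isolation otherwise hides them).
    transaction.commit()
    if Answer.objects.filter(client_id=client_id).exists():
        pass  # update the existing answer
    else:
        pass  # create a new one
```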
| 0 | 0 | 0 | 0 |
2013-06-16T10:30:00.000
| 1 | 0 | false | 17,132,334 | 0 | 0 | 1 | 1 |
I have a Django app with a MySQL database which allows answering of questions on an HTML page. The answers get sent to the server via AJAX calls. These calls are initiated by various JavaScript events and can often be fired multiple times for one answer. When this happens, multiple save requests for one answer get sent to the server.
In order to avoid duplicate answers, each answer has a client-side ID generated the first time it gets saved - client_id. Before creating a new answer server-side, the Django app first checks the DB to see if an answer with such a client_id exists. If one does, the second save requests updates the answer instead of creating a new one.
In Chrome, when a text input field is focused, and the user clicks outside of the Chrome window, two save requests get fired one after the other. The server receives them both. Let's say that for the sake of the example the client_id is 71.
The first request checks the DB and sees that no answers with a client_id of 71 exist. It creates a new answer and saves it in the DB. I am debugging with breakpoints and at this time, I see in my external MySQL database viewer that the answer is indeed saved. In my IDE, when I execute Answer.objects.filter(client_id=71) I get the answer as well. I let the debugger continue.
Immediately my second breakpoint fires for the second AJAX save answer request. Now a curious thing happens. In my IDE, when I execute Answer.objects.filter(client_id=71) I see no answers! My external tool confirms that the answer is there. So my code creates a new answer and saves it. Now if in my IDE I execute Answer.objects.filter(client_id=71) I see two answers with that client_id.
I am guessing that the DB connection or MySQL uses some kind of time-based method of keeping views constant, but it is causing me problems here. I would like a live insight into the state of the DB.
I am not using any transaction management, so Django should be doing auto_commit.
How can I instruct the DB connection to "refresh" or "reset" itself to take into consideration the data that is actually in the DB?
|
Trying to understand Python and all its moving parts - What is the difference between Tkinter and Django
| 17,137,094 | 3 | 0 | 2,632 | 0 |
python,django,model-view-controller,event-handling,tkinter
|
Tkinter is a GUI library (for desktop applications) and Django is for web development. They are completely different; in fact, it is not even meaningful to compare them.
| 1 | 0 | 0 | 0 |
2013-06-16T19:49:00.000
| 2 | 0.291313 | false | 17,137,050 | 0 | 0 | 1 | 1 |
I have a school project where my team needs to build a board game. We want to use Python based on all the good things we have heard. I have been researching MVC frameworks and came across Django (it's part of my installation in PyDev). I have a Mac, FYI.
I have also been looking at Tkinter but can't seem to understand what the difference is between Django and Tkinter. Why would you use one over the other? I understand that Django is for web development, and I think I understand that Tkinter is for building GUIs, right?
The board game will have multiple players who should all get updated when one of the players makes a move.
Can any of you point me to where I should be looking online based on what I am trying to do? I am not looking for code, but just the right website with some good documentation and tutorials that will help me out. Thanks again, Mash
|
How to get PyCharm to check PEP8 code style?
| 58,257,720 | 1 | 30 | 58,662 | 0 |
python,pycharm
|
Well, I wish I had a better answer, but what helped me was simply the following:
switch the interpreter from a remote one to a system one
wait until the PyCharm indexing is done
switch the interpreter back to the initial/desired one
| 0 | 0 | 0 | 0 |
2013-06-17T02:00:00.000
| 5 | 0.039979 | false | 17,139,485 | 1 | 0 | 1 | 2 |
I'm using PyCharm (v 2.7.2) to develop a Django app, but I can't get it to check PEP8 style violations.
I have enabled "PEP8 coding style violation" in the "Inspections" section of the settings, but PyCharm doesn't highlight the style violations.
Is there a way to fix this?
|
How to get PyCharm to check PEP8 code style?
| 34,655,470 | 13 | 30 | 58,662 | 0 |
python,pycharm
|
Mine wasn't showing up due to the color scheme. By default it's marked as "weak warning", so you might have to edit the appearance to make it visible. Editor > Colors & Fonts > General > Errors and Warnings.
| 0 | 0 | 0 | 0 |
2013-06-17T02:00:00.000
| 5 | 1 | false | 17,139,485 | 1 | 0 | 1 | 2 |
I'm using PyCharm (v 2.7.2) to develop a Django app, but I can't get it to check PEP8 style violations.
I have enabled "PEP8 coding style violation" in the "Inspections" section of the settings, but PyCharm doesn't highlight the style violations.
Is there a way to fix this?
|
Forms between Django Client and Django Piston API
| 17,200,127 | 0 | 0 | 75 | 0 |
python,django,rest,django-forms,django-piston
|
Use Django Tastypie, it's a much more robust REST framework than Piston :)
| 0 | 0 | 0 | 0 |
2013-06-17T05:15:00.000
| 1 | 0 | false | 17,140,809 | 0 | 0 | 1 | 1 |
Is there an easy way to pass a ModelForm from a Django Piston API to a Django client?
The documentation mentions the @validate decorator, but I couldn't find a way to send forms from the API to the Django client. I feel it should be possible to use Django forms from the API on the client side as if they were local.
|
How can I serve arbitrary request paths?
| 17,305,631 | 0 | 1 | 47 | 0 |
python,zope
|
There are two adapters needed for this. One adapts the ZODB context one wishes to use and zope.publisher.interfaces.IRequest, while providing zope.traversing.interfaces.ITraversable (the view). The second adapts the view instantiated by the previous adapter and zope.publisher.interfaces.browser.IBrowserRequest, while providing zope.publisher.interfaces.IPublishTraverse (the traverser). I subclassed BrowserView for both adapters.
Inside the traverser, the publishTraverse method will be called successively for each URL part that is being traversed and returns a view for that URL part.
| 0 | 0 | 1 | 0 |
2013-06-17T15:50:00.000
| 1 | 0 | false | 17,151,693 | 0 | 0 | 1 | 1 |
How can I serve arbitrary paths the way zope.browserresource does for @@ and ++resource++ URIs in Zope?
|
Exception Handling guideline- Python vs Java
| 17,158,594 | 2 | 9 | 2,973 | 0 |
java,python,exception-handling
|
OK, I can try and give an answer which I'll keep as neutral as it can be... (note: I have done Python professionally for a few months, but I am far from mastering the language in its entirety)
The guidelines are "free"; if you come from a Java background, you will certainly spend more time than most Python devs out there looking for documentation on what is thrown when, and have more try/except/finally than what is found in regular python code. In other words: do what suits you.
Apart from the fact that they can be thrown anywhere, at any moment, Python has multi-exception catch (only available in Java since 7) and the with statement (somewhat equivalent to Java 7's try-with-resources), you can have more than one except block (like Java can catch more than once), etc. Additionally, there are no real conventions that I know of on how exceptions should be named, so don't be fooled if you see SomeError; it may well be what a Java dev regards as a "checked exception" and not an Error.
| 0 | 0 | 0 | 0 |
2013-06-17T23:01:00.000
| 3 | 1.2 | true | 17,158,233 | 1 | 0 | 1 | 2 |
I am originally a Java developer; for me, a checked exception in Java makes it obvious and easy enough to decide whether to catch it or throw it to the caller to handle later. Then comes Python: there is no checked exception, so conceptually nothing forces you to handle anything (in my experience, you don't even know what exceptions are potentially thrown without checking the documentation). I've been hearing quite a lot from Python folks that, in Python, sometimes you are better off just letting it fail at runtime instead of trying to handle the exceptions.
Can someone give me some pointers regarding:
what's the guideline/best practice for Python Exception Handling?
what's the difference between Java and Python in this regard?
|
Exception Handling guideline- Python vs Java
| 17,158,373 | -3 | 9 | 2,973 | 0 |
java,python,exception-handling
|
Best practice is to handle appropriate exceptions in the appropriate place. Only you, as developer, can decide which part of your code should catch exceptions. This should become apparent with decent unit testing. If you have unhandled exceptions, they will show up.
You already described the differences. At a more fundamental level, Java's designers think they know better than you how you should code, and they'll force you to write lots of it. Python, by contrast, assumes that you are an adult, and that you know what you want to do. This means that you can shoot yourself in the foot if you insist on so doing.
| 0 | 0 | 0 | 0 |
2013-06-17T23:01:00.000
| 3 | -0.197375 | false | 17,158,233 | 1 | 0 | 1 | 2 |
I am originally a Java developer; for me, a checked exception in Java makes it obvious and easy enough to decide whether to catch it or throw it to the caller to handle later. Then comes Python: there is no checked exception, so conceptually nothing forces you to handle anything (in my experience, you don't even know what exceptions are potentially thrown without checking the documentation). I've been hearing quite a lot from Python folks that, in Python, sometimes you are better off just letting it fail at runtime instead of trying to handle the exceptions.
Can someone give me some pointers regarding:
what's the guideline/best practice for Python Exception Handling?
what's the difference between Java and Python in this regard?
|
Is there any simple way to store the user location while registering in database
| 17,159,679 | 0 | 0 | 163 | 1 |
python,django,ip
|
Not in any reliable way, or at least not in Django. The problem is that user IPs are usually dynamic, hence the address changes every couple of days. Also, some ISPs will soon start to use a single IP for big blocks of users (I forgot what this is called) since they are running out of IPv4 addresses... In other words, all users from that ISP within a whole state or even country will have a single IP address.
So using the IP is not reliable. You could probably figure out the country or region of the user with reasonable accuracy; however, my recommendation is not to use the IP for anything except logging and permission purposes (e.g. blocking a spam IP).
If you want user locations, you can however use the HTML5 geolocation API, which has a much better shot at getting an accurate location since it can utilize other methods such as using the GPS sensor in a phone.
| 0 | 0 | 0 | 0 |
2013-06-18T02:12:00.000
| 4 | 0 | false | 17,159,576 | 0 | 0 | 1 | 1 |
I have a user registration form made in Django.
I want to know the city from which the user is registering.
Is there any way to get the IP address of the user and then somehow get the city for that IP, using some API or something?
|
Is there a onSessionClose in WampServerProtocol?
| 17,166,189 | 1 | 1 | 156 | 0 |
python,autobahn
|
There is no WAMP specific session close (since WAMP does not have a closing handshake separate from WebSocket). You can use the onClose hook.
Another point you might have a look at: the recommended way of accessing databases from Twisted applications is via twisted.enterprise.adbapi which automatically manages a database connection pool on a background thread pool - independent of frontend protocol instances (like WAMP protocol instances).
Disclaimer: I am the original author of Autobahn and work for Tavendo.
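A minimal adbapi sketch following that recommendation (the database credentials and the query are made up for illustration):

```python
from twisted.enterprise import adbapi

# Queries run on a thread pool with pooled connections, off the reactor thread.
dbpool = adbapi.ConnectionPool("MySQLdb", db="mydb", user="user", passwd="secret")

def got_rows(rows):
    print(rows)

d = dbpool.runQuery("SELECT id, name FROM sessions WHERE user_id = %s", (42,))
d.addCallback(got_rows)
```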
| 0 | 1 | 0 | 0 |
2013-06-18T04:54:00.000
| 1 | 1.2 | true | 17,160,797 | 0 | 0 | 1 | 1 |
I'm using Autobahn Python to make a WAMP server. I open up a database connection in onSessionOpen of my subclass of WampServerProtocol, and of course need to close it when the connection closed. However, I can't find a session close handler in either the tutorials or the docs.
|
Running a Celery worker in unittest
| 18,316,377 | 1 | 6 | 774 | 0 |
python,unit-testing,integration-testing,celery
|
I'm not sure if it's worthwhile to explicitly test the transportation mechanism (i.e. the sending of the task parameters through celery) using a unit test. Personally, I would write my test as follows (can be split up in several unit tests):
Use the code from project B to generate a task with sample parameters.
Encode the task parameters using the same method used by Celery (i.e. pickling the parameters or encoding them as JSON).
Decode the task parameters again, checking that no corruption occurred.
Call the task function, making sure that it produces the correct result.
Perform the same encoding/decoding sequence for the results of the task function.
Using this method, you will be able to test that
The task generation works as intended
The encoding & decoding of the task parameters and results works as expected
If necessary, you can still independently test the functioning of the transportation mechanism using a system test.
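A sketch of steps 2-5 above for a JSON-serialized task (foo and expected_result are placeholders for the real task function in project A and its known output):

```python
import json

def test_foo_roundtrip():
    args = (1, 2, 3)

    # Steps 2-3: encode/decode the parameters the way the broker message would carry them
    decoded = json.loads(json.dumps({'args': args}))
    assert tuple(decoded['args']) == args

    # Step 4: call the task function directly with the decoded parameters
    result = foo(*decoded['args'])
    assert result == expected_result  # expected_result comes from the task's own tests

    # Step 5: the result must survive the same encode/decode cycle
    assert json.loads(json.dumps(result)) == result
```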
| 0 | 1 | 0 | 1 |
2013-06-19T02:20:00.000
| 1 | 0.197375 | false | 17,181,923 | 0 | 0 | 1 | 1 |
I have the following setup:
Django-Celery project A registers task foo
Project B: Uses Celery's send_task to call foo
Project A and project B have the same configuration: SQS, msgpack
for serialization, gzip, etc.
Each project lives on a different github repository
I've unit-tested calls to "foo" in project A, without using Celery at all, just foo(1,2,3) and assert the result. I know that it works.
I've unit-tested that send_task in project B sends the right parameters.
What I'm not testing, and need your advise on is the integration between the two projects. I would like to have a unittest that would:
Start a worker in the context of project A
Send a task using the code of project B
Assert that the worker started in the first step gets the task, with the parameters I sent in the second step, and that the foo function returned the expected result.
It seems to be possible to hack this by using python's subprocess and parsing the output of the worker, but that's ugly. What's the recommended approach to unit-testing in cases like this? Any code snippet you could share? Thanks!
|
Unable to locate the element while using selenium-webdriver
| 17,183,255 | 0 | 0 | 157 | 0 |
python,selenium-webdriver
|
Most of the time I'm using By.xpath, and it works especially well if you use contains() in your XPath. For example: //*[contains(text(),'ABC')]
This will look for all the elements that contain the string 'ABC'.
In your case you can replace ABC with Delete Log File.
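For this particular button, that would look roughly like this (using the Selenium Python bindings of that era and the browser object from the question):

```python
element = browser.find_element_by_xpath("//*[contains(text(), 'Delete Log File')]")
element.click()
```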
| 0 | 0 | 1 | 0 |
2013-06-19T04:49:00.000
| 2 | 0 | false | 17,183,068 | 0 | 0 | 1 | 1 |
I am very much new to selenium WebDriver and I am trying to automate a page which has a button named "Delete Log File". Using FireBug I got to know that, the HTML is described as
and also the css selector is defined as "#DeleteLogButton" using firepath
hence I used
browser.find_element_by_css_selector("#DeleteLogButton").click() in WebDriver to click on that button, but it's not working. I also tried
browser.find_element_by_id("DeleteLogButton").click() to click on that button. Even this did not solve my problem...
Please help me out in resolving the issue.
|
Languages compiling/interpreting to Javascript (such as Ruby/Python/Coffescript)
| 17,195,327 | 0 | 0 | 192 | 0 |
javascript,python,ruby,node.js,opalrb
|
You shouldn't even need to understand the syntax of JavaScript; just an understanding of the DOM should suffice. Having said that, all the DOM examples will be in JS syntax, so reading them will be tricky. Being able to debug the transpiled JavaScript is also useful.
Correct. You can write both a server and a client in JavaScript in all those places.
Also correct. This is probably a better option, as these languages map more closely to the underlying JavaScript.
| 0 | 0 | 0 | 0 |
2013-06-19T15:20:00.000
| 1 | 1.2 | true | 17,195,096 | 1 | 0 | 1 | 1 |
Newbie self-learner diving into web development here. My goal is to learn how to build web-apps. Three quick questions:
Ruby and Python seem to have offshoots that compile their respective code to JavaScript (i.e. Opal/Pyjamas). If I can get an understanding of the DOM, do I even have to learn the full JavaScript language, or can I just rely on Ruby/Python compiling to JS?
Everyone seems to be talking about node.js allowing for javascript on both the browser and server. Does that mean that if I know Javascript and use Node, I don't need python or ruby for web dev?
If node.js allows for server/client side javascript, couldn't someone just learn something like Coffeescript or Typescript and throw python, ruby or php aside?
|
Scrapy reversed item ordening for preparing in db
| 17,213,740 | 0 | 2 | 201 | 1 |
python,scrapy
|
Items in a database have no particular order unless you impose one. So you should add a timestamp to your table in the database, keep it up to date (MySQL has a special flag to mark a field as auto-now) and use ORDER BY in your queries.
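A rough sketch of that idea with MySQLdb (the table and column names are illustrative, not taken from the question):

```python
import MySQLdb

conn = MySQLdb.connect(db="scraped", user="user", passwd="secret")
cur = conn.cursor()

# One-off migration: an auto-filled insertion timestamp
cur.execute("ALTER TABLE items ADD created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP")

# Read items back in insertion order (use DESC to reverse it)
cur.execute("SELECT title FROM items ORDER BY created_at ASC")
rows = cur.fetchall()
```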
| 0 | 0 | 0 | 0 |
2013-06-20T12:20:00.000
| 3 | 1.2 | true | 17,213,515 | 0 | 0 | 1 | 2 |
I am trying to put the items scraped by my spider into a MySQL db via a MySQL pipeline. Everything is working, but I see some odd behaviour: the database is not filled in the same order as the website itself. The order seems random, probably because of the dictionary-like list of the items scraped, I guess.
My questions are:
how can I get the same order as the items on the website itself;
how can I reverse the order from question 1.
So items on website:
A
B
C
D
E
adding order in my sql:
E
D
C
B
A
|
Scrapy reversed item ordening for preparing in db
| 17,221,923 | 1 | 2 | 201 | 1 |
python,scrapy
|
It's hard to say without the actual code, but in theory..
Scrapy is completely async, you cannot know the order of items that will be parsed and processed through the pipeline.
But, you can control the behavior by "marking" each item with priority key. Add a field priority to your Item class, in the parse_item method of your spider set the priority based on the position on a web page, then in your pipeline you can either write this priority field to the database (in order to have an ability to sort later), or gather all items in a class-wide list, and in close_spider method sort the list and bulk insert it into the database.
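A rough sketch of that approach (the item fields, the pipeline and the insert helper are illustrative):

```python
from scrapy.item import Item, Field

class PageItem(Item):
    title = Field()
    priority = Field()   # position of the item on the page, set in parse_item

class OrderedMySQLPipeline(object):
    def open_spider(self, spider):
        self.items = []

    def process_item(self, item, spider):
        self.items.append(item)
        return item

    def close_spider(self, spider):
        # Sort by on-page position before the bulk insert;
        # pass reverse=True to get the opposite order.
        for item in sorted(self.items, key=lambda i: i['priority']):
            self._insert(item)

    def _insert(self, item):
        pass  # the actual MySQL INSERT goes here
```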
Hope that helps.
| 0 | 0 | 0 | 0 |
2013-06-20T12:20:00.000
| 3 | 0.066568 | false | 17,213,515 | 0 | 0 | 1 | 2 |
I am trying to put the items scraped by my spider into a MySQL db via a MySQL pipeline. Everything is working, but I see some odd behaviour: the database is not filled in the same order as the website itself. The order seems random, probably because of the dictionary-like list of the items scraped, I guess.
My questions are:
how can I get the same order as the items on the website itself;
how can I reverse the order from question 1.
So items on website:
A
B
C
D
E
adding order in my sql:
E
D
C
B
A
|
API to access a user's download history in Google Play?
| 17,216,777 | 0 | 1 | 199 | 0 |
android,python,django
|
That is not possible, as the Google Play history belongs only to that particular app.
App-to-app communication cannot be used to take that data unless you use a content provider,
as every app is treated as a separate user by the Linux kernel.
| 0 | 0 | 0 | 0 |
2013-06-20T14:42:00.000
| 1 | 1.2 | true | 17,216,646 | 0 | 0 | 1 | 1 |
for a website I am making in Django, I need to see what apps a user has downloaded in the past. I know I can federated login through Django and the OpenID to have users login through their google accounts. However, is there any API out there that can allow me to see what android applications this user has downloaded in the past? This would include names, versions, etc. of all android applications the user has downloaded to their account in the past, in addition to what type of phone/device the applications were donwloaded to. I looked at the Google Play APIs on their site and it didn't seem like there was anything that allowed for it. Please let me know if you have any advice or if there is anything that you know of that could help me!
Thanks!
|
Running the Command on Windows Command prompt using HTML button
| 17,218,531 | 0 | 1 | 6,859 | 0 |
html,windows,python-2.7,command,command-prompt
|
I believe the correct answer is you cannot. Feel free to let me know otherwise if you find out a way to do it.
| 0 | 0 | 0 | 1 |
2013-06-20T15:49:00.000
| 1 | 0 | false | 17,218,183 | 0 | 0 | 1 | 1 |
I would like to run the command python abc.py in the Windows command prompt when the button on the HTML page is clicked. The Python file is located at C:/abc.py. I would like to know how to code the HTML page to do this. Thank you for the help.
|
MATLAB to web app
| 17,220,530 | 0 | 0 | 934 | 0 |
python,django,matlab,web-applications,octave
|
You could always just host the MATLAB code and sample .mat on a website for people to download and play with on their own machines if they have a MATLAB license. If you are looking at having some sort of embedded app on your website you are going to need to rewrite your code in another language. The project sounds doable in python using the packages you mentioned however hosting it online will not be as simple as running a program from your command line. Django would help you build a website but I do not think that it will allow you to just run a python script in the browser.
| 0 | 0 | 0 | 0 |
2013-06-20T16:51:00.000
| 2 | 0 | false | 17,219,344 | 0 | 1 | 1 | 2 |
Hi I have a MATLAB function that graphs the trajectory of different divers (the Olympic sport diving) depending on the position of a slider at the bottom of the window. The file takes multiple .mat files (with trajectory information in 3 dimensions) as input. I am trying to put this MATLAB app on to the internet. What would be the easiest/most efficient way of doing this? I have experience programming in Python and little experience programming in Java.
Here are the options that I have considered:
1. MATLAB Builder JA (too expensive)
2. Rewrite entire MATLAB function into Java (not experienced enough in Java)
3. Implement MATLAB file using mlabwrapper and using Django to deploy into web app. (having a lot of trouble installing mlabwrapper onto OSX)
4. Rewrite the MATLAB function in Python using SciPy, NumPy, and matplotlib and then use Django.
I do not have any experience with Django but I am willing to learn it. Can someone point me in the right direction?
|
MATLAB to web app
| 17,224,492 | 1 | 0 | 934 | 0 |
python,django,matlab,web-applications,octave
|
A cheap and somewhat easy way (with limited functionality) would be:
Install MATLAB on your server, or use the MATLAB Compiler to create a stand alone executable (not sure if that comes with your version of MATLAB or not). If you don't have the compiler and can't install MATLAB on your server, you could always go to a freelancing site such as elance.com, and pay someone $20 to compile your code for you into a windows exe file.
Either way, the end goal is to make your MATLAB function callable from the command line (the server will be doing the calling). You could make your input arguments the slider value and the .mat files you want to open, and the compiled version of MATLAB will know how to handle this. Once you do that, have the code create a plot and save an image of it (using getframe or other figure export tools; check out the File Exchange). Have your server output this image to the client.
Tah-dah, you have a crappy low cost work around!
I hope this helps , if not, I apologize!
| 0 | 0 | 0 | 0 |
2013-06-20T16:51:00.000
| 2 | 0.099668 | false | 17,219,344 | 0 | 1 | 1 | 2 |
Hi I have a MATLAB function that graphs the trajectory of different divers (the Olympic sport diving) depending on the position of a slider at the bottom of the window. The file takes multiple .mat files (with trajectory information in 3 dimensions) as input. I am trying to put this MATLAB app on to the internet. What would be the easiest/most efficient way of doing this? I have experience programming in Python and little experience programming in Java.
Here are the options that I have considered:
1. MATLAB Builder JA (too expensive)
2. Rewrite entire MATLAB function into Java (not experienced enough in Java)
3. Implement MATLAB file using mlabwrapper and using Django to deploy into web app. (having a lot of trouble installing mlabwrapper onto OSX)
4. Rewrite the MATLAB function in Python using SciPy, NumPy, and matplotlib and then use Django.
I do not have any experience with Django but I am willing to learn it. Can someone point me in the right direction?
|
Flask login together with client authentication methods for RESTful service
| 21,565,425 | 0 | 0 | 527 | 0 |
python,authentication,client,restful-authentication
|
So, you've officially bumped into one of the most difficult questions in modern web development (in my humble opinion): web authentication.
Here's the theory behind it (I'll answer your question in a moment).
When you're building complicated apps with more than a few users, particularly if you're building apps that have both a website AND an API service, you're always going to bump into authentication issues no matter what you're doing.
The ideal way to solve these problems is to have an independent auth service on your network. Some sort of internal API that EXCLUSIVELY handles user creation, editing, and deletion. There are a number of benefits to doing this:
You have a single authentication source that all of your application components can use: your website can use it to log people in behind the scenes, your API service can use it to authenticate API requests, etc.
You have a single service which can smartly manage user caching -- it's pretty dangerous to implement user caching all over the place (which is what typically happens when you're dealing with multiple authentication methods: you might cache users for the API service, but fail to cache them with the website; stuff like this causes problems).
You have a single service which can be scaled INDEPENDENTLY of your other components. Think about it this way: what piece of application data is accessed more than any other? In most applications, it's the user data. For every request user data will be needed, and this puts a strain on your database / cache / whatever you're doing. Having a single service which manages users makes it a lot nicer for you to scale this part of the application stack easily.
Overall, authentication is really hard.
For the past two years I've been the CTO at OpenCNAM, and we had the same issue (a website and API service). For us to handle authentication properly, we ended up building an internal authentication service like described above, then using Flask-Login to handle authenticating users via the website, and a custom method to authenticate users via the API (just an HTTP call to our auth service).
This worked really well for us, and allowed us to scale from thousands of requests to billions (by isolating each component in our stack, and focusing on user auth as a separate service).
Now, I wouldn't recommend this for apps that are very simple, or apps that don't have many users, because it's more hassle than it's worth.
If you're looking for a third party solution, Stormpath looks pretty promising (just google it).
Anyhow, hope that helps! Good luck.
| 0 | 0 | 1 | 0 |
2013-06-20T16:59:00.000
| 1 | 0 | false | 17,219,512 | 0 | 0 | 1 | 1 |
Here is the situation:
We use Flask for website application development. Also, on the web server we host a RESTful service. We use Flask-Login as the authentication tool, for BOTH web application access and the RESTful service (accessing the RESTful service from browsers).
Later, we found that we also need to access the RESTful service from client calls (Python), so there are NO sessions, cookies, etc. This gives us a headache regarding the current authentication of the RESTful service.
On the web there exist a whole bunch of ways to secure a RESTful service from client calls. But it seems there is no easy way for them to live together with our current Flask-Login tool such that we do not need to change our web application a lot.
So here is the question:
Is there an easy way (framework) so the RESTful services can support multiple authentication methods (protocols) at the same time? Is this even good practice?
Many thanks!
|
Is there a way in python to trigger an action upon exiting a function
| 17,221,983 | 0 | 1 | 347 | 0 |
python,events
|
Instead of calling that function directly, make a function that calls both:
- function a (your function)
- function b (the exit code)
and call this meta-function instead.
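One way to realise that idea without touching every exit point is a small decorator; the names here are illustrative, and the action fires on every return path, including exceptions:

```python
import functools

def run_on_exit(action):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            finally:
                action()   # runs no matter which exit point was taken
        return wrapper
    return decorator

def cleanup():
    print("leaving the view")

@run_on_exit(cleanup)
def my_view(request):
    return "done"
```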
| 0 | 0 | 0 | 0 |
2013-06-20T19:14:00.000
| 2 | 0 | false | 17,221,936 | 1 | 0 | 1 | 1 |
I'm in the process of adding a bit of code to a django system that needs to make a specific function call upon exiting a function. Most of the code that I'm updating has several exit points throughout, which requires that I add a one-liner immediately before each one of them. A wee bit ugly.
What I'd like to do is to simply say, "upon exiting this function, do this", much like the atexit module (from what I've found so far anyway), but to be triggered upon exiting the function rather than the entire script.
Is there anything I can use that works that way?
(I'm using Python 2.7.3 by the way)
|
Why do Flask Extensions exist?
| 17,223,377 | 4 | 1 | 99 | 1 |
python,sqlalchemy,flask,flask-sqlalchemy
|
The extensions exist to extend the functionality of Flask, and reduce the amount of code you need to write for common usage patterns, like integrating your application with SQLAlchemy in the case of flask-sqlalchemy, or login handling with flask-login. Basically just clean, reusable ways to do common things with a web application.
But I see your point with Flask-SQLAlchemy; it's not really that much of a code saver to use it, but it does give you the scoped session automatically, which you need in a web environment with SQLAlchemy.
Other extensions like flask-login really do save you a lot of boilerplate code.
| 0 | 0 | 0 | 0 |
2013-06-20T20:06:00.000
| 1 | 1.2 | true | 17,222,824 | 0 | 0 | 1 | 1 |
Let's take SQLAlchemy as an example.
Why should I use the Flask SQLAlchemy extension instead of the normal SQLAlchemy module?
What is the difference between those two?
Isn't is perfectly possible to just use the normal module in your Flask app?
|
Django filter by week
| 17,242,425 | 0 | 2 | 1,992 | 0 |
python,django,date,python-datetime
|
There's no predefined query field lookup that can do this for weeks. I'd either reuse code from the WeekArchiveView date-based generic view, or ideally directly subclass it - it already handles the filtering.
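If subclassing the generic view is overkill, a range filter built from the week number works too. A sketch (RelevantObject and its date field come from the question; the helper name is made up):

```python
import datetime

def objects_in_week(year, week):
    # Monday of the given week (%W: week number, %w: weekday, 1 = Monday)
    start = datetime.datetime.strptime('%d %d 1' % (year, week), '%Y %W %w').date()
    end = start + datetime.timedelta(days=7)
    return RelevantObject.objects.filter(date__gte=start, date__lt=end)
```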
| 0 | 0 | 0 | 0 |
2013-06-21T18:48:00.000
| 1 | 0 | false | 17,242,335 | 0 | 0 | 1 | 1 |
Is there a way to easily filter Date objects by week? Basically I want to do something like
items= RelevantObject.objects.filter(date__week=32)
I've found that it's possible to do this for year and month, but it doesn't seem like the capability for week is built in. Is there a "right" way to do this? It seems like it shouldn't be too difficult.
Thanks
|
Screenshot local page with Selenium and PhantomJS
| 17,243,970 | 1 | 0 | 630 | 0 |
selenium,python-3.x,local,phantomjs,webpage-screenshot
|
I feel silly now.
I needed another forward slash
driver.get("file:///
| 0 | 0 | 1 | 0 |
2013-06-21T19:29:00.000
| 1 | 0.197375 | false | 17,242,929 | 0 | 0 | 1 | 1 |
I am using Selenium with PhantomJS as the webdriver in order to render webpages using Python.
The pages are on my local drive.
I need to save a screenshot of the webpages.
Right now, the pages all render completely black.
The code works perfect on non-local webpages.
Is there a way to specify that the page is local?
I tried this:
driver.get("file://...
but it did not work.
Thanks!
|
What exactly is virtualEnv isolating? Just python related packages or more?
| 17,254,371 | 2 | 1 | 457 | 0 |
python,virtualenv,virtualenvwrapper
|
Only Python packages installed inside the virtual environment are isolated.
System packages are not.
| 0 | 0 | 0 | 0 |
2013-06-22T19:01:00.000
| 1 | 1.2 | true | 17,254,295 | 1 | 0 | 1 | 1 |
When I create a new virtualEnv, if I install Django inside a new environment, it's isolated.
But what if I'm inside a virtualEnv and I install emacs, and mysql or such. These packages have nothing to do with python. Would the emacs and mysql packages that I installed be global or isolated to one virtualEnv only?
Thanks
|
Django: Reasonable to store session data in Class Variable?
| 17,261,090 | 2 | 1 | 192 | 0 |
python,django,class
|
I see a lot of reasons not to do it.
I'm sure caching solutions offer robust memory management. This includes running as a daemon, having cache invalidation, and setting lifetimes on the data.
By setting a class variable, you are forgoing all of the above.
Additionally, caching solutions provide a clean, documented API for interfacing with them.
| 0 | 0 | 0 | 0 |
2013-06-23T12:53:00.000
| 1 | 1.2 | true | 17,260,920 | 0 | 0 | 1 | 1 |
I have a class which parses multiple urls/feeds
and stores hashes of entries. Formerly I had put the hashes into a session variable,
but instead of hitting the db I now switched to a class variable in the form of {request.user.id : [hashes]}. Is this bad practice? Any reasons against it?
|
Error while using Scrapy : ['scrapy.telnet.TelnetConsole': No module named conch twisted]
| 29,180,806 | 0 | 5 | 3,304 | 0 |
python,scrapy,twisted
|
Ensure you have the python development headers:
apt-get install build-essential python-dev
Install scrapy with pip:
pip install Scrapy
| 0 | 0 | 0 | 0 |
2013-06-23T17:45:00.000
| 3 | 0 | false | 17,263,509 | 1 | 0 | 1 | 1 |
In Ubuntu 13.04, I have installed Scrapy for python-2.7, from the tarball. Executing a crawl command results in the below error:
ImportError: Error loading object 'scrapy.telnet.TelnetConsole': No module named conch
I've also tried installing Twisted Conch using easy_install and from the tarball. I have also removed the scrapy .egg and .info and the main scrapy folder from the Python path.
Reinstalling Scrapy does not help either.
Can some one point me in the right direction?
|
How can I find (and scrape) all web pages on a given domain using Python?
| 17,265,028 | 0 | 4 | 1,831 | 0 |
python,http,dns
|
You can't. Not only can pages be dynamically generated based on backend database data and search queries or other input that your program supplies to the website, but there is a nearly infinite list of possible pages, and the only way to know which ones exist is to test and see.
The closest you can get is to scrape a website based on hyperlinks between pages in the page content itself.
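A rough sketch of that hyperlink-following approach with requests and BeautifulSoup (Python 2 style, matching the era of the question; the start URL is just an example):

```python
import urlparse

import requests
from bs4 import BeautifulSoup

def crawl(start_url):
    domain = urlparse.urlparse(start_url).netloc
    seen, queue = set(), [start_url]
    while queue:
        url = queue.pop()
        if url in seen:
            continue
        seen.add(url)
        soup = BeautifulSoup(requests.get(url).text)
        for link in soup.find_all('a', href=True):
            absolute = urlparse.urljoin(url, link['href'])
            if urlparse.urlparse(absolute).netloc == domain:
                queue.append(absolute)   # stay on the same domain
    return seen

pages = crawl('http://www.example.com/')
```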
| 0 | 0 | 1 | 0 |
2013-06-19T20:39:00.000
| 2 | 1.2 | true | 17,265,027 | 0 | 0 | 1 | 1 |
How would I scrape a domain to find all web pages and content?
For example: www.example.com, www.example.com/index.html, www.example.com/about/index.html and so on..
I would like to do this in Python, preferably with Beautiful Soup if possible.
|
How to seamlessly maintain code of django celery in a multi node environment
| 17,270,294 | 2 | 1 | 582 | 0 |
python,django,celery,django-celery,celery-task
|
For this type of situation I have in the past made an egg of all of my Celery task code that I can simply rsync or copy in some fashion to my worker nodes. This way you can edit your Celery code in a single project that can be used in your Django app and on your worker nodes.
So, in summary: create a web-app-celery-tasks project, make it into an installable egg, and have a web-app package that depends on the celery-tasks egg.
| 0 | 1 | 0 | 0 |
2013-06-24T05:52:00.000
| 1 | 1.2 | true | 17,268,766 | 0 | 0 | 1 | 1 |
I have a Django application which uses django-celery, celery and rabbitmq for offline, distributed processing.
Now the setup is such that I need to run the celery tasks (and in turn celery workers) in other nodes in the network (different from where the Django web app is hosted).
To do that, as I understand I will need to place all my Django code in these separate servers. Not only that, I will have to install all the other python libraries which the Django apps require.
This way I will have to transfer all the django source code to all possible servers in the network, install dependencies and run some kind of an update system which will sync all the sources across nodes.
Is this the right way of doing things? Is there a simpler way of
making the celery workers run outside the web application server
where the Django code is hosted ?
If indeed there is no way other than to copy code and replicate in
all servers, is there a way to copy only the source files which the
celery task needs (which will include all models and views - not so
small a task either)
|
"Field not found" when field is present
| 17,277,267 | 4 | 1 | 2,904 | 0 |
python,quickfix
|
This message has a repeating group of two MDEntries. Field 290 appears in the first one, but not the second one. Your code is probably trying to extract 290 from the second one, and is thus getting the error.
Group 1 (has 290):
279=2☺55=ZN☺48=00A0IN00ZNZ☺10455=ZNU3☺167=FUT☺207=CBOT☺15=USD☺200=201309☺290=1☺269=0☺270=126.4375☺271=9☺387=12237☺
Group 2 (lacks 290):
279=0☺269=0☺270=126.421875☺271=57☺
Examine your code that's extracting 290. Put in an if-field-is-present check so that it doesn't try to extract a field that's not there.
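A hedged sketch of that check with the QuickFIX Python bindings (group is the repeating-group object your handler is already reading; the exact extraction code will differ in your application):

```python
import quickfix as fix

position = fix.MDEntryPositionNo()      # tag 290
if group.isSetField(position):
    group.getField(position)
    value = position.getValue()
else:
    value = None                        # the second MDEntry legitimately omits 290
```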
| 0 | 0 | 0 | 0 |
2013-06-24T07:37:00.000
| 2 | 1.2 | true | 17,270,259 | 0 | 0 | 1 | 1 |
I send a standard Market Data Incremental Refresh Request message (35 = V) and begin receiving incremental refreshes. Most of the time everything is absolutely fine and dandy. However, every once in a while, I get a strange Field not found message. For example:
(8=FIX.4.2☺9=00221☺35=X☺49=XXX☺56=XXX☺34=4☺52=20130624-07:27:06.706☺262=XXX☺268=2☺279=2☺55=ZN☺48=00A0IN00ZNZ☺10455=ZNU3☺167=FUT☺207=CBOT☺15=USD☺200=201309☺290=1☺269=0☺270=126.4375☺271=9☺387=12237☺279=0☺269=0☺270=126.421875☺271=57☺10=176☺)
Field not found
(Message 4 Rejected: Conditionally Required Field Missing:290)
(8=FIX.4.2☺9=119☺35=j☺34=3☺49=XXX☺52=20130624-07:27:07.037☺56=XXX☺45=4☺58=Conditionally Required Field Missing (290)☺372=X☺380=5☺10=144☺)
I've cut some fields containing personal information or irrelevant information. But as you can see, it is explicitly message 4 that is being rejected, because it lacks field 290, when in fact 290 is clearly there.
So, what's the deal? Has anyone seen this kind of behavior before?
I'm using the Python bindings. Fix 4.2, Python 2.7.
And for the sake of completeness, here's a message (the very next one) that didn't get rejected:
(8=FIX.4.2☺9=00188☺35=X☺49=XXX☺56=XXX☺34=5☺52=20130624-07:27:06.706☺262=XXX☺268=1☺279=1☺55=ZB☺48=00A0IN00ZBZ☺10455=ZBU3☺167=FUT☺207=CBOT☺15=USD☺200=201309☺290=1☺269=1☺270=135.15625☺271=13☺387=5111☺10=156☺
(And no, the difference in tag 55 between the rejected and accepted messages is not the cause of this. QuickFix found 290 in plenty of 55=ZN messages.)
I know this is a pretty technical question but am hoping there is a QuickFix guru out there who might know what's going on.
Thanks for any help.
|
What's the recommended way for storing a phone number?
| 17,270,535 | 4 | 1 | 191 | 0 |
python,django,django-models,phone-number
|
I always use a simple CharField, since phone numbers differ so greatly from region to region and country to country. Some people might even enter characters instead of numbers, following the letters on a phone's keypad.
Maybe adding a ChoiceField for the country prefix is a good idea, but that is as far as I would go.
I would never check a phone number field for "invalid" data like dashes, spaces etc., because your users might dislike receiving an error message and as a result not submit a phone number at all.
After all a phone number will be dialled by a person in your office. And they can - and should - verify the number personally.
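A minimal sketch of that suggestion (the model, field lengths and prefix choices are only an example):

```python
from django.db import models

class Contact(models.Model):
    COUNTRY_PREFIXES = (
        ('+1', 'US/Canada'),
        ('+44', 'United Kingdom'),
        ('+49', 'Germany'),
    )

    country_prefix = models.CharField(max_length=4, choices=COUNTRY_PREFIXES, blank=True)
    phone_number = models.CharField(max_length=30, blank=True)  # stored as typed, not validated
```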
| 0 | 0 | 0 | 0 |
2013-06-24T07:52:00.000
| 3 | 1.2 | true | 17,270,474 | 0 | 0 | 1 | 1 |
Trying to get the best way to store a phone # in Django.
At the moment I'm using a CharField and checking whether it's a number.
|
Confused about DBus
| 17,271,574 | 1 | 1 | 460 | 0 |
python,dbus
|
So does this mean that my website needs to run a DBUS service to
allow me to call methods from it into my program?
A dbus background process (a daemon) would run on your web server, yes.
In fact dbus provides two daemons. One is a system daemon which permits
objects to receive system information (e.g. printer availability, for example)
and the second is a general user application to application IPC daemon. It is the
second daemon that you definitely use for different applications to communicate.
I am coding in Python, so I am not sure if I can run a Python script
on my website that would allow me to run a DBUS service.
There is no problem using python; dbus has bindings for many languages (e.g Java, perl, ruby, c++, Python). dbus objects can be mapped to python objects.
the most logical solution would be to run a single DBUS service that
somehow imports method from different programs and can be queried by
others who want to run those methods. Is that possible?
Correct - dbus provides a mechanism by which a client process can create dbus objects which allow that process to offer services to other dbus-aware processes.
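For illustration, a minimal dbus-python service sketch; the bus name, object path and method are placeholders I've made up, and the GLib main-loop integration is just one common option:

```python
import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

class SettingsService(dbus.service.Object):
    """Exposes one method on the session bus that other processes can call."""

    def __init__(self, bus_name):
        super(SettingsService, self).__init__(bus_name, '/com/example/Settings')
        self.value = 0

    @dbus.service.method('com.example.Settings', in_signature='i', out_signature='i')
    def SetValue(self, new_value):
        self.value = new_value
        return self.value

DBusGMainLoop(set_as_default=True)          # hook dbus into the GLib main loop
bus_name = dbus.service.BusName('com.example.Settings', dbus.SessionBus())
SettingsService(bus_name)
GLib.MainLoop().run()                       # keep the service alive
```

A web request handler (for example a WSGI or CGI script run by the web server) would then connect to the same session bus and call SetValue to push new values into the running program.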
| 0 | 1 | 0 | 0 |
2013-06-24T08:22:00.000
| 2 | 0.099668 | false | 17,270,936 | 0 | 0 | 1 | 1 |
Ok, so, I might be missing the plot a bit here, but would really like some help. I am quite new to development etc. and have now come to a point where I need to implement DBus (or some other inter-program communication). I am finding the concept a bit hard to understand though.
My implementation will be to use an HTML website to change certain variables to be used in another program, thus allowing for the program to be dynamically changed in its working. I am doing this on a raspberry PI using Raspbian. I am running a webserver to host my website, and this is where the confusion comes in.
As far as I understand, DBus runs a service which allows you to call methods from a program in another program. So does this mean that my website needs to run a DBUS service to allow me to call methods from it into my program? To complicate things a bit more, I am coding in Python, so I am not sure if I can run a Python script on my website that would allow me to run a DBUS service. Would it be better to use JavaScript?
For me, the most logical solution would be to run a single DBUS service that somehow imports method from different programs and can be queried by others who want to run those methods. Is that possible?
Help would be appreciated!
Thank you in advance!
|
Good way to convert Python (django) entity classes to Java
| 17,273,153 | 1 | 0 | 999 | 0 |
java,python,django,converter
|
Sorry, I can't comment as I have low rep. But would it be an option to serialize the Python models into JSON objects, and then have Java use Jackson or GSON to parse them back into class objects?
| 0 | 0 | 0 | 0 |
2013-06-24T09:17:00.000
| 1 | 1.2 | true | 17,271,955 | 0 | 0 | 1 | 1 |
I'm looking for a good way to "copy" / convert a model from Python source code to Java source code. My idea is to use the Python django framework on a server to generate entity model classes. On the other side I would like to convert the entity classes to Java to use them in a native Android project.
Do you have any recommendations for what I can use to convert the Python entity classes to Java? It should be possible to trigger the conversion every time I change the model in Python.
Best regards,
Michael
PS: If you're interested, this is what the project structure will look like:
python django project
connects to the database
will be used to generate entity model classes
using REST API for data exchange between Android devices and the server
java model library
this will be my Java library which should contain the converted model of the python django project
android project
this will be my android app which will use the model of the java model library
it should interact with the server via REST API. That's why the model in the java and python project have to be equals.
|
How to create models if I am using various types of database simultaneously?
| 17,289,054 | 0 | 3 | 79 | 1 |
python,database,flask,flask-sqlalchemy
|
It's not an efficient model, but this would work:
You can write three different APIs (RESTful pattern is a good idea). Each will be an independent Flask application, listening on a different port (likely over localhost, not the public IP interface).
A fourth Flask application is your main application that external clients can access. The view functions in the main application will issue API calls to the other three APIs to obtain data as they see fit.
You could optimize and merge one of the three database APIs into the main application, leaving only two (likely the two less used) to be implemented as APIs.
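A tiny sketch of the "main app calls the internal APIs" idea (the ports and endpoints are invented for illustration):

```python
import requests
from flask import Flask, jsonify

app = Flask(__name__)

# Each backend (MySQL, Mongo, Redis) is a separate Flask app listening on localhost
MYSQL_API = 'http://127.0.0.1:5001'
MONGO_API = 'http://127.0.0.1:5002'
REDIS_API = 'http://127.0.0.1:5003'

@app.route('/user/<int:user_id>')
def user_profile(user_id):
    # The main view aggregates data from the three internal APIs
    profile = requests.get('%s/users/%d' % (MYSQL_API, user_id)).json()
    activity = requests.get('%s/activity/%d' % (MONGO_API, user_id)).json()
    online = requests.get('%s/presence/%d' % (REDIS_API, user_id)).json()
    return jsonify(profile=profile, activity=activity, online=online)
```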
| 0 | 0 | 0 | 0 |
2013-06-24T13:41:00.000
| 2 | 0 | false | 17,276,970 | 0 | 0 | 1 | 1 |
I have a Flask application which uses three types of databases - MySQL, Mongo and Redis. If it had been MySQL alone, I could have used SQLAlchemy or something along those lines for database modelling. In the current scenario, where I am using many different types of database in a single application, I think I will have to create custom models.
Can you please suggest what are the best practices to do that? Or any tutorial indicating the same?
|
MediaWiki API: can it be used to create new articles programatically?
| 17,282,177 | 1 | 1 | 94 | 0 |
python,bots,mediawiki-api
|
You can create a new article simply by editing a page that doesn't exist yet.
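With Pywikibot this is just a save on a page title that does not exist yet; a rough sketch (the site configuration, page title and summary are placeholders):

```python
import pywikibot

site = pywikibot.Site()                         # uses the wiki configured in user-config.py
page = pywikibot.Page(site, 'My new article')   # the page does not need to exist yet
page.text = "== Introduction ==\nContent migrated from the old web page."
page.save('Creating article from migrated content')   # first argument is the edit summary
```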
| 0 | 0 | 1 | 0 |
2013-06-24T16:24:00.000
| 1 | 1.2 | true | 17,280,297 | 0 | 0 | 1 | 1 |
I've been charged with migrating a large number of simple web pages into MediaWiki articles. I've been researching the API and PyWikiBot, but it seems that all they allow you to do is edit and retrieve what is already there. Can these tools be used to create a brand new article with content, a title and links to itself, etc.?
If not, can anyone suggest a way to make large scale automated entries to the MediaWiki?
|
Django Manage.py Gives No South Commands
| 17,293,912 | 1 | 3 | 379 | 0 |
python,django,django-south,database-migration
|
If you have installed South in your virtualenv then, once you are in the virtualenv, try to execute python manage.py help instead of ./manage.py help.
| 0 | 0 | 0 | 0 |
2013-06-25T02:02:00.000
| 3 | 0.066568 | false | 17,287,996 | 0 | 0 | 1 | 3 |
When I do a "./manage.py help", it gives me NO south commands, although South has been installed.
I have the latest version of south installed for my django project, which is South==0.8.1.
I have added "south" to my INSTALLED_APPS in settings.py
I have done a manage.py syncdb, and there is a "south_migrationhistory" database table created.
However, when I do a "./manage.py help", it gives me NO south commands.
I have tried uninstalling and re-installing south, but I still get no south commands when I do a ./manage.py help
Any suggestions would be greatly appreciated.
thanks
|
Django Manage.py Gives No South Commands
| 23,678,798 | 0 | 3 | 379 | 0 |
python,django,django-south,database-migration
|
You might want to check whether INSTALLED_APPS is being overwritten somewhere outside the settings.py file, e.g. if you are using a separate settings file for local settings.
| 0 | 0 | 0 | 0 |
2013-06-25T02:02:00.000
| 3 | 0 | false | 17,287,996 | 0 | 0 | 1 | 3 |
When I do a "./manage.py help", it gives me NO south commands, although South has been installed.
I have the latest version of south installed for my django project, which is South==0.8.1.
I have added "south" to my INSTALLED_APPS in settings.py
I have done a manage.py syncdb, and there is a "south_migrationhistory" database table created.
However, when I do a "./manage.py help", it gives me NO south commands.
I have tried uninstalling and re-installing south, but I still get no south commands when I do a ./manage.py help
Any suggestions would be greatly appreciated.
thanks
|
Django Manage.py Gives No South Commands
| 17,291,512 | 0 | 3 | 379 | 0 |
python,django,django-south,database-migration
|
If you're using virtualenv, ensure that it is activated when running the ./manage.py help command. Django silently skips apps with import errors when showing the available commands.
| 0 | 0 | 0 | 0 |
2013-06-25T02:02:00.000
| 3 | 0 | false | 17,287,996 | 0 | 0 | 1 | 3 |
When I do a "./manage.py help", it gives me NO south commands, although South has been installed.
I have the latest version of south installed for my django project, which is South==0.8.1.
I have added "south" to my INSTALLED_APPS in settings.py
I have done a manage.py syncdb, and there is a "south_migrationhistory" database table created.
However, when I do a "./manage.py help", it gives me NO south commands.
I have tried uninstalling and re-installing south, but I still get no south commands when I do a ./manage.py help
Any suggestions would be greatly appreciated.
thanks
|
Flask end response and continue processing
| 17,308,839 | 4 | 30 | 24,216 | 0 |
python,flask
|
I had a similar problem with my blog. I wanted to send notification emails to those subscribed to comments when a new comment was posted, but I did not want to have the person posting the comment waiting for all the emails to be sent before he gets his response.
I used a multiprocessing.Pool for this. I started a pool of one worker (that was enough, low traffic site) and then each time I need to send an email I prepare everything in the Flask view function, but pass the final send_email call to the pool via apply_async.
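A stripped-down sketch of that pattern (the email helper and form fields are placeholders):

```python
from multiprocessing import Pool
from flask import Flask, request

app = Flask(__name__)
pool = Pool(processes=1)   # one worker was enough for a low-traffic blog

def send_notifications(post_id, comment_body):
    # hypothetical helper: look up subscribers and talk to the SMTP server
    pass

@app.route('/posts/<int:post_id>/comments', methods=['POST'])
def add_comment(post_id):
    # ... store the comment synchronously ...
    pool.apply_async(send_notifications, (post_id, request.form['body']))
    return 'Comment posted', 201   # returns immediately; emails go out in the background
```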
| 0 | 0 | 0 | 0 |
2013-06-25T09:09:00.000
| 6 | 0.132549 | false | 17,293,311 | 0 | 0 | 1 | 1 |
Is there a way in Flask to send the response to the client and then continue doing some processing? I have a few book-keeping tasks which are to be done, but I don't want to keep the client waiting.
Note that these are actually really fast things I wish to do, thus creating a new thread, or using a queue, isn't really appropriate here. (One of these fast things is actually adding something to a job queue.)
|
How to debug a Flask app
| 52,030,732 | 3 | 174 | 341,826 | 0 |
python,debugging,flask
|
Quick tip - if you use PyCharm, go to Edit Configurations => Configurations, enable the FLASK_DEBUG checkbox and restart the run.
| 0 | 0 | 0 | 0 |
2013-06-26T00:51:00.000
| 17 | 0.035279 | false | 17,309,889 | 0 | 0 | 1 | 5 |
How are you meant to debug errors in Flask? Print to the console? Flash messages to the page? Or is there a more powerful option available to figure out what's happening when something goes wrong?
|
How to debug a Flask app
| 54,051,191 | 1 | 174 | 341,826 | 0 |
python,debugging,flask
|
Use loggers and print statements in the development environment; you can go for Sentry in production environments.
| 0 | 0 | 0 | 0 |
2013-06-26T00:51:00.000
| 17 | 0.011764 | false | 17,309,889 | 0 | 0 | 1 | 5 |
How are you meant to debug errors in Flask? Print to the console? Flash messages to the page? Or is there a more powerful option available to figure out what's happening when something goes wrong?
|
How to debug a Flask app
| 58,817,088 | 10 | 174 | 341,826 | 0 |
python,debugging,flask
|
To activate debug mode in Flask, simply type set FLASK_DEBUG=1 in CMD on Windows, or export FLASK_DEBUG=1 in a Linux terminal, then restart your app and you are good to go.
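For completeness, the same thing can be turned on in code when running the development server directly (a minimal sketch):

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'hello'

if __name__ == '__main__':
    # Equivalent to FLASK_DEBUG=1: enables the interactive traceback debugger
    # and the auto-reloader. Never leave this on in production.
    app.run(debug=True)
```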
| 0 | 0 | 0 | 0 |
2013-06-26T00:51:00.000
| 17 | 1 | false | 17,309,889 | 0 | 0 | 1 | 5 |
How are you meant to debug errors in Flask? Print to the console? Flash messages to the page? Or is there a more powerful option available to figure out what's happening when something goes wrong?
|
How to debug a Flask app
| 41,045,846 | -4 | 174 | 341,826 | 0 |
python,debugging,flask
|
If you are running it locally and want to be able to step through the code:
python -m pdb script.py
| 0 | 0 | 0 | 0 |
2013-06-26T00:51:00.000
| 17 | -1 | false | 17,309,889 | 0 | 0 | 1 | 5 |
How are you meant to debug errors in Flask? Print to the console? Flash messages to the page? Or is there a more powerful option available to figure out what's happening when something goes wrong?
|
How to debug a Flask app
| 71,165,056 | 0 | 174 | 341,826 | 0 |
python,debugging,flask
|
If you're using VSCode, press F5 or go to "Run" and "Run Debugging".
| 0 | 0 | 0 | 0 |
2013-06-26T00:51:00.000
| 17 | 0 | false | 17,309,889 | 0 | 0 | 1 | 5 |
How are you meant to debug errors in Flask? Print to the console? Flash messages to the page? Or is there a more powerful option available to figure out what's happening when something goes wrong?
|
Is there any simple way to check radio buttons when page loads, based on the what is checked before the page submit?
| 17,315,760 | 1 | 0 | 125 | 0 |
python,django,django-templates
|
Since requests are stateless, you will have to somehow "save" the state of your radio buttons. One option would be to use sessions, the other would be to use a form and instantiate it with the submitted data.
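A small sketch of the bound-form option (the form, helpers and template names are assumptions):

```python
# views.py -- minimal sketch; JobFilterForm is a hypothetical form whose
# ChoiceField is rendered with a RadioSelect widget.
from django.shortcuts import render
from .forms import JobFilterForm

def job_list(request):
    jobs = get_all_jobs()                      # hypothetical helper returning the unfiltered jobs
    if request.method == 'POST':
        form = JobFilterForm(request.POST)     # bound form: the chosen radio stays checked on re-render
        if form.is_valid():
            jobs = filter_jobs(jobs, form.cleaned_data)   # hypothetical filtering helper
    else:
        form = JobFilterForm()
    return render(request, 'jobs/list.html', {'form': form, 'jobs': jobs})
```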
| 0 | 0 | 0 | 0 |
2013-06-26T08:33:00.000
| 1 | 1.2 | true | 17,315,214 | 0 | 0 | 1 | 1 |
I am using django.
My webpage works like this: if I check a radio button and click on submit, it redirects to the same page with the jobs redefined on the basis of which radio buttons were checked. My problem is that after the page loads, none of the radio buttons are checked.
So I would like to know: is there any method so that when it redirects to the same page (i.e. form action=""), the previously selected radio buttons (i.e. before submit) are selected on this page too?
|
Determining sound quality from an audio recording?
| 17,323,482 | 0 | 2 | 3,067 | 0 |
python,audio,noise
|
Not quite my field, but I suspect that if you compute a spectrum (do a Fourier transform, maybe) and compare "good" and "noisy" recordings, you will find that the noise contributes a cross-spectrum level that is higher in the bad recordings than the good. Take a look at the signal processing section in SciPy - it can probably help.
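One hedged way to put a number on this with SciPy; the band edges and the notion of a "speech band" versus a "noise band" are rough assumptions, and the filename is a placeholder:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

rate, samples = wavfile.read('recording.wav')          # assumes mono 16-bit PCM
samples = samples.astype(np.float64) / 32768.0

freqs, psd = welch(samples, fs=rate, nperseg=4096)      # power spectral density
speech_power = psd[(freqs > 300) & (freqs < 3400)].mean()   # where speech energy lives
noise_power = psd[freqs > 6000].mean()                      # mostly hiss/background

print('crude SNR estimate: %.1f dB' % (10 * np.log10(speech_power / noise_power)))
```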
| 0 | 0 | 0 | 1 |
2013-06-26T14:37:00.000
| 3 | 0 | false | 17,323,142 | 0 | 0 | 1 | 2 |
Is there any way to algorithmically determine audio quality from a .wav or .mp3 file?
Basically I have users with diverse recording setups (i.e. they are from all over the world and I have no control over them) recording audio to mp3/wav files. At which point the software should determine whether their setup is okay or not (tragically, for some reason they are not capable of making this determination just by listening to their own recordings, and so occasionally we get recordings that are basically impossible to understand due to low volume or high noise).
I was doing a volume check to make sure the microphone level was okay; unfortunately this misses cases where the volume is high but the clarity is low. I'm wondering if there is some kind of standard scan I can do (ideally in Python) that detects when there is a lot of background noise.
I realize one possible solution is to ask them to record total silence and then compare to the spoken recording and consider the audio "bad" if the volume of the "silent" recording is too close to the volume of the spoken recording. But that depends on getting a good sample from the speaker both times, which may or may not be something I can depend on.
So I'm wondering if instead there's just a way to scan through an audio file (these would be ~10 seconds long) and recognize whether the sound file is "noisy" or clear.
|
Determining sound quality from an audio recording?
| 17,326,588 | 1 | 2 | 3,067 | 0 |
python,audio,noise
|
It all depends on what your quality problems are, which is not 100% clear from your question, but here are some suggestions:
In the case where volume is high and clarity is low, I'm guessing the problem is that the user has the input gain too high. After the recording, you can simply check for distortion. Even better, you can use Automatic Gain Control (AGC) during recording to prevent this from happening in the first place.
In the case of too much noise, I'm assuming the issue is that the speaker is too far from the mike. In this case Steve's suggestion might work, but to make it really work, you'd need to do a ton of work comparing sample recordings and developing statistics to see how you can discriminate. In practice, I think this is too much work. A simpler alternative that I think will be easier and more likely to work (although not necessarily guaranteed) would be to create an envelope of your signal, then create a histogram from that and see how the histogram compares to existing good and bad recordings. If we are talking about speech only, you could divide the signal into three frequency bands (with a time-domain filter, not an FFT) to give you an idea of how much is noise (the high and low bands) and how much is sound you care about (the center band).
Again, though, I would use an AGC during recording, and if the AGC finds it needs to set the input gain too high, it's probably a bad recording.
| 0 | 0 | 0 | 1 |
2013-06-26T14:37:00.000
| 3 | 0.066568 | false | 17,323,142 | 0 | 0 | 1 | 2 |
Is there any way to algorithmically determine audio quality from a .wav or .mp3 file?
Basically I have users with diverse recording setups (i.e. they are from all over the world and I have no control over them) recording audio to mp3/wav files. At which point the software should determine whether their setup is okay or not (tragically, for some reason they are not capable of making this determination just by listening to their own recordings, and so occasionally we get recordings that are basically impossible to understand due to low volume or high noise).
I was doing a volume check to make sure the microphone level was okay; unfortunately this misses cases where the volume is high but the clarity is low. I'm wondering if there is some kind of standard scan I can do (ideally in Python) that detects when there is a lot of background noise.
I realize one possible solution is to ask them to record total silence and then compare to the spoken recording and consider the audio "bad" if the volume of the "silent" recording is too close to the volume of the spoken recording. But that depends on getting a good sample from the speaker both times, which may or may not be something I can depend on.
So I'm wondering if instead there's just a way to scan through an audio file (these would be ~10 seconds long) and recognize whether the sound file is "noisy" or clear.
|
django i18n msgstr quote is escaped twice
| 23,383,849 | 0 | 1 | 190 | 0 |
python,django,escaping
|
The problem is resolved with django 1.6
You can update with :
sudo pip install -U django
| 0 | 0 | 0 | 0 |
2013-06-26T18:57:00.000
| 1 | 0 | false | 17,328,275 | 0 | 0 | 1 | 1 |
Some of my texts are escaped twice after upgrading from django 1.4 to django 1.5
For instance one label in my template "{{ field.label_tag }}" is displayed as "Email ou nom d’utilisateur".
Is there something to change in the settings to avoid the double escaping?
The text "Email ou nom d'utilisateur" comes file django.po
This {{ field.label_tag }} come from the file signin_form.html of userena package Vers 1.2.1
"Email ou nom d'utilisateur" it is the traduction of "Email or username" in french, this come from the traduction in django.po
_(u"Email or username"), come from the file form.py line 147 of package userena
|
Navigate to a specific module in PyCharm
| 17,909,331 | 3 | 4 | 1,264 | 0 |
python,django,pycharm
|
I feel your problem. All you have to do is Ctrl + click on the definition. Please note, however, that this does not open the actual files: it does not redirect you to the actual function, but rather to a skeleton of the function.
If you want to go to the actual function, you will need to get to it by clicking on External Libraries in your sidebar and doing a search.
| 0 | 0 | 0 | 0 |
2013-06-26T20:41:00.000
| 1 | 0.53705 | false | 17,330,079 | 1 | 0 | 1 | 1 |
I use PyCharm as my IDE for working with Django. So far, it's navigation shortcuts have proven very useful. I can go to a specific (project) file with Ctrl+Shift+N, I can go to any class definition with Ctrl+N and can go to any symbol with Ctrl+Shift+Alt+N.
This is great, but lately I've seen that it would be very useful too to have a shortcut to move to a specific external (or project) module.
Is there any shortcut where I can type, for example, django.contrib and be shown the modules inside the django.contrib package, or base64 and be shown the modules matching base64, just as easily as I can go to a specific symbol, class or file?
|
how to add field for multiple image attachment in openerp module
| 17,343,886 | 0 | 0 | 1,243 | 0 |
python,eclipse,openerp
|
Add a many2many field relating to ir.attachment. Check the "Send by Email" button in invoices: it opens a wizard in which we can add many attachments as well as the email body.
For example, add a many2many field relating to ir.attachment and, in the XML line of the field, specify widget="many2many_binary".
I don't know whether it is possible to show the images themselves through such a many2many field.
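A rough OpenERP 7-style sketch of that idea; the model, relation table and field names are made up for illustration:

```python
from openerp.osv import osv, fields

class my_custom_doc(osv.osv):
    _name = 'my.custom.doc'
    _columns = {
        'name': fields.char('Name', size=64),
        # one record can then carry several attached files/images
        'attachment_ids': fields.many2many(
            'ir.attachment', 'my_custom_doc_attachment_rel',
            'doc_id', 'attachment_id', 'Attachments'),
    }

# In the form view XML, render it with the binary widget:
# <field name="attachment_ids" widget="many2many_binary"/>
```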
| 0 | 0 | 0 | 0 |
2013-06-27T10:44:00.000
| 1 | 0 | false | 17,341,034 | 0 | 0 | 1 | 1 |
Hi, I have created a custom OpenERP module having several fields. I also have a field for attaching an image file. But now I need a field that has the ability to attach multiple images. How can I do this?
Hoping for suggestions.
|
How to authenticate a user in a RESTful api and get their user id? (Tornado)
| 17,350,408 | 0 | 1 | 513 | 0 |
python,rest,authentication,tornado,userid
|
I am assuming that your authentication function talks to a database and that each page in your app hits the database one or more times.
With that in mind, you should probably just authenticate each request. Many cloud/web applications have multiple database queries per page and run just fine. So when performance does get to be problem in your app (it probably won't for a long time), you'll likely already have an average of n queries per page where n is greater than 1. You can either work on bringing down that average or work on making those queries faster.
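A minimal Tornado sketch of authenticating on every request; the DB lookup helper is hypothetical:

```python
import tornado.web

class BaseHandler(tornado.web.RequestHandler):
    def get_current_user(self):
        user_id = self.get_secure_cookie('user_id')
        if not user_id:
            return None
        # one small query per request; usually negligible next to the page's own queries
        return load_user(int(user_id))          # hypothetical DB helper

class ProfileHandler(BaseHandler):
    @tornado.web.authenticated
    def get(self):
        self.write({'id': self.current_user['id']})
```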
| 0 | 1 | 0 | 0 |
2013-06-27T16:17:00.000
| 1 | 0 | false | 17,348,253 | 0 | 0 | 1 | 1 |
I would like to maintain statelessness but I also don't want to call my login function on each authenticated request. Would using tornado's secure cookie functionality be feasible for storing the userid in each request for a mobile app? I'm trying to keep performance in mind, so although basic http authentication would work, I dont want to call a login function on each request to get the users id.
|
Long-running I/O-bound processes in AppEngine: tasks or threads?
| 17,354,787 | 0 | 2 | 193 | 0 |
python,google-app-engine,asynchronous
|
It depends on how long the "interaction" takes. App Engine has a limit of 60 seconds per HTTP request.
If your external systems send data periodically, then I would advise grabbing the data in small chunks to respect the 60-second limit. Aggregate those into blobs and then process the data periodically using tasks.
| 0 | 1 | 0 | 0 |
2013-06-27T18:31:00.000
| 1 | 1.2 | true | 17,350,684 | 0 | 0 | 1 | 1 |
My Python AppEngine app interacts with slow external systems (think receiving data from narrow-band connections). Half-hour-long interactions are a norm. I need to run 10-15 of such interactions in parallel.
My options are background tasks and "background threads" (not plain Python threads). Theoretically they look about the same. I'd stick with tasks since background threads don't run on the local development server.
Are there any significant advantages of one approach over the other?
|
Django Dynamic Scraper Project does not run on windows even though it works on Linux
| 17,374,282 | 0 | 0 | 194 | 0 |
python,django,web-scraping,scraper,scraperwiki
|
Step # 1
download django-dynamic-scraper-0.3.0-py2.7.tar.gz file
Step # 2
Unzip it and change the name of the folder to:
django-dynamic-scraper-0.3.0-py2.7.egg
Step # 3
paste the folder into C:\Python27\Lib\site-packages
| 0 | 1 | 0 | 0 |
2013-06-28T11:53:00.000
| 1 | 0 | false | 17,364,120 | 0 | 0 | 1 | 1 |
I am trying to make a project in dynamic django scraper. I have tested it on linux and it runs properly. When I try to run the command: syndb i get this error
/*****************************************************************************************************************************/
python : WindowsError: [Error 3] The system cannot find the path specified: 'C:\Python27\l
ib\site-packages\django_dynamic_scraper-0.3.0-py2.7.egg\dynamic_scraper\migrations/.'
At line:1 char:1
+ python manage.py syncdb
+ ~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (WindowsError: [...migrations/.':String) [],
RemoteException
+ FullyQualifiedErrorId : NativeCommandError
/*****************************************************************************************************************************/
The admin server runs properly with the command python manage.py runserver
Kindly guide me on how I can remove this error.
|
Scraping Contact Information from Several Unique Sites with Python
| 17,366,729 | 1 | 3 | 2,766 | 0 |
python,web-scraping,beautifulsoup,screen-scraping
|
In most countries the telephone number follows one of a very few well defined patterns that can be matched with a simple regexp - likewise email addresses have an internationally recognised format - simply scrape the homepage, contacts or contact us page and then parse with regular expressions - you should easily achieve better than 90% accuracy.
Alternatively of course you simply submit the restaurant name and town to the local equivalent of the Yellow Pages web site.
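A rough regex-based sketch of the answer's suggestion; the phone pattern is a loose North-American-style guess, the URL is a placeholder, and both would need adjusting per country:

```python
import re
import requests

EMAIL_RE = re.compile(r'[\w.+-]+@[\w-]+\.[\w.-]+')
PHONE_RE = re.compile(r'(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}')

html = requests.get('http://example-restaurant.com/contact').text
print('emails:', set(EMAIL_RE.findall(html)))
print('phones:', set(PHONE_RE.findall(html)))
```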
| 0 | 0 | 1 | 1 |
2013-06-28T14:03:00.000
| 2 | 0.099668 | false | 17,366,528 | 0 | 0 | 1 | 1 |
I'd like to scrape contact info from about 1000-2000 different restaurant websites. Almost all of them have contact information either on the homepage or on some kind of "contact" page, but no two websites are exactly alike (i.e., there's no common pattern to exploit). How can I reliably scrape email/phone # info from sites like these without specifically pointing the Python script to a particular element on the page (i.e., the script needs to be structure agnostic, since each site has a unique HTML structure, they don't all have, e.g., their contact info in a "contact" div).
I know there's no way to write a program that will be 100% effective, I'd just like to maximize my hit rate.
Any guidance on this—where to start, what to read—would be much appreciated.
Thanks.
|
How does Django handle foreignKeys internally?
| 17,369,468 | 3 | 0 | 38 | 0 |
python,django,metaprogramming,metaclass
|
The field name in the model has _id appended to it in the table, and it stores the PK of the foreign model (as a FK normally would).
When the related field is accessed on a model, Django performs a query to retrieve the foreign model from the database.
When a model is assigned to the related field, Django reads the PK of the model and assigns it to the backing field in the table.
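In code, the three points look roughly like this (the model names are invented for illustration):

```python
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Book(models.Model):
    author = models.ForeignKey(Author)    # DB column is "author_id", holding Author's PK

book = Book.objects.get(pk=1)
book.author_id    # the raw PK stored on the row, no extra query
book.author       # triggers a query to fetch the Author instance (then caches it)
book.author = Author.objects.get(pk=2)   # assignment just updates author_id
```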
| 0 | 0 | 0 | 0 |
2013-06-28T16:27:00.000
| 1 | 1.2 | true | 17,369,422 | 0 | 0 | 1 | 1 |
I am curious how Django handles model relationships at the object level, because I am working on building a custom JSON serializer and I need to understand this so I can properly handle nested serialization. I am almost positive I will have to dive into some of the internals of Python, but that will not be too big of a deal.
|
Application producing invalid JSON
| 17,406,851 | 0 | 1 | 131 | 0 |
python,json,flask,ascii,octal
|
I ended up passing a urlencoded cookie instead of json. This is a hack. I am not really satisfied with this fix right now.
| 0 | 0 | 0 | 0 |
2013-06-28T20:42:00.000
| 1 | 1.2 | true | 17,373,333 | 1 | 0 | 1 | 1 |
I have written an application using flask. Part of the application creates a dictionary and then the dictionary gets parsed into json(string) with json.dumps. The string then gets stored as a cookie. Everything was working fine in development.
I set up a production environment and when the above process takes place, I am unable to read the cookie with javascript. Upon examining the cookie, I can see that an ASCII octal character for comma has been added: \054.
There are supposedly no differences between my development and production environments. I did have a newer version of flask in production and read that they changed how cookies are stored, so I blew away flask 0.10.1 and installed 0.9 which is what is on my development environment, but the problem persists.
Any ideas where this comma is being replaced by the octal code?
|
Python: Create and return an SQLite DB as a web request result
| 17,382,483 | 1 | 0 | 169 | 1 |
python,django,sqlite
|
I'm not sure you can get at the contents of a :memory: database to treat it as a file; a quick look through the SQLite documentation suggests that its API doesn't expose the :memory: database to you as a binary string, or a memory-mapped file, or any other way you could access it as a series of bytes. The only way to access a :memory: database is through the SQLite API.
What I would do in your shoes is to set up your server to have a directory mounted with ramfs, then create an SQLite3 database as a "file" in that directory. When you're done populating the database, return that "file", then delete it. This will be the simplest solution by far: you'll avoid having to write anything to disk and you'll gain the same speed benefits as using a :memory: database, but your code will be much easier to write.
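A rough Django-view sketch of that idea, writing the temporary SQLite file to a RAM-backed location; the path, table and filenames are assumptions (on most Linux boxes /dev/shm is a tmpfs mount):

```python
import os
import sqlite3
import tempfile
from django.http import HttpResponse

def export_view(request):
    fd, path = tempfile.mkstemp(suffix='.sqlite', dir='/dev/shm')   # lives in RAM, not on disk
    try:
        conn = sqlite3.connect(path)
        conn.execute('CREATE TABLE export (id INTEGER PRIMARY KEY, value TEXT)')
        conn.executemany('INSERT INTO export (value) VALUES (?)', [('a',), ('b',)])
        conn.commit()
        conn.close()
        with open(path, 'rb') as f:
            response = HttpResponse(f.read(), content_type='application/x-sqlite3')
        response['Content-Disposition'] = 'attachment; filename="export.sqlite"'
        return response
    finally:
        os.close(fd)
        os.remove(path)
```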
| 0 | 0 | 0 | 0 |
2013-06-29T16:01:00.000
| 2 | 1.2 | true | 17,382,053 | 0 | 0 | 1 | 1 |
In my python/django based web application I want to export some (not all!) data from the app's SQLite database to a new SQLite database file and, in a web request, return that second SQLite file as a downloadable file.
In other words: The user visits some view and, internally, a new SQLite DB file is created, populated with data and then returned.
Now, although I know about the :memory: magic for creating an SQLite DB in memory, I don't know how to return that in-memory database as a downloadable file in the web request. Could you give me some hints on how I could reach that? I would like to avoid writing stuff to the disc during the request.
|
Database design, adding an extra column versus converting existing column with a function
| 17,393,525 | 1 | 0 | 50 | 1 |
python,mysql,django
|
Having a string-valued PK should not be a problem in any modern database system. A PK is automatically indexed, so when you perform a look-up with a condition like table1.pk = 'long-string-key', it won't be a string comparison but an index look-up. So it's ok to have string-valued PK, regardless of the length of the key values.
In any case, if you need an additional column with all unique values, then I think you should just add a new column.
| 0 | 0 | 0 | 0 |
2013-06-30T18:03:00.000
| 1 | 1.2 | true | 17,393,291 | 0 | 0 | 1 | 1 |
suppose there was a database table with one column, and it's a PK. To make things more specific this is a django project and the database is in mysql.
If I needed an additional column with all unique values, should I create a new unique field with unique integers, or just write a hash-like function to convert the existing PK of each existing row (model instance) into a new unique value? The current PK is a varchar/string.
Creating a new column consumes more memory, but I think writing a new function and converting fields frequently has disadvantages as well. Any ideas?
|
Cassandra-Django python application approach
| 17,403,637 | 3 | 1 | 410 | 1 |
python,django,orm,cassandra
|
There's an external backend for Cassandra, but it has some issues with the authentication middleware, which doesn't handle users correctly in the admin. If you use a non-relational database, you lose a lot of goodies that django has. You could try using Postgres' nosql extension for the parts of your data that you want to store in a nosql'y way, and the regular Postgres' tables for the rest.
| 0 | 0 | 0 | 0 |
2013-07-01T11:25:00.000
| 2 | 1.2 | true | 17,403,346 | 0 | 0 | 1 | 1 |
I am working on developing a Django application with Cassandra as the back-end database. While Django provides an ORM for SQL databases, I wonder if there is anything similar for Cassandra.
What would be the best approach to load the schema into the Cassandra server and perform CRUD operations?
P.S. I am a complete beginner with Cassandra.
|
openerp not loading on localhost
| 17,486,874 | 0 | 0 | 1,240 | 0 |
python,eclipse,openerp
|
Verify that the OpenERP service is running on your computer. You can verify this by clicking on the Taskbar -> task manager -> Services.
Look for the OpenERP service and start it if it is not running.
A problem might have made it fail to start. There might be errors with your custom module.
Developing custom modules on Windows is more tedious than on Linux, where you can run the server in terminal mode and view the logged output directly on the console.
| 0 | 0 | 1 | 0 |
2013-07-02T07:21:00.000
| 1 | 0 | false | 17,419,724 | 0 | 0 | 1 | 1 |
Hi, I have been working on OpenERP 7 (Windows 7) custom module creation. I have been loading the OpenERP server through localhost:8069. But today the application failed to start and it's generating the error "Oops! Google Chrome could not connect to localhost:8069". What should I do now to fix this issue?
Please help.
Hoping for suggestions.
|
Selenium versus BeautifulSoup for web scraping
| 55,484,972 | 3 | 55 | 40,675 | 0 |
javascript,python,selenium,beautifulsoup
|
I would recommend using Selenium for things such as interacting with web pages, whether in a full-blown browser or a browser in headless mode, such as headless Chrome. I would also say that Beautiful Soup is better for observing and writing statements that rely on whether an element is found, or WHAT is found, and then using Selenium to execute interactive tasks with the page if the user desires.
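A common hybrid sketch: let Selenium do the clicking, then hand the rendered HTML to BeautifulSoup (the URL and selectors are placeholders):

```python
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Chrome()
driver.get('http://example.com/page-with-js-content')
driver.find_element_by_css_selector('#load-more').click()   # the interaction needs Selenium

soup = BeautifulSoup(driver.page_source, 'html.parser')     # observing/parsing is nicer in BeautifulSoup
values = [el.get_text() for el in soup.select('.result')]
driver.quit()
```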
| 0 | 0 | 1 | 0 |
2013-07-02T21:19:00.000
| 3 | 0.197375 | false | 17,436,014 | 0 | 0 | 1 | 1 |
I'm scraping content from a website using Python. First I used BeautifulSoup and Mechanize on Python but I saw that the website had a button that created content via JavaScript so I decided to use Selenium.
Given that I can find elements and get their content using Selenium with methods like driver.find_element_by_xpath, what reason is there to use BeautifulSoup when I could just use Selenium for everything?
And in this particular case, I need to use Selenium to click on the JavaScript button so is it better to use Selenium to parse as well or should I use both Selenium and Beautiful Soup?
|
Testing concurrent access in GAE
| 17,441,520 | 1 | 0 | 221 | 0 |
google-app-engine,python-2.7,app-engine-ndb
|
Please vote if it solves your problem :)
GAE works like this:
You can have multiple instances of a program with separate code spaces - meaning one instance has no access to another instance.
You can have multiple threads in a program instance if you mark the code as thread-safe - meaning each thread has access to the same code/memory (the counter in your case) - so you need locking to avoid conflicts.
Memcache is synchronized - an update of a value is available to all programs and their threads - there are no concurrent races - meaning you can read the recent cache value and track whether it changed during your changes.
How to simulate concurrent access to a piece of code? You should not simulate it; you should use explicit locking at the thread or program level, since it is very hard to simulate concurrent races - it is not known who will win a program or thread race, since in each environment (Linux, Windows, Python) the result is undefined.
| 0 | 1 | 0 | 0 |
2013-07-03T05:27:00.000
| 1 | 0.197375 | false | 17,440,323 | 0 | 0 | 1 | 1 |
Is it possible to simulate concurrent access to a piece of code in Google App Engine? I am trying to unit test a piece of code that increments a counter. It is possible that the code will be used by different instances of the app concurrently and although I have made the datastore access sections transactional and also used memcache cas I would feel better if there was some way to test it.
I have tried setting up background threads but Testbed seems to be creating a new environment for each thread.
|
How to create field through func in openerp?
| 17,447,105 | 0 | 1 | 198 | 0 |
python,eclipse,openerp
|
You can create a field from a function: you have to create the field in the object 'ir.model.fields'.
If you are creating a simple field like float, char or boolean, then you have to give values for Field Name, Label and Model (the object for which you want to create the field); if it is a many2one or many2many field, then you also have to give the Object Relation.
Hope this helps.
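A rough OpenERP 7-style sketch of a button method that creates a field through ir.model.fields; all names and values below are illustrative assumptions:

```python
def action_add_field(self, cr, uid, ids, context=None):
    fields_obj = self.pool.get('ir.model.fields')
    model_obj = self.pool.get('ir.model')
    model_ids = model_obj.search(cr, uid, [('model', '=', 'my.custom.model')], context=context)
    fields_obj.create(cr, uid, {
        'name': 'x_new_field',              # custom fields must start with x_
        'model': 'my.custom.model',
        'model_id': model_ids[0],
        'field_description': 'New Field',
        'ttype': 'char',
    }, context=context)
    return True
```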
| 0 | 0 | 0 | 0 |
2013-07-03T11:26:00.000
| 1 | 0 | false | 17,446,703 | 0 | 0 | 1 | 1 |
Hi, I have created a button in my custom OpenERP module. I wanted to add a function to this button to create a field. I have added the function, but how do I add the functionality for creating fields? Please help.
|
Why doesn't Django support Single Table Inheritance?
| 17,460,337 | 0 | 0 | 147 | 0 |
python,django,database-design,frameworks,relational-database
|
One reason is possibly that Django does not (currently) have the ability to modify database tables after creation.
You can 'kind-of' do STI using proxy models. This will not allow you to have different fields on the different models, but it will allow you to attach different behaviour (via model methods) to different subclasses.
However, if you decide to create a subclass with extra fields, Django will not be able to update the database to reflect that.
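A proxy-model sketch of the "same table, different behaviour" idea (the class names are made up):

```python
from django.db import models

class Ticket(models.Model):
    kind = models.CharField(max_length=20)
    body = models.TextField()

class BugTicket(Ticket):
    class Meta:
        proxy = True            # no new table, no new columns: shares Ticket's table

    def triage(self):
        return 'route to engineering'
```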
| 0 | 0 | 0 | 0 |
2013-07-03T23:23:00.000
| 1 | 0 | false | 17,459,680 | 0 | 0 | 1 | 1 |
What is the rationale behind the decision to not support Single Table Inheritance in Django?
Is STI a bad design? Does it result in poor performance? Would it conflict with the Django ORM as it is?
Just wondering because it's been a missing feature for like ten years now and so there must have been a conscious decision made that it would never be supported.
|
after working in local server, how to move OpenERP on a remote server?
| 17,473,961 | 1 | 0 | 2,217 | 0 |
python,xml,postgresql,openerp
|
This is not a generally accepted way of doing customization in OpenERP. Usually, you should make a custom module that implements your customization when installed on the OpenERP server installation.
Are you using Windows or Linux? The concept here is to move all of the server addons files to the upsite server, including a dump of the database which can be restored on the upsite server.
Here's how.
First click the Manage databases at the login screen,
Do a backup database and save the generated dump file.
Install OpenERP on the upsite server (*major versions must match).
Copy the server addons folder, and upload to the upsite server's addon directory.
Restart openerp service.
Then restore the dump file from your backup location.
This is basically how you can mirror a "customized" openerp installation across servers.
Hope this helps.
| 0 | 0 | 1 | 0 |
2013-07-04T11:38:00.000
| 1 | 1.2 | true | 17,469,330 | 0 | 0 | 1 | 1 |
I installed OpenERP V7 on my local machine. I made modifications to the CSS. I also removed some menus, changed the labels of some windows and changed the position of some menus (one after the other, in the order decided by the customer).
The required work is done and runs well on premises. Now I'm looking for a way to move my work to the server while keeping the changes, given that I worked directly through the OpenERP interface.
Does anyone have an idea?
|
GAE does not release memory after handling request?
| 20,981,660 | 0 | 4 | 135 | 0 |
python,google-app-engine
|
As dragonx wrote, every handler that has run and every global variable and import is cached on GAE, so the longer your instance runs, the bigger it gets. You can reconfigure your app settings to create new instances and kill old ones faster. That will give you some chance to minimize that error.
That error is not necessarily caused by memory leaks. Many things affect it, so you should check your code, try to reconfigure your instance settings, and maybe change your instance type to a larger one.
| 0 | 0 | 0 | 0 |
2013-07-04T12:14:00.000
| 1 | 0 | false | 17,470,073 | 1 | 0 | 1 | 1 |
Why does this happen?
I run a task which uses a lot of memory - after the task has finished I would expect the memory to be released back to the instance.
However, this doesn't happen. The memory just keeps going up and up on subsequent execution of the task until eventually I get a soft memory warning.
What can I do about this? It just doesn't make sense. I have tried explicitly calling gc.collect() but this doesn't help.
|
Django DeleteView without confirmation template
| 17,475,364 | 1 | 30 | 21,411 | 0 |
python,django
|
Yes, just change the next parameter. In your returned response, make sure that the dictionary that you pass in has something like this: { 'next': '/<your_path_here>/' }, and make sure you commit the changes before adding the next parameter. You might want to change your view's get and post functions.
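One commonly used sketch is to make GET behave like POST so the confirmation template is never rendered; the model, URL name and import path (pre-Django 1.10) are assumptions:

```python
from django.core.urlresolvers import reverse_lazy   # older import path for reverse_lazy
from django.views.generic import DeleteView
from .models import Item                             # hypothetical model

class ItemDeleteView(DeleteView):
    model = Item
    success_url = reverse_lazy('item-list')

    def get(self, request, *args, **kwargs):
        # Skip the confirm page: treat GET exactly like POST and delete immediately.
        return self.post(request, *args, **kwargs)
```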
| 0 | 0 | 0 | 0 |
2013-07-04T17:03:00.000
| 5 | 0.039979 | false | 17,475,324 | 0 | 0 | 1 | 1 |
I am using Django DeleteView in a template and I've created a url & view.
Is it possible to skip the process of loading the _confirm_delete template and just post the delete immediately?
|
Moving database from PMA to Django
| 17,491,830 | 0 | 0 | 62 | 1 |
python,mysql,django,phpmyadmin
|
You can use the to_field attribute of a ForeignKey.
Django should detect this automatically if you use ./manage.py inspectdb, though.
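A small sketch of to_field pointing a ForeignKey at a non-PK (but unique) column; the model names are invented:

```python
from django.db import models

class Supplier(models.Model):
    code = models.CharField(max_length=10, unique=True)   # not the PK, but must be unique

class Product(models.Model):
    supplier = models.ForeignKey(Supplier, to_field='code')
```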
| 0 | 0 | 0 | 0 |
2013-07-05T14:54:00.000
| 1 | 0 | false | 17,491,720 | 0 | 0 | 1 | 1 |
I have an existing MySQL database that I set up in PMA; it has FKs that reference columns that are not primary keys. Now I am trying to move the database to Django and am having trouble, because when I try to set up the foreign keys in Django it automatically references the primary key of the table that I am attempting to reference, so the data doesn't match because column A and column B do not contain the same info. Is there a way to tell Django what column to reference?
|
What is the correct way to expose an AWS in an API without giving out your keys?
| 17,906,399 | 0 | 0 | 106 | 0 |
python,amazon-web-services,amazon-sqs
|
It depends on your identity requirements. If it's OK for your clients to have AWS accounts, you can give their accounts permission to send messages to your queue. If you want your own identity, then yes, you would need to build a service layer in front of AWS to broker API requests.
| 0 | 0 | 1 | 0 |
2013-07-06T21:39:00.000
| 1 | 0 | false | 17,507,395 | 0 | 0 | 1 | 1 |
Sorry about the awkward title.
I am building a Python API. Part of it involves sending and receiving data to an Amazon SQS to communicate with some stuff on an EC2 instance. I don't want to distribute the API with my amazon keys in it though.
What is the correct way around an issue like this? Do I have to write a separate layer that sits in front of SQS with my own authentication, or is there a way to add permissions to Amazon keys such that users could just send and receive messages to SQS but couldn't create additional queues or access any other web services?
|
where to place .htaccess on elastic beanstalk python
| 17,511,404 | 0 | 3 | 731 | 0 |
python,django,amazon-web-services,amazon-elastic-beanstalk
|
Apache's AllowOverride is set to None by default. You have to do some .ebextensions touches to change the default AllowOverride on your DocumentRoot folder.
You can verify this by connecting to your instance through ssh and check the httpd config.
| 0 | 0 | 0 | 0 |
2013-07-07T05:03:00.000
| 1 | 0 | false | 17,509,446 | 0 | 0 | 1 | 1 |
I'm trying to remove the www from the url, and normally I do this by using a .htaccess file with a rewrite rule. I don't know where to put this file in my elastic beanstalk folder structure, or where to have it created.
I've tried placing my .htaccess in
1. /var/www/html
2. django application folder
3. the same folder as the wsgi
4. the django templates folder
5. in the root of my project folder.
None of these have made any difference. I would be interested either where to place the .htaccess file, or another way to remove the www that might be django specific.
|
Why does Django's User Model set the email field as non-unique?
| 17,514,616 | 0 | 6 | 2,484 | 0 |
python,django,security,django-models
|
django.contrib.auth uses the username field to identify a user, not the email address, so there is no conflict if two users have the same email address.
Also, since the email address is not required, it is therefore blank or null in the database (neither of which make for good unique keys).
And for your other question - the password reset will reset the password of the user who requested it, because it is requested by user name.
Having two accounts with the same address can be quite handy. For example, perhaps one is an admin account and the other is a normal user.
| 0 | 0 | 0 | 0 |
2013-07-07T17:15:00.000
| 2 | 0 | false | 17,514,348 | 0 | 0 | 1 | 1 |
I am using Django's default User model and the email is not unique, now I have multiple users with the same email address.
You can have User_A with email address [email protected], and then a new user User_B can register with the same email address [email protected].
This doesn't make sense in any programming universe, and it will cause confusion with email-sending functionality, and possible wrong password resets (if a password reset link is sent, with two users sharing the same email address).
This doesn't hold an obvious security vulnerability as I see it because only the original user has control of the original email address, so the attacker will not receive the reset emails.
However, this could result in the original user User_A being locked out of his original account (if he forgets his password) and being prevented of issuing a password reset because Django attempts to reset the new user User_B only. Obviously User_A wants access to his account, not to User_B's account.
What is the justification?
Obviously the password reset functionality is linked with the email, so if I reset the password based on the email, which user (upon following the password reset link) will be reset?
How can I make the email field unique?
|
Appengine errors not appearing in logs
| 17,520,267 | 0 | 0 | 63 | 0 |
python,google-app-engine,error-handling
|
When I was taking some screenshots as tony asked in the comments, I found the solution.
These errors are all HEAD requests. Since my app doesn't support them, they generate a 405 HTTP response code which is shown on the dashboard as error but then in the logs they don't get the error icon. They just seem to be fine at first sight.
| 0 | 1 | 0 | 0 |
2013-07-07T19:36:00.000
| 1 | 1.2 | true | 17,515,574 | 0 | 0 | 1 | 1 |
I have the following problem:
I can see some mysterious errors on the Appengine Dashboard but when I go to the logs I can't find any relevant entries. Otherwise the URIs are working fine when I request them.
If I click on the links on the dashboard which take me to the logs with a prefilled regexp filter, the logs are empty.
I only have one guess:
When a request takes longer to load and the user closes the browser window/tab before the page has been loaded, these kinds of errors are generated but not logged. But I can't prove this assumption. This guess is based on what I sometimes see when developing locally with the SDK.
I use the python SDK. I only have one live version of the app.
Do you maybe have any clues what happens here? Thanks.
|
Scientific Reporting in Python
| 17,526,180 | 1 | 4 | 1,861 | 0 |
python,ipython,jinja2,ipython-notebook
|
Using the inline-modus of ipython notebook,
ipython notebook --pylab inline
you can execute your matplotlib-scripts in a browser interactively (thus generating your plots). Then go to
File -> Print View (in the notebook-menu, NOT the browser menu)
and save the generated HTML file (via the browser menu). This will include all the plots you generated before, as well as the Python code. Of course, you cannot modify these HTML files anymore without the notebook server in the background.
Is this what you mean?
| 0 | 0 | 0 | 0 |
2013-07-08T11:33:00.000
| 4 | 0.049958 | false | 17,525,595 | 0 | 0 | 1 | 1 |
I am working on a scientific python project performing a bunch of computations generating a lot of data.
The problem comes when reports have to be generated from these data, with images embedded (mostly computed with matplotlib). I'd like to use a python module or tool to be able to describe the reports and "build" HTML pages for these reports (or any format supported by a browser).
I was thinking about generating an ipython notebook but I was unable to find if there is a way to do so (except creating the json but I'm doubtful about this approach).
The other way is using Sphinx, a bit like matplotlib does, but I am not sure how I could really fine-tune the layouts of my various pages.
The last option is to use jinja2 templates (or django-templates or any template engine working) and embed matplotlib code inside.
I know it's vague but was unable to find any kind of reference.
|
What is the purpose of app.py in django apps?
| 17,533,814 | 3 | 0 | 2,402 | 0 |
python,django,web
|
Even though the details are wrong (there's no app.py in new Django projects), the question is still valid.
__init__.py is imported implicitly when importing a sub-module. So if something in __init__.py executes automatically with side effects, you might run into unintended consequences. Doing everything in app.py incurs a longer import, but separates package init from app init logic.
| 0 | 0 | 0 | 0 |
2013-07-08T18:35:00.000
| 3 | 0.197375 | false | 17,533,631 | 0 | 0 | 1 | 2 |
In Django 1.4 and above :
There is a new file called app.py in every django application. It defines the scope of the app and some initialization required when it is loaded.
Why don't they use __init__.py for the purpose? Any advantage over __init__.py approach? Can you link to some official documentation for the same?
|
What is the purpose of app.py in django apps?
| 17,535,359 | 2 | 0 | 2,402 | 0 |
python,django,web
|
As all the links you have provided clearly show, this is nothing at all to do with Django itself, but a convention applied by the third-party app django-oscar.
| 0 | 0 | 0 | 0 |
2013-07-08T18:35:00.000
| 3 | 1.2 | true | 17,533,631 | 0 | 0 | 1 | 2 |
In Django 1.4 and above :
There is a new file called app.py in every django application. It defines the scope of the app and some initialization required when it is loaded.
Why don't they use __init__.py for the purpose? Any advantage over __init__.py approach? Can you link to some official documentation for the same?
|
How to generate a secure temporary url to download file from Amazon S3?
| 17,759,017 | 1 | 0 | 1,631 | 0 |
python,amazon-s3
|
Make sure that the expiration time of the link is set to a very short time. Then make sure that you are communicating with the user via SSL, and that the link provided is SSL. When using an SSL connection, both the data from the page and the URL are encrypted, and no one 'sniffing' the data should be able to see anything.
The only other way to put a real lock-down on a file like that would be to aggressively check the log files generated by the S3 bucket and check the link for abuse. The problem, however, is that the traffic for your link may take several hours to make it into the logs; but depending on how long you want these links to last, that time delay may be acceptable. Then, assuming you find abuse, such as several different IP addresses hitting the link, you can stop the hits by renaming the file on S3.
The ultimate option is for your server to grab the data off S3 and spoon-feed it to the customer. Then it is impossible for anyone to get the file unless you authenticate them and they remain in the session. The big downer, of course, is that you are taxing your server and defeating half the reason S3 is cool, namely that you don't have to serve the file, S3 does. But if your server is on Amazon EC2, there is no cost in pulling from S3; only the download to the customer would be charged. Additionally, EC2 instances can access and download data from S3 at local-network speeds, and like I said, it's free.
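For reference, a boto 2 sketch of generating a short-lived signed URL (the bucket and key names are placeholders):

```python
from boto.s3.connection import S3Connection

conn = S3Connection()                       # picks up AWS keys from the environment/boto config
bucket = conn.get_bucket('my-private-bucket')
key = bucket.get_key('reports/export.pdf')

# Signed URL valid for 60 seconds; anyone holding it can download during that window,
# which is why the short expiry and SSL delivery matter.
url = key.generate_url(expires_in=60)
```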
| 0 | 0 | 1 | 0 |
2013-07-09T11:04:00.000
| 1 | 1.2 | true | 17,546,608 | 0 | 0 | 1 | 1 |
I am trying to give a user a temporary link to download files from an Amazon S3 bucket in Python.
I am using the generate_url method, which generates the URL for a specified time period.
My concern is that once this link is created, anyone, not just the intended user, can hit this URL in that time period and get the files. How can I prevent other people from getting access to the files?
|
Py.Test : Reporting and HTML output
| 17,595,687 | 34 | 26 | 41,617 | 0 |
python,reporting,pytest
|
I think you also need to specify the directory/file you want coverage for like py.test --cov=MYPKG --cov-report=html after which a html/index.html is generated.
| 0 | 0 | 0 | 1 |
2013-07-09T20:41:00.000
| 3 | 1.2 | true | 17,557,813 | 0 | 0 | 1 | 1 |
This is not a technical question at all really. However, I can not locate my .HTML report that is supposed to be generated using:
py.test --cov-report html pytest/01_smoke.py
I thought for sure it would place it in the parent location or the test script location. It does neither, and I have not been able to locate it. So I am thinking it is not being generated at all?
|
how do I track how many users visit my website
| 17,579,664 | 5 | 9 | 11,232 | 0 |
python,flask,pythonanywhere
|
PythonAnywhere Dev here. You also have your access log. You can click through this from your web app tab. It shows you the raw data about your visitors. I would personally also use something like Google Analytics. However you don't need to do anything to be able to just see your raw visitor data. It's already there.
| 0 | 0 | 0 | 0 |
2013-07-09T23:24:00.000
| 4 | 0.244919 | false | 17,559,967 | 0 | 0 | 1 | 1 |
I just deployed my first ever web app and I am curious if there is an easy way to track every time someone visits my website. I am sure there is, but how?
|
Can I scrape data from web pages when the data comes from JavaScript?
| 17,578,385 | 0 | 1 | 119 | 0 |
javascript,python,html
|
Although your question isn't very clear, I'm guessing that you are trying to access the JavaScript console.
In Google Chrome:
Press F12
Go to the 'console' tab
In Mozilla Firefox with Firebug installed:
Open Firebug
Go to the 'console' tab
From the console you can execute JavaScript queries (calling functions, accessing variables, etc.).
I hope this answered your question properly.
| 0 | 0 | 1 | 0 |
2013-07-10T18:29:00.000
| 3 | 0 | false | 17,578,253 | 0 | 0 | 1 | 1 |
I'm not exactly sure how to phrase my question but I'll give it my best shot.
If I load up a webpage, in the HTML it executes a JavaScript file. And if I view the page source I can see the source of that JavaScript (though it's not very well formatted and hard to understand).
Is there a way to run the JavaScript from e.g. Python code, without going through the browser? i.e if I wanted to access a particular function in that JavaScript, is there a clean way to call just that from a Python script, and read the results?
For example... a webpage displays a number that I want access to. It's not in the page source because it's a result from a JavaScript call. Is there a way to call that JavaScript from Python?
|
Django runserver bound to 0.0.0.0, how can I get which IP took the request?
| 17,599,320 | 0 | 6 | 10,672 | 0 |
python,django,manage.py
|
If your goal is to ensure the load balancer is working correctly, I suppose it's not an absolute requirement to do this in the application code. You can use a network packet analyzer that can listen on a specific interface (say, tcpdump -i <interface>) and look at the output.
| 0 | 1 | 0 | 0 |
2013-07-11T13:44:00.000
| 2 | 0 | false | 17,595,066 | 0 | 0 | 1 | 1 |
I'm running a temporary Django app on a host that has lots of IP addresses. When using manage.py runserver 0.0.0.0:5000, how can the code see which of the many IP addresses of the machine was the one actually hit by the request, if this is even possible?
Or to put it another way:
My host has IP addresses 10.0.0.1 and 10.0.0.2. When runserver is listening on 0.0.0.0, how can my application know whether the user hit http://10.0.0.1/app/path/etc or http://10.0.0.2/app/path/etc?
I understand that if I was doing it with Apache I could use the Apache environment variables like SERVER_ADDR, but I'm not using Apache.
Any thoughts?
EDIT
More information:
I'm testing a load balancer using a small Django app. This app is listening on a number of different IPs and I need to know which IP address is hit for a request coming through the load balancer, so I can ensure it is balancing properly.
I cannot use request.get_host() or the request.META options, as they return what the user typed to hit the load balancer.
For example: the user hits http://10.10.10.10/foo and that will forward the request to either http://10.0.0.1/foo or http://10.0.0.2/foo - but request.get_host() will return 10.10.10.10, not the actual IPs the server is listening on.
Thanks,
Ben
|
Python web crawler multithreading and multiprocessing
| 17,601,331 | 0 | 0 | 763 | 0 |
python,multithreading,performance,multiprocessing,web-crawler
|
Look into grequests; it doesn't do actual multi-threading or multiprocessing, but it scales much better than both.
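A minimal grequests sketch (the URLs and the concurrency level are placeholders):

```python
import grequests

urls = ['http://example.com/page%d' % i for i in range(100)]
pending = (grequests.get(u) for u in urls)
for response in grequests.map(pending, size=20):   # up to 20 concurrent connections
    if response is not None:
        print(response.url, response.status_code)
```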
| 0 | 1 | 1 | 0 |
2013-07-11T18:51:00.000
| 1 | 0 | false | 17,601,124 | 0 | 0 | 1 | 1 |
Brief idea: my web crawler has 2 main jobs, a Collector and a Crawler. The collector will collect all of the URL items for each site and store non-duplicated URLs. The crawler will grab the URLs from storage, extract the needed data and store it back.
2 Machines
Bot machine -> 8 cores, physical Linux OS (no VM on this machine)
Storage machine -> MySQL with clustering (VM for clustering), 2 databases (URL and data); the URL database is on port 1 and the data database on port 2
Objective: crawl 100 sites and try to reduce the bottleneck situations
First case: the Collector *requests (urllib) all sites, collects the URL
items for each site and *inserts each non-duplicated URL into the
Storage machine on port 1. The Crawler *gets the URLs from storage port 1,
*requests each site, extracts the needed data and *stores it back on port 2.
This causes a connection bottleneck for both the web-site requests and the MySQL connections.
the url on my own mini database file system.There is no *read a huge
file(use os command technic) just *write (append) and *remove header
of the file.
This cause the connection request web sites and I/O (read,write) bottle neck (may be)
Both case also have the CPU bound cause of collecting and crawling 100 sites
As I heard for I/O bound use multithreading, CPU bound use multiprocessing
How about both ? scrappy ? any idea or suggestion ?
|
table specific data in django models
| 17,604,930 | 1 | 0 | 66 | 0 |
python,django
|
In an SQL database it would be easiest to create an additional table which holds a reference to (for example) the thread table and the user table. Call it (for example) ThreadVisitors.
Whenever a user visits a thread you create an entry in that table for that user and the thread (you could add a unique constraint on the (thread, user) pair). That way, getting all visitors for a given thread is as simple as running a count query for that thread. Some indexes would be helpful here, and if performance is an issue then you should cache the count queries.
You will probably need such a table for each model.
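A rough Django sketch of the table described above; the model and field names are my own guesses, not taken from the original post.

    # Sketch only: one row per (thread, user) pair.
    from django.contrib.auth.models import User
    from django.db import models

    class ThreadVisitor(models.Model):
        thread = models.ForeignKey('Thread')      # the forum Thread model
        user = models.ForeignKey(User)
        visited_at = models.DateTimeField(auto_now_add=True)

        class Meta:
            unique_together = ('thread', 'user')

    # Total visitors for a thread is then a single count query:
    # ThreadVisitor.objects.filter(thread=some_thread).count()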
| 0 | 0 | 0 | 0 |
2013-07-11T22:33:00.000
| 1 | 1.2 | true | 17,604,692 | 0 | 0 | 1 | 1 |
Where should I store table-specific data in Django models?
Here is the scenario:
I am making a forum. In each thread, there will be the following two kinds of visitors.
Total Visitors
Current Visitors
My model is designed in the following manner
Category model (will contain sub-categories)
Sub-Category model (will contain sub-categories, foreign key points to Category)
Thread model (will contain individual threads, foreign key points to Sub-Category)
Post model (will contain individual posts/messages, foreign key points to Thread)
Now, I will have visitors at every level: a user visiting different threads/sub-categories/categories. I want to capture the number of visitors.
Can anyone suggest where this kind of data fits in a Django model?
|
Checking login status at every page load in CherryPy
| 17,606,832 | 2 | 1 | 313 | 0 |
python,web,cherrypy
|
Nevermind, folks. Turns out that this isn't so bad to do; it is simply a matter of doing the following:
Write a function that does what I want.
Make the function in to a custom CherryPy Tool, set to the before_handler hook.
Enable that tool globally in my config, as sketched below.
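Here is a rough sketch of those three steps; the check_login name and the session handling inside it are my own placeholders, not the actual implementation.

    # Sketch only: a CherryPy Tool that runs before every handler.
    import cherrypy

    def check_login():
        # Restore login state from the session (or from cookies) on every request.
        if 'userid' in cherrypy.session:
            cherrypy.request.user = cherrypy.session['userid']
        else:
            cherrypy.request.user = None   # placeholder: check the cookie token here instead

    # Step 2: wrap the function as a Tool on the before_handler hook.
    cherrypy.tools.check_login = cherrypy.Tool('before_handler', check_login)

    # Step 3: enable it globally in the config.
    config = {'/': {'tools.sessions.on': True,
                    'tools.check_login.on': True}}
    # cherrypy.quickstart(Root(), '/', config)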
| 0 | 0 | 0 | 0 |
2013-07-12T02:35:00.000
| 1 | 1.2 | true | 17,606,646 | 0 | 0 | 1 | 1 |
I am in the midst of writing a web app in CherryPy. I have set it up so that it uses OpenID auth, and can successfully get user's ID/email address.
I would like to have it set so that whenever a page loads, it checks to see if the user is logged in, and if so displays some information about their login.
As I see it, the basic workflow should be like this:
Is there a userid stored in the current session? If so, we're golden.
If not, does the user have cookies with a userid and login token? If so, process them, invalidate the current token and assign a new one, and add the user information to the session. Once again, we're good.
If neither condition holds, display a "Login" link directing to my OpenID form.
Obviously, I could just include code (or a decorator) in every public page that would handle this. But that seems very... irritating.
I could also set up a default index method in each class, which would do this and then use a (page-by-page) helper method to display the rest of the content. But this seems like a nightmare when it comes to the occasional exposed method other than index.
So, my hope is this: is there a way in CherryPy to set some code to be run whenever a request is received? If so, I could use this to have it set up so that the current session always includes all the information I need.
Alternatively, is it safe to create a wrapper around the cherrypy.expose decorator, so that every exposed page also runs this code?
Or, failing either of those: I'm also open to suggestions of a different workflow. I haven't written this kind of system before, and am always open to advice.
Edit: I have included an answer below on how to accomplish what I want. However, if anybody has any workflow change suggestions, I would love the advice! Thanks all.
|
Turbogears on bluehost
| 17,742,500 | 1 | 0 | 72 | 0 |
python,apache,cherrypy,turbogears
|
From what I can see on their website, Bluehost supports using FastCGI.
In that case you can deploy your applications using flup.
flup.server.fcgi.WSGIServer lets you mount any WSGI application (like a TurboGears app) and serve it with FastCGI.
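A minimal sketch of that setup; the placeholder WSGI app below stands in for the real TurboGears application object.

    # Sketch only: serve a WSGI application through flup's FastCGI server.
    from flup.server.fcgi import WSGIServer

    def application(environ, start_response):
        # Placeholder WSGI app; replace with your TurboGears application object.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['Hello from FastCGI\n']

    if __name__ == '__main__':
        WSGIServer(application).run()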
| 0 | 0 | 0 | 1 |
2013-07-12T09:50:00.000
| 1 | 1.2 | true | 17,612,117 | 0 | 0 | 1 | 1 |
Has anyone successfully installed TurboGears or CherryPy on BlueHost? There are listings on the web, but none of them are viable or the links to the scripts are broken.
However, Bluehost Tech support claims that some folks are running TurboGears successfully on their shared hosting.
Anyone who has a setup or knows how, to install TurboGears or CherryPy on Bluehost, will be very appreciated if he/she could share their know-how.
Alternatively, if anyone knows another pythonic method that can be installed on Bluehost is welcome to share it with me.
Many thanks,
DK
|
Enable programmatic billing for Amazon AWS through API (python)
| 17,630,560 | 0 | 0 | 356 | 0 |
python,amazon-web-services,boto
|
Currently, there is no API for doing this. You have to log into your billing preference page and set it up there. I agree that an API would be a great feature to add.
| 0 | 0 | 1 | 1 |
2013-07-13T05:51:00.000
| 1 | 1.2 | true | 17,627,389 | 0 | 0 | 1 | 1 |
I was wondering if anyone knew whether it is possible to enable programmatic billing for Amazon AWS through the API. I have not found anything on this, and I even went broader and looked for billing preferences or account settings through the API, but still had no luck. I assume the API does not have this functionality, but I figured I would ask.
|
Jython and Python Lib Dependencies
| 18,564,145 | 0 | 0 | 312 | 0 |
python,maven,dependencies,jython
|
ObsPy relies on ctypes which works only for CPython - so I'm afraid you won't get it running under Jython.
| 0 | 1 | 0 | 0 |
2013-07-13T21:36:00.000
| 1 | 0 | false | 17,634,435 | 1 | 0 | 1 | 1 |
I am contributing to an open source Java project, and I am trying to use the Python tool ObsPy via the Jython PythonInterpreter. My problem is that I am having trouble figuring out how to include the ObsPy library in the Jython build path. Is it possible to use Maven to include the ObsPy library in such a way that the Jython runtime will recognize it?
Thanks, and sorry I could not provide any existing code on this issue.
|
accessing a MBean which is boolean in wlst
| 17,650,190 | 0 | 1 | 938 | 0 |
python,weblogic,mbeans,wlst
|
Have you tried using just the get method, like this:
var=get('PausedForForwarding'); print var
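A slightly fuller WLST (Jython) sketch; note that the pauseForwarding()/resumeForwarding() operation names below are placeholders, so confirm the exact operation names with ls() on the SAF agent runtime MBean before using them.

    # Sketch only: branch on the boolean attribute and invoke an operation.
    paused = get('PausedForForwarding')
    print 'PausedForForwarding =', paused

    if paused:
        cmo.resumeForwarding()    # placeholder operation name, confirm with ls()
    else:
        cmo.pauseForwarding()     # placeholder operation name, confirm with ls()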
| 0 | 0 | 0 | 0 |
2013-07-13T23:57:00.000
| 3 | 0 | false | 17,635,271 | 0 | 0 | 1 | 2 |
I need to access a boolean value under the Store and Forward Agent.
I am already inside the SAF_Agent, and once I do an ls(), I see a list of operations and attributes. I can perform the operations, but I am unable to get one of the attributes.
The attribute is PausedForForwarding, a boolean (true or false) which currently shows true, which means the SAF Agent is currently paused for forwarding.
I am trying to check the status using
cmo.getPausedForForwarding()
and other options as well, but with no luck. Depending on the status, I want to pause or resume the SAF_Agent.
Help needed !!!
|