Title stringlengths 11 to 150 | A_Id int64 518 to 72.5M | Users Score int64 -42 to 283 | Q_Score int64 0 to 1.39k | ViewCount int64 17 to 1.71M | Database and SQL int64 0 to 1 | Tags stringlengths 6 to 105 | Answer stringlengths 14 to 4.78k | GUI and Desktop Applications int64 0 to 1 | System Administration and DevOps int64 0 to 1 | Networking and APIs int64 0 to 1 | Other int64 0 to 1 | CreationDate stringlengths 23 to 23 | AnswerCount int64 1 to 55 | Score float64 -1 to 1.2 | is_accepted bool 2 classes | Q_Id int64 469 to 42.4M | Python Basics and Environment int64 0 to 1 | Data Science and Machine Learning int64 0 to 1 | Web Development int64 1 to 1 | Available Count int64 1 to 15 | Question stringlengths 17 to 21k |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Using Multiple Installations of Celery with a Redis Backend
| 12,089,960 | 2 | 13 | 4,720 | 0 |
python,redis,celery
|
I've used a Redis backend for Celery while also using the same Redis DB for prefixed cache data. I was doing this during development; I only used Redis for the result backend, not to queue tasks, and the production deployment ended up being all AMQP (Redis only for caching). I didn't have any problems and don't see why one would (other than performance issues).
For running multiple Celery projects with different task definitions, I think the issue would be if you have two different types of workers that can each handle only a subset of job types. Without separate databases, I'm not sure how the workers would be able to tell which jobs they could process.
I'd probably either want to make sure all workers had all task types defined and could process anything, or would want to keep the separate projects in separate databases. This wouldn't require installing anything extra; you'd just specify REDIS_DB=1 in one of your Celery projects. There might be another way to do this. I don't know for sure that multiple DBs are required, but it kind of makes sense.
If you're only using Redis for a result backend, maybe that would work for having multiple Celery projects on one Redis DB... I'm not really sure.
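For illustration, a minimal sketch of the separate-database idea, using Celery's classic settings names; the Redis host and port are assumptions:

```python
# Project A -- settings
BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'

# Project B -- same server, its own database number
BROKER_URL = 'redis://localhost:6379/1'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/1'
```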
| 0 | 1 | 0 | 0 |
2012-08-21T09:39:00.000
| 2 | 1.2 | true | 12,052,094 | 0 | 0 | 1 | 1 |
Is it possible to use the same Redis database for multiple projects using Celery? Like using the same database for multiple projects as a cache with a key prefix. Or do I have to use a separate database for every installation?
|
Use java library from python (Python wrapper for java library)
| 12,052,404 | 2 | 4 | 3,847 | 0 |
java,python
|
You can write a simple command-line Java program which calls the library and saves the results in a format you can read in Python; then you can call that program from Python using os.system.
Another option is to find Python libraries with equivalent functionality to the Java library: you can read Excel, XML and other files in Python, that's not a problem.
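A hedged sketch of that shell-out approach; extractor.jar, its arguments, and the JSON output format are all hypothetical placeholders:

```python
import json
import subprocess

def extract_data(input_path):
    # Run the Java wrapper; it is assumed to print JSON to stdout.
    output = subprocess.check_output(
        ["java", "-jar", "extractor.jar", input_path])
    return json.loads(output)

data = extract_data("report.xlsx")
```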
| 0 | 0 | 0 | 0 |
2012-08-21T09:47:00.000
| 3 | 0.132549 | false | 12,052,241 | 0 | 0 | 1 | 1 |
I have a Java library in jar form which can be used to extract data from files (Excel, XML etc). As it's in Java, it can be used only in Java applications. But I need the same library to be usable from Python projects as well.
I have tried py4j etc., which takes the objects from the JVM. But the library is not an executable and won't be 'run'. I have checked Jython, but I need the library to be accessible from Python projects.
I have thought about using automated Java-to-Python translators, but I would take that as the last resort.
Please suggest some way I can accomplish this.
|
Where is the Gunicorn config file?
| 54,821,323 | 0 | 53 | 65,764 | 0 |
python,flask,gunicorn
|
I did this after reading the docs:
When deploying my app through Gunicorn, there is usually a file called Procfile.
Open this file and add --timeout 600.
Finally, my Procfile would look like:
web: gunicorn app:app --timeout 600
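On the title question itself: Gunicorn does not create a config file for you; you write one and point Gunicorn at it with the -c flag. A minimal sketch (the file name and values are assumptions):

```python
# gunicorn.conf.py -- start with: gunicorn -c gunicorn.conf.py app:app
bind = "0.0.0.0:8000"
workers = 3
timeout = 600
```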
| 0 | 0 | 0 | 1 |
2012-08-21T21:39:00.000
| 5 | 0 | false | 12,063,463 | 0 | 0 | 1 | 1 |
The Gunicorn documentation talks about editing the config files, but I have no idea where they are.
Probably a simple answer :) I'm on Amazon Linux AMI.
|
Is there an add-on to auto compress files while uploading into Plone?
| 12,079,431 | 2 | 0 | 258 | 0 |
python,plone
|
As Maulwurfn says, there is no such add-on, but this would be fairly straightforward for an experienced developer to implement using a custom content type. You will want to be pretty sure that the specific file types you're hoping to store will actually benefit from compression (many modern file formats already include some compression, so simply zipping them won't shrink them much).
Also, unless you implement something complex like a client-side Flash uploader with built-in compression, Plone can only compress files after they've been uploaded, not before. So if you're hoping to make uploads quicker for users, rather than to minimize storage space, you're facing a somewhat more difficult challenge.
| 1 | 0 | 0 | 1 |
2012-08-22T05:42:00.000
| 1 | 1.2 | true | 12,066,923 | 0 | 0 | 1 | 1 |
Is there any add-on which will activate automatically while uploading files into the Plone site? It should compress the files and then store them. These can be image files like CAD drawings or any other types. Irrespective of the file type, beyond a specific size, they should get compressed and stored, rather than manually compressing the files and storing them. I am using Plone 4.1. I am aware of the CSS and JavaScript files which get compressed, but not of uploaded files. I am also aware of the 'image handling' in the 'Site Setup'.
|
Django multi-db: Route all writes to multiple databases
| 12,934,130 | 1 | 1 | 345 | 1 |
python,django,redundancy,webfaction,django-orm
|
I was looking for something similar. What I found is:
1) Try something like the Xeround cloud DB - it's built on MySQL and is compatible, but doesn't support savepoints. You have to disable those in (a custom) DB engine. The good thing is that they replicate at the DB level and provide automatic scalability and failover. Your app works as if there's a single DB. They are having some connectivity issues at the moment, though, which are blocking my migration.
2) The django-synchro package - looks promising for replication at the app layer, but I have some concerns about it. It doesn't work on objects.update(), which I use a lot in my code.
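For background, a minimal sketch of Django's database-router API, which the question is asking about; note that db_for_write returns a single alias, which is why the ORM cannot natively fan one write out to several databases (the alias names are assumptions):

```python
class NearestReadRouter(object):
    def db_for_read(self, model, **hints):
        # e.g. a statically configured nearest replica
        return 'nearest_replica'

    def db_for_write(self, model, **hints):
        # only one alias can be returned per write
        return 'default'
```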
| 0 | 0 | 0 | 0 |
2012-08-22T09:24:00.000
| 1 | 0.197375 | false | 12,070,031 | 0 | 0 | 1 | 1 |
I am currently sitting in front of a more specific problem which has to do with fail-over support / redundancy for a specific web site which will be hosted at WebFaction. Unfortunately, replication at the DB level is not an option, as I would have to install my own local PostgreSQL instances for every account, and I am worried about performance, amongst other things. So I am thinking about using Django's multi-db feature, routing all writes to all (shared) databases and balancing the reads to the nearest db.
My problem is now that all docs I read seem to indicate that this would most likely not be possible. To be more precise, what I would need:
route all writes to a specific set of dbs (same type, version, ...)
if one write fails, all the others will be rolled back (transactions)
route all reads to the nearest db (could be statically configured)
Is this currently possible with Django's multi-db support?
Thanks a lot in advance for any help/hints...
|
First time Django database SQL or NoSQL?
| 12,078,992 | 1 | 3 | 5,188 | 1 |
python,sql,django,nosql
|
Postgres is a great database for Django in production; SQLite is amazing to develop with. You will be doing a lot of work to try not to use an RDBMS on your first Django site.
One of the greatest strengths of Django is the smooth full-stack integration: great docs, contrib apps, and the app ecosystem. Choosing Mongo, you lose a lot of this. GeoDjango also assumes SQL and really loves postgres/postgis above others - and GeoDjango is really awesome.
If you want to use Mongo, I might recommend that you start with something like Bottle, Flask, Tornado, Cyclone, or others that are less about full-stack integration and assume less about you using a certain ORM. The Django tutorial, for instance, assumes that you are using the ORM with a SQL DB.
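For reference, a minimal sketch of the settings for the SQLite starting point recommended above (the file name is an assumption); swapping in Postgres later is a one-dictionary change:

```python
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'dev.sqlite3',
    }
}
```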
| 0 | 0 | 0 | 0 |
2012-08-22T18:07:00.000
| 4 | 0.049958 | false | 12,078,928 | 0 | 0 | 1 | 2 |
I've been learning Python through Udacity, Code Academy and Google University. I'm now feeling confident enough to start learning Django. My question is: should I learn Django on an SQL database - either SQLite or MySQL - or should I learn Django on a NoSQL database such as Mongo?
I've read all about both, but there's a lot I don't understand. Mongo sounds better/easier, but at the same time it sounds better/easier for those that already know relational databases very well and are looking for something more agile.
|
First time Django database SQL or NoSQL?
| 12,079,233 | 0 | 3 | 5,188 | 1 |
python,sql,django,nosql
|
SQLite is the simplest to start with. If you already know SQL, toss a coin to choose between MySQL and Postgres for your first project!
| 0 | 0 | 0 | 0 |
2012-08-22T18:07:00.000
| 4 | 0 | false | 12,078,928 | 0 | 0 | 1 | 2 |
I've been learning Python through Udacity, Code Academy and Google University. I'm now feeling confident enough to start learning Django. My question is: should I learn Django on an SQL database - either SQLite or MySQL - or should I learn Django on a NoSQL database such as Mongo?
I've read all about both, but there's a lot I don't understand. Mongo sounds better/easier, but at the same time it sounds better/easier for those that already know relational databases very well and are looking for something more agile.
|
Hindi or Farsi numbers in django templating engine
| 12,091,958 | 1 | 6 | 1,525 | 0 |
python,django,django-templates,persian,hindi
|
You can use Django's internationalization framework; localization (l10n) support is built into Django.
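As a concrete alternative, a hedged sketch of a custom template filter that maps ASCII digits to Persian digits (the filter name and its placement in a templatetags module are assumptions):

```python
# myapp/templatetags/digits.py
from django import template

register = template.Library()

PERSIAN_DIGITS = dict(zip(u'0123456789', u'۰۱۲۳۴۵۶۷۸۹'))

@register.filter
def persian_digits(value):
    # Replace each Western digit with its Persian counterpart.
    return u''.join(PERSIAN_DIGITS.get(ch, ch) for ch in unicode(value))
```

In a template this would be used as {% load digits %} and then {{ forloop.counter|persian_digits }}.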
| 0 | 0 | 0 | 0 |
2012-08-23T12:14:00.000
| 7 | 0.028564 | false | 12,091,353 | 1 | 0 | 1 | 1 |
I want to print {{ forloop.counter }} with Persian or Hindi digits, i.e. to have "۱ ۲ ۳ ۴ ..." instead of "1 2 3 4 ...". I searched a lot but couldn't find any related functions. Would you mind helping me?
Regards
|
Running XMPP on Amazon for a chat app
| 12,095,630 | 0 | 0 | 365 | 0 |
python,ruby,amazon-ec2,amazon-web-services,xmpp
|
As an employee of ProcessOne, the makers of ejabberd, I can tell you we run a lot of services over AWS, including mobile chat apps. We have industrialized our procedures.
| 0 | 0 | 1 | 1 |
2012-08-23T15:48:00.000
| 2 | 0 | false | 12,095,507 | 0 | 0 | 1 | 2 |
I'm building an Android IM chat app for fun. I can develop the Android stuff well, but I'm not so good with the networking side, so I am looking at using XMPP on AWS servers to run the actual IM side. I've looked at OpenFire and ejabberd, which I could use. Does anyone have any experience with them, or know a better one? I'm mostly looking for sending direct IMs between friends and group IMs with friends.
|
Running XMPP on Amazon for a chat app
| 12,095,743 | 1 | 0 | 365 | 0 |
python,ruby,amazon-ec2,amazon-web-services,xmpp
|
Try exploring Amazon SQS (Simple Queue Service). It might come in handy for your requirement.
| 0 | 0 | 1 | 1 |
2012-08-23T15:48:00.000
| 2 | 0.099668 | false | 12,095,507 | 0 | 0 | 1 | 2 |
I'm building an Android IM chat app for fun. I can develop the Android stuff well, but I'm not so good with the networking side, so I am looking at using XMPP on AWS servers to run the actual IM side. I've looked at OpenFire and ejabberd, which I could use. Does anyone have any experience with them, or know a better one? I'm mostly looking for sending direct IMs between friends and group IMs with friends.
|
How can I trigger a function anytime there is a new session in a GAE/Python Application?
| 12,116,756 | 0 | 0 | 412 | 0 |
python,google-app-engine
|
I am trying to be pretty general here, as I don't know whether you are using the default users service or not, I don't know how you are uniquely linking your SessionSupplemental entities to users, and I don't know whether you even have a way to identify users at this point. I am also assuming you are using some version of webapp, as that is the standard request-handling library on App Engine. Let me know a bit more and I can update the answer to be more specific.
Subclass the default RequestHandler in webapp with a new class (such as MyRequestHandler).
In your subclass, override the initialize() method.
In your new initialize() method, get the current user from your session system (or the users service, or whatever you are using). Test whether a SessionSupplemental entity already exists for this user, and if not, create a new one.
For all your other request handlers, you now want to subclass MyRequestHandler (instead of the default RequestHandler).
Whenever a request happens, webapp will automatically call the initialize() method.
This is going to cost you a read for every request, and also a write for every request by a new user. If you use the ndb library (instead of db), then a lot of the requests will just hit memcache instead of the datastore.
Now, if you are just starting a new App Engine app, I would recommend using the Python 2.7 runtime and webapp2, and trying to leverage as much of the webapp2 Auth module as you can, so you don't have to write so much session stuff yourself. Also, ndb can be much nicer than the default db library.
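A hedged sketch of the subclassing approach (webapp2 on Python 2.7; the SessionSupplemental schema and the get_current_user_id() helper are hypothetical stand-ins for the asker's own session machinery):

```python
import webapp2
from google.appengine.ext import ndb

class SessionSupplemental(ndb.Model):
    user_id = ndb.StringProperty()

class MyRequestHandler(webapp2.RequestHandler):
    def initialize(self, request, response):
        super(MyRequestHandler, self).initialize(request, response)
        user_id = get_current_user_id(request)  # hypothetical helper
        if user_id and not SessionSupplemental.query(
                SessionSupplemental.user_id == user_id).get():
            SessionSupplemental(user_id=user_id).put()

class HomePage(MyRequestHandler):  # every handler inherits the check
    def get(self):
        self.response.write('hello')
```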
| 0 | 1 | 0 | 0 |
2012-08-23T19:01:00.000
| 1 | 0 | false | 12,098,358 | 0 | 0 | 1 | 1 |
I am a newbie to Google App Engine and Python.
I want to create an entry in a SessionSupplemental table (Kind) anytime a new user accesses the site (regardless of what page they access initially).
How can I do this?
I can imagine that there is a list of standard event triggers in GAE; where would I find these documented? I can also imagine that there are a lot of system/application attributes; where can I find these documented and how to use them?
Thanks.
|
IE9 and Python issues?
| 12,098,951 | 1 | 0 | 408 | 0 |
python,windows,debugging,internet-explorer-9,pipeline
|
You need to specify what Python web server you're using (e.g. Bottle? Maybe Tornado? CherryPy?), but more importantly, you need to supply the request headers and the HTTP response that go in and out when IE9 is involved.
You may lift them off the wire using e.g. ngrep, or I think you can use the Developer Tools in IE9 (F12 key).
The most common quirks with IE9 that often do not bother other web browsers are mismatches in Content-Length (well, this DID bother Safari last time I looked), possibly Content-Type (this acts in reverse - IE9 sometimes correctly gleans the HTML mimetype even if the Content-Type is wrong), and Connection: Close.
So yes, it could be a problem with HTTP pipelining: specifically, if you pipeline a request with an invalid Content-Length and no chunked transfer encoding, IE might wait for the request to "finish". This would happen in other web browsers too; but it could be that this behavior, in IE, overrides the connection being flushed and closed, while in other web browsers it does not. These two hypotheses might match your observed symptoms.
To fix that, you either switch to chunked transfer encoding, which replaces Content-Length in a way, or correctly compute its value. How to do this depends on the server.
To verify quickly, you could issue a Content-Length that is surely too short (e.g. 100 bytes?) to see whether this results in IE un-hanging and displaying a partial web page.
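To illustrate the compute-it-correctly fix, a minimal framework-agnostic WSGI sketch (which Python server the asker runs is unknown, so the setup is an assumption):

```python
def app(environ, start_response):
    body = b"<html><body>Hello IE9</body></html>"
    headers = [("Content-Type", "text/html"),
               ("Content-Length", str(len(body)))]  # computed, never guessed
    start_response("200 OK", headers)
    return [body]
```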
| 0 | 0 | 1 | 0 |
2012-08-23T19:30:00.000
| 2 | 0.099668 | false | 12,098,732 | 0 | 0 | 1 | 1 |
Trying to debug a website in IE9. I am running it via Python.
In Chrome, Safari, Firefox, and Opera, the site loads immediately, but in IE9 it seems to hang and never actually loads.
Could this possibly be an issue with HTTP pipelining? Or something else? And how might I fix this?
|
How can I use Google App Engine with MS-SQL
| 12,116,542 | 0 | 3 | 2,793 | 1 |
python,sql-server,google-app-engine
|
You could, at least in theory, replicate your data from the MS-SQL database to the Google Cloud SQL database. It is possible to create triggers in the MS-SQL database so that every transaction is reflected on your App Engine application via a REST API you would have to build.
| 0 | 1 | 0 | 0 |
2012-08-24T11:45:00.000
| 2 | 0 | false | 12,108,816 | 0 | 0 | 1 | 1 |
I use
python 2.7
pyodbc module
google app engine 1.7.1
I can use pyodbc with Python, but Google App Engine can't load the module. I get a "no module named pyodbc" error.
How can I fix this error, or how can I use an MS-SQL database with my local Google App Engine?
|
Is there a cross-OS GUI framework that supports embedding HTML pages?
| 12,211,713 | 0 | 9 | 4,766 | 0 |
c#,javascript,python,tidesdk
|
It is strange that Qt is not for you. You may be surprised to hear that Sencha's Architect and Animator products use Qt and QWebView for cross-platform JavaScript applications with full menus and icons and executables and system dialog boxes and file I/O.
It currently works on Windows, OSX, and Linux.
They use an in-house developed library called ion to load and interact with a JavaScript application. They provide some helper classes for JS to use.
At its core is a simple skeleton C++ application which uses Qt to create and load a window, create a web view in that window, and load HTML and other content from file into that view.
Another solution is Adobe's AIR, which is like a browser with native support. It also provides deployment.
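Since the asker knows a little Python, a hedged sketch of the same idea through PyQt4's WebKit binding (era-appropriate API; whether this fits their constraints is an assumption):

```python
import sys
from PyQt4.QtGui import QApplication
from PyQt4.QtWebKit import QWebView

app = QApplication(sys.argv)
view = QWebView()
# Any HTML/JS can be loaded here, from a string, file, or URL.
view.setHtml("<h1>Hello from an embedded page</h1>")
view.show()
sys.exit(app.exec_())
```

Two-way JS-to-Python calls would go through QWebFrame.addToJavaScriptWindowObject, which exposes a Python object to the page's JavaScript.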
| 0 | 0 | 0 | 0 |
2012-08-24T12:54:00.000
| 9 | 0 | false | 12,109,795 | 0 | 0 | 1 | 1 |
I want to develop a desktop app to be used cross-system (win, mac, linux), is there a GUI framework that would allow me to write code once for all 3 platforms and have a fully-scriptable embedded web component?
I need it to have an API to communicate between app and webpage javascript.
I know C#, JavaScript and a little bit of python.
|
Django databases and threads
| 12,115,563 | 7 | 7 | 5,194 | 0 |
python,mysql,django,multithreading
|
You can perform actions from different threads manually (e.g. with a Queue and a pool of executors), but you should note that Django's ORM manages database connections in thread-local variables. So each new thread means a new connection to the database (which will not be a good idea with 50-100 threads for one request - too many connections). On the other hand, you should check the database's bandwidth.
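A hedged sketch of that queue-plus-workers pattern (Python 2 era, hence the Queue module; MyModel stands in for the asker's model and update() for their slow, network-bound method):

```python
import threading
from Queue import Queue

def worker(q):
    while True:
        obj_id = q.get()
        try:
            MyModel.objects.get(pk=obj_id).update()
        finally:
            q.task_done()

q = Queue()
for obj_id in MyModel.objects.values_list('pk', flat=True):
    q.put(obj_id)
for _ in range(50):
    t = threading.Thread(target=worker, args=(q,))
    t.daemon = True
    t.start()
q.join()  # block until every object has been processed
```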
| 0 | 0 | 0 | 0 |
2012-08-24T15:05:00.000
| 2 | 1.2 | true | 12,111,983 | 0 | 0 | 1 | 1 |
In one model I've got an update() method which updates a few fields and creates one object of some other model. The problem is that the data I use for the update is fetched from another host (unique for each object), and that can take a moment (the host may be offline, and the timeout is set to 3 sec). And now I need to update a couple of hundred objects, 3-4 times per hour - of course, updating them one after another is not an option, because it could take all day.
My first thought was to split it up over 50-100 threads so each one could update its own share of the objects. 99% of the update function's time is spent waiting for the server to respond (there are only a few bytes of data, so pings are the problem); I think the CPU won't be a problem. I'm more worried about:
The Django ORM. Can it handle it? Getting all objects, splitting them up, and updating from >50 threads?
Is this a good way to solve it? If it is - how to do it without wrecking the database? Or maybe I shouldn't care about so few records?
If it isn't a good way, how do I do it right?
|
Google App Engine HRD Migration - Data Read Returns Nothing
| 12,185,853 | 0 | 0 | 104 | 0 |
google-app-engine,python-2.7
|
The issue resolved itself after a few days. Now the app is returning the correct data. It may just have been a glitch from the migration. I have another GAE app that's stuck in the middle of the migration; searching on SO, I have found others who are experiencing the same problem.
| 0 | 1 | 0 | 0 |
2012-08-24T16:49:00.000
| 1 | 1.2 | true | 12,113,554 | 0 | 0 | 1 | 1 |
After following the instructions to migrate a GAE app from Master/Slave to the High Replication Datastore (HRD), the app is returning nothing for datastore reads. I am able to see the data using the "Datastore Viewer" and it is there (migrated successfully). I have not changed any code. Just wondering if there's anything I need to set or configure for the datastore reads to happen. I don't see any error in the "Log Console" on my dev machine, and no error in the server's "Logs".
|
Which to use: OneToOne vs ForeignKey?
| 12,115,267 | 3 | 7 | 3,900 | 0 |
python,django
|
ForeignKey means that you are referencing an element that exists inside of another table.
OneToOne is a type of ForeignKey in which an element of table1 and an element of table2 are uniquely bound together.
Your favorite-fruit example would be OneToMany, because each person has a unique favorite fruit, but each fruit can have multiple people who list that particular fruit as their favorite.
A OneToOne relationship fits your car example: Cars.VIN could have a OneToOne relationship with CarInfo.VIN, since one car will only ever have one CarInfo associated with it (and vice versa).
| 0 | 0 | 0 | 0 |
2012-08-24T18:51:00.000
| 2 | 0.291313 | false | 12,115,073 | 0 | 0 | 1 | 2 |
My understanding is that OneToOneField is used for just one row of data from Table2 (favorite fruit) linked to one row of data in Table1 (person's name), and ForeignKey is for multiple rows of data in Table2 (car models) linked to one row of data in Table1 (brand/manufacturer).
My question is: what should I use if I have multiple tables but only one row of data from each table that links back to Table1? For example: I have Table1 as "Cars"; my other tables are "Insurance Info", "Car Info", "Repair History". Should I use ForeignKey or OneToOne?
|
Which to use: OneToOne vs ForeignKey?
| 12,115,281 | 13 | 7 | 3,900 | 0 |
python,django
|
You just need to ask yourself "Can object A have many object Bs, or can object B have many object As?"
Those table relations could each be different:
A car could have one or many insurance policies, and an insurance policy only applies to one car. If the car can only have one, then it could be a one-to-one.
A car can have many repair-history rows, so this would be a foreign key on the repair history, with a back relation to the car as a set.
Car info is similar to the UserProfile concept in Django. If it is truly unique information, then it too would be a one-to-one. But if you define car info as a general description that could apply to similar car models, then it would be a foreign key on the Car table referring to the Car Info.
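A hedged model sketch of those relationships (field and model names are illustrative, not from the question; Django 1.x-era field signatures):

```python
from django.db import models

class Car(models.Model):
    vin = models.CharField(max_length=17, unique=True)

class CarInfo(models.Model):
    car = models.OneToOneField(Car)   # exactly one per car, and vice versa

class RepairHistory(models.Model):
    car = models.ForeignKey(Car)      # many repair rows per car
    description = models.TextField()
```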
| 0 | 0 | 0 | 0 |
2012-08-24T18:51:00.000
| 2 | 1.2 | true | 12,115,073 | 0 | 0 | 1 | 2 |
My understanding is that OneToOneField is used for just one row of data from Table2 (favorite fruit) linked to one row of data in Table1 (person's name), and ForeignKey is for multiple rows of data in Table2 (car models) linked to one row of data in Table1 (brand/manufacturer).
My question is: what should I use if I have multiple tables but only one row of data from each table that links back to Table1? For example: I have Table1 as "Cars"; my other tables are "Insurance Info", "Car Info", "Repair History". Should I use ForeignKey or OneToOne?
|
web.py User Authentication with PostgreSQL database example
| 12,137,859 | 0 | 0 | 2,157 | 1 |
python,session,login,web.py
|
Okay, I was able to figure out what I did wrong. Total newbie stuff, and all part of the learning process. This code now works - well, mostly. The part that I was stuck on is now working. See my comments in the code.
Thanks
import web
import hashlib

web.config.debug = False
render = web.template.render('templates/', base='layout')
urls = (
    '/', 'index',
    '/add', 'add',
    '/login', 'Login',
    '/reset', 'Reset'
)
app = web.application(urls, globals())
db = web.database(blah, blah, blah)
store = web.session.DiskStore('sessions')
session = web.session.Session(app, store, initializer={'login': 0, 'privilege': 0})

class index:
    def GET(self):
        todos = db.select('todo')
        return render.index(todos)

class add:
    def POST(self):
        i = web.input()
        n = db.insert('todo', title=i.title)
        raise web.seeother('/')

def logged():
    if session.get('login', False):
        return True
    else:
        return False

def create_render(privilege):
    if logged():
        if privilege == 0:
            render = web.template.render('templates/reader')
        elif privilege == 1:
            render = web.template.render('templates/user')
        elif privilege == 2:
            render = web.template.render('templates/admin')
        else:
            render = web.template.render('templates/communs')
    else:
        ## This line is key: I do not have a communs folder, so the original returned an unusable object
        #render = web.template.render('templates/communs') #Original code from example
        render = web.template.render('templates/', base='layout')
    return render

class Login:
    def GET(self):
        if logged():
            ## Using session.get('something') instead of session.something does not blow up when it does not exist
            render = create_render(session.get('privilege'))
            return '%s' % render.login_double()
        else:
            render = create_render(session.get('privilege'))
            return '%s' % render.login()

    def POST(self):
        name, passwd = web.input().name, web.input().passwd
        ident = db.select('users', where='name=$name', vars=locals())[0]
        try:
            if hashlib.sha1("sAlT754-" + passwd).hexdigest() == ident['pass']:
                session.login = 1
                session.privilege = ident['privilege']
                render = create_render(session.get('privilege'))
                return render.login_ok()
            else:
                session.login = 0
                session.privilege = 0
                render = create_render(session.get('privilege'))
                return render.login_error()
        except:
            session.login = 0
            session.privilege = 0
            render = create_render(session.get('privilege'))
            return render.login_error()

class Reset:
    def GET(self):
        session.login = 0
        session.kill()
        render = create_render(session.get('privilege'))
        return render.logout()

if __name__ == "__main__": app.run()
| 0 | 0 | 0 | 0 |
2012-08-25T08:47:00.000
| 1 | 0 | false | 12,120,539 | 0 | 0 | 1 | 1 |
I am trying to copy and use the example 'User Authentication with PostgreSQL database' from the web.py cookbook. I cannot figure out why I am getting the following errors.
at /login
'ThreadedDict' object has no attribute 'login'
at /login
'ThreadedDict' object has no attribute 'privilege'
Here is the error output to the terminal for the second error (the first is almost identical):
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/web.py-0.37-py2.7.egg/web/application.py", line 239, in process
    return self.handle()
  File "/usr/local/lib/python2.7/dist-packages/web.py-0.37-py2.7.egg/web/application.py", line 230, in handle
    return self._delegate(fn, self.fvars, args)
  File "/usr/local/lib/python2.7/dist-packages/web.py-0.37-py2.7.egg/web/application.py", line 420, in _delegate
    return handle_class(cls)
  File "/usr/local/lib/python2.7/dist-packages/web.py-0.37-py2.7.egg/web/application.py", line 396, in handle_class
    return tocall(*args)
  File "/home/erik/Dropbox/Python/Web.py/Code.py", line 44, in GET
    render = create_render(session.privilege)
  File "/usr/local/lib/python2.7/dist-packages/web.py-0.37-py2.7.egg/web/session.py", line 71, in __getattr__
    return getattr(self._data, name)
AttributeError: 'ThreadedDict' object has no attribute 'privilege'
127.0.0.1:36420 - - [25/Aug/2012 01:12:38] "HTTP/1.1 GET /login" - 500 Internal Server Error
Here is my code.py file, pretty much cut-and-pasted from the cookbook. I tried putting all of the class and def blocks above the main code. I have also tried launching Python with sudo, as mentioned in another post.
import web

class index:
    def GET(self):
        todos = db.select('todo')
        return render.index(todos)

class add:
    def POST(self):
        i = web.input()
        n = db.insert('todo', title=i.title)
        raise web.seeother('/')

def logged():
    return False  # I added this to test error #1; now I get error #2
    #if session.login==1:
    #    return True
    #else:
    #    return False

def create_render(privilege):
    if logged():
        if privilege == 0:
            render = web.template.render('templates/reader')
        elif privilege == 1:
            render = web.template.render('templates/user')
        elif privilege == 2:
            render = web.template.render('templates/admin')
        else:
            render = web.template.render('templates/communs')
    else:
        render = web.template.render('templates/communs')
    return render

class Login:
    def GET(self):
        if logged():
            render = create_render(session.privilege)
            return '%s' % render.login_double()
        else:
            # This is where error #2 is
            render = create_render(session.privilege)
            return '%s' % render.login()

    def POST(self):
        name, passwd = web.input().name, web.input().passwd
        ident = db.select('users', where='name=$name', vars=locals())[0]
        try:
            if hashlib.sha1("sAlT754-"+passwd).hexdigest() == ident['pass']:
                session.login = 1
                session.privilege = ident['privilege']
                render = create_render(session.privilege)
                return render.login_ok()
            else:
                session.login = 0
                session.privilege = 0
                render = create_render(session.privilege)
                return render.login_error()
        except:
            session.login = 0
            session.privilege = 0
            render = create_render(session.privilege)
            return render.login_error()

class Reset:
    def GET(self):
        session.login = 0
        session.kill()
        render = create_render(session.privilege)
        return render.logout()

#web.config.debug = False
render = web.template.render('templates/', base='layout')
urls = (
    '/', 'index',
    '/add', 'add',
    '/login', 'Login',
    '/reset', 'Reset'
)
app = web.application(urls, globals())
db = web.database(dbn='postgres', user='hdsfgsdfgsd', pw='dfgsdfgsdfg', db='postgres', host='fdfgdfgd.com')
store = web.session.DiskStore('sessions')
# To me, it seems this is being ignored, at least the 'initializer' part
session = web.session.Session(app, store, initializer={'login': 0, 'privilege': 0})

if __name__ == "__main__": app.run()
|
virtualenv removing libraries (flask / yolk) on restart
| 12,134,916 | 2 | 1 | 333 | 0 |
python,flask,virtualenv,yolk
|
As long as you're sourcing the virtualenv correctly and installing the packages correctly, your virtualenv should not be affected by a reboot. It's completely independent of that. There are three possibilities I can think of that would explain your issue:
The incorrect virtualenv was sourced
You installed flask and yolk onto the system Python
You used some kind of ephemeral storage
(The third is the least likely.)
| 0 | 0 | 0 | 0 |
2012-08-26T23:41:00.000
| 1 | 1.2 | true | 12,134,782 | 1 | 0 | 1 | 1 |
I just started learning Flask (and as a result, getting into virtualenv as well). I followed a tutorial in Flask's documentation and created a small application. I installed Flask and yolk inside the virtualenv and everything was working fine.
I restarted my computer, and when I activated the virtualenv again, flask and yolk were no longer recognised. I had to reinstall them via easy_install. Does a virtualenv remove any installed packages once the computer has been restarted?
What happened here? Is there anything I need to do on my side?
|
Can django server send request to another server
| 12,136,794 | 2 | 0 | 1,855 | 0 |
python,django
|
1) Of course Django can make requests to another server.
I don't have much idea about django-socketio.
One more suggestion: instead of httplib, you can use a more advanced library like httplib2 or requests. Apart from that, Django-Piston is dedicated to REST requests; you can also try that.
| 0 | 0 | 0 | 0 |
2012-08-27T02:40:00.000
| 1 | 1.2 | true | 12,135,671 | 0 | 0 | 1 | 1 |
I'm looking for help. My Django server has an instant-messaging function achieved with django-socketio. If I run the server with the command 'runserver_socketio' then there are no problems.
But now I want to run the server with 'runfcgi', and that stops my socketio from working. So I want the socketio server to handle the requests conveyed by the fcgi server. Can it work?
Following is my code:
def push_msg(msg):
    params = urllib.urlencode({"msg": str(msg)})
    '''headers = {"Content-type": "text/html;charset=utf8"}
    conn = httplib.HTTPConnection("http://127.0.0.1:8000")
    print conn
    conn.request("POST", "/push_msg/", data=params, headers=headers)
    response = conn.getresponse()
    print response'''
    h = httplib2.Http()
    print h
    resp, content = h.request("http://127.0.0.1:8000/push_msg/", method="POST", body=params)
url(r'^push_msg/$', 'chat.events.on_message')
chat.events.on_message:
def on_message(request):
    msg = request.POST.get('msg')
    msg = eval(msg)
    try:
        print 'handle messages'
        from_id = int(msg['from_id'])
        to_id = int(msg['to_id'])
        user_to = UserProfile.objects.get(id=msg['to_id'])
        django_socketio.broadcast_channel(msg, user_to.channel)
        if msg.get('type', '') == 'chat':
            ct = Chat.objects.send_msg(from_id=from_id, to_id=to_id, content=data['content'], type=1)
            ct.read = 1
            ct.save()
    except:
        pass
    return HttpResponse("success")
I have tried many times, but it doesn't work. Why?
|
django id integer limit
| 15,858,398 | 11 | 14 | 7,009 | 0 |
python,django,sqlite,django-postgresql
|
Adding this as an answer. Django maps this to a serial column, which means that the maximum value is in the 2 billion range (2,147,483,647 to be exact). While that is unlikely to be an issue for most applications, if you do hit it, you could alter the column type to bigint instead, which would make it highly unlikely you will ever reach the end of the 64-bit integer space.
| 0 | 0 | 0 | 0 |
2012-08-27T23:36:00.000
| 3 | 1 | false | 12,150,973 | 0 | 0 | 1 | 1 |
Is there a limit to the AutoField in a Django model or the database back-ends?
The Django project I am working on could potentially see a lot of objects in certain database tables which would be in excess of 40000 within a short amount of time.
I am using SQLite for dev and PostgreSQL for production.
|
Python Web Programming - Not Using Django
| 12,156,338 | 0 | 0 | 350 | 0 |
python,html,web,cgi,mysql-python
|
If nothing else, it will show you why you want to use a framework; it should be a really valuable learning experience. I say go for it.
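To make the proposed starting point concrete, a hedged CGI + MySQLdb sketch (Python 2 era; the credentials, table, and field names are placeholders):

```python
#!/usr/bin/env python
import cgi
import MySQLdb

form = cgi.FieldStorage()
name = form.getfirst("name", "")

db = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="test")
cur = db.cursor()
cur.execute("SELECT greeting FROM greetings WHERE name = %s", (name,))
row = cur.fetchone()

print "Content-Type: text/html"
print
print "<html><body>%s</body></html>" % (row[0] if row else "Hello!")
```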
| 0 | 0 | 0 | 1 |
2012-08-28T09:22:00.000
| 6 | 0 | false | 12,156,293 | 0 | 0 | 1 | 3 |
I want to learn Python for web programming. As of now I work in PHP, and I want to try Python and its object-oriented features. I have basic knowledge of Python, its syntax and data structures.
I want to start by making basic web pages - forms, file upload - and then move on to more dynamic websites using MySQL.
As of now I do not want to try Django or, for that matter, any other frameworks.
Is Python with the cgi and MySQLdb modules a good start?
Thanks
|
Python Web Programming - Not Using Django
| 12,160,352 | 1 | 0 | 350 | 0 |
python,html,web,cgi,mysql-python
|
Having used both Flask and Django for a bit now, I must say that I much prefer Flask for most things. I would recommend giving it a try. Flask-Uploads and WTForms are two nice extensions for the Flask framework that make it easy to do the things you mentioned. Lots of other extensions are available.
If you go on to build a dynamic site attached to a database, Flask + SQLAlchemy make a very powerful combination. I much prefer the SQLAlchemy ORM to the Django model ORM.
| 0 | 0 | 0 | 1 |
2012-08-28T09:22:00.000
| 6 | 0.033321 | false | 12,156,293 | 0 | 0 | 1 | 3 |
I want to learn Python for web programming. As of now I work in PHP, and I want to try Python and its object-oriented features. I have basic knowledge of Python, its syntax and data structures.
I want to start by making basic web pages - forms, file upload - and then move on to more dynamic websites using MySQL.
As of now I do not want to try Django or, for that matter, any other frameworks.
Is Python with the cgi and MySQLdb modules a good start?
Thanks
|
Python Web Programming - Not Using Django
| 12,268,540 | 2 | 0 | 350 | 0 |
python,html,web,cgi,mysql-python
|
I recommend the Pyramid framework!
| 0 | 0 | 0 | 1 |
2012-08-28T09:22:00.000
| 6 | 0.066568 | false | 12,156,293 | 0 | 0 | 1 | 3 |
I want to learn Python for web programming. As of now I work in PHP, and I want to try Python and its object-oriented features. I have basic knowledge of Python, its syntax and data structures.
I want to start by making basic web pages - forms, file upload - and then move on to more dynamic websites using MySQL.
As of now I do not want to try Django or, for that matter, any other frameworks.
Is Python with the cgi and MySQLdb modules a good start?
Thanks
|
Deadlock with PyMongo and gevent
| 12,163,744 | 4 | 3 | 862 | 1 |
python,mongodb,pymongo,gevent,greenlets
|
I found what the problem is. By default PyMongo has no network timeout defined on its connections, so what was happening is that the connections in the pool got disconnected (because they weren't used for a while). Then, when I tried to reuse a connection and perform a "find", it took a very long time for the connection to be detected as dead (something like 15 minutes). When the connection was detected as dead, the "find" call finally threw an AutoReconnectError, and a new connection was spawned to replace the stale one.
The solution is to set a small network timeout (15 seconds), so that the call to "find" blocks the greenlet for 15 seconds, raises an AutoReconnectError, and when the "find" is retried, it gets a new connection, and the operation succeeds.
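A hedged sketch of that fix with the era-appropriate pymongo Connection API (host and port are assumptions; later PyMongo versions express this as socketTimeoutMS on MongoClient):

```python
from pymongo import Connection

# network_timeout is in seconds; slow or dead sockets now fail fast
connection = Connection('localhost', 27017, network_timeout=15)
```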
| 0 | 0 | 0 | 0 |
2012-08-28T10:26:00.000
| 1 | 0.664037 | false | 12,157,350 | 0 | 0 | 1 | 1 |
I am using PyMongo and gevent together, from a Django application. In production, it is hosted on Gunicorn.
I am creating a single Connection object at startup of my application. I have a background task running continuously and performing a database operation every few seconds.
The application also serves HTTP requests, as any Django app does.
The problem I have is the following. It only happens in production; I have not been able to reproduce it in my dev environment. When I let the application idle for a little while (although the background task is still running), on the first HTTP request (actually the first few), the first "find" operation I perform never completes. The greenlet actually never resumes. This causes the first few HTTP requests to time out.
How can I fix that? Is it a bug in gevent and/or PyMongo?
|
Scrapy approach to scraping multiple URLs
| 12,161,314 | 3 | 1 | 2,710 | 0 |
python,scrapy
|
1) In the BaseSpider, there is an __init__ method that can be overridden in subclasses. This is where the start_urls and allowed_domains variables are set. If you have a list of urls in mind prior to running the spider, you can insert them dynamically here.
For example, in a few of the spiders I have built, I pull in preformatted groups of URLs from MongoDB and insert them into the start_urls list in one bulk insert.
2) This might be a little bit more tricky, but you can easily see the crawled URL by looking at the response object (response.url). You should be able to check whether the url contains 'google', 'bing', or 'yahoo', and then use the prespecified selectors for a url of that type.
3) I am not so sure that #3 is possible, or at least not without some difficulty. As far as I know, the urls in the start_urls list are not crawled in order, and they each arrive in the pipeline independently. I am not sure that, without some serious core hacking, you will be able to collect a group of response objects and pass them into a pipeline together.
However, you might consider serializing the data to disk temporarily and then bulk-saving it to your database later on. One of the crawlers I built receives groups of URLs that are around 10000 in number. Rather than making 10000 single-item database insertions, I store the urls (and collected data) in BSON and then insert them into MongoDB later.
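A hedged sketch of points 1 and 2 (pre-1.0 Scrapy API; the spider name, keyword argument, and search URLs are illustrative):

```python
from scrapy.spider import BaseSpider

class SearchSpider(BaseSpider):
    name = "search"

    def __init__(self, keyword="python", *args, **kwargs):
        super(SearchSpider, self).__init__(*args, **kwargs)
        # point 1: build start_urls dynamically from the keyword
        self.start_urls = [
            "http://www.google.co.uk/search?q=%s" % keyword,
            "http://www.bing.com/search?q=%s" % keyword,
            "http://search.yahoo.com/search?p=%s" % keyword,
        ]

    def parse(self, response):
        # point 2: branch on the crawled URL to pick the right selectors
        if "google" in response.url:
            pass  # Google-specific XPath here
        elif "bing" in response.url:
            pass  # Bing-specific XPath here
```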
| 0 | 0 | 1 | 0 |
2012-08-28T13:47:00.000
| 3 | 0.197375 | false | 12,160,673 | 0 | 0 | 1 | 1 |
I have a project which requires a great deal of data scraping to be done.
I've been looking at Scrapy, which I am very impressed with so far, but I am looking for the best approach for the following:
1) I want to scrape multiple URLs and pass in the same variable for each URL to be scraped. For example, let's assume I want to return the top result for the keyword "python" from Bing, Google and Yahoo.
I would want to scrape http://www.google.co.uk/q=python, http://www.yahoo.com?q=python and http://www.bing.com/?q=python (not the actual URLs, but you get the idea)
I can't find a way to specify dynamic URLs using the keyword; the only option I can think of is to generate a file in PHP or something else which builds the URLs, and tell Scrapy to crawl the links in that file.
2) Obviously each search engine has its own markup, so I would need to differentiate between the results to find the corresponding XPath from which to extract the relevant data
3) Lastly, I would like to write the results of the scraped Item to a database (probably redis), but only when all 3 URLs have finished scraping; essentially I want to build up a "profile" from the 3 search engines and save the output in one transaction.
If anyone has any thoughts on any of these points I would be very grateful.
Thank you
|
Generate various events in 'Web scraping with beautiful soup'
| 12,163,450 | 1 | 0 | 102 | 0 |
python,web-scraping,beautifulsoup
|
BeautifulSoup is a tool for parsing and analyzing HTML. It cannot talk to web servers, so you'd need another library to do that, like urllib2 (built-in, but low-level) or requests (high-level; handles cookies, redirection, HTTPS, etc. out of the box). Alternatively, you can look at mechanize or windmill, or, if you also require JavaScript code to be executed, phantomjs.
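A hedged sketch of the first three items with mechanize (mentioned above); the URL and form field names are placeholders:

```python
import mechanize

br = mechanize.Browser()
br.open("http://example.com/login")
br.select_form(nr=0)            # pick the first form on the page
br["username"] = "alice"        # fill input fields
br["password"] = "secret"
response = br.submit()          # submit; redirects are followed
print response.geturl()         # where we ended up after redirection
```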
| 0 | 0 | 1 | 0 |
2012-08-28T14:10:00.000
| 1 | 1.2 | true | 12,161,140 | 0 | 0 | 1 | 1 |
Is there any way to generate various events like:
filling an input field
submitting a form
clicking a link
handling redirection, etc.
via the Python Beautiful Soup library? If not, what's the best way to do the above (basic functionality)?
|
Can I make STATICFILES_DIR same as STATIC_ROOT in Django 1.3?
| 12,161,409 | 91 | 45 | 37,149 | 0 |
python,django,django-views,django-staticfiles
|
No. In fact, the file django/contrib/staticfiles/finders.py even checks for this and raises an ImproperlyConfigured exception when you do so:
"The STATICFILES_DIRS setting should not contain the STATIC_ROOT setting"
The STATICFILES_DIRS setting can contain other directories (not necessarily app directories) with static files, and these static files will be collected into your STATIC_ROOT when you run collectstatic. They will then be served by your web server from your STATIC_ROOT.
If you have files currently in your STATIC_ROOT that you wish to serve, then you need to move them to a different directory and put that other directory in STATICFILES_DIRS. Your STATIC_ROOT directory should be empty; all static files should be collected into that directory (i.e., it should not already contain static files).
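A hedged settings sketch of that arrangement (the filesystem paths are assumptions):

```python
STATIC_URL = '/static/'
STATIC_ROOT = '/var/www/example.com/static/'  # collectstatic target; starts empty
STATICFILES_DIRS = (
    '/home/me/project/assets/',               # your own global static files
)
```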
| 0 | 0 | 0 | 0 |
2012-08-28T14:15:00.000
| 1 | 1.2 | true | 12,161,271 | 0 | 0 | 1 | 1 |
I'm using Django 1.3 and I realize it has a collectstatic command to collect static files into STATIC_ROOT. I also have some other global files that need to be served using STATICFILES_DIRS.
Can I make them use the same dir?
Thanks.
|
pickle/zodb: how to handle moving .py files with class definitions?
| 12,164,152 | 1 | 7 | 745 | 0 |
python,refactoring,pickle,zodb
|
Unfortunately, there is no easy solution. You can replace your old-style objects with the refactored ones (i.e., classes which now live in another file/module) by the following scheme:
add the refactored classes to your code without removing the old ones
walk through your DB starting from the root, replacing all old objects with their new equivalents
compress your database (that's important)
now you can remove your old classes from the sources
| 0 | 0 | 0 | 0 |
2012-08-28T16:50:00.000
| 3 | 0.066568 | false | 12,163,918 | 1 | 0 | 1 | 2 |
I'm using ZODB which, as I understand it, uses pickle to store class instances. I'm doing a bit of refactoring where I want to split my models.py file into several files. However, if I do this, I don't think pickle will be able to find the class definitions, and thus won't be able to load the objects that I already have stored in the database. What's the best way to handle this problem?
|
pickle/zodb: how to handle moving .py files with class definitions?
| 12,164,218 | 3 | 7 | 745 | 0 |
python,refactoring,pickle,zodb
|
As long as you want to keep the pickles loadable without performing a migration to the new class structure, you can use alias imports of the refactored classes in the location of the old models.py.
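A hedged sketch of that alias-import shim (the new module paths and class names are invented for illustration):

```python
# models.py -- the old location, now only a compatibility shim.
# Stored pickles reference e.g. "models.Document", and that name
# resolves again because it is importable from here.
from myapp.documents import Document
from myapp.accounts import Account
```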
| 0 | 0 | 0 | 0 |
2012-08-28T16:50:00.000
| 3 | 0.197375 | false | 12,163,918 | 1 | 0 | 1 | 2 |
I'm using ZODB which, as I understand it, uses pickle to store class instances. I'm doing a bit of refactoring where I want to split my models.py file into several files. However, if I do this, I don't think pickle will be able to find the class definitions, and thus won't be able to load the objects that I already have stored in the database. What's the best way to handle this problem?
|
Order a website's content based on its social share count (fb+ twitter + gplus)
| 12,164,955 | 1 | 1 | 112 | 0 |
python,django
|
You can use JavaScript if you don't have to do it on the backend:
just read the Facebook likes using the API, and sort the divs.
| 0 | 0 | 0 | 0 |
2012-08-28T18:01:00.000
| 1 | 0.197375 | false | 12,164,910 | 0 | 0 | 1 | 1 |
I have a requirement for my Django website.
Is there any way to order the contents of my website based on their Facebook likes + Twitter share count + Google+ count, etc.?
Are there any APIs that I can use?
I saw this feature on the new Digg site. They seem to have aggregated the counts (fb + twitter + digg) for the stories.
|
How to prevent Hudson from entering shutdown mode automatically or when idle?
| 12,382,944 | 2 | 1 | 864 | 0 |
continuous-integration,hudson,shutdown,python-idle
|
Solution: disable the thinBackup plugin
...
I figured this out by taking a look at the Hudson logs at http://localhost:8080/log/all.
thinBackup was running every time the Hudson instance went into shutdown mode.
The fact that shutdown mode was occurring during periods of inactivity is also consistent with the behavior of thinBackup.
I then disabled the plug-in, and Hudson no longer enters shutdown mode. What's odd is that thinBackup had been installed for some time before this problem started occurring. I am seeking a solution from thinBackup to re-enable the plugin without the negative effects, and I will update here if I get an answer.
| 0 | 1 | 0 | 0 |
2012-08-29T16:53:00.000
| 2 | 1.2 | true | 12,182,882 | 0 | 0 | 1 | 1 |
After several months of successful and unadulterated continuous integration, my Hudson instance, running on Mac OSX 10.7.4 Lion, decides it wants to enter shutdown mode after every 20-30 minutes of inactivity.
For those of you familiar with shutdown mode, the instance of course doesn't shutdown, but has the undesirable effect (in this case) of stopping new jobs from starting.
I know I haven't changed any settings, so it makes me think the problem was slowly growing and keeps triggering shutdown mode.
I know there is plenty of storage space on the machine, with 400+ GB free, so I'm wondering what else would trigger shutdown mode without someone actually using the Hudson web portal to do it manually.
As mentioned before, the problem also seems to be tied to inactivity. I tried creating a quick fix: a build job that does nothing every 5 minutes. It appeared to work at first, but after long periods of inactivity I would find it back in shutdown mode.
Any ideas what might be going on?
|
Is it possible to have a form built by a user?
| 12,183,778 | 0 | 0 | 71 | 0 |
python,django
|
For this you would have some kind of editor that creates an HTML string. This string would be stored in your database and then, upon request, displayed on the user's site.
The editor should be very strict about what it can add and what the user has control over; there are some JavaScript editors available that provide this functionality.
The only issue I can think of is that you may run into Django escaping the form when it is displayed on the page.
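As an alternative to storing raw HTML, Django forms can also be constructed dynamically on the server; a hedged sketch (the spec format and field names are invented for illustration):

```python
from django import forms

def build_form(spec):
    fields = {}
    for item in spec:
        if item["type"] == "text":
            fields[item["name"]] = forms.CharField(label=item["label"])
        elif item["type"] == "textarea":
            fields[item["name"]] = forms.CharField(
                label=item["label"], widget=forms.Textarea)
    # forms.Form's metaclass collects the fields passed to type()
    return type("UserForm", (forms.Form,), fields)

UserForm = build_form([
    {"type": "text", "name": "email", "label": "Email"},
    {"type": "textarea", "name": "message", "label": "Message"},
])
```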
| 0 | 0 | 0 | 0 |
2012-08-29T17:53:00.000
| 2 | 0 | false | 12,183,730 | 0 | 0 | 1 | 1 |
For example:
I have a user that wants to create a contact form for their personal website. They want three input type=text fields and one textarea, and they specify a label and a name/id for each on my site. Then they can use this form on their site, but I will handle it on mine.
Is it possible for Django to spit out custom forms specified by the user?
Edit: If Django is too "locked down" for this, what would you recommend I do? I would like to stay with Python.
|
Unicode characters in Django usernames
| 12,185,565 | 5 | 6 | 2,967 | 0 |
python,django,unicode,internationalization,django-registration
|
It is really not a problem, because this character restriction is only in UserCreationForm (or RegistrationForm in django-registration), as I remember, and you can easily make your own form, since the underlying database field is just a normal text field.
But the restriction is not there without reason. One possible problem I can think of is creating links - usernames are often used for that, and it may cause problems. There is also a bigger possibility of fake accounts with usernames that look the same but are in fact different characters, etc.
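A hedged sketch of such a custom form, loosening the validation with a Unicode-aware regex (the form name is an assumption; whether you want this permissiveness is the trade-off discussed above):

```python
import re
from django import forms
from registration.forms import RegistrationForm

class UnicodeRegistrationForm(RegistrationForm):
    # \w matches letters in any script once re.UNICODE is set
    username = forms.RegexField(
        regex=re.compile(r'^[\w.@+-]+$', re.UNICODE),
        max_length=30,
        label="Username")
```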
| 0 | 0 | 0 | 0 |
2012-08-29T19:32:00.000
| 2 | 1.2 | true | 12,185,218 | 0 | 0 | 1 | 1 |
I am developing a website using Django 1.4, and I use django-registration for the signup process. It turns out that Unicode characters are not allowed in usernames; whenever a user enters e.g. a Chinese character as part of a username, the registration fails with:
This value may contain only letters, numbers and @/./+/-/_ characters.
Is it possible to change this so Unicode characters are allowed in usernames? If yes, how can I do it? Also, can it cause any problems?
|
File changes not reflecting immediately
| 12,203,642 | 4 | 1 | 1,229 | 0 |
python,apache,mod-wsgi,pyramid
|
It's usually a lot easier to use something other than mod_wsgi to develop your Python WSGI application (mod_wsgi captures stdout and stderr, which makes it tricky to use things like pdb).
The Pyramid scaffolding generates code that allows you to do something like pserve development.ini to start a server. If you use this instead of mod_wsgi for your development, you can run pserve development.ini --reload and your changes to the Python source will be reflected immediately.
This doesn't mean you can't use mod_wsgi to serve your application in production. After you're done developing, you can then put your application into mod_wsgi for its productiony goodness.
| 0 | 0 | 0 | 1 |
2012-08-30T04:55:00.000
| 2 | 1.2 | true | 12,190,125 | 0 | 0 | 1 | 1 |
The problem I am facing is that whenever I make changes to my Python code, as in the __init__.py or views.py files, they are not reflected on the server immediately. I am running the server using Apache + mod_wsgi, so the daemon process and virtual host are configured properly.
I find that I have to run setup.py each time for new changes to take effect. Is this how Pyramid works, or am I missing something? Shouldn't the updated files be served instead of the old ones?
|
Creating and testing that a database field has been created or not in same program
| 12,197,060 | 0 | 0 | 43 | 0 |
python,selenium
|
I found the solution. The problem was actually transaction handling: throughout the program Django was using an automatically managed transaction, so the database change only became visible after the program had executed completely. So instead of autocommit I am now handling the transaction manually, using transaction.commit_manually together with transaction.commit() and transaction.rollback() to properly commit and roll back the transaction at the point where I want the data saved.
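A hedged sketch of that manual handling (Django 1.x transaction API; save_form and the create_user_from_form helper stand in for the asker's code):

```python
from django.db import transaction

@transaction.commit_manually
def save_form(form_data):
    try:
        create_user_from_form(form_data)  # hypothetical helper
    except Exception:
        transaction.rollback()
        raise
    else:
        transaction.commit()  # the row is now visible to other connections
```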
| 0 | 0 | 0 | 0 |
2012-08-30T11:05:00.000
| 1 | 1.2 | true | 12,195,459 | 0 | 0 | 1 | 1 |
While writing a Selenium test case, I found a weird situation.
I was saving a form, and at the time of saving the form, I created a user in the database.
Now, the user has been created successfully in the database, but when getting it in that same Selenium test case I get a DOESNOTEXIST exception. When I check manually in the database, the newly created user is there.
Can anybody explain how I can create a user and then test that the user has been created in the DB within the same program? And if it is not possible, why?
|
Request restoration after login in pyramid
| 12,196,884 | 3 | 0 | 227 | 0 |
python,login,pyramid
|
There are three parts you need:
The page that handles the authenticated form submission should check to see if the request is properly authenticated and perform the action; if it isn't, it should store all of the data in a server-side session and redirect the user to a login page.
The login page should look for a "was trying to do X" sort of query param (e.g., ...?fromurl=/post/a/comment). After the user successfully logs in, the login page should redirect the user to that page instead of the site's front page.
The url the user was redirected to should be the same form they used to originally fill out the unauthenticated request. In this case, though, the server should recognize that there are field values stored in the server-side session for this user, and so it should populate all of the form fields with those values. The user could then hit submit immediately and complete the post. This could work in a similar way to how fields are repopulated when a request contains some invalid form values.
It's important that step 3 should not perform the post directly; the original data and request came from a user who was not authenticated.
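A hedged Pyramid sketch of steps 1 and 2 (era-appropriate API; the session key, URLs, and view names are assumptions, and a session factory is assumed to be configured):

```python
from pyramid.httpexceptions import HTTPFound
from pyramid.security import authenticated_userid

def post_comment(request):
    if authenticated_userid(request) is None:
        # step 1: stash the submitted data server-side, bounce to login
        request.session['pending_post'] = dict(request.POST)
        return HTTPFound(location='/login?fromurl=' + request.path)
    # authenticated: actually perform the action here
```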
| 0 | 0 | 0 | 1 |
2012-08-30T12:04:00.000
| 1 | 1.2 | true | 12,196,442 | 0 | 0 | 1 | 1 |
Let us have some simple page that allows logged users to edit articles. Imagine following situation:
User Bob is logged into the system and is editing long article. As it takes really long to edit such article, his authentication becomes expired. After that, he clicks submit button and because of expired authentication, he is redirected to login page.
It is really desirable to finish the action (saving article) after his successful login. So we shall restore the request that was done while Bob was unauthenticated and repeat it now, after successful login. How could this be done with pyramids?
|
how to implement an on_revoked event in celery
| 18,464,160 | 0 | 0 | 355 | 0 |
python,celery
|
Use AbortableTask as a template and create a RevokableTask class to your specification.
| 0 | 1 | 0 | 0 |
2012-08-30T16:00:00.000
| 1 | 0 | false | 12,200,972 | 0 | 0 | 1 | 1 |
I have a task that retries often, and I would like a way for it to cleanup if it is revoked while it is in the retry state. It seems like there are a few options for doing this, and I'm wondering what the most acceptable/cleanest would be. Here's what I've thought of so far:
Custom Camera that picks up revoked tasks and calls on_revoked
Custom Event Consumer that knows to process on_revoked on tasks that get revoked
Using AbortableTasks and using abort instead of revoke (I'd really like to avoid this)
Are there any other options that I am missing?
|
API Call Is Extremely Slow Server Side on GAE but Fast Browser Side
| 12,203,940 | 1 | 1 | 159 | 0 |
python,api,google-app-engine,jinja2
|
If you don't expect them to change constantly, you can cache the results in memcache and only hit the real API when necessary.
On top of that, if the API calls are predictable, you can do the fetching in a backend and memcache the results (basically scraping), so that users get the cached results rather than having to hit the real API.
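A hedged sketch of the memcache idea (App Engine memcache API; the key naming, 60-second lifetime, and fetch_from_twitch_api helper are assumptions):

```python
from google.appengine.api import memcache

def get_stream_info(channel):
    key = 'twitch:%s' % channel
    data = memcache.get(key)
    if data is None:
        data = fetch_from_twitch_api(channel)  # hypothetical slow call
        memcache.set(key, data, time=60)       # cache for a minute
    return data
```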
| 0 | 0 | 1 | 0 |
2012-08-30T18:00:00.000
| 1 | 1.2 | true | 12,202,815 | 0 | 0 | 1 | 1 |
I just have a simple question here. I'm making a total of 10 calls to the Twitch TV API and indexing them, which is rather slow (15 to 25 seconds).
Whenever I make these calls browser-side (i.e. throw them into my URL bar), they load rather quickly. Since I am coding in Python, is there any way I could fetch/index multiple URLs quickly using, say, jinja2?
If not, is there anything else I could do?
Thank you!
|
Draw an inner map of a certain place (like a house blueprint) in Django
| 12,203,505 | 2 | 0 | 211 | 0 |
python,django,flash,api,graphics
|
Assuming you mean drawing plans interactively in the browser, rather than maps in the sense of Google Maps, you need something like HTML5 canvas or SVG, and a library like fabric.js (for canvas) or Raphael (for SVG). Your JS code will then handle the mechanics of drawing lines from mouse input, producing a picture in the browser. You can then extract that picture using JS and pass it back to the server for saving as a PNG or whatever.
If you're targeting modern browsers, canvas is definitely the way to go - it's a much nicer API, has better libraries (IMO) and is easier to extract PNGs from. SVG isn't too bad, but getting PNGs out is tricky - it relies either on hacks (converting the SVG to canvas in JS, rendering it in an invisible element, then converting that to PNG!) or sending the whole SVG to the server to be rendered there.
I've recently implemented something requiring very similar mechanics, although for a very different purpose, so if you have any more detailed questions feel free to ask.
| 0 | 0 | 0 | 0 |
2012-08-30T18:23:00.000
| 1 | 0.379949 | false | 12,203,149 | 0 | 0 | 1 | 1 |
i'll try to be the most specific possible. I have a Django project and i want to be able to draw a inner map of a certain place. By that, i mean a graphical representation of important objects like the tables positions, bathrooms etc. I'm trying to avoid Flash as an option. Is there an existing API that i can use? Or how can i get this thing working?
I don't mean to draw soemthing in 3d, just a simple view from above, like a blueprint.
Thanks in advance.
|
Cannot find .pyc files for django/apache/mod_wsgi
| 12,204,524 | 1 | 1 | 1,616 | 0 |
python,django,mod-wsgi,pyc
|
By default Apache probably doesn't have any write access to your Django app directory, which is a good thing security-wise.
Python will byte-compile your code once per Apache restart and then cache it in memory.
As it is a long-lived process, that is fine.
Note: if you really, really want those .pyc files, give your Apache user write access to the source directory.
Note 2: this can create a lot of confusion if you start a manage.py test instance, as root, in a source tree shared with Apache: that will create those .pyc files as root, and they will stick around for Apache even after a source code change.
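If you do want the .pyc files pre-generated rather than recompiled on each restart, one option (under the write-access caveats above) is to byte-compile the tree yourself with the standard-library compileall module; the path is a placeholder:

python -m compileall /path/to/your/django/app

Run it as the same user that owns the source after each deploy, so the cached bytecode always matches the code Apache serves.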
| 0 | 0 | 0 | 1 |
2012-08-30T19:46:00.000
| 2 | 1.2 | true | 12,204,330 | 0 | 0 | 1 | 1 |
I am running a web app on python2.7 with mod_wsgi/apache. Everything is fine but I can't find any .pyc files. Do they not get generated with mod_wsgi?
|
java- how to code a process to intercept the output streams of program running on remote machine/know when remote program has halted/completed
| 12,206,913 | 1 | 0 | 142 | 0 |
java,python,ruby,remote-debugging
|
If you're only looking to determine when it has completed (and not looking to really capture all the output, as in your other question) you can simply check for the existence of the process id and, when you fail to find the process id, phone home. You really don't need the logs for that.
| 0 | 1 | 0 | 1 |
2012-08-30T23:10:00.000
| 2 | 0.099668 | false | 12,206,879 | 0 | 0 | 1 | 1 |
I want to run a Java program on a remote machine and intercept its logs. I also want to know when the program has completed execution, and whether execution was successful or was halted due to an error.
Is there any ready-made Java library for this purpose? I would also like to use this program to obtain logs and completion status for remote programs in different languages, like Java, Ruby, Python, etc.
|
How to install Django 1.4 with Python 3.2.3 in Debian?
| 12,208,688 | 3 | 2 | 728 | 0 |
python,django,debian
|
Django does not support Python 3. You will need to install a version of Python 2.x.
| 0 | 1 | 0 | 0 |
2012-08-31T03:54:00.000
| 2 | 0.291313 | false | 12,208,680 | 1 | 0 | 1 | 1 |
I installed Python 3.2.3 on Debian (/usr/local/bin/python3) and installed Django 1.4 in the same directory. But when I try to import django from the Python 3 shell interpreter I get a syntax error! What am I doing wrong?
|
mrjob: Is it possible to run a job flow in a VPC?
| 12,321,461 | 6 | 3 | 587 | 0 |
python,amazon-web-services,amazon-emr,amazon-vpc,mrjob
|
Right now (v 0.3.5) it is not possible. I made a pull request on the GitHub project to add support for the 'api_params' parameter of boto, so you can pass parameters directly to the AWS API and use the 'Instances.Ec2SubnetId' parameter to run a job flow in a VPC subnet.
| 0 | 0 | 0 | 0 |
2012-09-01T03:29:00.000
| 1 | 1 | false | 12,224,671 | 0 | 0 | 1 | 1 |
I'm using mrjob to run some MapReduce tasks on EMR, and I want to run a job flow in a VPC. I looked at the documentation of mrjob and boto, and neither of them seems to support this.
Does anyone know if this is possible to do?
|
deleted versions are still being served on appspot.com
| 12,230,416 | 0 | 1 | 155 | 0 |
google-app-engine,version-control,python-2.7
|
If you go to the App Engine Admin page, you should be able to see all the instances you have running. Kill all the instances for the old versions and it should stop serving.
| 0 | 0 | 0 | 0 |
2012-09-01T17:30:00.000
| 2 | 0 | false | 12,229,881 | 0 | 0 | 1 | 2 |
I'm at a loss: I have deleted some versions of my apps on appspot.com, but even after clearing the cache both in my local browsers and on appspot.com under the PageSpeed service,
the old versions are still accessible. How long before deleted versions are gone?
Also, I have uploaded changes, but they do not show up at all.
So how long before changes show up?
If there is a way to force these to happen I would greatly appreciate it.
Thank you in advance of your assistance in this matter.
|
deleted versions are still being served on appspot.com
| 18,547,441 | 1 | 1 | 155 | 0 |
google-app-engine,version-control,python-2.7
|
Google's GSLB proxy will cache your static files for hours, even if you have disabled your appspot application and then re-enabled it.
My solution is to append a version number to every CSS, JS, and JPG URL.
| 0 | 0 | 0 | 0 |
2012-09-01T17:30:00.000
| 2 | 0.099668 | false | 12,229,881 | 0 | 0 | 1 | 2 |
I'm at a loss: I have deleted some versions of my apps on appspot.com, but even after clearing the cache both in my local browsers and on appspot.com under the PageSpeed service,
the old versions are still accessible. How long before deleted versions are gone?
Also, I have uploaded changes, but they do not show up at all.
So how long before changes show up?
If there is a way to force these to happen I would greatly appreciate it.
Thank you in advance of your assistance in this matter.
|
Django with Tornado
| 12,256,534 | 3 | 1 | 523 | 0 |
python,django,tornado
|
I haven't seen big projects that use Tornado in front of Django. But technically, you can do monkey.patch_all() with gevent, and then Tornado will make sense. It's a really bad solution, but if all you need is an async, unstable Django waiting for you at the corner with a chainsaw to cut your legs off, instead of merely shooting yourself in the foot - then that is yours.
| 0 | 1 | 0 | 0 |
2012-09-01T19:25:00.000
| 2 | 1.2 | true | 12,230,701 | 0 | 0 | 1 | 2 |
I see a lot of people using django with tornado using WSGIContainer. Does this make any sense? Tornado is supposed to be asynchronous, but Django is synchronous. Aren't you just shooting yourself in the foot by doing that?
|
Django with Tornado
| 12,254,758 | 0 | 1 | 523 | 0 |
python,django,tornado
|
Django comes with a debug server, so I guess that when using Tornado with Django, Tornado plays the role that the mix of Apache + mod_wsgi usually would.
| 0 | 1 | 0 | 0 |
2012-09-01T19:25:00.000
| 2 | 0 | false | 12,230,701 | 0 | 0 | 1 | 2 |
I see a lot of people using django with tornado using WSGIContainer. Does this make any sense? Tornado is supposed to be asynchronous, but Django is synchronous. Aren't you just shooting yourself in the foot by doing that?
|
Django: admin login page redirecting to itself
| 12,232,286 | 0 | 0 | 628 | 0 |
python,django,django-admin
|
Well, another one! There was a misspelling in the settings.py file, in the TEMPLATE_DIRS entry. This appeared to be the source of the problem.
| 0 | 0 | 0 | 0 |
2012-09-01T20:16:00.000
| 1 | 1.2 | true | 12,231,053 | 0 | 0 | 1 | 1 |
I am having a weird problem. I have a standard django 1.4 website with an admin section.
Locally everything is working fine, but when I deploy online, after login into the admin section the first time, it works. I logout then login again, then I get redirected to the same login page!
If I login with incorrect credentials then sure enough the correct errors are shown. If I restart the apache production server then login/logout to the admin section works only for one time then every login from then on produces the same problem.
Has anyone had this before? Is it something to do with cookies or maybe caching problems?
Note: The app has only one URL, redirecting to the admin; there are no other views. Also, production is using HTTP, not HTTPS.
|
Python Parent Child Relationships in Google App Engine Datastore
| 12,240,201 | 2 | 0 | 299 | 0 |
python,google-app-engine,inheritance,data-modeling
|
What you're talking about is inheritance hierarchies, but App Engine keys provide for object hierarchies. An example of the former is "a banana is a fruit", while an example of the latter is "a car has a steering wheel". Parent properties are the wrong thing to use here; you want to use PolyModel.
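A minimal sketch of the fruit hierarchy with PolyModel (the old db API); the Basket design is just one assumption about how to reference fruit of any subtype:

from google.appengine.ext import db
from google.appengine.ext.db import polymodel

class Fruit(polymodel.PolyModel):
    name = db.StringProperty()

class TreeFruit(Fruit):
    pass

class Apple(TreeFruit):
    pass

class GrannySmith(Apple):
    pass

class Basket(db.Model):
    # keys of Fruit entities of any subclass
    fruits = db.ListProperty(db.Key)

A query like Fruit.all() returns apples, pears, granny smiths, and so on, so a Basket can hold entities from any level of the hierarchy.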
| 0 | 1 | 0 | 0 |
2012-09-02T03:15:00.000
| 1 | 1.2 | true | 12,233,151 | 0 | 0 | 1 | 1 |
I am trying to model a parent hierarchy relationship in Google App Engine using Python. For example, I would like to model fruit.
So the root would be fruit, then a child of fruit would be vine-based, tree-based. Then for example children of tree-based would be apple, pear, banana, etc. Then as children of apple, I would like to add macintosh, golden delicious, granny smith, etc.
I am trying to figure out the easiest way to model this such that I can put into another entity of type basket an entity of type fruit, or of type granny smith.
Any help would be greatly appreciated!
Thanks
Jon
|
What are some ways to work with Amazon S3 not offering read-after-write consistency in US Standard?
| 12,242,133 | 1 | 0 | 323 | 1 |
python,amazon-s3,amazon-web-services
|
I'd save time and not do anything. The wait times are pretty fast.
If you wanted to stall the end-user, you could just show a 'success' page without the image. If the image isn't available, most regular users will just hit reload.
If you really felt like you had to... I'd probably go with a javascript solution like this:
have a 'timestamp uploaded' column in your data store
if the upload time is under 1 minute, instead of rendering an <img src> tag, render some JavaScript that polls the S3 bucket at 15s intervals
Again, chances are most users will never experience this - and if they do, they won't really care. The UX expectations for user-generated content are pretty low (just look at Facebook); if this is an admin backend for an 'enterprise' service where it would improve the workflow, you may want to invest time in the 'optimal' solution. For a public-facing website, though, I'd just forget about it.
| 0 | 0 | 1 | 0 |
2012-09-03T04:22:00.000
| 1 | 0.197375 | false | 12,241,945 | 0 | 0 | 1 | 1 |
Are there any generally accepted practices to get around this? Specifically, for user-submitted images uploaded to a web service. My application is running in Python.
Some hacked solutions that came to mind:
Display the uploaded image from a local directory until the S3 image is ready, then "hand it off" and update the database to reflect the change.
Display a "waiting" progress indicator as a background gif and the image will just appear when it's ready (w/ JavaScript)
|
What algorithms i can use from machine learning or Artificial intelligence which i can show via web site
| 12,243,670 | 1 | 0 | 1,093 | 0 |
python,web,machine-learning,artificial-intelligence
|
I assume you are mostly concerned with a general approach to implementing AI in a web context, and not with the details of the AI algorithms themselves. Any computable algorithm can be implemented in any Turing-complete language (i.e. all modern programming languages). There are no special limitations on what you can do on the web; it's just a matter of representation, and of keeping track of session-specific data and shared data. Also, there is no need to shy away from "calculation" and "graph based" algorithms; most AI algorithms will be either one or the other (or indeed both) - and that's part of the fun.
For example, as an overall approach for a neural net, you could:
Implement a standard neural network using python classes
Possibly train the set with historical data
Load the state of the net on each request (i.e. from a pickle)
Feed a part of the request string (i.e. a product-ID) to the net, and output the result (i.e. a weighted set of other products, like "users who clicked this, also clicked this")
Also, store the relevant part of the request (i.e. the product-ID) in a session variable (i.e. "previousProduct"). When a new request (i.e. for another product) comes in from the same user, strengthen/create the connection between the first product and the next.
Save the state of the net between each request (i.e. back to pickle)
That's just one, very general example. But keep in mind - there is nothing special about web-programming in this context, except keeping track of session-specific data, and shared data.
| 0 | 0 | 0 | 1 |
2012-09-03T04:37:00.000
| 1 | 1.2 | true | 12,242,054 | 0 | 1 | 1 | 1 |
I am a die-hard fan of artificial intelligence and machine learning. I don't know much about them, but I am ready to learn. I am currently a web programmer in PHP, and I am learning Python/Django for a website.
Now, as the AI field is very wide and there are countless algorithms, I don't know where to start.
But eventually my main target is to use whichever algorithms - like genetic algorithms, neural networks, optimization - can be programmed into a web application to show some stuff.
For Example : Recommendation of items in amazon.com
Now what I want is that in my personal site I have the demo of each algorithm where if I click run and I can show someone what this algorithm can do.
So can anyone please guide which algorithms should I study for web based applications.
I see a lot of examples in the scikit Python library, but they are very calculation- and graph-based.
I don't think I can use them from a web point of view.
Any ideas on how I should proceed?
|
django-celery in multiple server production environment
| 12,246,221 | 6 | 5 | 1,600 | 0 |
python,django,rabbitmq,celery,django-celery
|
It really depends on the size of the project; ideally you have RabbitMQ, Celery workers, and web workers running on different machines.
You need only one RabbitMQ instance and eventually multiple queue workers (bigger queues need more workers, of course).
You don't need one Celery worker per web worker: the web workers publish tasks to the broker and the workers then get them from there. In fact a web worker does not care about the number of workers connected to the broker, as it only communicates with the broker.
Of course, if you are starting a project it makes sense to keep everything on the same hardware, keep the budget low, and wait for the traffic and the money to flow :)
You want to have the same code on every running instance of your app, no matter whether they are Celery workers, web servers, or whatever.
| 0 | 1 | 0 | 0 |
2012-09-03T10:23:00.000
| 1 | 1 | false | 12,245,999 | 0 | 0 | 1 | 1 |
I'm trying to deploy a Django project using django-celery, but I have these unsolved questions:
Should I run one celeryd for each web server?
Should I run just one RabbitMQ server, on another machine (not) running celeryd there, accesible to all my web servers? or RabbitMQ must be run also on each of the web servers?
How can I use periodic tasks if the code is the same in all web servers?
Thank for your answers.
|
Queueing HTTP, emails, and TCP messages in Python
| 12,258,879 | 1 | 0 | 169 | 0 |
python,rabbitmq,message-queue,django-celery
|
Try the Pika client or the Kombu client. Celery is a whole framework for job queues, which you may not need - but it's worth taking a look if you want to understand a queue use case.
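A minimal Pika sketch of publishing one of your outbound messages to a durable RabbitMQ queue; the queue name and message body are assumptions:

import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='outbound_messages', durable=True)  # survives broker restarts
channel.basic_publish(
    exchange='',
    routing_key='outbound_messages',
    body=json.dumps({'type': 'smtp', 'to': 'user@example.com', 'payload': '...'}),
)
connection.close()

A separate consumer process then pops messages off the queue and attempts delivery, retrying whenever the target system is unavailable.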
| 0 | 0 | 0 | 1 |
2012-09-03T19:23:00.000
| 1 | 1.2 | true | 12,253,063 | 0 | 0 | 1 | 1 |
I have a system that sends different types of messages (HTTP, SMTP, POP, IMAP, and regular TCP) to different systems, and I need to queue all of those messages in my system, in case of other systems in-availability.
I'm a bit new to the message queueing concept. so I don't know the best python library that I shall go for.
Is Django-celery (and the underling components - RabbitMQ, MySql, django, apache) is the best choice for me? Will this library cover all my needs?
|
What's the preferred method for throttle websocket connections?
| 12,266,453 | 0 | 1 | 1,564 | 0 |
python,websocket,eventlet
|
I think the most efficient way to do this is for the client apps to tell the server what they are displaying. The server keeps track of this and sends changes only for the objects currently viewed, and only to the clients concerned.
A way to do this is by using a "Who Watch What" list of items.
Items are indexed in two ways: from the client ID, and with an isVievedBy chain list inside each data object (I know it doesn't look clean to mix it with the data, but it is very efficient).
You'll also need a lastupdate timestamp for each data object.
When a client changes view, it sends an "I'm viewing this, which I have at version -timestamp-" message to the server. The server checks the timestamp and sends back the object if required. It also removes obsolete "Who Watch What" items (accessing them by client ID) and creates the new ones.
When a data object is updated, loop through the isVievedBy chain list of this object to know which clients should be updated. Put this in message buffers for each client and flush those buffers manually (in case you update several items at the same time, this will send one big message).
This is a lot of work, but your app will be efficient and scale gracefully, even with lots of objects and lots of clients. It sends only useful messages, and it is very unlikely that there will be too many of them.
For your onMessage problem, I would store the data in a queue and process it asynchronously.
| 0 | 0 | 0 | 0 |
2012-09-04T14:24:00.000
| 1 | 0 | false | 12,265,561 | 0 | 0 | 1 | 1 |
I have a web app where I am streaming model changes to a backbone collection in a chrome client. There a a few backbone views that may or may not render parts of the page depending on the type of update and what is being looked at. For example some changes to a model result in the view for the collection being re-rendered and there may or may not be a detail panel view open for the model that's being updated. These model changes can happen very fast as the server side workflow involves quite verbose and rapid changes to the model.
Here's the problem: I'm getting a large number of errno 32 (broken pipe) messages in the webserver's process when sending messages to the client, although the websocket connection is still up and its readyState is still 1 (OPEN).
What I suspect is happening is that the various views haven't finished rendering in the onmessage callback by the time the next message is coming in. After I get these tracebacks in stdout the websocket connection can still work and the UI will still update.
If I put eventlet.sleep(0.02) in the loop that reads model changes off the message queue and sends them on the websocket the broken pipe messages go away, however this isn't a real solution and feels like a nasty hack.
Has anyone had similar problems with websocket's onmessage function trying to do too much work and still being busy when the next message comes in? Does anyone have a solution?
|
How to expose Django/Python application to the web?
| 12,274,626 | 2 | 3 | 1,111 | 0 |
python,django,web
|
The easiest way to do that is to find a host online (such as pythonanywhere.com) and host your app there, following their instructions; then your app will be online. They handle most of the issues with piggybacking a Django project on a server.
| 0 | 0 | 0 | 0 |
2012-09-05T04:44:00.000
| 4 | 0.099668 | false | 12,274,600 | 0 | 0 | 1 | 1 |
I developed a simple Django/Python poll application (following the step-by-step tutorial from the official Django documentation). I used the built-in server to test the application. Now I want to host my application on the web. I heard WSGI is the best way to expose a Python/Django application to the web. What would be the best way to expose Python/Django code to the web? Thank you
P.S.: I already have a domain name and shared web hosting from justhost.com. While chatting with their support, they told me that they support WSGI.
|
What does this console message mean in Google App Engine
| 21,434,751 | 1 | 4 | 177 | 0 |
python,google-app-engine,app-engine-ndb
|
This seems to happen if you have async operations in progress before you enter the ndb.toplevel function.
My guess is that this warns you that these async operations will not be waited for at the end of the request. This could be an issue if you expected them to be included in your "toplevel" function and they are tasklets waiting for an operation to complete before executing some more.
| 0 | 1 | 0 | 0 |
2012-09-05T17:45:00.000
| 1 | 0.197375 | false | 12,286,987 | 0 | 0 | 1 | 1 |
I'm using Google App Engine NDB with a lot of async operations and yields. The console shows me this message:
tasklets.py:119] all_pending: clear set([Future 106470190 created by
dispatch(webapp2.py:570) for tasklet post(sync.py:387); pending])
Is this a warning of some sort? Should it be ignored? It doesn't cause any unusual behavior.
(sync.py is one of my files, but the other stuff aren't mine)
|
Passing variables through Selenium send.keys instead of strings
| 60,626,053 | 0 | 7 | 9,992 | 0 |
python,selenium
|
I think username might be a variable name already used in the Python library you are using. Try calling it something else, like Username1, and see if it works.
| 0 | 0 | 1 | 0 |
2012-09-05T21:00:00.000
| 5 | 0 | false | 12,289,700 | 0 | 0 | 1 | 2 |
I'm trying to use Selenium for some app testing, and I need it to plug a variable in when filling a form instead of a hardcoded string. IE:
this works
name_element.send_keys("John Doe")
but this doesn't:
name_element.send_keys(username)
Does anyone know how I can accomplish this? Pretty big Python noob, but used Google extensively to try and find out.
|
Passing variables through Selenium send.keys instead of strings
| 47,662,083 | 1 | 7 | 9,992 | 0 |
python,selenium
|
Try this.
username = r'John Doe'
name_element.send_keys(username)
I was able to pass the string without casting it just fine in my test.
| 0 | 0 | 1 | 0 |
2012-09-05T21:00:00.000
| 5 | 0.039979 | false | 12,289,700 | 0 | 0 | 1 | 2 |
I'm trying to use Selenium for some app testing, and I need it to plug a variable in when filling a form instead of a hardcoded string. IE:
this works
name_element.send_keys("John Doe")
but this doesn't:
name_element.send_keys(username)
Does anyone know how I can accomplish this? Pretty big Python noob, but used Google extensively to try and find out.
|
How to execute code only on test failures with python unittest2?
| 12,290,574 | 2 | 12 | 1,958 | 0 |
python,unit-testing,selenium-webdriver
|
Override fail() to generate the screenshot and then call TestCase.fail(self)?
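A hedged sketch of that idea; it assumes a self.driver WebDriver attribute set up in setUp(), and relies on the fact that most assert* helpers funnel through fail() (unexpected errors and exceptions won't be caught this way):

import unittest2

class ScreenshotOnFailureTestCase(unittest2.TestCase):
    def fail(self, msg=None):
        # self.id() gives a unique, readable name per test method
        self.driver.save_screenshot('/tmp/%s.png' % self.id())
        super(ScreenshotOnFailureTestCase, self).fail(msg)

Test classes then inherit from ScreenshotOnFailureTestCase instead of unittest2.TestCase, so only failing tests leave screenshots behind.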
| 0 | 0 | 0 | 1 |
2012-09-05T21:53:00.000
| 5 | 0.07983 | false | 12,290,336 | 0 | 0 | 1 | 1 |
I have some class-based unit tests running in python's unittest2 framework. We're using Selenium WebDriver, which has a convenient save_screenshot() method. I'd like to grab a screenshot in tearDown() for every test failure, to reduce the time spent debugging why a test failed.
However, I can't find any way to run code on test failures only. tearDown() is called regardless of whether the test succeeds, and I don't want to clutter our filesystem with hundreds of browser screenshots for tests that succeeded.
How would you approach this?
|
How do you add the scrapy framework to portable python?
| 20,853,931 | 0 | 1 | 658 | 0 |
python,scrapy,portable-python
|
You can download the Scrapy package, extract it, and copy the scrapy folder and its contents to c:\Portable Python 2.7.5.1\App\Lib\site-packages\ - then you'll have Scrapy in your portable Python.
I solved a similar problem with SciKit this way.
| 0 | 0 | 0 | 0 |
2012-09-05T22:29:00.000
| 1 | 0 | false | 12,290,694 | 0 | 0 | 1 | 1 |
I need to create a portable python install on a usb but also install the scrapy framework on it, so I can work on and run my spiders on any computer.
Has anyone else done this? Is it even possible?
If so how do you add scrapy onto the portable python usb and then run the spiders?
Thanks
|
How do I go about storing session objects?
| 12,320,928 | 4 | 1 | 241 | 1 |
python,session,sqlalchemy,session-state,pyramid
|
Unless you're being really careful, serializing the entire object into redis is going to cause problems. You're effectively treating it like a cache, so you have to be careful that those values are expired if the user changes something about themselves. You also have to make sure that all of the values are serializable (likely via pickle). You didn't specify whether this is a premature optimization, so I'm going to say that it probably is, and recommend that you just track the user id and reload his information from your database when you need it.
| 0 | 0 | 0 | 0 |
2012-09-06T02:42:00.000
| 2 | 1.2 | true | 12,292,277 | 0 | 0 | 1 | 1 |
I have a few model classes, such as a user class, which are passed a dictionary and wrap it, providing various methods, some of which communicate with the database when a value needs to be changed. The dictionary itself is made from an SQLAlchemy RowProxy, so all its keys are actually attribute names taken directly from the SQL user table. (Attributes include user_id, username, email, passwd, etc.)
If a user is logged in, should I simply save this dictionary to a redis key-value store, and simply create a new user object when needed and pass it this dictionary from redis (which should be faster than only saving a user id in a session and loading the values again from the db based on that user_id)?
Or should I somehow serialize the entire object and save it in redis? I'd appreciate any alternate methods of managing model and session objects that any of you feel would be better as well.
In case anyone is wondering I'm only using the sqlalchemy expression language, and not the orm. I'm using the model classes as interfaces, and coding against those.
|
Export a website to an XML Page
| 12,295,930 | 4 | 1 | 2,036 | 0 |
php,javascript,python,xml,perl
|
You could try to reverse-engineer the JavaScript code. Maybe it's making an ajax request to a service that delivers the data as JSON. Use your browser's developer tools (network tab) to see what's going on.
| 0 | 0 | 1 | 0 |
2012-09-06T08:20:00.000
| 1 | 1.2 | true | 12,295,834 | 0 | 0 | 1 | 1 |
I need to export a website (.html page) to an XML file. The website contains a table with some data which I require for use in my web project. The table in the website is built with some JavaScript, so I cannot get the data by getting the page source. Please tell me how I can export the table in the website to an XML file using PHP/Python/JavaScript/Perl.
|
How to convert all HTML tags and attributes in a string to lowercase in python?
| 12,299,906 | 2 | 0 | 807 | 0 |
python,html
|
I won't go so far as to say it's impossible, but this is an extremely tall order. The reason is that an HTML parser will usually not attempt to preserve HTML-irrelevant characters like line endings, but anything other than an HTML parser will not be very good at telling what is or isn't a tag according to the strict definitions of the format.
If you really need to do this and do this well, I would look at dissecting an existing python HTML parser and modifying it to your needs. This is a fairly advanced programming project. It would be better to seriously consider why you need to do this and if this is strictly the right thing to do.
Edit: An additional problem is that it's not really possible to parse HTML without checking the validity of the HTML and either reforming it to be valid, or choking on invalid HTML. So if you potentially have validity problems with your HTML, your result is undefined. For instance, if the input includes a grossly invalid tag like <font="courier">, would that be considered an HTML tag for the purposes of this exercise, or just a string of parser-killing characters? Likewise with a valid-looking tag in the wrong place in the document.
| 0 | 0 | 0 | 0 |
2012-09-06T11:58:00.000
| 2 | 0.197375 | false | 12,299,602 | 0 | 0 | 1 | 1 |
How can I convert all HTML tags and attributes in a string to lowercase in python? Nothing else should be changed, e.g. attribute values should not be changed, no indentation, line wrapping etc.
Sorry if it's too obvious :)
|
How to expand input buffer size of pyserial
| 45,513,398 | 0 | 10 | 21,578 | 0 |
python,buffer,pyserial
|
pySerial uses the native OS drivers for serial receiving. In the case of Windows, the size of the input buffer is determined by the device driver.
You may be able to increase the size in your Device Manager settings if the driver allows it, but ultimately you just need to read the data in fast enough.
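Since the driver's buffer is limited, the practical fix is to drain it in a loop rather than with a single read; a sketch, where the port name, baud rate, command, and framing (here a fixed expected length) are all assumptions:

import serial

ser = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)
command = b'AT+EXAMPLE\r'          # placeholder for whatever you send to the phone
expected_length = 50000            # placeholder for the known response size
ser.write(command)
buf = b''
while len(buf) < expected_length:  # or loop until a known terminator appears
    chunk = ser.read(max(1, ser.inWaiting()))
    if not chunk:                  # timeout hit: the device stopped sending
        break
    buf += chunk

ser.read(n) blocks only up to the timeout, so the loop keeps consuming data as it arrives instead of snapshotting whatever happens to be buffered at one instant.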
| 0 | 0 | 0 | 1 |
2012-09-06T14:16:00.000
| 4 | 0 | false | 12,302,155 | 0 | 0 | 1 | 2 |
I want to communicate with a phone via the serial port. After writing a command to the phone, I used ser.read(ser.inWaiting()) to get its response, but I always got a total of 1020 bytes of characters, while the desired response is supposed to be over 50 KB.
I have tried ser.read(50000), but then the interpreter hangs.
How can I expand the input buffer to get the whole response at once?
|
How to expand input buffer size of pyserial
| 12,920,183 | 1 | 10 | 21,578 | 0 |
python,buffer,pyserial
|
I'm guessing that you are reading 1020 bytes because that is all there is in the buffer, which is what ser.inWaiting() is returning. Depending on the baud rate, 50 KB may take a while to transfer, or the phone is expecting something different from you. Handshaking?
Inspect the value of ser.inWaiting(), and then the contents of what you are receiving, for hints.
| 0 | 0 | 0 | 1 |
2012-09-06T14:16:00.000
| 4 | 0.049958 | false | 12,302,155 | 0 | 0 | 1 | 2 |
I want to communicate with a phone via the serial port. After writing a command to the phone, I used ser.read(ser.inWaiting()) to get its response, but I always got a total of 1020 bytes of characters, while the desired response is supposed to be over 50 KB.
I have tried ser.read(50000), but then the interpreter hangs.
How can I expand the input buffer to get the whole response at once?
|
Django: OAuth token storage and renewal
| 12,322,503 | 1 | 1 | 343 | 0 |
python,django,oauth
|
I ended up creating a table with a single row, updated to contain the latest valid token.
Main reason: I know that wherever I deploy this application, and no matter how many processes across how many machines are serving, the database will work as storage. It's not that much extra code either, and goes well with Django's application packaging.
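A bare-bones sketch of that single-row approach; the field names, the refresh helper, and the pk=1 convention are all assumptions, not part of any OAuth library:

from datetime import datetime
from django.db import models

class ApiToken(models.Model):
    access_token = models.CharField(max_length=255)
    expires_at = models.DateTimeField()

def refresh_token():
    # hypothetical: call the provider's OAuth endpoint, then update the pk=1 row
    raise NotImplementedError

def get_valid_token():
    try:
        token = ApiToken.objects.get(pk=1)  # the single row
    except ApiToken.DoesNotExist:
        return refresh_token()
    if token.expires_at <= datetime.utcnow():
        return refresh_token()
    return token.access_token

Every outbound API call goes through get_valid_token(), so requesting, reusing, and renewing the token all live in one place on the Django backend, never in the user's browser.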
| 0 | 0 | 0 | 0 |
2012-09-06T14:30:00.000
| 1 | 1.2 | true | 12,302,385 | 0 | 0 | 1 | 1 |
I'm running a Django app that needs to interact with an external API to make requests in behalf of its users.
Before making any calls, I have to obtain an access token through an OAuth-like interface. This token is mine, my users won't have one each. I have tested the entry points and methods with curl, and everything seems to work fine, so I'm ready to integrate the service.
How should I go about requesting, storing, reusing and renewing this token when it expires? Also, which parts of the process should run on the client's browser, and which parts on the Django backend?
|
have bug with aeroo reports in openerp version 6.1
| 13,524,808 | 0 | 2 | 629 | 0 |
python,openerp
|
All it means is that the directive shown has a syntax error. If you look at it, it is fairly clear: one set of parentheses is not closed. The other common cause is a missing closing > tag.
| 0 | 0 | 0 | 0 |
2012-09-07T14:19:00.000
| 1 | 0 | false | 12,319,957 | 0 | 0 | 1 | 1 |
I got the following error after I hit the print button; the report was fully functional in OpenERP 6.1 with Aeroo:
Aeroo Reports: Error while generating the report.
unexpected EOF while parsing in expression "__filter(get_label("report.account.account.balance","chart_account_id")" of "replace" directive (<string>, line 1) (<string>, line 1)
unexpected EOF while parsing in expression "__filter(get_label("report.account.account.balance","chart_account_id")" of "replace" directive (<string>, line 1) (<string>, line 1)
For more reference inspect error logs.
(<type 'exceptions.Exception'>, Exception(u'Aeroo Reports: Error while generating the report.', TemplateSyntaxError('unexpected EOF while parsing in expression "__filter(get_label("report.account.account.balance","chart_account_id")" of "replace" directive (<string>, line 1) (<string>, line 1)',), 'unexpected EOF while parsing in expression "__filter(get_label("report.account.account.balance","chart_account_id")" of "replace" directive (<string>, line 1) (<string>, line 1)', u'For more reference inspect error logs.'), <traceback object>)
Please help me..
Thanks
|
Find out what users DIDN'T submit in input field.
| 12,335,176 | 1 | 0 | 84 | 0 |
jquery,python,google-app-engine,autocomplete,submit
|
You could add a keystroke event listener and simply listen to the backspace keypress event.
When that happens, save and ajax the form field value to your server for storage.
EDIT: On the basis of Scott Selby's answer:
To cover all bases:
Event listener for the backspace keypress
Event listener for selection start and end. Save and ajax the selected part if the form field has changed on the next keypress.
Catch the onunload event; save the form data if it exists.
That way you always get what the users deleted. This also helps make your search query window "typo"-proof, because you'll be able to spot "common" mistakes over time.
It's also a good way to collect data on what products or services your potential clients expect from you.
| 0 | 0 | 0 | 0 |
2012-09-08T22:56:00.000
| 1 | 0.197375 | false | 12,335,127 | 0 | 0 | 1 | 1 |
I'm using Jquery autocomplete on an input field. So, as users type out a word, they can see a drop-down list showing fewer and fewer choices to select from.
The thing is, sometimes the user's input will not be among the choices in that list. I believe that when users see that list go empty, they will give up on their original search term and either enter a different one, or simply delete it and leave the field blank.
When that happens, I want to know what was the original search term entered?
(Because after collecting that info over time, I plan to add the most popular ones to the list, for future users to choose from)
So my question is, what is the best way to find out what users typed in, but didn't submit?
My idea so far is to listen for when the user takes the focus off the input field, and get its value on change. That will tell me about users who deleted and left it blank, but it still doesn't tell me anything about the users who deleted and then entered a new term.
Is there a better solution?
PS - My site is on GAE, so if using python opens any relevant doors... there's that too.
|
How secure is Django Admin interface
| 12,337,353 | -2 | 0 | 2,419 | 0 |
python,django,web-applications,django-admin
|
How secure is a Google login? ;) You can't tell, as you can't look behind the scenes (of Google's login, of course). I would guess that Django's admin code is pretty safe, as it's used in lots of production systems. Still, besides the code, it also matters how you set it up. In the end, it depends on the level of security you need.
| 0 | 0 | 0 | 0 |
2012-09-09T07:36:00.000
| 2 | -0.197375 | false | 12,337,318 | 0 | 0 | 1 | 2 |
I have a couple of apps on the internet and want to serve static files for those apps using another Django app. I simply can't afford to use Amazon Web Services for my pet projects.
So, I want to setup an Admin interface where I can manage static files easily. The following are the actions I am thinking to include in admin.
Upload, delete static files
Grouping static files (creating new folders, adding new/ existing files to it); I am not sure If it is possible.
checking my models
Thus, I would like to know how secure is Django-Admin interface!
How secure it is when compared to our famous sites like Yahoo, Facebook, Google Login's.. (at least in terms of "cracking".. is django admin can be cracked easily?)
|
How secure is Django Admin interface
| 12,337,841 | 0 | 0 | 2,419 | 0 |
python,django,web-applications,django-admin
|
Besides the fact that serving static files through Django is considered a bad idea, the Django admin itself is pretty safe. You can take additional measures by securing it via .htaccess and forcing HTTPS access to it. You could also restrict access to a certain IP. If the admin is exposed to the whole internet, you should at least be careful when choosing credentials. Since I don't know how secure Google and Yahoo really are, I can't compare to them.
| 0 | 0 | 0 | 0 |
2012-09-09T07:36:00.000
| 2 | 0 | false | 12,337,318 | 0 | 0 | 1 | 2 |
I have a couple of apps on the internet and want to serve static files for those apps using another Django app. I simply can't afford to use Amazon Web Services for my pet projects.
So, I want to setup an Admin interface where I can manage static files easily. The following are the actions I am thinking to include in admin.
Upload, delete static files
Grouping static files (creating new folders, adding new/ existing files to it); I am not sure If it is possible.
checking my models
Thus, I would like to know how secure the Django admin interface is!
How secure is it compared to logins on famous sites like Yahoo, Facebook, and Google? (At least in terms of "cracking": can the Django admin be cracked easily?)
|
Google Appengine not signing emails with DKIM code
| 12,340,517 | 1 | 0 | 441 | 0 |
python,google-app-engine,dkim
|
How long ago did you create your DNS TXT record? Since DKIM is a DNS-controlled service, and DNS often takes days to propagate across the Internet, you may need to wait for that to happen before Google will recognize it as valid.
| 0 | 1 | 0 | 0 |
2012-09-09T15:39:00.000
| 1 | 0.197375 | false | 12,340,456 | 0 | 0 | 1 | 1 |
I'm confused about why emails sent by my App Engine app are not being signed with DKIM.
Enabled DKIM signing on Google Apps dashboard. Confirmed that my domain is "Authenticating email"
Have setup DNS TXT record using the values indicated in the apps domain. Have confirmed, using 3rd party validation tool, that the DNS is correct. Also, I assume that having a green-light indicator for authenticating email in my Google Apps domain means this record has been validated by Google Apps.
Email-send is being triggered by a click by a user browsing my application via my custom url. The custom url matches the domain for the return address of the sender. The sender return address is an owner of the account.
As far as I know, these are the requirements for emails to be signed automatically. Yet, alas, they are not being signed. Any help or ideas will be greatly appreciated. Thanks -
|
How can one load an AppEngine cloud storage backup to a local development server?
| 37,780,181 | 0 | 3 | 1,341 | 0 |
python,google-app-engine
|
For those using windows change the open line to:
raw = open('path_to_datastore_export_file', 'rb')
The file must be opened in binary mode!
| 0 | 1 | 0 | 0 |
2012-09-09T15:41:00.000
| 3 | 0 | false | 12,340,468 | 0 | 0 | 1 | 1 |
I'm experimenting with the Google cloud storage backup feature for an application.
After downloading the backup files using gsutil, how can they be loaded into a local development server?
Is there a parser available for these formats (eg, protocol buffers)?
|
Jinja 2 safe keyword
| 43,586,175 | 8 | 30 | 32,304 | 0 |
python,template-engine,jinja2
|
For anyone coming here looking to use the safe filter programmatically: wrap the string in a markupsafe.Markup object; markupsafe is a package Jinja2 depends on.
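A small sketch of the difference, using jinja2.Template with autoescaping turned on; the HTML string is just an example:

from jinja2 import Template
from markupsafe import Markup

tmpl = Template('{{ content }}', autoescape=True)

print(tmpl.render(content='<em>hi</em>'))          # escaped: &lt;em&gt;hi&lt;/em&gt;
print(tmpl.render(content=Markup('<em>hi</em>')))  # left intact: <em>hi</em>

Markup marks a string as already safe, which is exactly what the | safe filter does inside a template.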
| 0 | 0 | 0 | 0 |
2012-09-09T17:58:00.000
| 4 | 1 | false | 12,341,496 | 0 | 0 | 1 | 1 |
I have a little problem understanding what an expression like {{ something.render() | safe }} does.
From what I have seen, without the safe keyword it outputs the entire HTML document as text, not just the true content.
What I would like to know is what it actually does and how it functions.
|
Error when installing mod_wsgi
| 12,444,455 | 0 | 0 | 314 | 0 |
python,python-2.7,mod-wsgi
|
You have Apache 2.2 core package installed, but possibly have the devel package for Apache 1.3 instead of that for 2.2 installed. This isn't certain though, as for some Apache distributions, such as when compiled from source code, 'apxs' is still called 'apxs'. It is only certain Linux distros that have changed the name of 'apxs' in Apache 2.2 distros to be 'apxs2'. This is why the mod_wsgi configure script checks for 'apxs2' as well as 'apxs'.
So, do the actual make and see if that fails before assuming you have the wrong apxs.
| 0 | 0 | 0 | 1 |
2012-09-09T18:12:00.000
| 2 | 0 | false | 12,341,610 | 0 | 0 | 1 | 2 |
When installing mod_wsgi I get stuck after running ./configure.
Apparently I am missing the apxs2
Here is the result:
checking for apxs2... no
checking for apxs... /usr/sbin/apxs
checking Apache version... 2.2.22
checking for python... /usr/bin/python
configure: creating ./config.status
config.status: creating Makefile
What I am not sure of now is how I get apxs2 working and installed. Any solution anyone? This is so that I can later on install Django and finally get a Python/Django environment up and running on my VPS.
|
Error when installing mod_wsgi
| 12,341,622 | 2 | 0 | 314 | 0 |
python,python-2.7,mod-wsgi
|
checking for apxs... /usr/sbin/apxs
...
config.status: creating Makefile
It succeeded. Go on to the next step.
| 0 | 0 | 0 | 1 |
2012-09-09T18:12:00.000
| 2 | 0.197375 | false | 12,341,610 | 0 | 0 | 1 | 2 |
When installing mod_wsgi I get stuck after running ./configure.
Apparently I am missing the apxs2
Here is the result:
checking for apxs2... no
checking for apxs... /usr/sbin/apxs
checking Apache version... 2.2.22
checking for python... /usr/bin/python
configure: creating ./config.status
config.status: creating Makefile
What I am not sure of now is how I get apxs2 working and installed. Any solution anyone? This is so that I can later on install Django and finally get a Python/Django environment up and running on my VPS.
|
pyqt4, function Mute / Un mute microphone and also speakers [PJSIP]
| 12,426,092 | 1 | 0 | 1,239 | 0 |
python,pyqt4,pjsip
|
Answering my own question :-)
In my case it looked like this:

# call window
################
self.MuteMic = False
self.MuteSpeaker = False
################
# btn signals
self.connect(self.MuteUnmuteMicButton, QtCore.SIGNAL("clicked()"), self.MuteUnmuteMic)
self.connect(self.MuteUnmuteSpeakerButton, QtCore.SIGNAL("clicked()"), self.MuteUnmuteSpeaker)

def MuteUnmuteMic(self):
    try:
        if self.MuteMic:
            self.MuteMic = False
            self.parent().unmute_mic()
        else:
            self.MuteMic = True
            self.parent().mute_mic()
    except:
        debug("error while toggling the microphone (call window).")

def MuteUnmuteSpeaker(self):
    try:
        if self.MuteSpeaker:
            self.MuteSpeaker = False
            self.parent().unmute_speaker()
        else:
            self.MuteSpeaker = True
            self.parent().mute_speaker()
    except:
        debug("error while toggling the speakers (call window).")

# other code
# ----------------------------------------
# core of the app, which imports the PJSUA lib
def mute_mic(self):
    # this is the relevant part; my app talks to PJSUA through "self.lib"
    self.lib.conf_set_rx_level(0, 0)
    debug("microphone mute function called")

def unmute_mic(self):
    self.lib.conf_set_rx_level(0, 1)
    debug("microphone unmute function called")

def mute_speaker(self):
    self.lib.conf_set_tx_level(0, 0)
    debug("speakers mute function called")

def unmute_speaker(self):
    self.lib.conf_set_tx_level(0, 1)
    debug("speakers unmute function called")
| 0 | 0 | 0 | 0 |
2012-09-09T20:44:00.000
| 1 | 1.2 | true | 12,342,807 | 0 | 0 | 1 | 1 |
Hello friends and colleagues,
I am trying to write mute/unmute functions for the microphone and the speakers in my softphone program, built on PyQt4 and using the PJSIP library.
I found this in the PJSIP code:
def conf_set_tx_level(self, slot, level):
    """Adjust the signal level to be transmitted from the bridge to
    the specified port by making it louder or quieter.

    Keyword arguments:
    slot  -- integer to identify the conference slot number.
    level -- Signal level adjustment. Value 1.0 means no level
             adjustment, while value 0 means to mute the port.

    """
    lck = self.auto_lock()
    err = _pjsua.conf_set_tx_level(slot, level)
    self._err_check("conf_set_tx_level()", self, err)

def conf_set_rx_level(self, slot, level):
    """Adjust the signal level to be received from the specified port
    (to the bridge) by making it louder or quieter.

    Keyword arguments:
    slot  -- integer to identify the conference slot number.
    level -- Signal level adjustment. Value 1.0 means no level
             adjustment, while value 0 means to mute the port.

    """
    lck = self.auto_lock()
    err = _pjsua.conf_set_rx_level(slot, level)
    self._err_check("conf_set_rx_level()", self, err)
Well, I understand I need to pass a level of 0, but how do I call this?
And how do I get the sound device and microphone working again afterwards?
Maybe something like pjsua_conf_adjust_tx_level(slot_number, 0)?
|
Delete pieces of Session on browser close
| 12,343,112 | 2 | 1 | 1,943 | 0 |
python,django,session,cookies
|
You're confusing things a bit.
The only thing stored inside "Django's session cookie" is an ID. That ID refers to the data which is stored inside the session backend: this is usually a database table, but could be a file or cache location depending on your Django configuration.
Now the only time that data is updated is when it is modified by Django. You can't expire data automatically, except by either the cookie itself expiring (in which case the entire set of data persists in the session store, but is no longer associated with the client) or by running a process on the server that modifies sessions programmatically.
There's no way of telling from the server end when a user leaves a website or closes his browser. So the only way of managing this would be to run a cron job on your server that gets sessions that were last modified (say) two hours ago, and iterate through them deleting the data you want to remove.
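A hedged sketch of that cron-style cleanup using Django's database session backend; the key being purged is a placeholder, and note that the Session table stores no last-modified time, so filtering on "modified two hours ago" would need a timestamp you keep inside the session data yourself:

from django.contrib.sessions.models import Session
from django.contrib.sessions.backends.db import SessionStore

for session in Session.objects.all():
    store = SessionStore(session_key=session.session_key)
    if 'browsing_data' in store:  # placeholder for your large per-visit keys
        del store['browsing_data']
        store.save()              # persists without touching the auth keys

Because only the selected keys are deleted, the auth-related keys survive and the user stays logged in; run this from a management command or any script with Django settings configured.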
| 0 | 0 | 0 | 0 |
2012-09-09T20:45:00.000
| 1 | 1.2 | true | 12,342,820 | 0 | 0 | 1 | 1 |
I am going to be storing a significant amount of information inside Django's session cookie. I want this data to persist for the entire time the user is on the website. When he leaves, the data should be deleted, but the session MUST persist. I do not want the user to have to log in every time he returns to the website.
I found ways to purge the entire session cookie every time a user leaves the website, but ideally I would like to only delete select pieces of the cookie which I explicitly set. Does anyone know how to do this?
|
Most efficient/cheap way to send/store data on GAE
| 12,345,364 | 0 | 1 | 118 | 0 |
python,performance,google-app-engine,google-cloud-datastore,scalability
|
Here are some suggestions:
Save the user preferences/settings on the Server too, so that you can have their preferences synced to multiple clients/devices in the future.
Having the data stored locally on the client is also recommended, so that the calls are not made to the server everytime to get some preferences. In offline situations, having the relevant data available locally within the client is very critical.
In terms of storage options on GAE, there are various ones. Since you are using the Datastore, I suggest that you go with that. The storage size of each entity and the in/out bandwidth are things you can calculate approximately; if you can encode the same logic in a few numbers, or a single combined number (in one Datastore entity attribute), rather than multiple entity attributes, that is preferable and will help.
| 0 | 0 | 0 | 0 |
2012-09-10T00:44:00.000
| 2 | 0 | false | 12,344,193 | 0 | 0 | 1 | 2 |
I want to minimize traffic/storage costs on GAE.
Users fill out a form, checking boxes to select options which are lines of text, eg "I wake up two or more times during the night." or "I sleep less than 7 hours per night." or "I usually have trouble falling asleep."
I want to store the user's selections using the datastore. I suppose I can save on storage space by giving each selection a unique identifier. Then I'll just store (for example) "342, 554, 106" instead of three long lines of text... Then retrieve those numbers and translate them back into sentences next time loading the page for each user.
My question is, will it be better to do that conversion on the client side, or the server side?
Obviously, doing the conversion on the client side will mean sending LESS data from client to server for storage - which is good. However, it would mean sending MORE data from server to client, considering the additional lines of client-side javascript necessary to facilitate the conversion, which they will be downloaded as part of the page source - and that could be bad.
|
Most efficient/cheap way to send/store data on GAE
| 12,352,966 | 1 | 1 | 118 | 0 |
python,performance,google-app-engine,google-cloud-datastore,scalability
|
Sounds like you already figured out how best to store the data.
In terms of translating it to HTML on the server or client side, it'll depend on the complexity of your page. Analyzing it will probably be more time than it's worth, and it might change if your page changes. It's most likely a wash unless it's an extreme situation. Use whichever is simpler and get your project done and out the door. If you're using a framework that handles generating forms on the server side, use that. If you have hundreds of thousands of views and it's adding up to a significant cost, revisit the pages in particular that are causing you the problem.
An extreme situation might be if your form needs to appear many, many times on a page, in which case it may be easier to have the actual form in javascript once and reproduce it many many times.
| 0 | 0 | 0 | 0 |
2012-09-10T00:44:00.000
| 2 | 0.099668 | false | 12,344,193 | 0 | 0 | 1 | 2 |
I want to minimize traffic/storage costs on GAE.
Users fill out a form, checking boxes to select options which are lines of text, eg "I wake up two or more times during the night." or "I sleep less than 7 hours per night." or "I usually have trouble falling asleep."
I want to store the user's selections using the datastore. I suppose I can save on storage space by giving each selection a unique identifier. Then I'll just store (for example) "342, 554, 106" instead of three long lines of text... Then retrieve those numbers and translate them back into sentences next time loading the page for each user.
My question is, will it be better to do that conversion on the client side, or the server side?
Obviously, doing the conversion on the client side will mean sending LESS data from client to server for storage - which is good. However, it would mean sending MORE data from server to client, considering the additional lines of client-side JavaScript necessary to facilitate the conversion, which will be downloaded as part of the page source - and that could be bad.
|
google app engine email attachment: download it to file system
| 12,349,722 | 1 | 0 | 218 | 0 |
python,google-app-engine,download,email-attachments
|
You cannot write a File to your web application directory in App Engine.
Possible choices for you are:
Save the content in the Datastore.
Use the Blobstore
Use the Google Storage facility.
Alternately, you might want to post the content away to an external server that can store the data, either your own or some 3rd party like Amazon S3.
| 0 | 1 | 0 | 0 |
2012-09-10T07:10:00.000
| 1 | 0.197375 | false | 12,346,831 | 0 | 0 | 1 | 1 |
I am able to receive an email in App Engine. I see the data in an attachment is base64-encoded payload data. Can I download the attachment as-is to the file system, without processing it and without storing it in the Blobstore?
|
How can I unittest wsgi code which uses gevent?
| 15,670,806 | 1 | 1 | 563 | 0 |
python,wsgi,gevent
|
If your library uses threading.local to provide a thread-isolated "global" request variable, then all you need is to call gevent.monkey.patch_thread BEFORE you use threading.local. That should turn all threading.local objects into "greenlet-local" ones.
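A minimal sketch of that ordering; the request object here is a stand-in for whatever thread-local your library exposes:

from gevent import monkey
monkey.patch_thread()        # must run before threading.local is first used

import threading

request = threading.local()  # now greenlet-local under gevent

A test can then spawn several greenlets, set request attributes differently in each, and assert that no greenlet ever sees another's values.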
| 0 | 0 | 0 | 1 |
2012-09-10T09:09:00.000
| 1 | 0.197375 | false | 12,348,500 | 0 | 0 | 1 | 1 |
I'd like to test my WSGI library with gevent's WSGI Servers to ensure that request parameters aren't leaked/overwritten with those from another request/greenlet - in my library request is "global", though it should be thread-safe... which is what I'd like to test using gevent.
What approaches can I use? Are there any open-source projects which already have unittests which achieve this from which I could learn?
|
Python AppEngine coding for Android app?
| 12,349,313 | 0 | 0 | 130 | 0 |
java,android,python,google-app-engine
|
The communication with your server can be totally independent of the languages used on the server and client end.
Typically web applications use principles such as REST to communicate. This is why your browser runs using HTML and JavaScript and your server can be using anything, including python.
It really depends on what you need your server to do for your app.
| 0 | 1 | 0 | 0 |
2012-09-10T09:44:00.000
| 1 | 1.2 | true | 12,349,086 | 0 | 0 | 1 | 1 |
I'm a newbie Android Developer, and my app requires that it interacts with a server.
I came across Google AppEngine, and find it to be a good choice for this app.
If I code my Android app in Java, and do the server coding for Google AppEngine in Python, will my Android App be able to communicate with the server?
I mean will this Java (client) + Python (server) combination work well?
|
How to scrape a webpage that changes content from tag
| 12,349,430 | 0 | 3 | 2,430 | 0 |
python,web-scraping
|
I assume you use some library like urllib to do the scraping. You already know the website's content changes dynamically. I also assume that the dynamic content involves server-side interaction: using JavaScript (ajax), the browser requests new data from the server based on the value chosen in the selection.
If so, then you could try to emulate the ajax call to the server in your web scraping library.
First, using a browser debugging tool, find out the URL on the server that is being invoked.
Work out the parameter parts of the ajax call.
Perform the same call for each of the options in the select tag, as sketched below.
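A rough Python 2 sketch of that last step, matching the era of urllib/urllib2; the endpoint and parameter name are assumptions you would replace with whatever the browser's network tab shows:

import urllib
import urllib2

options = ['1', '2', '3']  # the <option> values scraped from the page
pages = {}
for value in options:
    params = urllib.urlencode({'option': value})  # parameter name is a placeholder
    url = 'http://example.com/ajax/content?' + params
    pages[value] = urllib2.urlopen(url).read()

If the site expects a POST instead, pass params as the second argument to urlopen rather than appending it to the URL.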
| 0 | 0 | 1 | 0 |
2012-09-10T09:57:00.000
| 2 | 0 | false | 12,349,295 | 0 | 0 | 1 | 2 |
I want to scrape a webpage that changes its content via a <select> tag. When I select a different option, the content of the page dynamically changes. I want to know if there is a way that I can change the option from a python script so I can get the content from all different pages of all different options in <select> tag.
|
How to scrape a webpage that changes content from tag
| 12,349,434 | 0 | 3 | 2,430 | 0 |
python,web-scraping
|
As @Tichodroma said, when the select is changed, either:
Some content previously hidden on the page is made visible, or:
An ajax call is made to retrieve some additional content and add it to the DOM
In both cases, JavaScript is involved. Have a look at it, and depending on what is happening (case #1 or #2), you should:
Scrape the whole page, since all the content you want is already in it, or:
Make several calls to the file usually called using ajax to retrieve the content you want for each value of the <select>
| 0 | 0 | 1 | 0 |
2012-09-10T09:57:00.000
| 2 | 0 | false | 12,349,295 | 0 | 0 | 1 | 2 |
I want to scrape a webpage that changes its content via a <select> tag. When I select a different option, the content of the page dynamically changes. I want to know if there is a way that I can change the option from a python script so I can get the content from all different pages of all different options in <select> tag.
|
Shopify webhook not working when product updated/deleted
| 12,423,594 | 1 | 2 | 2,272 | 0 |
python,django,shopify,webhooks
|
Thanks for the answers guys, but I found out that the issue was something else.
I forgot to make a CSRF exemption for the POST request URL that Shopify calls and also forgot to add a trailing slash '/' at the end of the URL I told the webhook to call.
I guess I would have caught these errors if I had used something like postcatcher.in as suggested in the comments above. I didn't bother doing that as it looked like too much of a hassle.
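For reference, a hedged Django sketch of the fixed endpoint; the view name is made up, and the HMAC verification is only hinted at:

from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt                     # Shopify cannot send Django's CSRF token
def product_updated(request):
    payload = request.body      # the JSON product data Shopify POSTs (Django 1.4+)
    # ... verify the X-Shopify-Hmac-Sha256 header, then process payload ...
    return HttpResponse(status=200)

The URL pattern this view is wired to must end with a trailing slash, matching the address registered on the webhook.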
| 0 | 0 | 1 | 0 |
2012-09-10T14:50:00.000
| 2 | 1.2 | true | 12,354,189 | 0 | 0 | 1 | 2 |
Backdrop: I am building a Shopify app using a test store provided by Shopify. #Python #Django
Problem: I have set up Shopify webhooks for my test store using the Python API for the topics "products/update" and "products/delete". But my endpoints are not called by Shopify when I manually update or delete a product on my test store.
My detective work so far: I have checked the following:
I have confirmed that the webhooks were successfully created using the API. I simply listed all the existing webhooks using the API for the store and mine are there.
The address/URL I specified in the webhook for shopify to call in the event of a product update or delete is a public url, as in it is not on my localhost. (not 127.0.0.1:8000 etc.)
My webhook endpoint is fine. When I manually call my endpoint in a test case, it does what it should.
I contacted the shopify apps support guys, and I was asked to post this issue here.
Another minor issue is that I cannot find in the Shopify API docs exactly what JSON/XML the webhook will POST to my URL when it fires. So I do not know what that JSON will look like...
Any help would be appreciated!
|
Shopify webhook not working when product updated/deleted
| 12,389,770 | 1 | 2 | 2,272 | 0 |
python,django,shopify,webhooks
|
I don't have the creds to comment apparently, so I'll put this in an "answer" - to use the term very loosely - instead. I ran into something similar with the Python API, but soon realized that I was doing it wrong. In my case, it was toggling the fulfillment status, which then fires off an email notifying customers of a download location for media.
What I was doing wrong was this: I was modifying the fulfillment attribute of the order object directly. Instead, the correct method was to fetch/create a fulfillment object, modify that, point the order attribute to this object, then save() it. This worked.
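Roughly what that looks like with the shopify Python API; this is only a sketch, under the assumption that Fulfillment behaves like the other pyactiveresource-backed resources, and the attribute names are illustrative:

    import shopify

    order = shopify.Order.find(12345)                    # hypothetical order id
    fulfillment = shopify.Fulfillment({'order_id': order.id})
    fulfillment.save()   # save the fulfillment object, not a raw order attribute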
I don't know if this is your issue as there's no code posted, but I hope this helps.
--Matt
| 0 | 0 | 1 | 0 |
2012-09-10T14:50:00.000
| 2 | 0.099668 | false | 12,354,189 | 0 | 0 | 1 | 2 |
Backdrop: Am building a shopify app using a test store provided by shopify. #Python #Django-
Problem: I have setup shopify webhooks for my test store using the python API for the topics "products/update" and "products/delete". But my endpoints are not called by shopify when I manually update or delete a product on my test store.
My detective work so far: I have checked the following:
I have confirmed that the webhooks were successfully created using the API. I simply listed all the existing webhooks using the API for the store and mine are there.
The address/URL I specified in the webhook for shopify to call in the event of a product update or delete is a public url, as in it is not on my localhost. (not 127.0.0.1:8000 etc.)
My webhook endpoint is fine. When I manually call my endpoint in a test case, it does what it should.
I contacted the shopify apps support guys, and I was asked to post this issue here.
Another minor issue is that I cannot find in the Shopify API docs exactly what JSON/XML the webhook will POST to my URL when it fires. So I do not know what that JSON will look like...
Any help would be appreciated!
|
Library to create code outline
| 12,360,884 | 1 | 2 | 361 | 0 |
python,ruby,outline
|
As far as I know there is no such library. You could create it yourself though.
A pragmatic way would be to follow the indentation levels in Python. For other languages, you could either follow the indentation level, or use regular expression matching and a stack to keep track of your outline.
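A toy version of that approach for Python source, using the indentation level plus a regular expression for def/class lines (adapt the pattern per language):

    import re

    PATTERN = re.compile(r'^(\s*)(def|class)\s+(\w+)')

    def outline(source):
        items = []
        for lineno, line in enumerate(source.splitlines(), 1):
            match = PATTERN.match(line)
            if match:
                depth = len(match.group(1)) // 4   # assumes 4-space indents
                items.append((lineno, depth, match.group(3)))
        return items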
| 0 | 0 | 0 | 0 |
2012-09-10T17:34:00.000
| 1 | 1.2 | true | 12,356,728 | 1 | 0 | 1 | 1 |
Is there a python or ruby library to create a code outline for the given code? The library should support multiple languages.
I am looking for something like the outline view in Eclipse. I don't need the UI, and I can write my own. But I am looking for a library which parses the given language and creates an outline data structure.
|
Auto Text Summarization: Web application using Django/python?
| 12,367,541 | 1 | 0 | 421 | 0 |
python,django,project
|
It doesn't matter whether a database is involved or not; for overall web development, it's an easy-to-use framework.
| 0 | 0 | 0 | 0 |
2012-09-11T09:52:00.000
| 2 | 0.099668 | false | 12,367,082 | 0 | 0 | 1 | 1 |
I am going to develop an auto text summarization tool as my FYP. I am going to use Python and it's going to be a web application. Since there would be no database involved in my tool, is it a good idea to use Django? Can anyone recommend any other framework? Thanks.
|
Write/Read with High Replication Datastore + NDB
| 12,378,411 | 2 | 2 | 863 | 0 |
google-app-engine,python-2.7,google-cloud-datastore
|
Pretty sure you are running into the HRD feature where queries are "eventually consistent". NDB's caching has nothing to do with this behavior.
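The standard workaround is to avoid a query for that first read, because a get by key is strongly consistent. A sketch, assuming a trivial Group model:

    from google.appengine.ext import ndb

    class Group(ndb.Model):
        name = ndb.StringProperty()

    group = Group(name='My group')
    key = group.put()                # put() returns the entity's ndb.Key

    fresh = key.get()                # strongly consistent, sees the write
    maybe = Group.query(Group.name == 'My group').get()  # may still be None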
| 0 | 1 | 0 | 0 |
2012-09-11T10:41:00.000
| 2 | 1.2 | true | 12,367,904 | 0 | 0 | 1 | 1 |
So I have been reading a lot of documentation on HRD and NDB lately, yet I still have some doubts regarding how NDB caches things.
Example case:
Imagine a case where a user writes data and the app needs to fetch it immediately after the write. E.g. a user creates a "Group" (similar to a Facebook/LinkedIn group) and is redirected to the group immediately after creating it. (For now, I'm creating a group without assigning it an ancestor.)
Result:
When testing this sort of functionality locally (having enabled high replication), the immediate fetch of the newly created group fails: None is returned.
Question:
Having gone through the High Replication docs and Google I/O videos, I understand that there is a higher write latency; however, shouldn't NDB caching take care of this? I.e. a write is cached and then asynchronously written to disk; therefore, an immediate read would come from the cache and there should be no problem. Do I need to enforce some other settings?
|
Is it possible to automatically pull random "tags" from a long string of text?
| 12,372,353 | 0 | 0 | 60 | 0 |
javascript,python,tags
|
Agree with @unwind; it depends on the length of the text and on the algorithm you use to grab the tags (scalability).
| 0 | 0 | 1 | 0 |
2012-09-11T14:39:00.000
| 2 | 0 | false | 12,372,258 | 0 | 0 | 1 | 2 |
I'm thinking if a user submits a message and they click a 'suggest tags' button, their message would be analyzed and a form field populated with random words from their post.
Is it possible to do this on a scalable level? Would JavaScript be able to handle it, or would it be better to Ajax back to Python?
I'm thinking certain common words would be excluded (a, the, and, etc) and maybe the 10 longest words or just random not common words would be added to a form field like "tag1, tag2, tag3"
|
Is it possible to automatically pull random "tags" from a long string of text?
| 12,372,295 | 0 | 0 | 60 | 0 |
javascript,python,tags
|
Of course it's possible; you pretty much described the algorithm yourself, and it doesn't seem to contain any obviously non-computable steps:
Split the message into words
Filter out the common words
Sort the words by length
Pick the top ten and present them as tags
Not sure what you mean by "scalable level"; this sounds client-side to me. Unless the messages are very long, i.e. not typed in by a human, I don't think there will be any problem just doing it.
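Those four steps in plain Python (the stop-word list is deliberately tiny here; use a real one):

    import re

    STOP_WORDS = {'a', 'an', 'the', 'and', 'or', 'of', 'to', 'in'}

    def suggest_tags(message, count=10):
        words = re.findall(r'[a-z]+', message.lower())
        candidates = {w for w in words if w not in STOP_WORDS}
        # Longest words first, then take the top `count` as tags.
        return sorted(candidates, key=len, reverse=True)[:count]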
| 0 | 0 | 1 | 0 |
2012-09-11T14:39:00.000
| 2 | 0 | false | 12,372,258 | 0 | 0 | 1 | 2 |
I'm thinking if a user submits a message and they click a 'suggest tags' button, their message would be analyzed and a form field populated with random words from their post.
Is it possible to do this on a scalable level? Would JavaScript be able to handle it, or would it be better to Ajax back to Python?
I'm thinking certain common words would be excluded (a, the, and, etc) and maybe the 10 longest words or just random not common words would be added to a form field like "tag1, tag2, tag3"
|
Sockjs - Send message to sockjs-tornado in Python code
| 12,393,867 | 6 | 6 | 2,233 | 0 |
python,django,websocket
|
There are a few options for handling this:
Create a simple REST API in your Tornado server and post your updates from Django using this API;
Use Redis. Tornado can subscribe to the update key and Django can publish updates to this key when something happens;
Use ZeroMQ (AMQP, etc.) to send updates from Django to the Tornado backend (a variation of 1 and 2).
In most cases it is either the first or the second option, though some people prefer the third.
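For option 2, the Django half is just a publish on a shared channel; a sketch with redis-py (the channel name and receiver are illustrative, and the Tornado side would subscribe to the same channel, e.g. via tornado-redis):

    import json
    import redis

    r = redis.StrictRedis(host='localhost', port=6379)

    def group_created(sender, instance, **kwargs):
        # Wired up as a Django signal receiver; Tornado relays it to SockJS.
        r.publish('broadcast', json.dumps({'event': 'created',
                                           'id': instance.id}))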
| 0 | 0 | 0 | 0 |
2012-09-12T07:36:00.000
| 3 | 1.2 | true | 12,383,272 | 0 | 0 | 1 | 1 |
I use https://github.com/mrjoes/sockjs-tornado for a Django app. I can send messages from the JavaScript console very easily. But I want to create a signal in Django and send a JSON string once the signal fires.
Could anyone give me a way to send a certain message in Python to the sockjs-tornado socket server?
|
Authenticate by IP address in Django
| 12,383,605 | 3 | 18 | 16,527 | 0 |
python,django,authentication
|
There's no need to write an authentication backend for the use case you have described. Writing an IP-based dispatcher in the middleware layer will likely be sufficient.
If one of your app's URLs matches, process_request should check the request's IP address against a whitelist and, if it doesn't match, fall back to requiring an authenticated Django user.
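A bare-bones sketch of such a middleware, in the old process_request style of that Django era (the URL prefix and network test are placeholders; use a proper netmask check in production):

    from django.contrib.auth.views import redirect_to_login

    TRUSTED_PREFIX = '192.168.1.'          # hypothetical internal network

    class IPOrLoginMiddleware(object):
        # Must run after AuthenticationMiddleware so request.user exists.
        def process_request(self, request):
            if not request.path.startswith('/restricted/'):
                return None                # not a protected view
            ip = request.META.get('REMOTE_ADDR', '')
            if ip.startswith(TRUSTED_PREFIX):
                return None                # trusted network, let it through
            if request.user.is_authenticated():
                return None                # normal Django auth succeeded
            return redirect_to_login(request.get_full_path())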
| 0 | 0 | 0 | 0 |
2012-09-12T07:53:00.000
| 6 | 0.099668 | false | 12,383,540 | 0 | 0 | 1 | 1 |
I have a small Django application with a view that I want to restrict to certain users. Anyone from a specific network should be able to see that view without any further authentication, based on IP address alone. Anyone else from outside this IP range should be asked for a password and authenticated against the default Django user management.
I assume I have to write a custom authentication backend for that, but the documentation confuses me as the authenticate() function seems to expect a username/password combination or a token. It is not clear to me how to authenticate using IP addresses here.
What would be the proper way to implement IP address-based authentication in Django? I'd prefer to use as many existing library functions as possible for security-related code instead of writing all of it myself.
|
Tornado secure cookie expiration (aka secure session cookie)
| 12,385,159 | 11 | 7 | 4,626 | 0 |
python,cookies,tornado
|
It seems to me that you are really on the right track. You try lower and lower values, and the cookie has a lower and lower expiration time.
Pass expires_days=None to make it a session cookie (which expires when the browser is closed).
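In code, inside your RequestHandler:

    import tornado.web

    class LoginHandler(tornado.web.RequestHandler):
        def post(self):
            # expires_days=None makes it a session cookie, cleared when the
            # browser closes. Requires cookie_secret in the Application settings.
            self.set_secure_cookie("user", "some-user-id", expires_days=None)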
| 0 | 1 | 0 | 0 |
2012-09-12T08:04:00.000
| 1 | 1.2 | true | 12,383,697 | 0 | 0 | 1 | 1 |
How can I set in Tornado a secure cookie that expires when the browser is closed?
If I use set_cookie I can do this without passing extra arguments (I just set the cookie), but how do I do it if I have to use set_secure_cookie?
I tried almost everything:
passing nothing: expiration is set to its default value, that is 1 month
passing an integer value: the value is interpreted as days, i.e. 1 means 1 day
passing a float value: it works; for example, setting 0.1 means almost one hour and a half
|
Fetch html content from a destination url that is on onload of the first site in urllib2
| 12,384,339 | 1 | 0 | 296 | 0 |
python,urllib2
|
You have to figure out the call to that second page, including the parameters sent, so you can make that call yourself from your Python code. The best way is to navigate the first page with the Google Chrome page inspector open, then go to the Network tab, where the POST call will be captured and you can see the parameters sent and all. Then just recreate that same POST call from urllib2.
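A sketch of that with urllib2; the action URL and the field names are whatever you see in the Network tab, the values below are placeholders:

    import urllib
    import urllib2

    action_url = 'http://www.abc.com/handler'          # from the inspector
    form_data = urllib.urlencode({'var': '999-999'})   # fields the form posts

    final_page = urllib2.urlopen(action_url, form_data).read()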
| 0 | 0 | 1 | 0 |
2012-09-12T08:29:00.000
| 1 | 0.197375 | false | 12,384,056 | 0 | 0 | 1 | 1 |
I am trying to fetch the HTML content of a website using urllib2. The site has a body onload event that submits a form on this site, and hence it goes to a destination site and renders the details I need.
response = urllib2.urlopen('www.xyz.com?var=999-999')
www.xyz.com contains a form that is posted to "www.abc.com"; this action value varies depending upon the content in the url 'var=999-999', which means the action value will change if the var value changes to '888-888'
response.read()
this still gives me the html content of "www.xyz.com", but I want that of the resulting action url. Any suggestions for fetching the html content from the final page?
Thanks in advance
|
Serving many on-the-fly generated images with Django
| 12,390,045 | 0 | 4 | 676 | 0 |
python,django,apache,comet,wsgi
|
If one user is all it takes to bring your web server down, then the problem is not Apache or mod_wsgi.
First you should optimize your tiling routines and check if you really only deliver the data a user actually sees.
After that, a faster CPU, more RAM, an SSD and aggressive caching will give you more performance.
Finally, you may gain a little by using another web server, but don't expect too much from that.
| 0 | 1 | 0 | 0 |
2012-09-12T11:59:00.000
| 3 | 0 | false | 12,387,707 | 0 | 0 | 1 | 2 |
Similar to a tiling server for spatial image data, I want to view many on-the-fly generated images in my Django based web application (merge images, color change, etc.). Since one client can easily request many (>100) images in a short time, it is easy to bring the web server (Apache + mod_wsgi) down.
Hence, I am looking for alternative ways. Since we already use Celery, it might be a good idea to do this image processing asynchronously and push the generated data to the client. To get started with that I switched the WSGI server to be gevent with Apache used as a proxy. However, I haven't managed to get the push thing working yet and I am not quite sure if this is the right direction anyway. Based on that I have three questions:
Do you think this (Celery, gevent, Socket.IO) is a sensible way to allow many clients to use the application without bringing the web server down? Do you see alternatives?
If I hand over the image processing to Celery and let it push the image data to the browser when it is done, the connection won't go through Apache, will it?
If some kind of pushing to the client is used, would it be better to use one connection or one for each image (and close it when done)?
Background:
The Django application I am working on allows a user to display very large images. This is done by tiling the large images before and show only the currently relevant tiles in a grid to the user. From what I understand this is the standard way to serve data in the field of mapping and spatial image data (e.g. OpenStreetMap). But unlike mapping data, we also have many slices in Z a user can scroll through (biological images).
All this works fine when the tiles are statically served. Now I added the option to generate those tiles on the fly -- different images are merged, color corrected, etc. This works, but it puts a heavy load on the web server, as one image takes about 0.1 s to generate. Currently we use Apache with mod_wsgi (WSGIRestrictedEmbedded On) and it is easy to bring the server down. Just browsing through the image stack will lead to a hanging web server. I already tried to adjust MaxClients, etc. and turned KeepAlive off. I also tried different thread/process combinations for mod_wsgi. However, nothing helped enough to allow usage by more than one user. Therefore, I thought a Comet/WebSocket approach could help here.
|
Serving many on-the-fly generated images with Django
| 12,390,401 | 1 | 4 | 676 | 0 |
python,django,apache,comet,wsgi
|
All this works fine when the tiles are statically served. Now I added
the option to generate those tiles on the fly -- different images are
merged, color corrected, …. This works, but is some heavy load for the
web server as one image takes about 0.1s to be generated.
You need a load balancer, with image requests being sent to a front-end server (e.g. NginX) that will multiplex (and cache!) as many requests as needed, provided you supply enough backend servers to do the heavy lifting.
This looks like a classic case for Amazon distributed computing: you could store the tiles in S3 storage (or maybe NFS over EBS). All the image manipulation servers get the data from a single image repository.
At the beginning, you can have both the Web application and one instance of the image manipulation server on the same machine. But basically your processes are three:
Web serving that calculates image URLs (you'll need some way to encode the manipulation as parameters in the URLs, see the sketch below; otherwise you'll have to use cookies and session storage, which is ickier)
image server that receives the "image formula" and provides the JPEG tile
file server that allows access to the large images or single original tiles
I have worked at several such architectures, wherein our image layers were stored in a single image file (e.g. five zoom levels, each fifteen channels from FIR to UV, for a total of 75 "images" up to 100K pixels on a side, and the client could request 'Zoom level 2, red channel plus double of difference between UV-1 channel and green, tiles from X=157, Y=195 to X=167,Y=205').
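One way to make the "formula in the URL" idea concrete: give every distinct manipulation a distinct, cacheable tile URL. A minimal sketch, with a made-up path layout:

    import urllib

    def tile_url(zoom, channel_expr, x, y):
        # e.g. tile_url(2, 'red+2*(uv1-green)', 157, 195)
        # Every distinct formula maps to a distinct URL the cache can key on.
        return '/tiles/%d/%s/%d/%d.jpg' % (
            zoom, urllib.quote(channel_expr, safe=''), x, y)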
| 0 | 1 | 0 | 0 |
2012-09-12T11:59:00.000
| 3 | 0.066568 | false | 12,387,707 | 0 | 0 | 1 | 2 |
Similar to a tiling server for spatial image data, I want to view many on-the-fly generated images in my Django based web application (merge images, color change, etc.). Since one client can easily request many (>100) images in a short time, it is easy to bring the web server (Apache + mod_wsgi) down.
Hence, I am looking for alternative ways. Since we already use Celery, it might be a good idea to do this image processing asynchronously and push the generated data to the client. To get started with that I switched the WSGI server to be gevent with Apache used as a proxy. However, I haven't managed to get the push thing working yet and I am not quite sure if this is the right direction anyway. Based on that I have three questions:
Do you think this (Celery, gevent, Socket.IO) is a sensible way to allow many clients to use the application without bringing the web server down? Do you see alternatives?
If I hand over the image processing to Celery and let it push the image data to the browser when it is done, the connection won't go through Apache, will it?
If some kind of pushing to the client is used, would it be better to use one connection or one for each image (and close it when done)?
Background:
The Django application I am working on allows a user to display very large images. This is done by tiling the large images before and show only the currently relevant tiles in a grid to the user. From what I understand this is the standard way to serve data in the field of mapping and spatial image data (e.g. OpenStreetMap). But unlike mapping data, we also have many slices in Z a user can scroll through (biological images).
All this works fine when the tiles are statically served. Now I added the option to generate those tiles on the fly -- different images are merged, color corrected, etc. This works, but it puts a heavy load on the web server, as one image takes about 0.1 s to generate. Currently we use Apache with mod_wsgi (WSGIRestrictedEmbedded On) and it is easy to bring the server down. Just browsing through the image stack will lead to a hanging web server. I already tried to adjust MaxClients, etc. and turned KeepAlive off. I also tried different thread/process combinations for mod_wsgi. However, nothing helped enough to allow usage by more than one user. Therefore, I thought a Comet/WebSocket approach could help here.
|
Webapp2 - Invalidate user login session, when the user logs in from a different browser
| 12,562,691 | 0 | 1 | 891 | 0 |
python,session,sessionid,webapp2
|
Modify what you already do: when the user logs in, create a unique/random token, store it in the user object, and set a cookie in the browser with it. When the user's session is requested, check that the two tokens (from the request cookie and the user object) match and, if not, burn the session.
It's the same approach as yours, but instead of remote_addr you use a random token that you generate and set as a cookie on login.
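Sketched out as handler methods (uuid for the token; the cookie name and the session_token property on your user model are assumptions):

    import uuid
    import webapp2

    class BaseHandler(webapp2.RequestHandler):

        def start_session(self, user):
            token = uuid.uuid4().hex
            user.session_token = token       # overwrites any older token
            user.put()                       # assumes an ndb/db user model
            self.response.set_cookie('session_token', token)

        def session_is_valid(self, user):
            return (self.request.cookies.get('session_token')
                    == user.session_token)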
| 0 | 0 | 0 | 0 |
2012-09-12T15:34:00.000
| 3 | 0 | false | 12,391,736 | 0 | 0 | 1 | 2 |
I have the following requirement in a webapp2 application: when a user logs in from a different machine or browser, that user's previous authentication session should be terminated.
I am able to do this when a user logs in from a different machine, by storing the remote_addr in the User object at login. When the user's session is requested I check the remote_addr from the request against the user's remote_addr at login.
I am not happy with this solution, as it will not work when the user is behind a proxy server, and it also will not work when the user uses different browsers.
Does webapp2 store a session id somewhere, so I can use that to see if the user has logged on in a new session?
|