Dataset schema (column: type, observed range):

- Title: string, lengths 11 to 150
- A_Id: int64, 518 to 72.5M
- Users Score: int64, -42 to 283
- Q_Score: int64, 0 to 1.39k
- ViewCount: int64, 17 to 1.71M
- Database and SQL: int64, 0 to 1
- Tags: string, lengths 6 to 105
- Answer: string, lengths 14 to 4.78k
- GUI and Desktop Applications: int64, 0 to 1
- System Administration and DevOps: int64, 0 to 1
- Networking and APIs: int64, 0 to 1
- Other: int64, 0 to 1
- CreationDate: string, lengths 23 to 23
- AnswerCount: int64, 1 to 55
- Score: float64, -1 to 1.2
- is_accepted: bool, 2 classes
- Q_Id: int64, 469 to 42.4M
- Python Basics and Environment: int64, 0 to 1
- Data Science and Machine Learning: int64, 0 to 1
- Web Development: int64, 1 to 1
- Available Count: int64, 1 to 15
- Question: string, lengths 17 to 21k
Which OS recommended for Programmers?
| 5,423,456 | 1 | 1 | 237 | 0 |
python,linux,qt
|
I would say: nobody cares... if you are fine with Windows, use Windows. If you are a command-line guy, go with a Unix system; the distro is unlikely to matter.
| 1 | 0 | 0 | 0 |
2011-03-24T17:57:00.000
| 3 | 1.2 | true | 5,423,404 | 0 | 0 | 1 | 3 |
My stack for web development includes Django/Python, and Qt/C++ for non-web development.
What is the most comfortable OS for a developer with such a stack?
|
Which OS recommended for Programmers?
| 5,423,449 | 1 | 1 | 237 | 0 |
python,linux,qt
|
Once you go Linux, you'll never go back (though you might go crazy and move on to BSD). Ubuntu is probably the easiest OS to start with, if only because of the truckloads of documentation, forum assistance, etc. Good luck!
| 1 | 0 | 0 | 0 |
2011-03-24T17:57:00.000
| 3 | 0.066568 | false | 5,423,404 | 0 | 0 | 1 | 3 |
My stack for web development includes Django/Python, and Qt/C++ for non-web development.
What is the most comfortable OS for a developer with such a stack?
|
Which OS recommended for Programmers?
| 5,423,454 | 1 | 1 | 237 | 0 |
python,linux,qt
|
Unless you want to use a language that has special requirements (like .NET), the question should really be which OS the programmer recommends.
That's a personal decision.
| 1 | 0 | 0 | 0 |
2011-03-24T17:57:00.000
| 3 | 0.066568 | false | 5,423,404 | 0 | 0 | 1 | 3 |
My stack for web development includes Django/Python, and Qt/C++ for non-web development.
What is the most comfortable OS for a developer with such a stack?
|
Connecting methods Python
| 5,424,689 | 0 | 3 | 183 | 0 |
python,google-app-engine,oauth,datastore,foursquare
|
You can create a parent association so that SiteUser is the parent of FoursquareAuth and HunchAuth.
When the user first logs in with Foursquare, you create the SiteUser model and then create the FoursquareAuth model with parent=just_created_user. Then, when you send the user off to authenticate through Hunch, you include the user's ID (or a session ID) in the callback parameter. When the callback happens, you get the user's key and create HunchAuth with parent=previously_created_user.
The SiteUser model contains the combined information from both sources (name, location, last check-in, etc.). The *Auth models just contain whatever guaranteed-unique identifiers are supplied by each provider (user_id, access_token, etc.).
This way, if you have the user object you can find either the Foursquare or the Hunch authentication data (using an ancestor filter), and you can find a user by loading any *Auth model and fetching its parent().
(Note: I call the model SiteUser so as not to confuse it with the User object available in App Engine.)
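A plain-Python sketch of the lookup pattern described above (illustrative only: real App Engine code would use datastore parent keys and ancestor queries, and every name here is an assumption):

```python
class SiteUser:
    """Holds the combined profile information from both providers."""
    def __init__(self, name):
        self.name = name

class ProviderAuth:
    """Base for the *Auth records; `parent` plays the role of the datastore parent key."""
    def __init__(self, parent, identifier):
        self.parent = parent          # the SiteUser this auth record belongs to
        self.identifier = identifier  # provider-supplied unique ID or token

class FoursquareAuth(ProviderAuth):
    pass

class HunchAuth(ProviderAuth):
    pass

AUTHS = []  # stands in for the datastore

def auths_for(user):
    """Ancestor-filter analogue: every auth record whose parent is `user`."""
    return [a for a in AUTHS if a.parent is user]
```

Given a user you can collect both auth records, and given either auth record you can walk back to the user via its parent.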
| 0 | 0 | 0 | 0 |
2011-03-24T19:15:00.000
| 2 | 0 | false | 5,424,339 | 0 | 0 | 1 | 2 |
I am currently using OAuth to allow a user to sign in through Foursquare; I then create a new session for this user. If the user is new to the system, they are asked to sign in through Hunch, which can then generate a user profile based on information from both systems. I have them both signing in to each application separately, but how can I associate the user logged in with Foursquare with the one in Hunch?
My idea was to somehow create a reference to the session ID in the user model, or use the session ID as a parameter for the Hunch sign-in, but I'm not sure if this would be the best idea. Is there any other way in which I can create the association?
|
Connecting methods Python
| 5,427,530 | 0 | 3 | 183 | 0 |
python,google-app-engine,oauth,datastore,foursquare
|
The easiest way to do this would be something like the following:

1. Send the user to Foursquare to sign in.
2. When the user returns, create a record in the datastore for them.
3. Send the user to Hunch to sign in, but include the ID of the record you created in step 2 in the continue URL.
4. When the user returns, use the ID embedded in the URL to add the user's Hunch info to their user record.
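The continue-URL step can be sketched like this (the URLs and parameter names are assumptions for illustration, not Hunch's actual API):

```python
from urllib.parse import urlencode, parse_qs, urlparse

def hunch_signin_url(record_id):
    """Build the Hunch sign-in URL, embedding our datastore record ID
    in the continue URL so the callback handler can find the user again."""
    callback = "https://example.com/hunch_callback?" + urlencode({"uid": record_id})
    return "https://hunch.example/auth?" + urlencode({"continue": callback})

def record_id_from_callback(url):
    """Recover the record ID inside the callback handler."""
    return parse_qs(urlparse(url).query)["uid"][0]
```

The callback handler then loads the record by that ID and attaches the Hunch credentials to it.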
| 0 | 0 | 0 | 0 |
2011-03-24T19:15:00.000
| 2 | 1.2 | true | 5,424,339 | 0 | 0 | 1 | 2 |
I am currently using OAuth to allow a user to sign in through Foursquare; I then create a new session for this user. If the user is new to the system, they are asked to sign in through Hunch, which can then generate a user profile based on information from both systems. I have them both signing in to each application separately, but how can I associate the user logged in with Foursquare with the one in Hunch?
My idea was to somehow create a reference to the session ID in the user model, or use the session ID as a parameter for the Hunch sign-in, but I'm not sure if this would be the best idea. Is there any other way in which I can create the association?
|
Can I create a site with a custom URI in Google Sites with Python?
| 6,580,459 | 0 | 0 | 132 | 0 |
python,google-apps,gdata-api,gdata
|
The initial site URL is determined by the site name, using a simple stripping algorithm (all lowercase, no spaces or special characters except -, I believe).
Change this into a two-step process:

1. Create the site using a munged name that corresponds to the URL you want.
2. Update the site title, which retains the URL but updates the pretty title.

Done :)
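The stripping algorithm the answer describes might look roughly like this (a guess at the behavior, not Google's documented algorithm):

```python
import re

def munge_site_name(name):
    """Lowercase the name and drop everything except letters, digits and '-',
    approximating how Google Sites derives the URL slug from the site name."""
    return re.sub(r"[^a-z0-9-]", "", name.lower())
```

So to get the URL you want, you would create the site under a name whose munged form equals that URL, then rename it.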
| 0 | 0 | 1 | 0 |
2011-03-24T23:46:00.000
| 1 | 0 | false | 5,426,967 | 0 | 0 | 1 | 1 |
I'm using Google App Engine with the Python GData setup. I'm a member of the volunteer programming team for DPAU, which runs on Google Apps Education and has a Google App Engine app running Python with help from the GData library.
I'm using the create_site function in the SitesClient class. I know there is a uri= argument, but when I pass it through, the call always comes back with "Invalid Request URI".
Also, Google's docs suggest the URI field is intended to be used for adding a site to a different domain. I want it on my normal domain (dpau.org) but I want to specify the url of the site because that's important. www.dpau.org/IWantThisURL
entry = client.create_site(orgName, description=orgDescription, source_site='https://sites.google.com/feeds/site/dpau.org/org', uri='https://sites.google.com/feeds/site/dpau.org/title-for-my-site')
I shall be very grateful for any help you can provide to us. I'm a bit of a newbie at python :)
|
Python PHP Integration
| 5,433,000 | 2 | 4 | 1,667 | 0 |
php,python
|
I'm not familiar with Pyke; but when this type of situation arises for me, I usually end up wrapping the Python code with a web-service. I then use PHP to make SOAP or cURL calls to the webservice.
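A minimal sketch of that wrapping idea using only the standard library (the `run_inference` function is a placeholder for whatever the Pyke rule engine would compute; the PHP side would then cURL this endpoint):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

def run_inference(query):
    # Placeholder for the Pyke rule engine; returns a canned verdict.
    return {"query": query, "verdict": "allowed"}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        result = run_inference(params.get("q", [""])[0])
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def start_server():
    # Port 0 asks the OS for a free port; read it back from server_address.
    server = HTTPServer(("127.0.0.1", 0), InferenceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In production you would run this behind a real WSGI server rather than the demo `HTTPServer`, but the shape of the contract (HTTP in, JSON out) is the same.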
| 0 | 0 | 0 | 1 |
2011-03-25T11:37:00.000
| 1 | 0.379949 | false | 5,431,887 | 0 | 0 | 1 | 1 |
I am facing a problem for which I could not find a good solution. I am developing a mobile web app using PHP, and I need an open-source rule-based inference engine (expert system). The only one I could find was Pyke, in Python, so I need to integrate Pyke's source code with my PHP implementation. My service provider does not allow commands such as exec for security reasons. I tried PiP (a Python-to-PHP module), but it has a lot of bugs.
|
Is it possible to run webservice based on SOAPpy with mod_wsgi under Apache?
| 5,432,043 | 1 | 0 | 227 | 0 |
python,apache,mod-wsgi,soappy
|
No. SOAPpy has its own HTTP server based on BaseHTTPServer which means that it is not possible to turn it into a WSGI app without a non-trivial amount of hacking.
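For contrast, this is roughly the shape mod_wsgi expects — a callable taking `environ` and `start_response` — which SOAPpy's BaseHTTPServer-based dispatcher does not provide:

```python
def application(environ, start_response):
    """Minimal WSGI application of the kind mod_wsgi can host directly."""
    body = b"hello from wsgi"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Turning SOAPpy's server into this form would mean reimplementing its request dispatch on top of the WSGI interface, which is the non-trivial hacking the answer refers to.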
| 0 | 0 | 0 | 1 |
2011-03-25T11:44:00.000
| 1 | 1.2 | true | 5,431,958 | 0 | 0 | 1 | 1 |
Is it possible to run a web service based on SOAPpy with mod_wsgi under Apache?
If yes, can you post a link to a sample/example?
|
Converting Python App into Django
| 5,435,451 | 1 | 0 | 753 | 0 |
python,django,json,database-design
|
What I think you can do is convert the classes you have already made into Django model classes; of course, only the ones that need to be saved to a database. The other classes, and the rest of the code, I recommend you encapsulate as helper functions. That way you don't have to change your code much, and it should work fine. ;D
Another choice, which can be easier to implement, is to put everything in a helper: the classes, the functions and everything else.
Then you just need to call the functions in your views and define models to save your data into the database.
Your idea of saving the objects as JSON in the database works, but it's ugly. ;)
Anyway, if you are in a hurry to deliver the website, anything is valid. Just remember that things made this way tend to cause lots of problems in the future.
I hope this is useful! :D
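The "store the objects as JSON" route from the question can be sketched as a plain round trip (the class names are the question's hypothetical ones, and the methods are an assumed convention, not a library API):

```python
import json

class ObjectB:
    def __init__(self, label):
        self.label = label

    def to_dict(self):
        return {"label": self.label}

    @classmethod
    def from_dict(cls, data):
        return cls(data["label"])

class ObjectA:
    def __init__(self, bs):
        self.bs = bs  # list of ObjectB

    def to_json(self):
        # This string is what would land in a text column of the database.
        return json.dumps({"bs": [b.to_dict() for b in self.bs]})

    @classmethod
    def from_json(cls, text):
        data = json.loads(text)
        return cls([ObjectB.from_dict(d) for d in data["bs"]])
```

It works, but as the answer notes, the database can no longer query inside the blob; proper models keep that ability.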
| 0 | 0 | 0 | 0 |
2011-03-25T16:20:00.000
| 1 | 0.197375 | false | 5,435,169 | 0 | 0 | 1 | 1 |
I've got a Python program with about a dozen classes, with several classes possessing instances of other classes, e.g. ObjectA has a list of ObjectB's, and a dictionary of (ObjectC, ObjectD) pairs.
My goal is to put the program's functionality on a website.
I've written and tested JSON encode and decode methods for each class. The problem as I see it now is that I need to choose between starting over and writing the models and logic afresh from a database perspective, or simply storing the python objects (encoded as JSON) in the database, and pulling out the saved states for changes.
Can someone confirm that these are both valid approaches, and that I'm not missing any other simple options?
|
How do I make my website on Google App Engine accessible to visitors in China?
| 5,436,314 | 1 | 2 | 534 | 0 |
python,google-app-engine
|
Assuming Google has, and routes to, datacenters in Asia, the latency should be reasonable.
The reverse proxy to avoid the firewall should be in a country that does not censor and is as near as possible to the target area.
Under those conditions, Google would choose a datacenter near your reverse proxy, and the latency is rtt(google<->proxy) + rtt(user<->proxy).
But you really have to try this out.
| 0 | 1 | 0 | 0 |
2011-03-25T17:54:00.000
| 2 | 0.099668 | false | 5,436,249 | 0 | 0 | 1 | 1 |
China blocks appspot -- How do I get around this?
Assuming the censorship was not an issue, how bad are the latency issues?
|
How to run PHP and Web.py together
| 12,962,245 | 1 | 1 | 915 | 0 |
python,web.py
|
Combining web.py and PHP doesn't really make sense. But you can definitely set up Apache to have both. You just install mod_php and mod_wsgi. Point mod_wsgi to your web.py WSGI function, and set up your PHP web app in some directory where Apache can find it. You won't be combining the two technologies, but you will have separate web applications on your server that separately use the two technologies.
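The side-by-side setup the answer describes amounts to an Apache configuration along these lines (paths and the mount point are assumptions for illustration):

```apache
# PHP app served from the document root (mod_php handles *.php files here)
DocumentRoot /var/www/site
<Directory /var/www/site>
    Require all granted
</Directory>

# web.py app mounted under /app via mod_wsgi, pointing at its WSGI entry point
WSGIScriptAlias /app /var/www/webpy/app.wsgi
```

Requests under /app go to the web.py application; everything else falls through to the PHP site.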
| 0 | 0 | 0 | 1 |
2011-03-26T18:45:00.000
| 2 | 0.099668 | false | 5,444,445 | 0 | 0 | 1 | 1 |
I normally do web development in PHP. I am working on a Python-based project and want to make a front-end website for it.
I looked at web.py, and I was wondering whether PHP can be used together with web.py, or whether I would have to rely completely on Python for the server-side scripting.
Thanks.
|
Is there a difference between developing a web2py app on Windows or Linux?
| 5,444,972 | 2 | 2 | 1,990 | 0 |
python,web2py
|
No, there is a Windows installer.
| 0 | 1 | 0 | 0 |
2011-03-26T19:35:00.000
| 3 | 0.132549 | false | 5,444,798 | 0 | 0 | 1 | 1 |
I recall that setting up other frameworks in a Windows environment was extremely painful :)
|
3d game engine suggestions
| 5,445,501 | 2 | 0 | 1,543 | 0 |
javascript,python,game-engine
|
For the requirements that you have, Unity3d is probably one of your best bets. As roy said, there aren't many other 3D engines out there that span that wide a range of platforms. Why do you think that going to a completely code-based system would save you from creating a variety of classes with various responsibilities?
The coding effort and the amount of code and classes will stay the same; the only thing that changes is the way you interact with the system you are producing. With any large-scale system you will quickly run into hundreds of files. I am just finishing up a smaller-sized Unity project (3-4 months of coding, including learning Unity); it runs at 10k lines of code, plus another 8k from external libraries, and over 100 classes. But this amount wasn't driven by how Unity works; it was driven by the requirements of the project. While coding this I learned a lot about how Unity runs and what kinds of patterns it requires, and I will be able to come up with better solutions for the next project. Look back at what you did and think about how you could organize it better. I think it is a safe bet to say that you will need about the same amount of code with any other system to achieve a similar result.
The advantages that Unity does have are good multi-platform support and an excellent asset pipeline. Importing and utilising art assets (2D, 3D and audio) is for me one of the most onerous tasks in this kind of development, and it is extremely well supported in Unity.
| 1 | 0 | 0 | 0 |
2011-03-26T20:41:00.000
| 3 | 1.2 | true | 5,445,166 | 0 | 0 | 1 | 2 |
I am developing a 3D shooter game that I would like to run on computers, phones and tablets, and I would like some help choosing which engine to use.
I would like to write the application once and port it to Android/iOS/Windows/Mac with ease.
I would like to make the application streamable over the internet.
The engine needs some physics (collision detection) as well as 3D rendering capabilities.
I would prefer to use a scripting language such as JavaScript or Python to Java or C++ (although I would be willing to learn these if it is the best option).
My desire is to use an engine that is code-based and not GUI-based: an engine that is more like a library which I can import into my Python files (for instance) than an application which forces me to rely on its GUI to import assets and establish relationships between them.
This desire stems from my recent experience with Unity3d and Blender. The way I had designed my code required me to write dozens of disorganized scripts to control various objects. I cannot help but think that if I had written my program as a series of Python files I would be able to do a neater, faster job.
I'd appreciate any suggestions. The closest thing to what I want is Panda3d, but I had a difficult time working with textures, and I am not convinced that my application can be made easily available to mobile phone/device users. If there is a similar option you can think of, I'd appreciate the tip.
|
3d game engine suggestions
| 5,445,300 | 0 | 0 | 1,543 | 0 |
javascript,python,game-engine
|
Well, I see you've checked Unity3D already, but I can't think of any other engines that work on PC, phones and via streaming internet and support 3D (for 2D, check EXEN or others).
I'm also pretty sure that you can use Unity in a code-based way, and it supports a couple of different languages. But for Unity to work, you can't just import unity.dll (for example) into your code; you have to bring your code into Unity so that Unity can make it work on all these different platforms.
| 1 | 0 | 0 | 0 |
2011-03-26T20:41:00.000
| 3 | 0 | false | 5,445,166 | 0 | 0 | 1 | 2 |
I am developing a 3D shooter game that I would like to run on computers, phones and tablets, and I would like some help choosing which engine to use.
I would like to write the application once and port it to Android/iOS/Windows/Mac with ease.
I would like to make the application streamable over the internet.
The engine needs some physics (collision detection) as well as 3D rendering capabilities.
I would prefer to use a scripting language such as JavaScript or Python to Java or C++ (although I would be willing to learn these if it is the best option).
My desire is to use an engine that is code-based and not GUI-based: an engine that is more like a library which I can import into my Python files (for instance) than an application which forces me to rely on its GUI to import assets and establish relationships between them.
This desire stems from my recent experience with Unity3d and Blender. The way I had designed my code required me to write dozens of disorganized scripts to control various objects. I cannot help but think that if I had written my program as a series of Python files I would be able to do a neater, faster job.
I'd appreciate any suggestions. The closest thing to what I want is Panda3d, but I had a difficult time working with textures, and I am not convinced that my application can be made easily available to mobile phone/device users. If there is a similar option you can think of, I'd appreciate the tip.
|
How do I match this URL in Django's urls.py?
| 5,446,040 | 6 | 0 | 1,876 | 0 |
python,regex,django
|
Don't ever send passwords in the URL. They belong in the POST body, which is not stored by browsers (you can repeat POSTs in browsers, but POST data is not stored in the history).
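If it has to be matched anyway (the asker says the client is hardcoded), the pattern itself is simple. One caveat worth knowing: Django's URL resolver matches only the path, never the query string, so a check like this would have to live in middleware or the view rather than in urls.py. A sketch:

```python
import re

# begins with /signup, ends with password=goodbye
SIGNUP_RE = re.compile(r"^/signup.*password=goodbye$")

def matches(url):
    """True when the full URL (path plus query string) fits the pattern."""
    return bool(SIGNUP_RE.match(url))
```

In a Django view you would test the reconstructed `request.path + '?' + request.META['QUERY_STRING']` rather than anything urls.py sees.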
| 0 | 0 | 0 | 0 |
2011-03-26T23:07:00.000
| 4 | 1 | false | 5,445,994 | 0 | 0 | 1 | 1 |
I want to match a URL that:
begins with /signup
and ends with password=goodbye
That's it. If something begins with that and ends with that, how do I match it with a regex?
I understand that I shouldn't do this in urls.py, but I have to because of certain reasons.
Please answer how I would match it this way. I have to, because an iPhone client (which cannot be changed) hardcoded it this way. I know it's not ideal, but I have to match it this way now.
|
Record audio in Google App Engine using rtmplite?
| 5,450,822 | 5 | 3 | 981 | 0 |
python,google-app-engine,rtmp
|
Google App Engine is tricky for RTMP because it does not support sockets. You would have to use something like RTMPT, which is tunneled over HTTP; however, this tunneling incurs latency, so if you are looking to do anything realtime this could become an issue.
Currently rtmplite does not support RTMPT, so this would not be possible at the moment. I am involved in a project, RTMPy (http://rtmpy.org), that is planning support for RTMPT and App Engine. Unfortunately, App Engine support is probably a few months out.
| 0 | 1 | 0 | 0 |
2011-03-27T06:30:00.000
| 2 | 0.462117 | false | 5,447,631 | 0 | 0 | 1 | 2 |
I am in the process of building a Google App Engine application which requires audio to be recorded and saved in our database. I have found no alternative to using some form of RTMP server for recording audio through Flash, so [rtmplite](http://code.google.com/p/rtmplite/) came onto our radar.
Since I have no experience with rtmplite, is it the right choice for our project? Or is there any other Python-based RTMP solution that allows audio recording? Any flash client you can recommend?
Thanks!
|
Record audio in Google App Engine using rtmplite?
| 6,433,502 | 0 | 3 | 981 | 0 |
python,google-app-engine,rtmp
|
Try App Engine backends. They currently don't whitelist a lot of the things required for such streaming, but they might soon. Once sockets are enabled, rtmplite or RTMPy could easily be ported to run there. Backends already support unlimited request length, which is required for streaming.
| 0 | 1 | 0 | 0 |
2011-03-27T06:30:00.000
| 2 | 0 | false | 5,447,631 | 0 | 0 | 1 | 2 |
I am in the process of building a Google App Engine application which requires audio to be recorded and saved in our database. I have found no alternative to using some form of RTMP server for recording audio through Flash, so [rtmplite](http://code.google.com/p/rtmplite/) came onto our radar.
Since I have no experience with rtmplite, is it the right choice for our project? Or is there any other Python-based RTMP solution that allows audio recording? Any flash client you can recommend?
Thanks!
|
How to have Google App Engine send mail without sending a copy of the mail to the sender
| 5,450,093 | 6 | 3 | 288 | 0 |
python,google-app-engine,sendmail
|
You can't. Sending email from someone without their knowledge isn't permitted by App Engine.
You can send email from any administrator address; you could add a "[email protected]" type address as an administrator and send email from that address.
| 0 | 1 | 0 | 0 |
2011-03-27T10:38:00.000
| 1 | 1.2 | true | 5,448,698 | 0 | 0 | 1 | 1 |
I'm using GAE send mail, but I don't want the sender of the mail to get a copy of the mail.
As of now, when a user sends mail he gets a mail saying that he sent a mail to someone, along with the body of the sent mail. How do I disable that?
|
Django IntegrityError
| 5,450,932 | 0 | 2 | 2,397 | 0 |
python,django,django-models
|
Create and add items to the model before saving.
| 0 | 0 | 0 | 0 |
2011-03-27T17:19:00.000
| 2 | 0 | false | 5,450,881 | 0 | 0 | 1 | 2 |
I have a little project I am working on. I am writing a Django database to hold some data, and I have one ManyToManyField.
I am using my own Manager and adding methods for convenience. I have one that adds different tasks to the user's to-do list. These items can be assigned to many people, and so on.
When I do this I am getting an IntegrityError. What is the main cause of this? The exact error I am getting is:
...items_id may not be NULL
I would appreciate an answer on how to fix this, and also an explanation of how this exception is thrown. I have been told to catch it, but I don't like things being thrown unless they need to be.
Please and thank you!
|
Django IntegrityError
| 5,970,201 | 3 | 2 | 2,397 | 0 |
python,django,django-models
|
It turns out all I needed to do was clean out my database with python manage.py sqlflush. Everything was fine afterwards. I then added South to help with migrations in the future.
I would advise caution, since sqlflush will return your database to the state of the last syncdb.
| 0 | 0 | 0 | 0 |
2011-03-27T17:19:00.000
| 2 | 1.2 | true | 5,450,881 | 0 | 0 | 1 | 2 |
I have a little project I am working on. I am writing a Django database to hold some data, and I have one ManyToManyField.
I am using my own Manager and adding methods for convenience. I have one that adds different tasks to the user's to-do list. These items can be assigned to many people, and so on.
When I do this I am getting an IntegrityError. What is the main cause of this? The exact error I am getting is:
...items_id may not be NULL
I would appreciate an answer on how to fix this, and also an explanation of how this exception is thrown. I have been told to catch it, but I don't like things being thrown unless they need to be.
Please and thank you!
|
django: on pypy, psyco, unladen swallow or cpython, which one is the fastest?
| 5,497,595 | 0 | 3 | 611 | 0 |
python,django,compiler-construction,comparison,benchmarking
|
One thing you should consider is C extensions. Different implementations require different extension mechanisms; at present, ctypes may be the most common one.
So I recommend you stick with CPython, in case you need C extensions.
| 0 | 0 | 0 | 1 |
2011-03-27T18:16:00.000
| 2 | 0 | false | 5,451,246 | 0 | 0 | 1 | 1 |
Has anyone tried to compare these Python implementations?
pypy
psyco
unladen swallow (is it dead?)
cpython
I am planning to squeeze something more from my server.
Setup:
Django 1.3
Python 2.7
Psycopg2 1.4
apache 2
mod_wsgi
and... Windows Server
I am not a Windows fanboy, but it has to be :{ There is some legacy code running on it.
|
Python, beginner's question! Repository or Object persisting itself?
| 5,463,687 | 1 | 8 | 2,180 | 0 |
python
|
To the best of my recollection, Django's models include save() and delete() methods, so you deal exclusively with objects rather than interacting with a database connection object. I don't know that it's inherently the Python way of doing things, but I'm pretty sure it's a pervasive Django pattern.
If I was told "this is Django code" but the code diverged from how Django does things, that might be confusing.
| 0 | 0 | 0 | 0 |
2011-03-28T17:21:00.000
| 4 | 0.049958 | false | 5,462,635 | 1 | 0 | 1 | 3 |
I am a seasoned .NET developer who's trying to write some Python code. On one of the projects I am contributing to, we have a services layer, which is a set of classes that abstract away functionality, and a Django web app which consumes these in-process services (which are just classes).
I had created a repository layer and ensured that all interaction with the database happens through the services layer via this repository. We have a document-oriented database, and thus we do not have the usual object-relational muck.
During a recent code review, one developer who is supposedly seasoned with Python balked at this and commented that this was not the Python way of doing things. He remarked that Python developers are used to having save and delete methods on the object instance itself (and do not use the repository pattern as much), and this would confuse Python devs looking to contribute to our OSS project. Python devs, your views? Would you be confused?
Edit: This is not Django code, but will be code called by the Django app (it is an in-process service layer).
|
Python, beginner's question! Repository or Object persisting itself?
| 5,522,389 | 3 | 8 | 2,180 | 0 |
python
|
Maybe that is a Django pattern, but it is not a Python one by any means.
That said, if the target audience of your module is Django developers, I would advise you to follow the Django philosophy and its associated patterns as much as possible.
| 0 | 0 | 0 | 0 |
2011-03-28T17:21:00.000
| 4 | 0.148885 | false | 5,462,635 | 1 | 0 | 1 | 3 |
I am a seasoned .NET developer who's trying to write some Python code. On one of the projects I am contributing to, we have a services layer, which is a set of classes that abstract away functionality, and a Django web app which consumes these in-process services (which are just classes).
I had created a repository layer and ensured that all interaction with the database happens through the services layer via this repository. We have a document-oriented database, and thus we do not have the usual object-relational muck.
During a recent code review, one developer who is supposedly seasoned with Python balked at this and commented that this was not the Python way of doing things. He remarked that Python developers are used to having save and delete methods on the object instance itself (and do not use the repository pattern as much), and this would confuse Python devs looking to contribute to our OSS project. Python devs, your views? Would you be confused?
Edit: This is not Django code, but will be code called by the Django app (it is an in-process service layer).
|
Python, beginner's question! Repository or Object persisting itself?
| 6,884,336 | 2 | 8 | 2,180 | 0 |
python
|
Django's ORM provides save() and delete() methods on the object. SQLAlchemy on the other hand has a so called session to which you add or delete objects.
Both approaches are very popular, so I'd say they are about equal in terms of adoption. However, in the context of a Django application, going with the Django convention is probably preferable unless you have a good reason not to.
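The two styles the answer contrasts can be caricatured in a few lines of plain Python (a conceptual sketch, not either library's real API):

```python
# Active-record style (Django ORM): the object persists itself.
class ActiveRecord:
    _store = {}   # stands in for the database table
    _next_id = 1

    def save(self):
        if not hasattr(self, "id"):
            self.id = ActiveRecord._next_id
            ActiveRecord._next_id += 1
        ActiveRecord._store[self.id] = self

    def delete(self):
        ActiveRecord._store.pop(self.id, None)

# Unit-of-work style (SQLAlchemy): a session tracks objects and flushes them.
class Session:
    def __init__(self):
        self.pending = []    # objects added but not yet committed
        self.committed = []  # stands in for the database

    def add(self, obj):
        self.pending.append(obj)

    def commit(self):
        self.committed.extend(self.pending)
        self.pending.clear()
```

Your repository layer sits closer to the session style, which is why Django-minded reviewers find it unfamiliar rather than wrong.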
| 0 | 0 | 0 | 0 |
2011-03-28T17:21:00.000
| 4 | 0.099668 | false | 5,462,635 | 1 | 0 | 1 | 3 |
I am a seasoned .NET developer who's trying to write some Python code. On one of the projects I am contributing to, we have a services layer, which is a set of classes that abstract away functionality, and a Django web app which consumes these in-process services (which are just classes).
I had created a repository layer and ensured that all interaction with the database happens through the services layer via this repository. We have a document-oriented database, and thus we do not have the usual object-relational muck.
During a recent code review, one developer who is supposedly seasoned with Python balked at this and commented that this was not the Python way of doing things. He remarked that Python developers are used to having save and delete methods on the object instance itself (and do not use the repository pattern as much), and this would confuse Python devs looking to contribute to our OSS project. Python devs, your views? Would you be confused?
Edit: This is not Django code, but will be code called by the Django app (it is an in-process service layer).
|
form index in inlineformset
| 5,463,957 | 2 | 1 | 189 | 0 |
python,django,formset
|
No, objects in a collection don't generally have access to their own index or key.
However, since you're outputting the formset in a template, you're presumably looping through the forms, so you can use {{ forloop.counter }} to get the index of the iteration ({{ forloop.counter0 }} gives the zero-based index, which matches the number used in the formset's field names).
| 0 | 0 | 0 | 0 |
2011-03-28T19:03:00.000
| 2 | 1.2 | true | 5,463,769 | 0 | 0 | 1 | 1 |
I have a formset created using inlineformset_factory; what it looks like doesn't matter for this question. In the template I am looping through it with for form in formset.
I want to be able to display the index of each form in my template. By form index, I mean the number associated with that form in all of the form fields. Is there a variable that does this? I tried form.index and form.form_id, and form.id is a field.
|
Python Twisted does not work on Eclipse
| 5,466,280 | 2 | 2 | 991 | 0 |
python,eclipse,twisted,pydev
|
Make sure you:
Have PyDev installed
Have twisted / zope.interface installed and in your PYTHONPATH.
Have configured your eclipse project as a python/pydev project.
Have configured the interpreter in the Eclipse environment (Pydev settings).
| 0 | 1 | 0 | 0 |
2011-03-28T19:04:00.000
| 1 | 1.2 | true | 5,463,782 | 1 | 0 | 1 | 1 |
I installed Twisted for Python, and I am trying to build a simple server in Eclipse, but I am getting the following error:
ImportError: No module named zope.interface
I'm not sure how to correct this. Doesn't Twisted install all of its dependencies first?
|
Multiple Sphinx Themes Used Simultaneously
| 11,197,460 | 1 | 1 | 393 | 0 |
python,python-sphinx
|
That is correct, only one Sphinx theme can be used at a time.
| 0 | 0 | 0 | 0 |
2011-03-29T06:01:00.000
| 1 | 1.2 | true | 5,468,511 | 0 | 0 | 1 | 1 |
I want to use multiple themes in Sphinx - so that one page has one theme, and a second page has a second theme. It seems to me that only one theme can be set at once - is this true?
Many thanks,
Ned
|
Why is hitting the refresh button not calling the view method of the page's URL?
| 5,471,592 | 0 | 0 | 153 | 0 |
python,django,django-views
|
Check caching:
First check the cache meta tags on the client.
Then check the web server cache.
Note: GET requests are sometimes cached on the server or client automatically.
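The client-side check above means looking for (or adding) response headers like the following; this is a generic anti-caching set, not something specific to the asker's app:

```python
# Response headers that tell browsers and proxies not to cache the page.
NO_CACHE_HEADERS = {
    "Cache-Control": "no-cache, no-store, must-revalidate",
    "Pragma": "no-cache",   # legacy HTTP/1.0 clients
    "Expires": "0",         # already expired
}
```

In Django these can be applied per-view; if refresh serves a stale page while re-entering the URL does not, missing headers like these are the usual culprit.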
| 0 | 0 | 0 | 0 |
2011-03-29T10:47:00.000
| 2 | 0 | false | 5,471,312 | 0 | 0 | 1 | 2 |
I have a Django-backed web application. When I hit the refresh button on a page, the view method corresponding to the page's URL is not being called, but it does get called when I re-enter the URL in the address bar. Can anyone suggest the reason and the solution?
|
Why is hitting the refresh button not calling the view method of the page's URL?
| 5,537,714 | 0 | 0 | 153 | 0 |
python,django,django-views
|
Finally found the answer: just add the new-tab command to your HTML page itself.
| 0 | 0 | 0 | 0 |
2011-03-29T10:47:00.000
| 2 | 1.2 | true | 5,471,312 | 0 | 0 | 1 | 2 |
I have a Django-backed web application. When I hit the refresh button on a page, the view method corresponding to the page's URL is not being called, but it does get called when I re-enter the URL in the address bar. Can anyone suggest the reason and the solution?
|
Including CAPTCHA on user registration page with Django
| 5,473,845 | 3 | 0 | 2,880 | 0 |
python,django,django-models,django-templates,captcha
|
Your question about which 3rd party solution is "better" is subjective, and stackoverflow doesn't generally like to answer subjective questions. Take some time and evaluate each in light of your needs.
You often don't need a fancy image captcha. Even a simple question like "what color is an orange?" will stop most spam bots. I posed a simple question on my registration form, asking the user to type the domain name of the site. Simple but very effective. You can also include an input box on the form, and hide it with CSS (display: none). If this input comes back to you filled out, chances are good a bot is trying to register.
It doesn't really matter that these 3rd party solutions are using Django forms, and you are using "simple HTML". In your registration view, you simply process request.POST. It doesn't matter how the form was generated.
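As a rough illustration of the honeypot and simple-question checks described above, here is a framework-agnostic sketch that takes the POST data as a plain dict, so it works the same whether the form was rendered by Django forms or hand-written HTML. The field names "website" (the CSS-hidden honeypot) and "site_domain" (the simple question) are made-up examples:

```python
# Sketch of the two anti-bot checks described above. Field names are
# illustrative; in a Django view you would pass request.POST here.

def looks_like_bot(post_data, expected_domain="example.com"):
    # A human never sees the CSS-hidden "website" field, so any value in it
    # strongly suggests an automated submission.
    if post_data.get("website", "").strip():
        return True
    # Simple-question check: ask the user to type the site's domain.
    answer = post_data.get("site_domain", "").strip().lower()
    return answer != expected_domain
```

In the registration view you would reject the submission (or silently drop it) when this returns True.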
| 0 | 0 | 0 | 0 |
2011-03-29T13:31:00.000
| 4 | 0.148885 | false | 5,473,340 | 0 | 0 | 1 | 1 |
I am a Django newbie. I created an app which has a user login/registration page. Now I want to include CAPTCHA also in the registration page. Can somebody guide me how to implement this in Django as i am quite new to it. On googling I found there are many modules which do the function out of the box. If this is the way to go, then which application is a better choice? Also I found most of them were explained on the basis of using Django Forms. But I used simple HTML forms instead of Django forms. Any help would be appreciated.
|
HTML/CSS/JS Syntax Highlighting in Eclipse
| 5,477,725 | 5 | 7 | 8,276 | 0 |
python,html,eclipse,syntax,cheetah
|
Can I somehow trick Eclipse into treating .tmpl files as if they were .html?
It's not a trick.
Under Windows -> Preferences, General -> Editors -> File Associations, you can associate *.tmpl files with your HTML editor.
| 0 | 0 | 0 | 1 |
2011-03-29T18:19:00.000
| 1 | 1.2 | true | 5,477,078 | 0 | 0 | 1 | 1 |
Hello. How can I enable syntax highlighting for HTML/CSS/JS in Eclipse? I am mainly developing in Python using the PyDev package, but right now I am creating Cheetah templates and they are very hard to read unhighlighted.
Any plugin/package suggestions related to Cheetah, or just highlighting any file as HTML, would be greatly appreciated.
Thank you.
|
Need CGI (or another solution compatible with IIS 7) to handle *massive* uploads
| 5,479,573 | -1 | 1 | 300 | 0 |
python,perl,iis,iis-7,cgi
|
The Windows TCP stack is limited to 4GB file uploads. Any more than that is not possible.
| 0 | 1 | 0 | 0 |
2011-03-29T21:52:00.000
| 3 | -0.066568 | false | 5,479,387 | 0 | 0 | 1 | 2 |
We need to handle massive file uploads without spending resources on an IIS 7 server. To emphasize how light-weight this needs to be, let's say that we need to handle file uploads of sizes that are completely insane, like 100GB uploads, or something that can continue running for an extremely long time without consuming additional resources. Basically we need something that gives us control over the reception of the file from the moment it starts to the moment it ends.
A bit of background:
We're using ColdFusion as the server-side processor, but it has failed us when handling uploads beyond about 1GB and we've exhausted our configuration options. There's a long story behind that, but essentially, if a .cfm page (ColdFusion) is the destination of the file upload and it goes over about 1GB, it gives a 503 error... even if the target file doesn't exist. So clearly too much is going on merely by telling the server that we intend to process the file with a .cfm page.
We suspect that this is due to Java limitations because the server (or really, the workstation in this case) does not show any signs of load on CPU or memory. Since we have limited memory and this website is intended for a lot of concurrent uploads, we can't trust simply raising the virtual machine memory usage, especially because that simply doesn't work currently, even for a single connection... let alone the hundreds of concurrent connections we expect when we go live.
So we're down to writing a specialized solution using CGI that will handle file uploads only. Basically, we need control on the server-side that we don't get with ColdFusion or ASP.NET because those technologies do so many things on their own, behind the scenes, without giving us the control we need. They always end up spending up too many resources one way or the other for an arguably obvious reason; what we're trying to do is completely insane and not the intended function of those technologies. That's why we want a specialized uploader through CGI that bypasses all that ColdFusion/ASP.NET magic that keeps getting in the way, hoping it gives us the control we need.
But before we spent countless hours on this, I figured I'd ask around and see if anyone knows of a proper solution to this problem that might be viable in our case.
The only real restriction here is that it has to be CGI, and it has to run on IIS 7, therefore a Windows "Server" environment. We're fine with it being written in Python, Perl, name it... provided it can run as a CGI, but it has to run as a CGI... unless of course someone has better ideas on how to do this.
So the magic question is: are there CGI solutions out there that already do this, or are we stuck with writing one on our own, hoping that the reason no one else has done it already is something other than it being impossible?
Thanks in advance.
|
Need CGI (or another solution compatible with IIS 7) to handle *massive* uploads
| 5,483,933 | 3 | 1 | 300 | 0 |
python,perl,iis,iis-7,cgi
|
You want WebDAV, not CGI. It provides all the nice bits that make file transfers not suck, like resuming and pausing.
| 0 | 1 | 0 | 0 |
2011-03-29T21:52:00.000
| 3 | 0.197375 | false | 5,479,387 | 0 | 0 | 1 | 2 |
We need to handle massive file uploads without spending resources on an IIS 7 server. To emphasize how light-weight this needs to be, let's say that we need to handle file uploads of sizes that are completely insane, like 100GB uploads, or something that can continue running for an extremely long time without consuming additional resources. Basically we need something that gives us control over the reception of the file from the moment it starts to the moment it ends.
A bit of background:
We're using ColdFusion as the server-side processor, but it has failed us when handling uploads beyond about 1GB and we've exhausted our configuration options. There's a long story behind that, but essentially, if a .cfm page (ColdFusion) is the destination of the file upload and it goes over about 1GB, it gives a 503 error... even if the target file doesn't exist. So clearly too much is going on merely by telling the server that we intend to process the file with a .cfm page.
We suspect that this is due to Java limitations because the server (or really, the workstation in this case) does not show any signs of load on CPU or memory. Since we have limited memory and this website is intended for a lot of concurrent uploads, we can't trust simply raising the virtual machine memory usage, especially because that simply doesn't work currently, even for a single connection... let alone the hundreds of concurrent connections we expect when we go live.
So we're down to writing a specialized solution using CGI that will handle file uploads only. Basically, we need control on the server-side that we don't get with ColdFusion or ASP.NET because those technologies do so many things on their own, behind the scenes, without giving us the control we need. They always end up spending up too many resources one way or the other for an arguably obvious reason; what we're trying to do is completely insane and not the intended function of those technologies. That's why we want a specialized uploader through CGI that bypasses all that ColdFusion/ASP.NET magic that keeps getting in the way, hoping it gives us the control we need.
But before we spent countless hours on this, I figured I'd ask around and see if anyone knows of a proper solution to this problem that might be viable in our case.
The only real restriction here is that it has to be CGI, and it has to run on IIS 7, therefore a Windows "Server" environment. We're fine with it being written in Python, Perl, name it... provided it can run as a CGI, but it has to run as a CGI... unless of course someone has better ideas on how to do this.
So the magic question is: are there CGI solutions out there that already do this, or are we stuck with writing one on our own, hoping that the reason no one else has done it already is something other than it being impossible?
Thanks in advance.
|
Incorporating multiple login systems?
| 5,480,782 | 3 | 1 | 105 | 0 |
python,database,database-design,login
|
You can do this in many ways: either you store most of the data in a generic user table (as you are about to) and keep the provider details separate,
or you make a design where you can connect multiple logins to the same user. This will end up with something like
id user
id facebookuser (nullable)
id twitteruser (nullable)
This will maybe get you N e-mail addresses (and still no password, since you aren't the provider of the account), or none at all. It depends on how much the user trusts you with each provider.
Edit:
You might also want to normalize the data without nullables.
You can do this by having
id_user
id_facebookuser id_user
id_twitteruser id_user
| 0 | 0 | 0 | 1 |
2011-03-30T00:58:00.000
| 1 | 1.2 | true | 5,480,742 | 0 | 0 | 1 | 1 |
I have something simple right now, userdb schema is:
userid - autoincrement id email
email address
password
I want to incorporate Facebook and twitter, how would i deal with it on the DB side?
|
Django or Ruby on Rails, which one is better for web 2.0 heavy ajax app?
| 5,484,532 | 0 | 0 | 1,274 | 0 |
javascript,jquery,python,ruby-on-rails,django
|
We've had a blast developing with Django/Jquery, and development, in our opinion, is easier, faster. That being said, we tend to go with Django because of Python's raw power and reliability. Not to say ROR doesn't have similar strengths, but we get "stuck" more often than not when using ROR than when using Django.
If it's a small-ish web app and you're not worried about production deployment on a significant level then go with ROR. If you're looking for something better equipped, more reliable and more conducive to development then go with Django.
Keep in mind, though, that this all boils down to what you know, what you're most comfortable with. If you know a bit of Python go with Django, but know that Ruby is just as easy to pick up if you'd rather go that direction.
They're both win/win really.
| 0 | 0 | 0 | 0 |
2011-03-30T07:46:00.000
| 4 | 0 | false | 5,483,404 | 0 | 0 | 1 | 2 |
I want to build a heavy ajax web2.0 app and I don't have javascript, django or ruby on rails. I have some experience with python. I am not sure which one to choose. I have a backend database and have to run few queries for each page, no big deal. So, I am looking for a choice which is quite easy to learn and maintain in the future.
Thank you
|
Django or Ruby on Rails, which one is better for web 2.0 heavy ajax app?
| 5,483,855 | 0 | 0 | 1,274 | 0 |
javascript,jquery,python,ruby-on-rails,django
|
RoR has much better community activity. It's easier to learn without learning Ruby (I do not recommend that route, but yes, you can write in RoR while barely understanding Ruby).
About performance: Ruby 1.8 was much slower than Python, but Ruby 1.9 may be faster.
If you want to build a smart Ajax application and you understand JavaScript, it does not matter which framework you use. If not, or you are lazy, RoR has some aid for Ajax requests. Also take note of Django's /admin/ :)
| 0 | 0 | 0 | 0 |
2011-03-30T07:46:00.000
| 4 | 0 | false | 5,483,404 | 0 | 0 | 1 | 2 |
I want to build a heavy ajax web2.0 app and I don't have javascript, django or ruby on rails. I have some experience with python. I am not sure which one to choose. I have a backend database and have to run few queries for each page, no big deal. So, I am looking for a choice which is quite easy to learn and maintain in the future.
Thank you
|
Does local GAE read and write to a local datastore file on the hard drive while it's running?
| 5,486,121 | 3 | 0 | 196 | 0 |
python,google-app-engine,local-storage
|
How the datastore reads and writes its underlying files varies - the standard datastore is read on startup, and written progressively, journal-style, as the app modifies data. The SQLite backend uses a SQLite database.
You shouldn't have to care, though - neither backend is designed for robustness in the face of failure, as they're development backends. You shouldn't be modifying or deleting the underlying files, either.
| 0 | 1 | 0 | 0 |
2011-03-30T10:09:00.000
| 2 | 1.2 | true | 5,484,900 | 0 | 0 | 1 | 1 |
I have just noticed that when I have a running instance of my GAE application, there nothing happens with the datastore file when I add or remove entries using Python code or in admin console. I can even remove the file and still have all data safe and sound in admin area and accessible from code. But when I restart my application, all data obviously goes away and I have a blank datastore. So, the question - does GAE reads all data from the file only when it starts and then deals with it in the memory, saving the data after I stop the application? Does it make any requests to the datastore file when the application is running? If it doesn't save anything to the file while it's running, then, possibly, data may be lost if the application unexpectedly stops? Please make it clear for me if you know how it works in this aspect.
|
Check if a function has a decorator
| 5,490,446 | 2 | 16 | 8,338 | 0 |
python,django,decorator,login-required
|
It seems that your situation is as follows:
1. You have pages that are secured and behind a login-required decorator
2. You have pages that are non-secure and can be visited in both a logged-in state and anonymous state.
If I understand your requirements, you want a user to be redirected to Main Page (Assuming this to be the Welcome Page that can be visited in both a logged-in and Anonymous state) when a user logs out.
Why wouldn't you just limit the user's ability to logout from only secure pages, and then set your redirect_url on logout to the welcome screen?
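If you do need to detect the decorator at runtime: Django's login_required does not leave a flag on the view by itself, so one common workaround (a sketch, not Django API) is to wrap it in your own marking decorator and test for the mark with getattr:

```python
# Hypothetical sketch: wrap the real login_required in a marking decorator
# of your own and use that everywhere; the attribute name "login_required"
# is our choice, not something Django sets.

def login_required(view):            # stand-in for django's real decorator
    return view

def marked_login_required(view):
    wrapped = login_required(view)   # apply the real decorator...
    wrapped.login_required = True    # ...then leave a flag we can test later
    return wrapped

@marked_login_required
def secure_view(request):
    return "secret"

def plain_view(request):
    return "public"

def needs_login(view):
    # e.g. after logout, redirect to the main page only for flagged views.
    return getattr(view, "login_required", False)
```

The logout handler can then resolve the current URL to its view function and call needs_login on it.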
| 0 | 0 | 0 | 0 |
2011-03-30T17:09:00.000
| 2 | 0.197375 | false | 5,489,649 | 0 | 0 | 1 | 1 |
My question is a general one, but specifically my application is the login_required decorator for Django.
I'm curious if there is a way to check if a view/function has a specific decorator (in this case the login_required decorator)
I am redirecting after logging a user out, and I want to redirect to the main page if the page they are currently on has the login_required decorator. My searches have yielded no results so far.
|
what is the best way to scrape multiple domains with scrapy?
| 8,621,234 | 1 | 6 | 3,829 | 0 |
python,screen-scraping,scrapy
|
You can use an empty allowed_domains attribute to instruct Scrapy not to filter any offsite requests. But in that case you must be careful and only return relevant requests from your spider.
| 0 | 0 | 1 | 0 |
2011-03-31T08:44:00.000
| 6 | 0.033321 | false | 5,497,268 | 0 | 0 | 1 | 3 |
I have around 10 odd sites that I wish to scrape from. A couple of them are wordpress blogs and they follow the same html structure, albeit with different classes. The others are either forums or blogs of other formats.
The information I like to scrape is common - the post content, the timestamp, the author, title and the comments.
My question is, do i have to create one separate spider for each domain? If not, how can I create a generic spider that allows me scrape by loading options from a configuration file or something similar?
I figured I could load the xpath expressions from a file which location can be loaded via command line but there seems to be some difficulties when scraping for some domain requires that I use regex select(expression_here).re(regex) while some do not.
|
what is the best way to scrape multiple domains with scrapy?
| 5,508,257 | 0 | 6 | 3,829 | 0 |
python,screen-scraping,scrapy
|
You should use BeautifulSoup especially if you're using Python. It enables you to find elements in the page, and extract text using regular expressions.
| 0 | 0 | 1 | 0 |
2011-03-31T08:44:00.000
| 6 | 0 | false | 5,497,268 | 0 | 0 | 1 | 3 |
I have around 10 odd sites that I wish to scrape from. A couple of them are wordpress blogs and they follow the same html structure, albeit with different classes. The others are either forums or blogs of other formats.
The information I like to scrape is common - the post content, the timestamp, the author, title and the comments.
My question is, do i have to create one separate spider for each domain? If not, how can I create a generic spider that allows me scrape by loading options from a configuration file or something similar?
I figured I could load the xpath expressions from a file which location can be loaded via command line but there seems to be some difficulties when scraping for some domain requires that I use regex select(expression_here).re(regex) while some do not.
|
what is the best way to scrape multiple domains with scrapy?
| 6,232,758 | 1 | 6 | 3,829 | 0 |
python,screen-scraping,scrapy
|
I do sort of the same thing using the following XPath expressions:
'/html/head/title/text()' for the title
//p[string-length(text()) > 150]/text() for the post content.
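For illustration only, here is a rough stdlib approximation of those two expressions. ElementTree's XPath subset has no string-length() predicate, so the length filter is done in Python; real pages are better handled with Scrapy/lxml selectors as in the answer, and this assumes well-formed markup:

```python
import xml.etree.ElementTree as ET

# Rough stdlib equivalent of the two XPath expressions above: the page
# title, plus every <p> whose text is longer than 150 characters. For real
# (often malformed) HTML, use Scrapy/lxml instead of ElementTree.

def extract(html):
    root = ET.fromstring(html)
    title = root.findtext("head/title", default="")
    long_paragraphs = [p.text for p in root.iter("p")
                       if p.text and len(p.text) > 150]
    return title, long_paragraphs
```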
| 0 | 0 | 1 | 0 |
2011-03-31T08:44:00.000
| 6 | 0.033321 | false | 5,497,268 | 0 | 0 | 1 | 3 |
I have around 10 odd sites that I wish to scrape from. A couple of them are wordpress blogs and they follow the same html structure, albeit with different classes. The others are either forums or blogs of other formats.
The information I like to scrape is common - the post content, the timestamp, the author, title and the comments.
My question is, do i have to create one separate spider for each domain? If not, how can I create a generic spider that allows me scrape by loading options from a configuration file or something similar?
I figured I could load the xpath expressions from a file which location can be loaded via command line but there seems to be some difficulties when scraping for some domain requires that I use regex select(expression_here).re(regex) while some do not.
|
Passing Data to a Python Web Crawler from PHP Script
| 5,499,681 | 0 | 1 | 1,079 | 0 |
php,python,stdout,stdin,web-crawler
|
Since I don't know too much about how Python works, just treat this as a wild idea.
Create an XML on your server which is accessible by both of python and PHP
On the PHP side you can insert new nodes to this XML about new urls with a processed=false flag
Python come and see for unprocessed tasks then fetch data and put sources onto your db
After successful fetching, toggle the processed flag
When next time PHP touch this XML, delete nodes with processed=true attributes
Hope it helps you in some way.
| 0 | 0 | 1 | 1 |
2011-03-31T12:06:00.000
| 3 | 0 | false | 5,499,558 | 0 | 0 | 1 | 2 |
I've got a python crawler crawling a few webpages every few minutes. I'm now trying to implement a user interface to be accessed over the web and to display the data obtained by the crawler. I'm going to use php/html for the interface. Anyway, the user interface needs some sort of button which triggers the crawler to crawl a specific website straight away (and not wait for the next crawl iteration).
Now, is there a way of sending data from the php script to the running python script? I was thinking about standard input/output, but could not find a way this can be done (writing from one process to another process stdin). Then I was thinking about using a shared file which php writes into and python reads from. But then I would need some way to let the python script know, that new data has been written to the file and a way to let the php script know when the crawler has finished its task. Another way would be sockets - but then I think, this would be a bit over the top and not as simple as possible.
Do you have any suggestions to keep everything as simple as possible but still allowing me to send data from a php script to a running python process?
Thanks in advance for any ideas!
Edit: I should note, that the crawler saves the obtained data into a sql database, which php can access. So passing data from the python crawler to the php script is no problem. It's the other way round.
|
Passing Data to a Python Web Crawler from PHP Script
| 5,500,025 | 1 | 1 | 1,079 | 0 |
php,python,stdout,stdin,web-crawler
|
The best possible way to remove the dependencies of working with different languages is to use a message-queuing library (like RabbitMQ or ActiveMQ).
By using this you can send direct messages from PHP to Python or vice versa...
If you want an easy way out, you need to modify your Python script (more along the lines of what fabrik said) to poll a database (or a file) for any new jobs... and process each one it finds...
| 0 | 0 | 1 | 1 |
2011-03-31T12:06:00.000
| 3 | 0.066568 | false | 5,499,558 | 0 | 0 | 1 | 2 |
I've got a python crawler crawling a few webpages every few minutes. I'm now trying to implement a user interface to be accessed over the web and to display the data obtained by the crawler. I'm going to use php/html for the interface. Anyway, the user interface needs some sort of button which triggers the crawler to crawl a specific website straight away (and not wait for the next crawl iteration).
Now, is there a way of sending data from the php script to the running python script? I was thinking about standard input/output, but could not find a way this can be done (writing from one process to another process stdin). Then I was thinking about using a shared file which php writes into and python reads from. But then I would need some way to let the python script know, that new data has been written to the file and a way to let the php script know when the crawler has finished its task. Another way would be sockets - but then I think, this would be a bit over the top and not as simple as possible.
Do you have any suggestions to keep everything as simple as possible but still allowing me to send data from a php script to a running python process?
Thanks in advance for any ideas!
Edit: I should note, that the crawler saves the obtained data into a sql database, which php can access. So passing data from the python crawler to the php script is no problem. It's the other way round.
|
pythonxy can't be reinstalled or uninsalled
| 5,508,380 | 0 | 0 | 927 | 0 |
python,installation,pythonxy
|
There is a simple way: delete the Python(x,y) directory (e.g. c:\python2.6),
then run pythonxy.exe again.
| 0 | 0 | 0 | 0 |
2011-04-01T00:50:00.000
| 1 | 0 | false | 5,508,113 | 1 | 0 | 1 | 1 |
For convenience, I am using Python(x,y).
But when I try to reinstall or uninstall, log2del hangs during the process and the program stops there.
I can not reinstall or uninstall.
|
How can I get plain text from within a HTML class given a URL in python?
| 5,508,324 | 1 | 0 | 197 | 0 |
python,html
|
Look in the direction of Beautiful Soup.
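With Beautiful Soup this is roughly soup.find_all(class_="calendardescription") plus get_text(); if you would rather stay in the standard library, here is a hedged Python 3 sketch with html.parser that does the same extraction (assuming balanced, non-void tags inside the matched elements):

```python
from html.parser import HTMLParser

# Stdlib sketch (Python 3): collect the plain text of every tag whose class
# attribute contains "calendardescription", joining separate matches with a
# blank line as the question asks. Assumes balanced tags inside matches.

class CalendarText(HTMLParser):
    def __init__(self):
        super().__init__()
        self.depth = 0      # >0 while inside a matching element
        self.chunks = []    # one list of text fragments per matching element

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self.depth or "calendardescription" in classes:
            if self.depth == 0:
                self.chunks.append([])
            self.depth += 1

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth and data.strip():
            self.chunks[-1].append(data.strip())

def plain_text(html):
    parser = CalendarText()
    parser.feed(html)
    return "\n\n".join(" ".join(chunk) for chunk in parser.chunks)
```

For a URL you would first fetch the page (e.g. with urllib) and pass the response body to plain_text before handing the result to the text-to-speech engine.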
| 0 | 0 | 1 | 0 |
2011-04-01T01:30:00.000
| 2 | 0.099668 | false | 5,508,314 | 1 | 0 | 1 | 1 |
How can I get plain text (stripped of HTML) from inside any tag with the class name calendardescription, given a URL, in Python? Matching text in different tags should also be separated by a blank line. This is for text-to-speech purposes.
Thanks in advance.
|
Python web service for a java application?
| 5,515,258 | 0 | 0 | 2,930 | 0 |
java,python,web-services,web-applications
|
When developing in a framework, it is generally simpler to develop with the language of the framework than it is to develop with a different language.
Servlets are components of the web server (which is also called a Servlet container). The Servlet container and the required Servlet API is all Java. While you could Frankenstein in some sort of Python code, odds are good that the integration effort would eventually make that "simplicity" far more complex than you particularly desire.
If you want a Python web application, use a Python web framework. If you want a Java web application, use a Java framework. Don't try to make the two cross compatible, as the integration points (and used / offered conveniences) are not even guaranteed to be present on the "other side" of the fence.
| 0 | 0 | 0 | 0 |
2011-04-01T15:01:00.000
| 4 | 0 | false | 5,515,157 | 0 | 0 | 1 | 1 |
Forgive me if this is a stupid question. I am completely new to building web services and complete web apps.
I want to develop a particular functionality for a java based web application. However this functionality is simpler to develop with Python. So is it possible If i develop this web service with Python and use it for a Java based webapp?
|
Is there a simple way to write an ODT using Python?
| 5,520,909 | 7 | 6 | 4,956 | 0 |
python,uno,odt
|
Your mileage with odfpy may vary. I didn't like it; I ended up using a template ODT created in OpenOffice, opening its content.xml with zipfile and ElementTree, and updating that. (In your case, it would create only the relevant table row and table cell nodes.) Then I wrote everything back into the archive.
It is actually straightforward, except for making ElementTree properly work with the XML namespaces (which is badly documented). But it can be done. I don't have the example, sorry.
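The namespace part the answer mentions can be shown in a minimal stdlib sketch; this only covers finding and updating ODF <text:p> nodes in a content.xml fragment, not the zipfile open/rewrite steps:

```python
import xml.etree.ElementTree as ET

# Minimal sketch of the ElementTree-with-namespaces step: locate ODF
# <text:p> nodes in a content.xml fragment and replace a placeholder.
# Opening the .odt with zipfile and writing content.xml back is omitted.

NS = {"text": "urn:oasis:names:tc:opendocument:xmlns:text:1.0"}

content = (
    '<office:body '
    'xmlns:office="urn:oasis:names:tc:opendocument:xmlns:office:1.0" '
    'xmlns:text="urn:oasis:names:tc:opendocument:xmlns:text:1.0">'
    '<text:p>PLACEHOLDER</text:p></office:body>'
)

root = ET.fromstring(content)
for p in root.iter("{%s}p" % NS["text"]):   # fully-qualified tag name
    if p.text == "PLACEHOLDER":
        p.text = "Hello from the template"
```

The same fully-qualified-tag pattern works for building table-row and table-cell nodes under the ODF table namespace.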
| 0 | 0 | 0 | 0 |
2011-04-01T22:54:00.000
| 2 | 1.2 | true | 5,519,714 | 0 | 0 | 1 | 1 |
My point is that using either pod (from appy framework, which is a pain to use for me) or the OpenOffice UNO bridge that seems soon to be deprecated, and that requires OOo.org to run while launching my script is not satisfactory at all.
Can anyone point me to a neat way to produce a simple yet clean ODT (tables are my priority) without having to code it myself all over again ?
edit: I'm giving a try to ODFpy that seems to do what I need, more on that later.
|
Equivalent of Python's dir in Javascript
| 5,524,753 | 2 | 58 | 18,362 | 0 |
javascript,python,namespaces,interactive,dir
|
The global variables are kept in an easily accessible object (window) and so you can inspect/iterate over them easily. (Using something like the functions suggested by Glenjamin)
On the other hand, I don't know of any way to inspect local variables defined in functions or closures - if this is possible I'd at least guess it would be highly browser/console specific.
| 0 | 0 | 0 | 0 |
2011-04-02T14:33:00.000
| 9 | 0.044415 | false | 5,523,747 | 1 | 0 | 1 | 1 |
when I write Python code from the interpreter I can type dir() to have a list of names defined in the current scope. How can achieve to have the same information, programmatically, when I develop Javascript code from a browser using an interactive console like firebug, chrome console, etc?
|
file I/O with google app engine
| 5,525,121 | 1 | 1 | 278 | 0 |
python,html,xml,google-app-engine,datastore
|
Use a StringIO when you need a file-like object for use with libraries that act on files. (Although I believe most XML parsers will happily accept a string instead of requiring a file-like object.)
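A minimal sketch of the pattern: the uploaded file arrives as a string in the POST body (in webapp, something like self.request.get('xmlfile'); the name is illustrative), and you wrap it in StringIO only when an API insists on a file-like object. Python 3's io.StringIO is shown; on 2011-era App Engine it would be StringIO.StringIO:

```python
import io
import xml.etree.ElementTree as ET

# Sketch: process an uploaded XML string in memory, no blobstore involved.
# ElementTree.fromstring() would accept the string directly; the StringIO
# wrapper is only needed for APIs that require a file-like object.

def parse_upload(xml_text):
    root = ET.parse(io.StringIO(xml_text)).getroot()
    return [item.attrib["name"] for item in root.iter("item")]
```

After extracting what you need and writing it to the datastore, the string simply goes out of scope; nothing is ever stored on disk.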
| 0 | 1 | 0 | 0 |
2011-04-02T18:17:00.000
| 1 | 1.2 | true | 5,524,991 | 0 | 0 | 1 | 1 |
I want to provide a field in my html file so that people can upload their XML files to be imported to the datastore. How can I read and process this file inside the app engine once it is uploaded ? (I dont want to store the file with blobstore. Just want to read, process and throw it away) Thanks
|
Prepopulate initial values for fields in the Django Admin without slugifying
| 5,526,953 | 1 | 0 | 363 | 0 |
python,django
|
There is no out-of-the-box support for this (assuming you're talking about replacing the exact, live editing that prepopulated_fields provides).
The slug function is written in JavaScript, in django/contrib/admin/media/js/urlify.js.
You could potentially insert a new script via the ModelAdmin extra JS property, but make sure your admin page doesn't actually need the "real" slugify script :)
| 0 | 0 | 0 | 0 |
2011-04-03T00:47:00.000
| 1 | 0.197375 | false | 5,526,820 | 0 | 0 | 1 | 1 |
In the Django admin, I can set a slug field to fill in automatically using prepopulated_fields. How can I set a field to fill in using a different function, for example just basic concatenation instead of lowercase and spaces-to-hyphens ?
|
How can I scrape an ASP.NET site that does all interaction as postbacks?
| 5,532,871 | 0 | 1 | 2,041 | 0 |
javascript,asp.net,python,screen-scraping
|
If you are just trying to simulate load, you might want to check out something like selenium, which runs through a browser and handles postbacks like a browser does.
| 0 | 0 | 1 | 0 |
2011-04-03T21:09:00.000
| 3 | 0 | false | 5,532,541 | 0 | 0 | 1 | 3 |
Using Python, I built a scraper for an ASP.NET site (specifically a Jenzabar course searching portlet) that would create a new session, load the first search page, then simulate a search by posting back the required fields. However, something changed, and I can't figure out what, and now I get HTTP 500 responses to everything. There are no new fields in the browser's POST data that I can see.
I would ideally like to figure out how to fix my own scraper, but that is probably difficult to ask about on StackOverflow without including a ton of specific context, so I was wondering if there was a way to treat the page as a black box and just fire click events on the postback links I want, then get the HTML of the result.
I saw some answers on here about scraping with JavaScript, but they mostly seem to focus on waiting for javascript to load and then returning a normalized representation of the page. I want to simulate the browser actually clicking on the links and following the same path to execute the request.
|
How can I scrape an ASP.NET site that does all interaction as postbacks?
| 5,532,579 | 2 | 1 | 2,041 | 0 |
javascript,asp.net,python,screen-scraping
|
Without knowing any specifics, my hunch is that you are using a hardcoded session id and the web server's app domain recycled and created new encryption/decryption keys, rendering your hardcoded session id (which was encrypted by the old keys) useless.
| 0 | 0 | 1 | 0 |
2011-04-03T21:09:00.000
| 3 | 1.2 | true | 5,532,541 | 0 | 0 | 1 | 3 |
Using Python, I built a scraper for an ASP.NET site (specifically a Jenzabar course searching portlet) that would create a new session, load the first search page, then simulate a search by posting back the required fields. However, something changed, and I can't figure out what, and now I get HTTP 500 responses to everything. There are no new fields in the browser's POST data that I can see.
I would ideally like to figure out how to fix my own scraper, but that is probably difficult to ask about on StackOverflow without including a ton of specific context, so I was wondering if there was a way to treat the page as a black box and just fire click events on the postback links I want, then get the HTML of the result.
I saw some answers on here about scraping with JavaScript, but they mostly seem to focus on waiting for javascript to load and then returning a normalized representation of the page. I want to simulate the browser actually clicking on the links and following the same path to execute the request.
|
How can I scrape an ASP.NET site that does all interaction as postbacks?
| 5,532,627 | 0 | 1 | 2,041 | 0 |
javascript,asp.net,python,screen-scraping
|
You could try using Firebug's Net tab to monitor all requests, browse around manually, and then diff the requests that you generate with the ones that your screen scraper is generating.
| 0 | 0 | 1 | 0 |
2011-04-03T21:09:00.000
| 3 | 0 | false | 5,532,541 | 0 | 0 | 1 | 3 |
Using Python, I built a scraper for an ASP.NET site (specifically a Jenzabar course searching portlet) that would create a new session, load the first search page, then simulate a search by posting back the required fields. However, something changed, and I can't figure out what, and now I get HTTP 500 responses to everything. There are no new fields in the browser's POST data that I can see.
I would ideally like to figure out how to fix my own scraper, but that is probably difficult to ask about on StackOverflow without including a ton of specific context, so I was wondering if there was a way to treat the page as a black box and just fire click events on the postback links I want, then get the HTML of the result.
I saw some answers on here about scraping with JavaScript, but they mostly seem to focus on waiting for javascript to load and then returning a normalized representation of the page. I want to simulate the browser actually clicking on the links and following the same path to execute the request.
|
Static css file and xdv
| 5,539,492 | 3 | 3 | 289 | 0 |
python,css,plone,xdv
|
Clicking save in the portal_css ZMI management screen will redo the merging and change the version number in the resources.
| 0 | 0 | 0 | 0 |
2011-04-04T07:33:00.000
| 4 | 0.148885 | false | 5,535,714 | 0 | 0 | 1 | 2 |
What is the correct method to manage css file versioning using collective.xdv?
Now I use nginx to serve css directly. I tried to import them in the css_registry, but if I change a file the merged css doesn't update, I mean, its version number (eg. the 4931 in rescsstylesheets-cachekey4931.css) doesn't get incremented.
I use plone 4.04, any hints?
|
Static css file and xdv
| 5,536,194 | 3 | 3 | 289 | 0 |
python,css,plone,xdv
|
That's not a version number. That's the portal_css tool, which merges and caches CSS files together for better performance.
While developing, you have to enable CSS/JS debug mode in order to see changes in real time. Go to ZMI -> portal_css (and portal_javascripts) and check the "debug mode" flag.
If I'm not wrong, from Plone 4.x this is enabled by default when you run your instance in debug mode (bin/instance fg or bin/client fg). If that doesn't happen, check your zope.conf for "debug-mode = on".
| 0 | 0 | 0 | 0 |
2011-04-04T07:33:00.000
| 4 | 0.148885 | false | 5,535,714 | 0 | 0 | 1 | 2 |
What is the correct method to manage css file versioning using collective.xdv?
Now I use nginx to serve css directly. I tried to import them in the css_registry, but if I change a file the merged css doesn't update, I mean, its version number (eg. the 4931 in rescsstylesheets-cachekey4931.css) doesn't get incremented.
I use plone 4.04, any hints?
|
I left Python learning because of Python 2 vs 3
| 5,538,940 | 6 | 2 | 750 | 0 |
python,compatibility
|
Hello
I had the same question because I began to learn Python 2 months ago.
So after reading some posts and information, I decided to start with Python 2.7.1. Why?
1/ Python 2.7.1 is really stable and has all the great libraries.
2/ It will be maintained for a long time with bug fixes (but no new features), so there will be a 2.7.2, a 2.7.3...
3/ You may use the 3.x syntax in your 2.7 code with the __future__ statement.
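The __future__ approach mentioned above can be sketched in a few lines (a minimal illustration; the choice of which features to import is mine, not from the answer):

```python
# On Python 2.7, these imports opt in to Python 3 behaviour for printing
# and division; on Python 3 they are harmless no-ops.
from __future__ import print_function, division

print("1/2 =", 1 / 2)   # true division: 0.5, not 0 as in classic Python 2
```

Writing new Python 2 code this way makes a later port to Python 3 much smoother.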
| 0 | 0 | 0 | 0 |
2011-04-04T12:55:00.000
| 4 | 1 | false | 5,538,747 | 1 | 0 | 1 | 3 |
I wanted to learn how to program for the first time.
Because I mainly practice IT and security, I chose to start with Python.
But as I started to learn Python 3, I came to realize that none of the modules I wanted to use had been ported to Python 3, and even Django (one of the main reasons I wanted to learn Python) and IronPython do not support Python 3.
From my view, Python is not recommended for newbies because 1) Python 2 is about to be "out of future support" (2.7 is the last one), and 2) Python 3 is not supported by all the important modules and frameworks...
So someone who wants to learn Python from scratch, without wasting time on a version that is about to be out of support, has no good option (neither version 2 nor 3)...
Please correct me if I'm wrong (and before I move to C# :) ).
|
I left Python learning because of Python 2 vs 3
| 5,538,984 | 3 | 2 | 750 | 0 |
python,compatibility
|
+1 to all the replies you've received already. Yes, start with Python 2, especially as you want to use libraries that are only available in 2. But while you are doing this, check what the differences are. The one that has bitten me is the change to print. Very minor, but if I'd written all my prints in Python 3 style from the beginning, porting to 3 would have been trivial (Python 2 supports the function-style print).
| 0 | 0 | 0 | 0 |
2011-04-04T12:55:00.000
| 4 | 0.148885 | false | 5,538,747 | 1 | 0 | 1 | 3 |
I wanted to learn how to program for the first time.
Because I mainly practice IT and security, I chose to start with Python.
But as I started to learn Python 3, I came to realize that none of the modules I wanted to use had been ported to Python 3, and even Django (one of the main reasons I wanted to learn Python) and IronPython do not support Python 3.
From my view, Python is not recommended for newbies because 1) Python 2 is about to be "out of future support" (2.7 is the last one), and 2) Python 3 is not supported by all the important modules and frameworks...
So someone who wants to learn Python from scratch, without wasting time on a version that is about to be out of support, has no good option (neither version 2 nor 3)...
Please correct me if I'm wrong (and before I move to C# :) ).
|
I left Python learning because of Python 2 vs 3
| 5,538,809 | 5 | 2 | 750 | 0 |
python,compatibility
|
Python 2 and Python 3 are close enough that learning on the earlier version will give you a very solid grounding for migrating to 3 when it becomes more mainstream.
Abandoning a language just because it's transitioning to a new version is a bit silly, frankly.
| 0 | 0 | 0 | 0 |
2011-04-04T12:55:00.000
| 4 | 0.244919 | false | 5,538,747 | 1 | 0 | 1 | 3 |
I wanted to learn how to program for the first time.
Because I mainly practice IT and security, I chose to start with Python.
But as I started to learn Python 3, I came to realize that none of the modules I wanted to use had been ported to Python 3, and even Django (one of the main reasons I wanted to learn Python) and IronPython do not support Python 3.
From my view, Python is not recommended for newbies because 1) Python 2 is about to be "out of future support" (2.7 is the last one), and 2) Python 3 is not supported by all the important modules and frameworks...
So someone who wants to learn Python from scratch, without wasting time on a version that is about to be out of support, has no good option (neither version 2 nor 3)...
Please correct me if I'm wrong (and before I move to C# :) ).
|
Retrieve list of tasks in a queue in Celery
| 50,170,855 | 2 | 188 | 188,607 | 0 |
python,celery
|
As far as I know, Celery does not provide an API for examining tasks that are waiting in the queue. This is broker-specific. If you use Redis as a broker, for example, then examining the tasks waiting in the celery (default) queue is as simple as:
connect to the broker
list items in the celery list (the LRANGE command, for example)
Keep in mind that these are tasks WAITING to be picked up by available workers. Your cluster may have some tasks running - those will not be in this list, as they have already been picked up.
The process of retrieving the tasks in a particular queue is broker-specific.
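The two steps above can be sketched in Python. The connection details, the queue name, and the message layout are all assumptions here — Celery's message format varies by version and serializer, so treat this as illustrative only:

```python
import json

def pending_task_names(raw_messages):
    """Extract task names from raw broker messages.
    Assumes a JSON message with a 'headers'/'task' field (Celery's
    newer message protocol); older versions lay the message out
    differently, so this is a sketch, not a universal parser."""
    return [json.loads(raw)["headers"]["task"] for raw in raw_messages]

# Against a live Redis broker you would fetch the raw items like this
# (host/port/db and the "celery" queue name are placeholders):
#   import redis
#   r = redis.StrictRedis(host="localhost", port=6379, db=0)
#   print(pending_task_names(r.lrange("celery", 0, -1)))

# Demonstration with a fabricated message in the assumed format:
sample = json.dumps({"headers": {"task": "myapp.tasks.add"}})
print(pending_task_names([sample]))
```

As the answer notes, tasks already picked up by a worker will not appear in this list.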
| 0 | 1 | 0 | 0 |
2011-04-04T21:35:00.000
| 14 | 0.028564 | false | 5,544,629 | 0 | 0 | 1 | 1 |
How can I retrieve a list of tasks in a queue that are yet to be processed?
|
Android app database syncing with remote database
| 11,871,778 | 1 | 4 | 1,618 | 1 |
python,android
|
1) This looks like a pretty good way to manage your local & remote changes and to support offline work. I don't think it is overkill.
2) I think you should cache the user's changes locally with a local timestamp until synchronization is finished. Then the server should manage all the processing: track the current version, commit and roll back update attempts. Less processing on the client = better for you! (Easier to support and implement.)
3) I'd choose polling if I wanted to support offline mode, because offline you can't keep your socket open, and you would have to reopen it every time the Internet connection is restored.
PS: It looks like this is a very old question... LOL
| 0 | 0 | 0 | 0 |
2011-04-04T21:42:00.000
| 1 | 0.197375 | false | 5,544,689 | 0 | 0 | 1 | 1 |
I'm in the planning phase of an Android app which synchronizes to a web app. The web side will be written in Python with probably Django or Pyramid while the Android app will be straightforward java. My goal is to have the Android app work while there is no data connection, excluding the social/web aspects of the application.
This will be a run-of-the-mill app so I want to stick to something that can be installed easily through one click in the market and not require a separate download like CloudDB for Android.
I haven't found any databases that support this functionality so I will write it myself. One caveat with writing the sync logic is there will be some shared data between users that multiple users will be able to write to. This is a solo project, so I thought I'd throw this up here to see if I'm totally off-base.
The app will process local saves to the local sqlite database and then send messages to a service which will attempt to synchronize these changes to the remote database.
The sync service will alternate between checking for messages for the local app, i.e. changes to shared data by other users, and writing the local changes to the remote server.
All data will have a timestamp for tracking changes
When writing from the app to the server, if the server has newer information, the user will be warned about the conflict and prompted to overwrite what the server has or abandon the local changes. If the server has not been updated since the app last read the data, process the update.
When data comes from the server to the app, if the server has newer data overwrite the local data otherwise discard it as it will be handled in the next go around by the app updating the server.
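The two timestamp rules above can be sketched as plain functions (names and return values are illustrative, not from any library):

```python
def resolve_push(local_read_ts, server_ts):
    """App pushes a local change: if the server changed after the app
    last read the data, warn the user (conflict); otherwise apply."""
    return "conflict" if server_ts > local_read_ts else "apply"

def resolve_pull(local_ts, server_ts):
    """Server sends a change down: newer server data overwrites the
    local copy, otherwise discard it (the app will update the server
    on the next push instead)."""
    return "overwrite" if server_ts > local_ts else "discard"

print(resolve_push(10, 12))  # server changed since our read -> conflict
print(resolve_pull(10, 12))  # server data is newer -> overwrite local
```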
Here's some questions:
1) Does this sound like overkill? Is there an easier way to handle this?
2) Where should this processing take place? On the client or the server? I'm thinking the advantage of the client is less processing on the server but if it's on the server, this makes it easier to implement other clients.
3) How should I handle the updates from the server? Incremental polling or comet/websocket? One thing to keep in mind is that I would prefer to go with a minimal installation on Webfaction to begin with as this is the startup.
Once these problems are tackled I do plan on contributing the solution to the geek community.
|
How do you implement multiple data types for an object in Django?
| 5,564,022 | 0 | 2 | 1,476 | 0 |
python,django
|
RDBMSs with fixed, typed structures are not designed for that. For instance, take Google's Bigtable, which doesn't care what you store (i.e., product A's property types may be entirely different from product B's, though both are of type Product).
You need an object-based storage system with type flexibility.
Are you sure you want that at any cost? We can still do it, but with a lot of overhead. Here is a sketch as Django models:
from django.db import models

class Product(models.Model):
    pass  # the automatic "id" primary key is enough here

class AttributeType(models.Model):
    """
    Defines the various data types available for storage.
    ex: (1, 'INT') or (2, 'STRING') or (3, 'FLOAT')
    """
    name = models.CharField(max_length=32)

class ProductAttribute(models.Model):
    """
    Stores the various attributes of a product.
    Ex: (P1, ATT1, '100') or (P1, ATT3, '10.5') or (P2, ATT2, 'ABCD')
    """
    product = models.ForeignKey(Product)
    attribute_type = models.ForeignKey(AttributeType)
    name = models.CharField(max_length=64)
    value = models.TextField()  # stored as text; cast per AttributeType
| 0 | 0 | 0 | 0 |
2011-04-05T02:24:00.000
| 2 | 0 | false | 5,546,499 | 0 | 0 | 1 | 1 |
I'd like to know the best way to associate various data types with an object in Django. Some of the types should be string, boolean, image file, choice from a list, or a link. For example, say you have a Product model. For product X, you'll want to add an image attribute, a string for the model name, and a link. For product Y, possible attributes would be an image and a decimal weight. What would be the best way to set this up? Are there any packages available that do this or something similar?
|
problem with soaplib (lxml) with apache2 + mod_wsgi
| 5,559,988 | 2 | 2 | 1,054 | 1 |
python,apache2,mingw,lxml,cx-oracle
|
It is indeed because of 'msvcrt90.dll'. Somewhere in the micro patch revisions of Python 2.6 they stopped building automatic dependencies on the DLL into extension modules and relied on the Python executable doing it. When embedded in other systems, however, you are then dependent on that executable linking to the DLL, and in the case of Apache it doesn't. This change in Python has therefore broken many systems which embed Python on Windows, and the only solution is for every extension module to carry its own dependencies on the required DLLs, which many don't. The psycopg2 extension was badly affected by this, and they have since changed their builds to add the dependency back in themselves. You might go searching for the problem as it occurred for psycopg2. One of the solutions was to rebuild extensions with the MinGW compiler on Windows instead.
| 0 | 0 | 0 | 0 |
2011-04-05T12:55:00.000
| 1 | 1.2 | true | 5,552,162 | 0 | 0 | 1 | 1 |
When I launch my application with Apache2 + mod_wsgi,
I get
Exception Type: ImportError
Exception Value: DLL load failed: The specified module could not be found.
in line
from lxml import etree
with Django dev server all works fine
Visual C++ Redistributable 2008 installed
Dependency walker told that msvcrt90.dll is missed
but there is same situation with cx_Oracle, but cx_Oracle's dll loads correct
any ideas?
windows 2003 server 64bit and windows XP sp3 32bit
python 2.7 32 bit
cx_Oracle 5.0.4 32bit
UPD:
download libxml2-2.7.7 and libxslt-1.1.26
tried to build with setup.py build --compiler mingw32
Building lxml version 2.3.
Building with Cython 0.14.1.
ERROR: 'xslt-config' is not recognized as an internal or external command,
operable program or batch file.
** make sure the development packages of libxml2 and libxslt are installed **
Using build configuration of libxslt
running build
running build_py
running build_ext
skipping 'src/lxml\lxml.etree.c' Cython extension (up-to-date)
building 'lxml.etree' extension
C:\MinGW\bin\gcc.exe -mno-cygwin -mdll -O -Wall -IC:\Python27\include -IC:\Python27\PC -c src/lxml\lxml.etree.c -o build\temp.win32-2.7\Release\src\lxml\lxml.et
ree.o -w
writing build\temp.win32-2.7\Release\src\lxml\etree.def
C:\MinGW\bin\gcc.exe -mno-cygwin -shared -s build\temp.win32-2.7\Release\src\lxml\lxml.etree.o build\temp.win32-2.7\Release\src\lxml\etree.def -LC:\Python27\lib
s -LC:\Python27\PCbuild -llibxslt -llibexslt -llibxml2 -liconv -lzlib -lWS2_32 -lpython27 -lmsvcr90 -o build\lib.win32-2.7\lxml\etree.pyd
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0xd11): undefined reference to `_imp__xmlFree'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0xd24): undefined reference to `_imp__xmlFree'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0x1ee92): undefined reference to `_imp__xmlFree'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0x1eed6): undefined reference to `_imp__xmlFree'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0x2159e): undefined reference to `_imp__xmlMalloc'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0x2e741): undefined reference to `_imp__xmlFree'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0x2e784): undefined reference to `_imp__xmlFree'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0x3f157): undefined reference to `_imp__xmlFree'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0x3f19a): undefined reference to `_imp__xmlFree'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0x3f4ac): undefined reference to `_imp__xmlFree'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0x3f4ef): more undefined references to `_imp__xmlFree' follow
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0xb1ad5): undefined reference to `xsltLibxsltVersion'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0xb1b9a): undefined reference to `xsltDocDefaultLoader'
collect2: ld returned 1 exit status
error: command 'gcc' failed with exit status 1
UPD2:
I understand why import cx_Oracle works fine: cx_Oracle.pyd contains "MSVCRT.dll" dependence etree.pyd doesn't have it
|
Add 'object' to a stack with expiry timers and get notified when it expires
| 5,555,511 | 1 | 3 | 964 | 0 |
python,caching
|
I would probably use threading.Timer for this. A Timer object will call a specified function with given arguments after a period of time. So write a function that returns a JSON object to the pool, and start a timer that specifies the specific JSON object that has been reserved. Additionally, you can cancel a timer before it fires, which you will want to do if the client actually requests the object before the reservation expires.
To keep track of the pool, I would probably use a dict where the JSON object is the key and the value is either None if the object is not checked out, or the Timer instance if it is checked out. A separate list could be used to keep track of what object should be given out next; pop() from the end of the list when taking an object out and append() it back on when it's returned. Beware of possible race conditions updating both of these structures!
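A minimal sketch of that Timer-plus-dict design. All names here are my own illustration, not a standard API, and the locking is one simple way to handle the race between an expiring timer and a late client request:

```python
import threading

class ReservationPool:
    """Illustrative sketch: items checked out of the pool return
    automatically unless the client claims them before the TTL."""

    def __init__(self, items, ttl=180.0):
        self.ttl = ttl                # reservation lifetime in seconds
        self.lock = threading.Lock()
        self.available = list(items)  # items ready to hand out
        self.reserved = {}            # item -> its pending expiry Timer

    def reserve(self):
        """Take an item out of the pool and start its expiry clock."""
        with self.lock:
            item = self.available.pop()
            timer = threading.Timer(self.ttl, self._expire, args=(item,))
            self.reserved[item] = timer
            timer.start()
            return item

    def _expire(self, item):
        """Timer callback: the client never came back, return the item."""
        with self.lock:
            if self.reserved.pop(item, None) is not None:
                self.available.append(item)

    def claim(self, item):
        """Client requested its reserved item in time: cancel the expiry."""
        with self.lock:
            timer = self.reserved.pop(item, None)
            if timer is None:
                return False      # too late: the reservation expired
            timer.cancel()
            return True

pool = ReservationPool(["obj1", "obj2"], ttl=5.0)
item = pool.reserve()
print(pool.claim(item))   # claimed well before the 5 s expiry
```

Both `_expire` and `claim` pop from the same dict under the lock, so whichever runs first wins and the other becomes a no-op.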
| 0 | 0 | 0 | 0 |
2011-04-05T14:44:00.000
| 3 | 0.066568 | false | 5,553,769 | 0 | 0 | 1 | 1 |
I have a need to reserve an object (JSON) within my app for a period of time (typically 180 seconds) At some point the client may or may not come back and request this object by its key.
The tricky part is that I need to be notified when this object expires so I can return it to the available pool if the client hasn't already requested it.
The obvious solutions are to use something like a timestamp in the database and then a periodic script to check for expired items but this doesn't feel like the nicest solution.
Ideally I'm looking for something like memcache that can call an event when an item expires, surely there is such a product out there?
My current framework is based around python, cherrpy, mongo, memcachce but I'm happy to add to it.
|
How to scrape HTTPS javascript web pages
| 5,561,974 | 1 | 12 | 9,750 | 0 |
java,javascript,python,https,web-scraping
|
If they've created a Web API that their JavaScript interfaces with, you might be able to scrape that directly, rather than trying to go the HTML route.
If they've obfuscated it or that option isn't available for some other reason, you'll basically need a Web browser to evaluate the JavaScript and then scrape the browser's DOM. Perhaps write a browser plugin?
| 0 | 0 | 1 | 0 |
2011-04-06T05:41:00.000
| 3 | 0.066568 | false | 5,561,950 | 0 | 0 | 1 | 1 |
I am trying to monitor day-to-day prices from an online catalogue.
The site uses HTTPS and generates the catalogue pages with JavaScript. How can I interface with the site and make it generate the pages I need?
I have done this with other sites where the HTML can easily be accessed, and I have no problem parsing the HTML once generated.
I only know Python and Java.
Thanks in advance.
|
Pycharm 1.2 Ignoring A Directory Named cvs
| 5,564,814 | 4 | 3 | 274 | 0 |
python,intellij-idea,pycharm
|
Settings | File Types | Ignore files and folders, remove CVS from the ignored list.
| 0 | 0 | 0 | 0 |
2011-04-06T10:07:00.000
| 1 | 1.2 | true | 5,564,624 | 1 | 0 | 1 | 1 |
I am using PyCharm 1.2 on OS X 10.6. One of my project directories is named cvs, but it is not showing up in the project explorer. I have tried uninstalling the CVS version control plugin but this didn't resolve it. When I try to manually create the directory, I get a message < trying to create a directory with an ignored name, result will not be visible >. How can I override this?
|
Add a debugging page to a Django project?
| 5,581,104 | 1 | 1 | 94 | 0 |
python,django
|
You can use Django Debug Toolbar and enable it only for choosen IPs
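A hedged settings.py fragment showing what that might look like (not standalone code — it extends existing settings; the middleware path and INTERNAL_IPS are the toolbar's real setting names from that era, while the IP itself is a placeholder):

```python
# settings.py fragment: django-debug-toolbar wiring, shown only to
# requests coming from addresses listed in INTERNAL_IPS.
INSTALLED_APPS = INSTALLED_APPS + ('debug_toolbar',)
MIDDLEWARE_CLASSES = MIDDLEWARE_CLASSES + (
    'debug_toolbar.middleware.DebugToolbarMiddleware',
)
INTERNAL_IPS = ('203.0.113.10',)  # placeholder: the client IPs allowed to see it
# Note: by default the toolbar also requires DEBUG = True; check the
# toolbar's SHOW_TOOLBAR_CALLBACK option if you need it in production.
```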
| 0 | 0 | 0 | 0 |
2011-04-07T12:28:00.000
| 2 | 0.099668 | false | 5,581,005 | 0 | 0 | 1 | 1 |
I'm supplying a Django project to a client, who has requested a 'debugging' page that will show useful information.
[UPDATE for clarity: We'd like this page so we can use it for debugging in future: the clients are not very technical and I won't have direct access to their servers. In the event of future issues, it would be very useful if I could check out this page without asking them to edit the Debug setting or do any other server-side fiddling.]
The project will be running in production, so I can't set DEBUG=True.
What I would like is a page similar to the Django debugging page but without any sensitive information in.
I guess I can simply write my own, but does anyone have any ideas? Anything standard in Django I could use?
Thanks!
|
Django models "blob" field
| 5,581,586 | 1 | 2 | 4,606 | 0 |
python,mysql,django,django-models,django-blob
|
Django's ORM has no field for binary large objects. Either use something like a FileField, or search for candidate field classes using a search engine.
| 0 | 0 | 0 | 0 |
2011-04-07T12:59:00.000
| 3 | 1.2 | true | 5,581,466 | 0 | 0 | 1 | 1 |
I want to create a table like so -
CREATE TABLE trial_xml (
id int(11) DEFAULT NULL,
pid int(11) DEFAULT NULL,
sid varchar(256) CHARACTER SET utf8 NOT NULL,
data blob,
PRIMARY KEY (soid),
KEY suid_index (suid) )
ENGINE=MyISAM DEFAULT CHARSET=latin1
my question is how do I set "data" field as "blob" in django's models.py ??
I mean what's the syntax?
UPDATE: I don't want to set the data field as longtext. I want only a blob data field.
|
python image frames to video
| 5,586,198 | 1 | 13 | 30,494 | 0 |
python,image-processing,video-encoding
|
You could use Popen just to run the ffmpeg in a subprocess.
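A hedged sketch of that approach — building an ffmpeg command line in Python and handing it to a subprocess. The flags shown are one common minimal set, not the only correct ones, and the frame-pattern and output names are placeholders:

```python
def build_ffmpeg_command(frame_pattern, output, fps=24):
    """Build an ffmpeg invocation that stitches numbered frames into a
    video. Exact options depend on your ffmpeg build and target codec."""
    return [
        "ffmpeg", "-y",          # overwrite the output file without prompting
        "-r", str(fps),          # input frame rate
        "-i", frame_pattern,     # e.g. "frame_%04d.png" as written by PIL
        output,                  # e.g. "out.mp4"
    ]

cmd = build_ffmpeg_command("frame_%04d.png", "out.mp4")
print(" ".join(cmd))
# To actually run it (requires ffmpeg on the server's PATH):
#   import subprocess
#   subprocess.check_call(cmd)
```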
| 0 | 0 | 0 | 0 |
2011-04-07T18:36:00.000
| 4 | 0.049958 | false | 5,585,872 | 0 | 0 | 1 | 1 |
I am writing a python/django application and it needs to do image manipulation and then combine the images into a video (each image is a frame). Image manipulation is easy. I'm using PIL, but for the convert to video part, I'm stuck. I've found pyffmpeg but that just seems to do decoding of videos to frames, not the other way around. Although I may have missed something. I've also heard that pythonMagick (imagemagick wrapper) can do this, but I can't seem to find anything about encoding in the docs.
This is running on a linux server and must be python (since that is what the application is in).
What should I use?
|
"Practical Django Projects" - Search function
| 5,588,180 | 0 | 0 | 179 | 0 |
python,django
|
If you are calling admin.autodiscover() in your urls.py, Django's admin site will look for admin.py files in all the packages in your INSTALLED_APPS, import all the ModelAdmin classes it finds, and add them to admin.site.
You have three inputs for SearchKeyword appearing in the admin because three inline admin forms are added.
| 0 | 0 | 0 | 0 |
2011-04-07T21:50:00.000
| 1 | 1.2 | true | 5,588,081 | 0 | 0 | 1 | 1 |
I am reading Chapter 3 of "Practical Django Projects", on how to make a CMS. I have improved the search function and everything works fine. However I am wondering why everythings works...
On page 35, I have added an admin.py file in the cms/search/ directory. How does Django know that it needs to take this file into account?
On page 36-37, there is an improved version of the cms/search/models.py. It seems that the new file adds not just 1 keyword, but 3 ! How come ?
Thanks a lot
|
Best open source CMS for developers to customize and add dynamic pages and content
| 5,603,994 | 1 | 2 | 5,886 | 0 |
php,python,django,wordpress,content-management-system
|
If you are comfortable with hands-on programming, use Drupal. It is one of the, if not the, most powerful, configurable and tested CMSs around.
There are lots of CMSs available out there and most of them are good, but the three that always stand out are Drupal, Joomla and WordPress. Joomla and WordPress are easier to configure but not as customizable as Drupal.
| 0 | 0 | 0 | 0 |
2011-04-09T08:53:00.000
| 6 | 0.033321 | false | 5,603,962 | 0 | 0 | 1 | 1 |
I want to build websites for multiple customers and want to take advantage of features that come with typical CMSs. But on top of that I need to do lots of customization like:
Writing my own templates on top of any existing templates to show the data in a form more suitable for these sites.
Extract some of the data from existing data sources which will be updated by different processes.
Implement my own login/auth mechanisms.
Do some of the SEO optimizations of the site myself and add some dynamic pages to the sites.
Which CMSs can handle these types of requirements or am I better off using something like Django. I am comfortable with both python and php but prefer python.
|
Python: Getting image dimensions from URL
| 5,608,147 | 0 | 2 | 1,778 | 0 |
python,django,image,url
|
There is no general way that you can know anything about an image until you retrieve (download) it. However, if the site you're downloading from has some standardized size in the URL (http://example.com/images/64x64/scary_clown.jpg), you might be able to use that -- assuming you trust them to enforce those sizes.
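If you do decide to trust a size embedded in the URL, a small heuristic like this would extract it (purely illustrative; the /WxH/ path convention is an assumption taken from the example above):

```python
import re

def dims_from_url(url):
    """Guess (width, height) from a '/64x64/'-style path segment.
    Returns None when no such segment is present -- and remember this
    only works if the site actually enforces those sizes."""
    m = re.search(r'/(\d+)x(\d+)/', url)
    return (int(m.group(1)), int(m.group(2))) if m else None

print(dims_from_url("http://example.com/images/64x64/scary_clown.jpg"))
```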
| 0 | 0 | 0 | 0 |
2011-04-09T21:24:00.000
| 1 | 0 | false | 5,608,019 | 0 | 0 | 1 | 1 |
I'm working on a Python app (using the Django framework and running on Google Appengine). I'm trying to obtain the dimensions (width/height) of a remote image, from its url.
Is there any way I can do this without having to download the image? I haven't found anything so far...
|
RESTful API across multiple users
| 5,609,144 | 0 | 0 | 622 | 0 |
python,xml,linux,rest
|
Yes. Keep in mind that being RESTful is merely a way to organize your web application's URL's in a standard way. You can build your web application to do whatever you want.
| 0 | 0 | 1 | 0 |
2011-04-09T22:19:00.000
| 2 | 0 | false | 5,608,319 | 0 | 0 | 1 | 1 |
I am somewhat new to RESTful APIs.
I'm trying to implement a python system that will control various tasks across multiple computers, with one computer acting as the controller.
I would like all these tasks to be divided amongst multiple users (ex. task foo runs as user foo, and task bar runs as user bar) while handling all requests with a central system. The central system should also act as a simple web server and be able to server basic pages for status purposes.
Is it possible to have each user register a "page" with a central server for the API and have the server pass all requests to the programs (probably written in Python)?
|
Google-App Engine logging problem
| 5,612,643 | 0 | 1 | 252 | 0 |
python,django,google-app-engine
|
Thanks Abdul you made me realize what the problem is. I had changed a URL in my application to point to the application that I had deployed to Google-App Engine. It should have been pointing to my local application. I had myapp.appspot.com/move instead of localhost/move
| 0 | 1 | 0 | 0 |
2011-04-10T14:24:00.000
| 2 | 1.2 | true | 5,612,390 | 0 | 0 | 1 | 1 |
I'm wondering if anyone has experienced problems with Google-App Engine's logging facility. Everything was working fine for me until this morning, I ran my local server and no logging messages were being displayed (well, none of my logging messages, the server GET messages etc.. are being displayed). Even errors are not being reported. I have no idea what is going on.
If this has happened to anyone, can you please advise on how to fix it?
|
Call a python function within a html file
| 5,615,253 | 7 | 15 | 101,647 | 0 |
python,html
|
Yes, but not directly; you can set the onclick handler to invoke a JavaScript function that will construct an XMLHttpRequest object and send a request to a page on your server. That page on your server can, in turn, be implemented using Python and do whatever it would need to do.
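A minimal server-side sketch of that idea using only the standard library (Python 3 syntax; all names and the /run path are illustrative — in practice your web framework's URL routing would play this role, and the page's onclick handler would issue the XMLHttpRequest GET):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def my_python_function():
    """The Python function the link's click should ultimately invoke."""
    return "hello from Python"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The page's JavaScript sends XMLHttpRequest GET /run on click.
        if self.path == "/run":
            body = my_python_function().encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# To serve for real (blocking call, so it is left commented out here):
#   HTTPServer(("", 8000), Handler).serve_forever()
print(my_python_function())
```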
| 0 | 0 | 1 | 0 |
2011-04-10T22:47:00.000
| 5 | 1 | false | 5,615,228 | 0 | 0 | 1 | 1 |
Is there a way to call a python function when a certain link is clicked within a html page?
Thanks
|
Python hangs on lxml.etree.XMLSchema(tree) with apache + mod_wsgi
| 6,176,299 | 1 | 5 | 1,123 | 1 |
python,apache,mod-wsgi,lxml,xml-validation
|
I had a similar problem on a Linux system. Try installing a more recent version of libxml2 and reinstalling lxml, at least that's what did it for me.
| 0 | 0 | 0 | 0 |
2011-04-11T06:34:00.000
| 3 | 0.066568 | false | 5,617,599 | 0 | 0 | 1 | 2 |
Python hangs on
lxml.etree.XMLSchema(tree)
when I use it on apache server + mod_wsgi (Windows)
When I use Django dev server - all works fine
If you know of another nice solution for XML validation against an XSD, please tell me.
Update:
I'm using soaplib, which uses lxml
logger.debug("building schema...")
self.schema = etree.XMLSchema(etree.parse(f))
logger.debug("schema %r built, cleaning up..." % self.schema)
I see "building schema..." in apache logs, but I don't see "schema %r built, cleaning up..."
Update 2:
I built lxml 2.3 with MSVS 2010 visual C++; afterwards it crashes on this line self.schema = etree.XMLSchema(etree.parse(f)) with Unhandled exception at 0x7c919af2 in httpd.exe: 0xC0000005: Access violation writing location 0x00000010.
|
Python hangs on lxml.etree.XMLSchema(tree) with apache + mod_wsgi
| 6,685,198 | 2 | 5 | 1,123 | 1 |
python,apache,mod-wsgi,lxml,xml-validation
|
I had the same problem (lxml 2.2.6, mod_wsgi 3.2). A workaround is to pass a file or filename to the constructor: XMLSchema(file=...).
| 0 | 0 | 0 | 0 |
2011-04-11T06:34:00.000
| 3 | 0.132549 | false | 5,617,599 | 0 | 0 | 1 | 2 |
Python hangs on
lxml.etree.XMLSchema(tree)
when I use it on apache server + mod_wsgi (Windows)
When I use Django dev server - all works fine
If you know of another nice solution for XML validation against an XSD, please tell me.
Update:
I'm using soaplib, which uses lxml
logger.debug("building schema...")
self.schema = etree.XMLSchema(etree.parse(f))
logger.debug("schema %r built, cleaning up..." % self.schema)
I see "building schema..." in apache logs, but I don't see "schema %r built, cleaning up..."
Update 2:
I built lxml 2.3 with MSVS 2010 visual C++; afterwards it crashes on this line self.schema = etree.XMLSchema(etree.parse(f)) with Unhandled exception at 0x7c919af2 in httpd.exe: 0xC0000005: Access violation writing location 0x00000010.
|
memory profiler for ironpython
| 5,625,833 | 0 | 1 | 228 | 0 |
ironpython,memory-management
|
It looks like Python Memory Validator is a commercial product, I'd ask the creator.
| 0 | 0 | 0 | 0 |
2011-04-11T18:31:00.000
| 2 | 0 | false | 5,625,771 | 0 | 0 | 1 | 1 |
I've downloaded Python Memory Validator and am trying to install Heapy to try to get a profile for my ironpython application.
So far PMV seems to choke for some reason with the message: Failure injecting into executable image using CreateProcess()
This seems to be an issue integrating ironpython with PMV.
Can anyone provide any advice?
|
Test specific models in Django
| 5,626,840 | 0 | 1 | 292 | 0 |
python,django,testing,django-models,django-testing
|
You could try creating a whole new app that you only use on your development server.
E.g., if your app is called myapp you would call your testing app myapp_test.
Then in myapp_test's models.py you would from myapp import models and then subclass your models in there.
Then in your settings.py you either just try and remember to comment out the myapp_test application from INSTALLED_APPS when deploying to your production server. Or you can use the local_settings.py methodology to only have the myapp_test included in INSTALLED_APPS on your test machine.
| 0 | 0 | 0 | 0 |
2011-04-11T19:46:00.000
| 1 | 1.2 | true | 5,626,672 | 0 | 0 | 1 | 1 |
Is it possible to have a set of models just for testing purposes? The idea is that I've written an app that contains some helper abstract model HelperBase. Now I'd like to provide some models that would inherit from it in order to test it, say DerivedTest1, DerivedTest2. However I wouldn't really like those test models to appear in the production database in the end. I just want their tables to be constructed in the test database. Is it possible and if so - how to do it? I've already tried creating models in the tests.py file but this doesn't seem to work.
|
Unicode and UTF-8 encoding issue with Scrapy XPath selector text
| 5,628,065 | 1 | 3 | 15,989 | 0 |
python,django,unicode,utf-8,scrapy
|
U+FFFD is the replacement character that you get when you do some_bytes.decode('some-encoding', 'replace') and some substring of some_bytes can't be decoded.
You have TWO of them: u'H\ufffd\ufffdftsitz' ... this indicates that the u-umlaut was represented as TWO bytes each of which failed to decode. Most likely, the site is encoded in UTF-8 but the software is attempting to decode it as ASCII. Attempting to decode as ASCII usually happens when there is an unexpected conversion to Unicode, and ASCII is used as the default encoding. However in that case one would not expect the 'replace' arg to be used. More likely the code takes in an encoding and has been written by someone who thinks "doesn't raise an exception" means the same as "works".
Edit your question to provide the URL, and show the minimum code that produces u'H\ufffd\ufffdftsitz'.
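The diagnosis above can be reproduced in a couple of lines: the two UTF-8 bytes of 'ü' each become U+FFFD when decoded as ASCII with the 'replace' handler.

```python
# 'ü' is two bytes in UTF-8; decoding them as ASCII with errors='replace'
# turns each undecodable byte into U+FFFD -- exactly the pattern seen above.
raw = 'Hüftsitz'.encode('utf-8')
broken = raw.decode('ascii', 'replace')
print(ascii(broken))  # 'H\ufffd\ufffdftsitz'
```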
| 0 | 0 | 0 | 0 |
2011-04-11T21:37:00.000
| 3 | 0.066568 | false | 5,627,868 | 0 | 0 | 1 | 1 |
I'm using Scrapy and Python (as part of a Django project) to scrape a site with German content. I have libxml2 installed as the backend for Scrapy selectors.
If I extract the word 'Hüftsitz' (this is how it is displayed on the site) through selectors, I get: u'H\ufffd\ufffdftsitz' (Scrapy XPath selectors return Unicode strings).
If I encode this into UTF-8, I get: 'H\xef\xbf\xbd\xef\xbf\xbdftsitz'. And if I print that, I get 'H??ftsitz' which isn't correct. I am wondering why this may be happening.
The character-set on the site is set to UTF-8. I am testing the above on a Python shell with sys.getdefaultencoding set to UTF-8. Using the Django application where the data from XPath selectors is written to a MySQL database with UTF-8 character set, I see the same behaviour.
Am I overlooking something obvious here? Any clues or help will be greatly appreciated.
|
Secure Options for storing Openssl password on a server (Linux, Python, CherryPy)
| 5,630,477 | 0 | 0 | 1,070 | 0 |
python,linux,security,passwords,openssl
|
For the sake of privacy for a user and other reasons passwords are generally not stored by servers. Typically users choose a password which is stored as a hash of some sort on the server.
Users then authenticate with the web application by checking a hash of the supplied input against the stored hash. Once the client is authenticated, a session identifier is provided allowing use of server resources. During this time a user can, for instance, upload the file. Encryption of the file on the server should be unnecessary, assuming the hosting server is secured properly and absent other issues.
In this case, the authentication mechanism is not made clear, neither are the threats that pose a danger, or the life cycle of that uploaded file.
It seems that a server is receiving an encrypted file, plus some type of password. Is the protection of the password being considered during the transmission phase, or as storage on the server? The HTTPS protocol can help guard against threats concerning the transmission of the file/data. As I see from your description the concern seems to be storage on the server side.
Encrypting the passwords once they have been received by the server (either individually or by using a master password) adds another layer of security, but this approach is not without fault: the passphrase either (1) needs to be stored on the server in cleartext for accessing the files, or (2) needs to be entered manually by an administrator as part of any processing requiring the password. Note that in case (2), any resources encrypted with the password become unusable to users until the passphrase is entered.
While I am not completely aware of what is going on, the most secure thing to do would be to re-work the web application and carefully think through the design and its requirements.
| 0 | 0 | 0 | 0 |
2011-04-12T03:41:00.000
| 2 | 0 | false | 5,630,152 | 0 | 0 | 1 | 1 |
I've implemented an HTTP server (CherryPy and Python) that receives an encrypted file from a client (Android). I'm using OpenSSL to decrypt the uploaded file. Currently I'm using openssl -enc -pass file:password.txt -in encryptedfile -out decryptedfile to perform the decryption on the server side. As you can see, the password used by openssl is stored in a plain text file (password.txt).
Is there a more secure way to store this OpenSSL password?
Thanks.
|
Django Login Form with "Remember Me" Option, What would be the best way?
| 5,643,001 | 3 | 3 | 4,872 | 0 |
python,django,login,remember-me
|
Have it set a cookie with a far-future expiry date (note that a cookie with no expiry date is a session cookie and is discarded when the browser closes).
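In Django specifically you would typically reach for request.session.set_expiry() or response.set_cookie(..., max_age=...); the framework-free sketch below just shows what a long-lived cookie looks like on the wire (the cookie name and value are placeholders):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie['sessionid'] = 'abc123'                        # placeholder value
cookie['sessionid']['max-age'] = 60 * 60 * 24 * 365   # remember for one year
header = cookie['sessionid'].OutputString()
print(header)  # sessionid=abc123; Max-Age=31536000
```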
| 0 | 0 | 0 | 0 |
2011-04-12T22:57:00.000
| 2 | 0.291313 | false | 5,642,592 | 0 | 0 | 1 | 1 |
I am developing a django application and i would like some suggestions on what would be the best way to provide "Remember Me" option with the Login Form. I am really concerned about the performance of the application and am not sure if using sessions would be a good choice. Please Suggest What you think.
|
python manage.py syncdb
| 5,643,247 | 1 | 0 | 3,164 | 1 |
python,django,postgresql
|
Check your settings.py file. The most likely reason for this issue is that the username for the database is set to "winepad". Change that to the appropriate value and rerun python manage.py syncdb. That should fix the issue.
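For reference, the relevant settings block looks something like this (every value below is a placeholder; point USER at the postgres account you actually created):

```python
# settings.py -- Django 1.2+ style database settings; values are placeholders
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'winepad_db',
        'USER': 'postgres',    # this, not the folder name, is the login role
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
```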
| 0 | 0 | 0 | 0 |
2011-04-13T00:37:00.000
| 2 | 0.099668 | false | 5,643,201 | 0 | 0 | 1 | 1 |
I am very new to python and Django, was actually thrown in to finish off some coding for my company since our coder left for overseas.
When I run python manage.py syncdb I receive the following error
psycopg2.OperationalError: FATAL: password authentication failed for user "winepad"
I'm not sure why I am being prompted for user "winepad" as I've created no such user by that name, I am running the sync from a folder named winepad. In my pg_hba.conf file all I have is a postgres account which I altered with a new password.
Any help would be greatly appreciated as the instructions I left are causing me some issues.
Thank you in advance
|
Beginning MySQL/Python
| 5,643,494 | 0 | 1 | 1,826 | 1 |
python,mysql,django,new-operator
|
Django uses its own ORM, so I guess it's not completely necessary to learn MySQL first, but I suspect it would help a fair bit to know what's going on behind the scenes, and it will help you think in the correct way to formulate your queries.
I would start learning MySQL (or any other SQL), after you've got a pretty good grip on Python, but probably before you start learning Django, or perhaps alongside. You won't need a thorough understanding of SQL. At least, not to get started.
Err... ORM/Object Relational Mapper: it hides/abstracts the complexities of SQL and lets you access your data through the simple objects/models you define in Python. For example, you might have a "Person" model with Name, Age, etc. That Name and Age could be stored and retrieved from the database transparently just by accessing the object, without having to write any SQL. (Just a simple .save() and .get().)
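To make that last point concrete, here is roughly the SQL that a .save()/.get() pair hides, sketched with the stdlib sqlite3 module (the person table and values are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE person (name TEXT, age INTEGER)')

# Roughly what Person(name='Ada', age=36).save() does behind the scenes:
conn.execute('INSERT INTO person (name, age) VALUES (?, ?)', ('Ada', 36))

# Roughly what Person.objects.get(name='Ada') does behind the scenes:
row = conn.execute('SELECT name, age FROM person WHERE name = ?',
                   ('Ada',)).fetchone()
print(row)  # ('Ada', 36)
```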
| 0 | 0 | 0 | 0 |
2011-04-13T01:16:00.000
| 4 | 0 | false | 5,643,400 | 0 | 0 | 1 | 2 |
I have just begun learning Python. Eventually I will learn Django, as my goal is to able to do web development (video sharing/social networking). At which point should I begin learning MySQL? Do I need to know it before I even begin Django? If so, how much should I look to know before diving into Django? Thank you.
|
Beginning MySQL/Python
| 5,654,701 | 0 | 1 | 1,826 | 1 |
python,mysql,django,new-operator
|
As the Django documentation somewhat recommends, it is better to learn PostgreSQL.
PostgreSQL works pretty well with Django; I have never had any problem with Django/PostgreSQL.
All I know is that I sometimes get weird errors when working with MySQL.
| 0 | 0 | 0 | 0 |
2011-04-13T01:16:00.000
| 4 | 0 | false | 5,643,400 | 0 | 0 | 1 | 2 |
I have just begun learning Python. Eventually I will learn Django, as my goal is to able to do web development (video sharing/social networking). At which point should I begin learning MySQL? Do I need to know it before I even begin Django? If so, how much should I look to know before diving into Django? Thank you.
|
Celery - collision of task_ids
| 5,647,566 | 0 | 1 | 304 | 0 |
python,django,celery
|
It shouldn't be possible, and even if it were, it should be very rare. My guess would be that the same task is executed a second time after your exception. Maybe there is a problem with your routing keys and the worker doesn't get the task? Or the broker has a problem; I've seen funny problems with RabbitMQ. Deleting its database (RABBITMQ_MNESIA_BASE) helped in my case.
| 0 | 1 | 0 | 0 |
2011-04-13T08:54:00.000
| 1 | 0 | false | 5,646,679 | 0 | 0 | 1 | 1 |
I'm getting a HardTimeLimit exception for my tasks. After log examination I found:
the task is not being received by Celery (no "Got task from broker:" message for the task ID)
a task with the same ID was executed a couple of days ago.
Task IDs are assigned automatically by the @task decorator, tasks are started by Django, and there are ~2k tasks per day (and ~30 collisions per day).
How is an ID collision possible? How can I prevent it?
|
How to parse a specific wiki page & automate that?
| 5,647,655 | 1 | 2 | 923 | 0 |
python,parsing,screen-scraping
|
What scripting language should I use to do this?
Python will do, as you've tagged your question.
looks like Python (using urllib2 & BeautifulSoup) should do the job, but is it the best way of approaching the problem.
It's workable. I'd use lxml.etree personally. An alternative is fetching the page in the raw format, then you have a different parsing task.
I know I could also use the WikiMedia api but is using python a good idea for general parsing problems?
This appears to be a statement and an unrelated argumentative question. Subjectively, if I was approaching the problem you're asking about, I'd use python.
Also the tabular data on the wikipedia page may change so I need to parse every day. How do I automate the script for this?
Unix cron job.
Also any ideas for version control without external tools like svn so that updates can be easily reverted if need be?
A Subversion repository can be run on the same machine as the script you've written. Alternatively you could use a distributed version control system, e.g. git.
Curiously, you've not mentioned what you're planning on doing with this data.
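lxml.etree would be my pick as mentioned, but for a dependency-free illustration of pulling tabular data out of HTML, the stdlib html.parser works too (the sample markup below is invented; a real script would feed in the fetched page):

```python
from html.parser import HTMLParser

class TableCellParser(HTMLParser):
    """Collects the text of every <td> cell, in document order."""
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == 'td':
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == 'td':
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell and data.strip():
            self.cells.append(data.strip())

sample = ('<table><tr><td>Name</td><td>Value</td></tr>'
          '<tr><td>Foo</td><td>42</td></tr></table>')
parser = TableCellParser()
parser.feed(sample)
print(parser.cells)  # ['Name', 'Value', 'Foo', '42']
```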
| 0 | 0 | 1 | 0 |
2011-04-13T10:02:00.000
| 2 | 1.2 | true | 5,647,413 | 0 | 0 | 1 | 1 |
I am try to make a web application that needs to parse one specific wikipedia page & extract some information which is stored in a table format on the page. The extracted data would then need to be stored onto a database.
I haven't really done anything like this before. What scripting language should I use to do this? I have been reading a little & looks like Python (using urllib2 & BeautifulSoup) should do the job, but is it the best way of approaching the problem.
I know I could also use the WikiMedia api but is using python a good idea for general parsing problems?
Also the tabular data on the wikipedia page may change so I need to parse every day. How do I automate the script for this? Also any ideas for version control without external tools like svn so that updates can be easily reverted if need be?
|
csrf_token problem django
| 59,506,095 | 0 | 0 | 1,058 | 0 |
python,django,forms,csrf
|
Use the {% csrf_token %} tag inside your HTML form, as in the sample below.
<form> {% csrf_token %} </form>
| 0 | 0 | 0 | 0 |
2011-04-13T14:43:00.000
| 3 | 0 | false | 5,651,101 | 0 | 0 | 1 | 3 |
I have put in the {% csrf_token %} and the context_instance=RequestContext(request)) but I still get the error CSRF token missing or incorrect. Thanks in advance! or not.
|
csrf_token problem django
| 5,653,153 | 0 | 0 | 1,058 | 0 |
python,django,forms,csrf
|
Did you append {% csrf_token %} to the form? (<form>{% csrf_token %} ...</form>)
| 0 | 0 | 0 | 0 |
2011-04-13T14:43:00.000
| 3 | 0 | false | 5,651,101 | 0 | 0 | 1 | 3 |
I have put in the {% csrf_token %} and the context_instance=RequestContext(request)) but I still get the error CSRF token missing or incorrect. Thanks in advance! or not.
|
csrf_token problem django
| 5,653,770 | 3 | 0 | 1,058 | 0 |
python,django,forms,csrf
|
Make sure the CSRF token template variable is inside your form. If you're sure it is, view the page's HTML source to make sure that it's actually printing out the hidden input field. If all else fails, check your settings to make sure the CSRF middleware is enabled and configured properly.
Additionally, having the source (or at least a few lines of context) would be greatly helpful in figuring out your problem.
| 0 | 0 | 0 | 0 |
2011-04-13T14:43:00.000
| 3 | 0.197375 | false | 5,651,101 | 0 | 0 | 1 | 3 |
I have put in the {% csrf_token %} and the context_instance=RequestContext(request)) but I still get the error CSRF token missing or incorrect. Thanks in advance! or not.
|
Inconsistent page has expired message in internet explorer
| 5,653,835 | 1 | 1 | 110 | 0 |
python,internet-explorer,zope
|
Is your page served on HTTPS?
If so, this is the expected behavior. By default IE will not cache a secured page on disk, nor will it automatically resubmit pages with POST data.
This is a security feature (preventing cache sniffing, etc.) and is about the only thing IE does correctly.
| 0 | 0 | 1 | 0 |
2011-04-13T17:56:00.000
| 1 | 1.2 | true | 5,653,478 | 0 | 0 | 1 | 1 |
I have an application developed in Python-Zope where, only on some of the pages, I am getting a "page has expired" issue, and it does not happen every time. The issue comes when I click the "Back" or "Cancel" buttons, which use the browser history to redirect to earlier pages. I have reviewed my code and there is no code setting response headers to prevent page caching.
Also, the issue occurs with Internet Explorer only; the code works fine with Mozilla.
Is there a way I can prevent this message?
Thanks in advance.
|
How to accurately obtain page hits on a django page when using memcached
| 5,654,472 | 3 | 0 | 124 | 0 |
python,django,memcached
|
A) Use Google Analytics to determine your page views to within 2%
B) Build an app to hold request data (time, browser, IP, etc) and create middleware that stores info about each request in that app. Place this middleware above your cache middleware.
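The idea in (B) is simply "record the hit first, then let the cache answer." An in-memory stand-in for that middleware is sketched below; in a real Django project, process_request would write to a model rather than a dict, and the class would sit above the cache middleware in settings:

```python
class PageHitMiddleware:
    """Counts hits per path; a real version would persist to a model."""
    def __init__(self):
        self.hits = {}

    def process_request(self, request_path):
        # Runs before the cache middleware can serve a cached response,
        # so every request is counted even when the page itself is cached.
        self.hits[request_path] = self.hits.get(request_path, 0) + 1

mw = PageHitMiddleware()
for path in ['/home/', '/home/', '/about/']:
    mw.process_request(path)
print(mw.hits)  # {'/home/': 2, '/about/': 1}
```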
| 0 | 0 | 0 | 0 |
2011-04-13T18:46:00.000
| 1 | 1.2 | true | 5,654,001 | 0 | 0 | 1 | 1 |
I've developed a web application in django, and I'm interested in accurately knowing how many people visited certain pages and keeping that info in my database. As I was already running some code whenever a page was loaded, I had a small bit of code that increased the counter in 1.
However, when implementing memcached in this application, the cached pages are served statically and the code is not executed. I'm thinking on adding javascript code in the page that tells the server the page has been served, but somehow that doesn't look like the best idea.
Is there any way to compromise between having memcached statically provide the dynamic pages as long as they don't change, but still be able to know the page has been served for statistical purposes in my database?
Thanks in Advance!
|
How do I speed up iteration of large datasets in Django
| 5,656,734 | 3 | 7 | 4,372 | 1 |
python,django
|
1500 records is far from being a large dataset, and seven seconds is really too much. There is probably some problem in your models; you can easily check it by fetching (as Brandon says) the values() query and then explicitly creating the 1500 objects by iterating over the dictionaries. Just convert the ValuesQuerySet into a list before the construction to factor out the db connection.
| 0 | 0 | 0 | 0 |
2011-04-13T22:03:00.000
| 4 | 0.148885 | false | 5,656,238 | 0 | 0 | 1 | 2 |
I have a query set of approximately 1500 records from a Django ORM query. I have used the select_related() and only() methods to make sure the query is tight. I have also used connection.queries to make sure there is only this one query. That is, I have made sure no extra queries are getting called on each iteration.
When I run the query cut and paste from connection.queries it runs in 0.02 seconds. However, it takes seven seconds to iterate over those records and do nothing with them (pass).
What can I do to speed this up? What causes this slowness?
|
How do I speed up iteration of large datasets in Django
| 5,657,066 | 1 | 7 | 4,372 | 1 |
python,django
|
Does your model's Meta declaration tell it to "order by" a field that is stored off in some other related table? If so, your attempt to iterate might be triggering 1,500 queries as Django runs off and grabs that field for each item, and then sorts them. Showing us your code would help us unravel the problem!
| 0 | 0 | 0 | 0 |
2011-04-13T22:03:00.000
| 4 | 0.049958 | false | 5,656,238 | 0 | 0 | 1 | 2 |
I have a query set of approximately 1500 records from a Django ORM query. I have used the select_related() and only() methods to make sure the query is tight. I have also used connection.queries to make sure there is only this one query. That is, I have made sure no extra queries are getting called on each iteration.
When I run the query cut and paste from connection.queries it runs in 0.02 seconds. However, it takes seven seconds to iterate over those records and do nothing with them (pass).
What can I do to speed this up? What causes this slowness?
|
How to model lending items between a group of companies
| 5,656,695 | 0 | 2 | 198 | 1 |
django,design-patterns,database-design,django-models,python
|
Option #1 is probably the cleanest choice. An Item has only one owner company and is possessed by only one possessing company.
Put two FKs to Company in Item, and remember to explicitly define the related_name of the two inverses so that they differ from each other.
As you want to avoid touching the Item model, either add the FKs from outside, like in field.contribute_to_class(), or put a new model with a one-to-one rel to Item, plus the foreign keys.
The second method is easier to implement but the first will be more natural to use once implemented.
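Stripped of Django specifics, the two-FK shape described above is just two references from the item to a company; a plain-Python sketch (all names invented, with comments noting the Django fields they would map to):

```python
from dataclasses import dataclass

@dataclass
class Company:
    name: str

@dataclass
class Item:
    name: str
    owner: Company    # maps to ForeignKey(Company, related_name='owned_items')
    holder: Company   # maps to ForeignKey(Company, related_name='held_items')

acme = Company('Acme')
beta = Company('Beta')
drill = Item('Drill', owner=acme, holder=beta)  # Acme lends the drill to Beta
print(drill.owner.name, drill.holder.name)  # Acme Beta
```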
| 0 | 0 | 0 | 0 |
2011-04-13T22:17:00.000
| 3 | 0 | false | 5,656,345 | 0 | 0 | 1 | 1 |
I have a group of related companies that share items they own with one-another. Each item has a company that owns it and a company that has possession of it. Obviously, the company that owns the item can also have possession of it. Also, companies sometimes permanently transfer ownership of items instead of just lending it, so I have to allow for that as well.
I'm trying to decide how to model ownership and possession of the items. I have a Company table and an Item table.
Here are the options as I see them:
Inventory table with entries for each Item - Company relationship. Has a company field pointing to a Company and has Boolean fields is_owner and has_possession.
Inventory table with entries for each Item. Has an owner_company field and a possessing_company field that each point to a Company.
Two separate tables: ItemOwner and ItemHolder.
So far I'm leaning towards option three, but the tables are so similar it feels like duplication. Option two would have only one row per item (cleaner than option one in this regard), but having two fields on one table that both reference the Company table doesn't smell right (and it's messy to draw in an ER diagram!).
Database design is not my specialty (I've mostly used non-relational databases), so I don't know what the best practice would be in this situation. Additionally, I'm brand new to Python and Django, so there might be an obvious idiom or pattern I'm missing out on.
What is the best way to model this without Company and Item being polluted by knowledge of ownership and possession? Or am I missing the point by wanting to keep my models so segregated? What is the Pythonic way?
Update
I've realized I'm focusing too much on database design. Would it be wise to just write good OO code and let Django's ORM do its thing?
|
How to add message to state in OpenERP
| 5,670,071 | 0 | 0 | 811 | 0 |
python,notifications,openerp
|
It looks like the form class in the client has a message_state() method that displays a message in the status bar. If you look through the client/bin/modules/gui/window/form.py file, you can find several calls. I didn't see any easy way to set that message from within a standard module, so you might have to hack the client code.
If you want to display a message from within regular module code, I think you're stuck with a pop-up warning dialog.
| 0 | 0 | 0 | 0 |
2011-04-14T05:53:00.000
| 2 | 0 | false | 5,659,117 | 0 | 0 | 1 | 2 |
I have a very simple question: how do I add my own message to the state on the status bar?
|
How to add message to state in OpenERP
| 5,903,115 | 0 | 0 | 811 | 0 |
python,notifications,openerp
|
Well, you can show a custom message on the form by making changes in form.mako, with some understanding of MochiKit.
| 0 | 0 | 0 | 0 |
2011-04-14T05:53:00.000
| 2 | 0 | false | 5,659,117 | 0 | 0 | 1 | 2 |
I have a very simple question: how do I add my own message to the state on the status bar?
|
Creating a real time website using PHP
| 5,664,340 | 2 | 3 | 1,845 | 0 |
php,python,node.js,real-time
|
I've looked at node.js and nowjs, but I'm wary about coding a whole site in Express (I wonder about security holes, code maintainability, lack of a good ORM).
I can personally vouch for code maintainability if you can do JavaScript. I personally find JavaScript more maintainable than PHP, but that's probably due to lack of PHP experience.
ORM is not an issue, as node.js favours document-based databases. Document-based databases and JSON go hand in hand; I find CouchDB and its map/reduce system easy to use, and it feels natural with JSON.
In terms of security holes, yes, a node.js server is young and there may be holes. These are unavoidable. There are currently no known exploits, and I would say it's not much more vulnerable than IIS/Apache/nginx until someone points out a big flaw.
I want the site to be able to use real time (or near real time) data (e.g. for chat and real time feeds). I need it to be able to scale to thousands of concurrent users.
Scalability like that requires non-blocking IO. This requires a non-blocking IO server like nginx or node.js (yes, blocking IO could work, but you would need much more hardware).
Personally I would advise using node.js over PHP, as it's easier to write non-blocking IO in node. You can do it in PHP, but you have to make all the right design and architecture decisions. I doubt there are any truly async, non-blocking PHP frameworks.
Python's Twisted or Ruby's EventMachine, together with nginx, can work, but I have no expertise with those. At least with node you can't accidentally call a blocking library or make use of native blocking libraries, since JavaScript has no native IO.
| 0 | 0 | 0 | 1 |
2011-04-14T13:46:00.000
| 2 | 0.197375 | false | 5,664,225 | 0 | 0 | 1 | 1 |
I'm currently creating a website using PHP and the Kohana framework. I want the site to be able to use real time (or near real time) data (e.g. for chat and real time feeds). I need it to be able to scale to thousands of concurrent users. I've done a lot of reading and still have no idea what the best method is for this.
Does anyone have any experience with StreamHub? Is it possible to use this with PHP?
Am I digging myself into a hole here and need to switch languages? I've looked at node.js and nowjs, but I'm wary about coding a whole site in Express (I wonder about security holes, code maintainability, lack of a good ORM). I've read about Twisted Python, but have no idea what web framework would work well on top of that, and I'd prefer not to use Nevow - maybe Django can be used well with Twisted Python? I'm just looking to be pointed in the right direction, so I don't go too far in PHP and realize I can't get the near real-time results that I need.
Thanks for the help.
|
Python urllib2 automatic form filling and retrieval of results
| 5,668,120 | 0 | 9 | 24,175 | 0 |
python,forms,automation,urllib2,urllib
|
I’ve only done a little bit of this, but:
You’ve got the HTML of the form page. Extract the name attribute for each form field you need to fill in.
Create a dictionary mapping the name of each form field to the value you want to submit.
Use urllib.urlencode to turn the dictionary into the body of your post request.
Include this encoded data as the second argument to urllib2.Request(), after the URL that the form should be submitted to.
The server will either return a resulting web page, or return a redirect to a resulting web page. If it does the latter, you’ll need to issue a GET request to the URL specified in the redirect response.
I hope that makes some sort of sense?
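In modern Python 3, the urllib/urllib2 steps above map onto urllib.parse and urllib.request; a sketch with made-up field names and URL (no request is actually sent here, and urlopen(req) would do the actual submission):

```python
from urllib.parse import urlencode
from urllib.request import Request

# Steps 2-3: dictionary of form fields -> URL-encoded POST body
form_data = {'serial': 'ABC123', 'product': 'laptop'}  # field names assumed
body = urlencode(form_data).encode('ascii')

# Step 4: attach the body; a Request with a data body defaults to POST
req = Request('https://example.com/warranty', data=body)  # URL hypothetical
print(body, req.get_method())
```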
| 0 | 0 | 1 | 0 |
2011-04-14T18:17:00.000
| 3 | 0 | false | 5,667,699 | 0 | 0 | 1 | 1 |
I'm looking to be able to query a site for warranty information on a machine that this script would be running on. It should be able to fill out a form if needed ( like in the case of say HP's service site) and would then be able to retrieve the resulting web page.
I already have the bits in place to parse the resulting HTML that is reported back; I'm just having trouble with what needs to be done in order to POST the data that goes in the fields and then retrieve the resulting page.
|
Python Backend Design Patterns
| 5,671,966 | 0 | 0 | 4,070 | 1 |
python,backend
|
Use Apache, Django and Piston.
Use REST as the protocol.
Write as little code as possible.
Django models, forms, and admin interface.
Piston wrapppers for your resources.
| 0 | 0 | 0 | 0 |
2011-04-14T22:55:00.000
| 2 | 0 | false | 5,670,639 | 0 | 0 | 1 | 1 |
I am now working on a big backend system for a real-time and history tracking web service.
I am highly experienced in Python and intend to use it with sqlalchemy (MySQL) to develop the backend.
I don't have any major experience developing robust and sustainable backend systems, and I was wondering if you guys could point me to some documentation / books about backend design patterns. I basically need to feed data to a database by querying different services (over HTTP / SOAP / JSON) in real time, and to keep a history of that data.
Thanks!
|
Multiple chat rooms - Is using ports the only way ? What if there are hundreds of rooms?
| 5,672,152 | 1 | 2 | 1,196 | 0 |
python,livechat
|
You could try doing something like IRC, where the current "room" is sent from the client to the server "before" the text (/PRIVMSG #room-name Hello World), delimited by a space. For example, you could send ROOMNAME Sample text from the browser to the server.
Using AJAX would be the most reasonable option. I've never used web2py, but I'm guessing you could just use JSON to parse the data between the browser and the server, if you wanted to be fancy.
| 0 | 0 | 1 | 0 |
2011-04-15T03:30:00.000
| 2 | 0.099668 | false | 5,672,100 | 0 | 0 | 1 | 1 |
Need some direction on this.
I'm writing a chat room browser application; however, there is a subtle difference.
These are collaboration chats where one person types and the other person can see, live, every keystroke entered by the other person as they type.
Also, the chat space is not a single line but a textarea space, like the one here (SO) used to enter a question.
All keystrokes including tabs/spaces/enter should be visible live to the other person. And only one person can type at one time (I guess locking should be trivial)
I haven't written a multiple-chatroom application. A simple client/server where both are communicating over a port is something I've written.
So here are the questions
1.) How is a multiple-chatroom application written? Is it also port based?
2.) Showing the other person's every keystroke as they type is, I guess, possible through AJAX. Is there any other mechanism available?
Note : I'm going to use a python framework (web2py) but I don't think framework would matter here.
Any suggestions are welcome, thanks !
|