Dataset columns (name, dtype, min, max; for string columns the min/max are lengths in characters):

Column                              Type     Min    Max
Title                               string   11     150
A_Id                                int64    518    72.5M
Users Score                         int64    -42    283
Q_Score                             int64    0      1.39k
ViewCount                           int64    17     1.71M
Database and SQL                    int64    0      1
Tags                                string   6      105
Answer                              string   14     4.78k
GUI and Desktop Applications        int64    0      1
System Administration and DevOps    int64    0      1
Networking and APIs                 int64    0      1
Other                               int64    0      1
CreationDate                        string   23     23
AnswerCount                         int64    1      55
Score                               float64  -1     1.2
is_accepted                         bool     2 classes
Q_Id                                int64    469    42.4M
Python Basics and Environment       int64    0      1
Data Science and Machine Learning   int64    0      1
Web Development                     int64    1      1
Available Count                     int64    1      15
Question                            string   17     21k
Title: Unable to run django custom command
Q_Id: 13,290,514 | A_Id: 66,928,557 | Tags: python,django | Created: 2012-11-08T14:04:00.000
Q_Score: 2 | ViewCount: 2,913 | AnswerCount: 3 | Users Score: 0 | Score: 0 | Accepted: false | Available Count: 1
Topics: Web Development
Question: I've created a command in app/management/commands, and it was working fine. Now I'm unable to run it; I get the following error: Unknown command: 'my_custom_command_name'. I'm using a virtualenv, and the command doesn't appear in the list printed when I type python manage.py. The app is listed in my settings, and it was working previously.
Answer: Sometimes this can happen if you haven't added your app to INSTALLED_APPS = ['app'] in settings.py.
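The fix suggested in the answer can be sketched as a fragment of settings.py (the app name "app" is the asker's placeholder; the helper function is mine, for illustration only):

```python
# settings.py (fragment): the app that contains management/commands/
# must be listed here, or Django will not discover its custom commands.
INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "app",  # hypothetical app defining management/commands/my_custom_command_name.py
]

def has_app(installed_apps, app_name):
    """Return True if the app (or a dotted path ending in it) is installed."""
    return any(entry == app_name or entry.endswith("." + app_name)
               for entry in installed_apps)
```

A quick sanity check like `has_app(INSTALLED_APPS, "app")` mirrors what Django's command discovery effectively requires.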
Title: fake geolocation with scrapy crawler
Q_Id: 13,298,788 | A_Id: 13,299,332 | Tags: python,web-crawler,scrapy | Created: 2012-11-08T22:16:00.000
Q_Score: 1 | ViewCount: 722 | AnswerCount: 1 | Users Score: 2 | Score: 0.379949 | Accepted: false | Available Count: 1
Topics: Networking and APIs, Web Development
Question: I am trying to scrape a website that serves a different page depending on the geolocation of the IP sending the request. I am using an Amazon EC2 instance located in the US (which means it serves the page meant for the US), but I want the page that would be served in India. Does Scrapy provide a way to work around this somehow?
Answer: If the site you are scraping does IP-based detection, your only option is going to be to change your IP somehow. This means either using a different server (I don't believe EC2 operates in India) or proxying your requests. Perhaps you can find an Indian proxy service?
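The proxying idea can be sketched with the standard library (the proxy address below is a documentation placeholder, not a real service; in Scrapy itself a proxy is usually set per request via request.meta['proxy'], which the built-in HttpProxyMiddleware honors):

```python
import urllib.request

# Route HTTP(S) traffic through a proxy located in the target country.
proxy = urllib.request.ProxyHandler({
    "http": "http://203.0.113.10:8080",   # hypothetical Indian proxy
    "https": "http://203.0.113.10:8080",
})
opener = urllib.request.build_opener(proxy)
# opener.open("http://example.com/") would now go through the proxy,
# so the target site sees the proxy's IP, not the EC2 instance's.
```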
Title: Apache/PHP to Nginx/Tornado/Python
Q_Id: 13,299,023 | A_Id: 13,304,821 | Tags: php,python,django,nginx,tornado | Created: 2012-11-08T22:31:00.000
Q_Score: 5 | ViewCount: 2,544 | AnswerCount: 1 | Users Score: 6 | Score: 1.2 | Accepted: true | Available Count: 1
Topics: System Administration and DevOps, Other, Web Development
Question: Our website has developed a need for real-time updates, and we are considering various comet/long-polling solutions. After researching, we have settled on nginx as a reverse proxy to 4 Tornado instances (hosted on Amazon EC2). We currently use the traditional LAMP stack and have written a substantial amount of code in PHP, which we are willing to convert to Python to better support this solution. My questions:
1. Assuming a quad-core processor, is it OK for nginx to run on the same server as the 4 Tornado instances, or is it recommended to run two separate servers: one for nginx and one for the 4 Tornado processes?
2. Is there a benefit to using HAProxy in front of nginx? Doesn't nginx handle load balancing well by itself?
3. From my research, nginx doesn't appear to have a great URL-redirecting module. Is it preferred to use Redis for redirects? If so, should Redis be in front of nginx or behind it?
4. A large portion of our application code will not be involved in real-time updates. This code contains several database queries and filesystem reads, so it clearly isn't suitable for a non-blocking app server. I've read that the blocking issue is mitigated simply by having multiple Tornado instances, while others suggest using a separate app server (e.g. Gunicorn/Django/Flask) for blocking calls. What is the best way to handle blocking calls when using a non-blocking server?
5. Converting our code from PHP to Python will be a lengthy process. Is it acceptable to simultaneously run Apache/PHP and Tornado behind nginx, or should we stick to one language (either Tornado with Gunicorn/Django/Flask, or Tornado by itself)?
Answer: I'll go point by point:
1. Yes, it's OK to run Tornado and nginx on one server; you can also use nginx as a reverse proxy for Tornado.
2. HAProxy will give you a benefit if you have more than one server instance; it will also let you proxy WebSockets directly to Tornado.
3. nginx can be used for redirects with no problems. I haven't heard of using Redis for redirects; it's a key/value store, so maybe you mean something else?
4. You can write the blocking part in Django and the non-blocking part in Tornado. Tornado also has some non-blocking libraries for DB queries; I'm not sure you need the power of Django here.
5. Yes, it's OK to run Apache behind nginx; a lot of projects use nginx in front of Apache for serving static files.
The questions are fairly basic, so the answers are too; I can go into more detail on any point if you wish.
Title: Can flask (using jinja2) render templates using 'windows-1251' encoding?
Q_Id: 13,303,464 | A_Id: 68,141,752 | Tags: python,crystal-reports,flask,jinja2 | Created: 2012-11-09T06:51:00.000
Q_Score: 3 | ViewCount: 3,341 | AnswerCount: 3 | Users Score: 0 | Score: 0 | Accepted: false | Available Count: 1
Topics: Web Development
Question: I'm writing a simple frontend for a pretty old reporting system that uses the Crystal Reports 8 Web Component Server, and I need to make a POST request to this component. When I make the request from a page encoded in standard UTF-8, all form data is passed in UTF-8 too. That's the problem, because the CR8 Web Component Server doesn't understand UTF-8 (or does it, and I'm wrong?). I've tried putting accept-charset="ISO-8859-5" and accept-charset="windows-1251" in the parameters and had no luck with either. More info that may be useful: this frontend will run on Windows Server 2003 with IIS6, and the only suitable browser is IE, because the CR8 Web Component Server uses an ActiveX component (there's also a Java plugin, but for some reason it doesn't work at all). So I need Flask (Jinja2) to render templates using the 'windows-1251' encoding, because parameter names and values can contain Cyrillic characters. Is there any way I can achieve this?
Answer: In my case loaders.py had a hardcoded "utf-8" in several places, which I replaced with "windows-1251", and for me everything worked!
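Patching loaders.py aside, the encoding step itself can be sketched with Python's built-in codec (the sample markup is invented; in Flask one could encode the string returned by render_template and send it with a matching charset in the Content-Type header):

```python
# Jinja templates render to a Python str; the bytes sent to the client
# can then be encoded as windows-1251 instead of the default UTF-8.
rendered = "<p>Отчёт за ноябрь</p>"        # what render_template() would return
body = rendered.encode("windows-1251")      # cp1251 bytes for the legacy component
content_type = "text/html; charset=windows-1251"
```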
Title: Appengine Search API - Globally Consistent
Q_Id: 13,313,118 | A_Id: 13,315,587 | Tags: python,google-app-engine,full-text-search,gae-search | Created: 2012-11-09T17:37:00.000
Q_Score: 0 | ViewCount: 174 | AnswerCount: 1 | Users Score: 0 | Score: 0 | Accepted: false | Available Count: 1
Topics: System Administration and DevOps, Web Development
Question: I've been using the App Engine Python experimental search API, and it works great. With release 1.7.3 I updated all of the deprecated methods. However, I am now getting this warning: DeprecationWarning: consistency is deprecated. GLOBALLY_CONSIST However, I'm not sure how to address it in my code. Can anyone point me in the right direction?
Answer: This depends on whether or not you have any globally consistent indexes. If you do, you should migrate all of your data from those indexes to new, per-document-consistent (the default) indexes. To do this:
1. Loop through the documents stored in the global index and reindex them in the new index.
2. Change references from the global index to the new per-document index.
3. Once everything works, delete the documents from your global index (not necessary to complete the migration, but still a good idea).
Then remove any mention of consistency from your code; the default is per-document consistent, and eventually the ability to specify a consistency will be removed entirely. If you don't have any data in a globally consistent index, you're probably getting the warning because you're specifying a consistency; if you stop specifying it, the warning should go away. Note that there is a known issue with the Python API that causes a lot of erroneous deprecation warnings about consistency, so you could be seeing that as well; that issue will be fixed in the next release.
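The three-step migration in the answer can be sketched generically; plain dicts stand in for the search indexes here, since the real code would use the App Engine search API's document get/put/delete calls:

```python
def migrate(old_index, new_index, batch_size=100):
    """Copy documents from a globally consistent index into a new
    per-document-consistent one, then delete them from the old index.
    Dicts stand in for indexes; keys are document IDs."""
    doc_ids = list(old_index)
    for start in range(0, len(doc_ids), batch_size):
        batch = doc_ids[start:start + batch_size]
        for doc_id in batch:
            new_index[doc_id] = old_index[doc_id]   # step 1: reindex
        for doc_id in batch:
            del old_index[doc_id]                   # step 3: clean up

old = {"doc1": {"text": "hello"}, "doc2": {"text": "world"}}
new = {}
migrate(old, new, batch_size=1)
```

Step 2 (repointing references from the old index to the new one) happens in application code, outside this loop.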
Title: Django collectstatic error
Q_Id: 13,313,609 | A_Id: 13,314,802 | Tags: python,django | Created: 2012-11-09T18:09:00.000
Q_Score: 0 | ViewCount: 1,413 | AnswerCount: 2 | Users Score: 0 | Score: 0 | Accepted: false | Available Count: 2
Topics: Web Development
Question: Error: This will overwrite existing files! Are you sure you want to do this? Type 'yes' to continue, or 'no' to cancel: yes
Traceback (most recent call last):
  File "manage.py", line 14, in <module>
    execute_manager(settings)
  File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\core\management\__init__.py", line 459, in execute_manager
    utility.execute()
  File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\core\management\__init__.py", line 382, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\core\management\base.py", line 196, in run_from_argv
    self.execute(*args, **options.__dict__)
  File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\core\management\base.py", line 232, in execute
    output = self.handle(*args, **options)
  File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\core\management\base.py", line 371, in handle
    return self.handle_noargs(**options)
  File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\contrib\staticfiles\management\commands\collectstatic.py", line 163, in handle_noargs
    collected = self.collect()
  File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\contrib\staticfiles\management\commands\collectstatic.py", line 113, in collect
    handler(path, prefixed_path, storage)
  File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\contrib\staticfiles\management\commands\collectstatic.py", line 287, in copy_file
    if not self.delete_file(path, prefixed_path, source_storage):
  File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\contrib\staticfiles\management\commands\collectstatic.py", line 219, in delete_file
    self.storage.modified_time(prefixed_path)
  File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\core\files\storage.py", line 264, in modified_time
    return datetime.fromtimestamp(os.path.getmtime(self.path(name)))
ValueError: timestamp out of range for platform localtime()/gmtime() function
(env) D:\CODE\wamp\www\lezcheung\lezcms>
Can anyone help me?
Answer: This looks like it's caused by files being collected by collectstatic having wildly inaccurate last-modified timestamps (e.g. before 1970). Try searching for tools that let you modify your files' last-modified dates, and change them to something reasonable.
Title: Django collectstatic error
Q_Id: 13,313,609 | A_Id: 14,785,611 | Tags: python,django | Created: 2012-11-09T18:09:00.000
Q_Score: 0 | ViewCount: 1,413 | AnswerCount: 2 | Users Score: 0 | Score: 0 | Accepted: false | Available Count: 2
Topics: Web Development
Question: Same as the previous entry (this is the second answer to Q_Id 13,313,609).
Answer: I discovered the cause. It was just that I had put some fonts in /static/fonts/, and Django didn't accept the fonts in the static folder. So I moved those files to /media/fonts/ and it worked! :D
Title: Get list of all routes defined in the Flask app
Q_Id: 13,317,536 | A_Id: 57,108,419 | Tags: python,flask | Created: 2012-11-09T23:31:00.000
Q_Score: 179 | ViewCount: 103,592 | AnswerCount: 11 | Users Score: 4 | Score: 0.072599 | Accepted: false | Available Count: 1
Topics: Web Development
Question: I have a complex Flask-based web app with lots of separate files containing view functions, whose URLs are defined with the @app.route('/...') decorator. Is there a way to get a list of all the routes that have been declared throughout my app? Perhaps there is some method I can call on the app object?
Answer: You can view all the routes via flask shell by running the following commands, after exporting or setting the FLASK_APP environment variable: flask shell, then app.url_map.
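In Flask the answer is indeed app.url_map (each rule carries the URL pattern, endpoint, and methods). The mechanism behind it can be sketched in plain Python with a stand-in decorator that records routes as it registers them; this is an illustration of the idea, not Flask's actual implementation:

```python
class TinyApp:
    """Minimal stand-in for Flask's routing table, for illustration only."""
    def __init__(self):
        self.url_map = []  # list of (rule, view function name)

    def route(self, rule):
        def decorator(view):
            self.url_map.append((rule, view.__name__))
            return view
        return decorator

app = TinyApp()

@app.route("/")
def index():
    return "home"

@app.route("/users/<name>")
def show_user(name):
    return name
```

Because every @app.route call goes through the app object, the app can always enumerate what was registered, which is exactly why Flask's url_map answers the question.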
Title: Scrapyd: How to specify libs and common folders that deployed projects can use?
Q_Id: 13,318,291 | A_Id: 13,344,717 | Tags: python,scrapyd | Created: 2012-11-10T01:19:00.000
Q_Score: 3 | ViewCount: 973 | AnswerCount: 2 | Users Score: 0 | Score: 1.2 | Accepted: true | Available Count: 1
Topics: Web Development
Question: Scrapyd is a service to which we can deploy our projects as eggs. However, I am facing a problem. I have a project named MyScrapers whose spider classes use the import statement from mylibs.common.my_base_spider import MyBaseSpider. The path to my_base_spider is /home/myprojectset/mylibs/common/my_base_spider. After setting the environment variable PYTHONPATH=$HOME/myprojectset/, I am able to run MyScrapers using the scrapy command: scrapy crawl MyScrapers. But when I deploy MyScrapers using scrapyd with the command scrapy deploy scrapyd2 -p MyScrapers, I get the following error: Server response (200): {"status": "error", "message": "ImportError: No module named mylibs.common.my_base_spider"}. How can I make the deployed project use these libs?
Answer: I found the answer: add mylibs to Python's site-packages by using a setup.py inside the mylibs folder. That way I could import everything under mylibs in my projects. My libs were actually well outside the location of the deployable project's setup.py; setup.py looks for packages at the same level as, and inside, the folder where it is located.
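The answer's setup.py can be sketched roughly as follows (a sketch only; the package name comes from the question, and the version is arbitrary). Once installed into site-packages, the import inside the deployed egg resolves without PYTHONPATH:

```python
# setup.py placed inside the mylibs folder (sketch).
# After installing (e.g. `pip install .`), the statement
#   from mylibs.common.my_base_spider import MyBaseSpider
# resolves from site-packages, so the egg deployed to scrapyd
# no longer depends on PYTHONPATH pointing at the source tree.
from setuptools import setup, find_packages

setup(
    name="mylibs",
    version="0.1",
    packages=find_packages(),  # picks up mylibs, mylibs.common, ...
)
```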
Title: Python Flask Modifying Page after loaded
Q_Id: 13,337,924 | A_Id: 13,338,019 | Tags: python,web,flask | Created: 2012-11-12T03:15:00.000
Q_Score: 1 | ViewCount: 1,583 | AnswerCount: 3 | Users Score: 3 | Score: 0.197375 | Accepted: false | Available Count: 1
Topics: Web Development
Question: I have a question about using Flask with Python. Let's say I want to make a website for a mod I'm making for a game, and I want to put in a live chat feed. How would I go about modifying the contents of the page after the page has been sent to the person?
Answer: Short answer: you can't. Longer answer: once you have "sent the page" (that is, completed an HTTP response), there is no way for you to change what was sent. You can, however, use JavaScript to make additional HTTP requests to the server, and use the HTTP responses to modify the DOM, which changes the page the person is looking at. There are many ways to make a live chat feed, all of which are too complicated to put in a single Stack Overflow answer, but you can be sure that they all use JavaScript.
Title: How to define a Model with fields filled by other data sources than database in django?
Q_Id: 13,346,470 | A_Id: 13,399,089 | Tags: python,django,django-models,django-forms | Created: 2012-11-12T15:25:00.000
Q_Score: 3 | ViewCount: 1,001 | AnswerCount: 2 | Users Score: 0 | Score: 0 | Accepted: false | Available Count: 1
Topics: Web Development
Question: Can anyone tell me whether it's possible to create a Model class with some model fields and some other fields taking their data from external data sources? The point is that I would like this model to be usable the same way as any other model, by ModelForm for instance. If I redefine the "objects" Manager of the model, specifying the actions to fetch the data for the special fields (those not linked to data from the database), would the ModelForm link the input with the fields not attached to the database? A similar question about related objects: if I have a Model related to this special Model, can I get its instances through the classic related-objects machinery (with both the classic model fields and the non-database fields)? Please tell me if I'm not clear and I'll reformulate. Thanks. EDIT: I tried making a Model with custom fields and then overriding the default Manager and its functions (all, get, ...) to get objects as I would with a classical Model and Manager, and it works. However, I don't use a QuerySet, and it seems that the only way to get ModelForm, related objects and the admin functionality working with it is to build the QuerySet properly and have the manager return it. So now I'm wondering whether it's possible to properly and manually build a QuerySet with data fetched from external sources, or to tell django-admin, model forms and related objects to use another class than QuerySet for this Model. Thanks
Answer: I now have a partial solution. I override the Manager, in particular its all() and get() functions (the only ones I need for now). all() returns a queryset to which I add the result of some logic that builds objects from external data (fetched over XML-RPC in my case); I add those objects to the queryset through the _result_cache attribute. I don't think it's clean, and in fact my Model is now a custom Model with no database fields at all (I may use it to fill database Models). However, I can use it the same way as classic models, e.g. MyModel.objects.all(). If anyone has another idea I'd really appreciate it. Regards
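The override described in the answer can be sketched without Django; a plain class stands in for the Manager, and a lambda fakes the XML-RPC fetch (all names here are illustrative):

```python
class HybridManager:
    """Stand-in for a Django Manager whose all() merges database rows
    with objects built from an external source (e.g. XML-RPC)."""
    def __init__(self, db_rows, fetch_external):
        self._db_rows = db_rows
        self._fetch_external = fetch_external

    def all(self):
        # A real implementation would build a QuerySet and append the
        # external objects to its _result_cache, as the answer describes.
        return list(self._db_rows) + list(self._fetch_external())

manager = HybridManager(
    db_rows=[{"id": 1, "source": "db"}],
    fetch_external=lambda: [{"id": 2, "source": "xmlrpc"}],
)
```

The limitation the asker ran into is visible here: all() returns a plain list, and anything that expects a real QuerySet (admin, ModelForm, related objects) will not accept it.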
Title: Installing Python modules for OpenERP 6.1 in Windows
Q_Id: 13,346,698 | A_Id: 13,358,175 | Tags: python,openerp | Created: 2012-11-12T15:38:00.000
Q_Score: 6 | ViewCount: 3,249 | AnswerCount: 2 | Users Score: 1 | Score: 0.099668 | Accepted: false | Available Count: 1
Topics: System Administration and DevOps, Web Development
Question: I installed OpenERP 6.1 on Windows using the AllInOne package. I did NOT install Python separately; the OpenERP folders apparently already contain the required Python executables. Now when I try to install certain addons, I usually come across requirements to install certain Python modules. E.g. to install Jasper_Server, I need to install http2, pypdf and python-dime. As there is no separate Python installation, there is no C:\Python or anything like that. Where and how do I install these Python packages so that I am able to install the addon? Thanks
Answer: Good question. OpenERP on Windows uses a DLL for Python (python26.dll in /Server/server of the OpenERP folder in Program Files). It looks like all the extra libraries are in the same folder, so you should be able to download the extra libraries into that folder and restart the service. (I usually stop the service and run it manually from the command line; it's easier to see any errors while debugging.) Let us know if you get it working!
Title: Ironpython: How to see when a WPF application fails?
Q_Id: 13,347,378 | A_Id: 13,352,605 | Tags: wpf,xaml,ironpython,sharpdevelop | Created: 2012-11-12T16:22:00.000
Q_Score: 0 | ViewCount: 240 | AnswerCount: 1 | Users Score: 1 | Score: 1.2 | Accepted: true | Available Count: 1
Topics: GUI and Desktop Applications, Web Development
Question: I am building an application with a GUI using WPF/XAML with IronPython and SharpDevelop. So far it works fine: in the development environment I can see errors in the console and know what is wrong. But when I build and deploy the app to another system, or run it outside the development environment, there is no longer a console; when there is an error or a crash, it fails silently, and I cannot tell what went wrong. How can I alert or log to see what fails?
Answer: You could put in some code to catch the error and log it to a file. Something possibly simpler is to compile your application as a console application (this can be done via Project Options - Application - Output type). Then you will get a console window when you run your WPF application, and any exception that happens at startup will be logged to that window.
Title: QuickFIX logon trouble: multiple rapid fire logon attempts being sent
Q_Id: 13,351,608 | A_Id: 13,368,991 | Tags: python,quickfix | Created: 2012-11-12T21:10:00.000
Q_Score: 1 | ViewCount: 1,050 | AnswerCount: 2 | Users Score: 1 | Score: 1.2 | Accepted: true | Available Count: 2
Topics: Web Development
Question: QuickFIX logon trouble (using QuickFIX with FIX 4.4 in Python 2.7): once I call initiator.start(), a connection is made and a logon message is sent. However, I never see the ACK and session status message that the broker sends back (all the overloaded Application methods are just supposed to print out what they receive). QuickFIX immediately retries the logon (according to the broker's log files) and the same thing happens, but according to the server I am already logged in. QuickFIX then issues a logout command, which the server complies with. I have tried entering timeout values in the settings file, but to no avail. (Do I need to reference these values explicitly in the code for them to be used, or will the engine see them and act accordingly automatically?) Any ideas what is going on here?
Answer: Solved! I think there was something wrong with my data dictionary (FIX44.xml) file. I had seen a problem in it before and thought I had fixed it. I got a new copy online, dropped it in, and now everything seems to be working. Maybe the bad dictionary was keeping QuickFIX from accepting the logon response?
Title: QuickFIX logon trouble: multiple rapid fire logon attempts being sent
Q_Id: 13,351,608 | A_Id: 13,368,881 | Tags: python,quickfix | Created: 2012-11-12T21:10:00.000
Q_Score: 1 | ViewCount: 1,050 | AnswerCount: 2 | Users Score: 2 | Score: 0.197375 | Accepted: false | Available Count: 2
Topics: Web Development
Question: Same as the previous entry (this is the second answer to Q_Id 13,351,608).
Answer: Sounds like you do not have message logs enabled. If your app rejects messages below the application level (such as when the seq no is wrong or the message is malformed), they'll be rejected before your custom message handlers even see them. If you are starting your initiator with a ScreenLogStore, change it to a FileLogStore. This will create a log file containing every message sent and received on the session, valid or not. Dollars to donuts you'll see your logon acks in there, as well as some transport-layer rejections.
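For orientation, a session settings file of the kind both answers refer to looks roughly like the INI text below (key names such as FileLogPath, DataDictionary and HeartBtInt follow the QuickFIX configuration documentation; the CompIDs, host, and port are placeholders). Python's configparser can parse the same shape, which is convenient for checking such a file:

```python
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("""
[DEFAULT]
ConnectionType = initiator
FileLogPath = ./fixlogs
DataDictionary = FIX44.xml
UseDataDictionary = Y

[SESSION]
BeginString = FIX.4.4
SenderCompID = MYFIRM
TargetCompID = BROKER
SocketConnectHost = 127.0.0.1
SocketConnectPort = 5001
HeartBtInt = 30
""")
# FileLogPath is what makes file-based message logs (the second answer's
# suggestion) land somewhere inspectable; DataDictionary is the file the
# first answer found to be corrupt.
```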
Title: Django: Implementing a nested, reusable component design
Q_Id: 13,351,694 | A_Id: 13,397,523 | Tags: python,django,django-views | Created: 2012-11-12T21:16:00.000
Q_Score: 8 | ViewCount: 1,068 | AnswerCount: 2 | Users Score: 0 | Score: 0 | Accepted: false | Available Count: 1
Topics: Web Development
Question: I'm working on a big social networking app in Django where I expect to use certain front-end components many times, often with functionality designed such that custom components contain other custom components, which might contain yet smaller subcomponents (ad infinitum). All of these components are typically generated dynamically. I'm trying to figure out the best way to architect this in Django so that my components are easy to maintain and have clear programming interfaces. Relying heavily on global context would seem to be the opposite of this; however, I can see advantages in avoiding redundant queries by doing them all at once in the view. Custom inclusion template tags seem like a good fit for implementing components, but I'm wondering: can highly nested template tags create performance issues, or does the parsing architecture prevent this? What is the best way of making it self-documenting at the view level what context is needed to render the main page template, custom tags and all? I imagine it would be a minor nightmare to properly maintain the code that sets up the template context. Lastly, what is the best way to maintain the CSS for these components? Feel free to suggest other recommended approaches for creating a nested-component design.
Answer: It is an interesting problem. Ideally you would pull all the components from the database before rendering, but given the hierarchy, making template tags makes sense; each template tag would pull the appropriate data. Assume for the purposes of this problem that the database query gets cached thanks to locality of access.
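The caching assumption in the answer can be made explicit: a template tag that pulls its own data can memoize the query for the duration of a render, so deeply nested reuse of the same component does not multiply database hits. A plain-Python sketch (the tree structure and names are invented for illustration):

```python
from functools import lru_cache

QUERY_COUNT = {"n": 0}  # counts simulated database hits

@lru_cache(maxsize=None)
def component_data(component_id):
    """Stand-in for the query an inclusion template tag would run.
    Nested renders that reuse a component hit the cache, not the DB."""
    QUERY_COUNT["n"] += 1
    return {"id": component_id, "body": f"<component {component_id}>"}

def render(tree):
    """Render a nested component tree, pulling data per component."""
    node, children = tree
    inner = "".join(render(child) for child in children)
    return component_data(node)["body"] + inner
```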
Title: How to FTP into a virtual machine?
Q_Id: 13,353,113 | A_Id: 13,353,762 | Tags: python,django,ftp,virtualenv,virtualbox | Created: 2012-11-12T23:08:00.000
Q_Score: 0 | ViewCount: 5,276 | AnswerCount: 1 | Users Score: 1 | Score: 1.2 | Accepted: true | Available Count: 1
Topics: System Administration and DevOps, Web Development
Question: I've recently started learning Django and have set up a virtual machine running a Django server in a virtualenv. I can use the runserver command to run the basic Django development server and view it from another computer via the local IP address. However, I can't figure out how to connect to my virtual machine with my FTP client so that I can edit files from my host machine (Windows). I've tried using the IP address of the virtual machine with an FTP client, but it says "Connection refused by server". Any help would be appreciated, thanks!
Answer: The reason the client reported "Connection refused by server" is that the server returned a TCP packet with the reset bit set, the response when something tries to connect to a port that no application is listening on, or that a firewall blocks. I think the FTP service is not running, or is running on an alternate port. Take a look at the output of netstat -nltp (on Linux) or netstat -ntlb (on Windows); you should see a program waiting for requests on TCP port 21. If you don't see the program listed at all, or not on the port your client is going to connect to, modify the FTP server's configuration file.
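The "connection refused means nothing is listening" diagnosis can be checked programmatically with the standard library (a sketch; FTP's control port is 21):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if something is listening on host:port.
    A 'Connection refused' from an FTP client maps to False here."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0
```

For example, `port_open("192.168.56.101", 21)` returning False would confirm that no FTP daemon is reachable on the VM (address hypothetical).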
Title: Robot Framework - using User Libraries
Q_Id: 13,357,227 | A_Id: 13,602,048 | Tags: java,python,robotframework | Created: 2012-11-13T07:58:00.000
Q_Score: 2 | ViewCount: 2,367 | AnswerCount: 2 | Users Score: 0 | Score: 0 | Accepted: false | Available Count: 1
Topics: Other, Web Development
Question: I am facing difficulty when trying to run my tests. Here is what I did:
1. Created a Java project with one class that has one method called hello(String name).
2. Exported this as a jar and kept it in the same directory as my test case file.
My test case (a tab-separated robot file) looks like this:

*Setting*     *Value*     *Value*      *Value*      *Value*      *Value*
Library       MyLibrary

*Variable*    *Value*     *Value*      *Value*      *Value*      *Value*

*Test Case*   *Action*    *Argument*   *Argument*   *Argument*   *Argument*
MyTest        hello       World

*Keyword*     *Action*    *Argument*   *Argument*   *Argument*   *Argument*

I always get the following error: Error in file 'C:\Users\yahiya\Desktop\robot-practice\testcase_template.tsv' in table 'Setting': Importing test library 'MyLibrary' failed: ImportError: No module named MyLibrary
I have configured PYTHONPATH in the system variables on my Windows machine. Please let me know what I am doing wrong here. Thanks
Answer: Try putting your library into this folder: ...YourPythonFolder\Lib\site-packages\ or, if that doesn't work, create a folder named "MyLibrary" inside "site-packages" and put your library there. This should work.
Title: Pros and Cons of html-output for statistical data
Q_Id: 13,358,729 | A_Id: 13,358,764 | Tags: python,html,output,tabular | Created: 2012-11-13T10:03:00.000
Q_Score: 0 | ViewCount: 196 | AnswerCount: 2 | Users Score: 3 | Score: 0.291313 | Accepted: false | Available Count: 2
Topics: Web Development
Question: I am using Python 3 to calculate some statistics from language corpora. Until now I was exporting the results to a CSV file or directly to the shell. A few days ago I started learning how to output the data to HTML tables. I must say I really like it: it handles cell height/width and Unicode nicely, and you can apply color to different values, although I think there are some problems when dealing with large data or tables. Anyway, my question is that I'm not sure whether I should continue in this direction and output the results to HTML. Can someone with experience in this field help me with some pros and cons of using HTML as output?
Answer: Why not do both? Make your data available as CSV (for simple export to scripts etc.) and provide a decorated HTML version. At some stage you may want, say, a proper Excel sheet, a PDF, etc. So I would enforce a separation of the data generation from the rendering: make your generator return a structure that can be consumed by an abstract renderer, and let your concrete implementations produce CSV, PDF, HTML, etc.
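The generator/renderer separation proposed in the answer can be sketched with the standard library (the sample corpus counts are invented):

```python
import csv
import html
import io

def generate_stats():
    """Data generation: returns plain structures, no formatting."""
    header = ["token", "count"]
    rows = [["the", 412], ["of", 291]]
    return header, rows

def render_csv(header, rows):
    """One concrete renderer: CSV for scripts and spreadsheets."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

def render_html(header, rows):
    """Another renderer: an HTML table for human-friendly display."""
    def cells(tag, values):
        return "".join(f"<{tag}>{html.escape(str(v))}</{tag}>" for v in values)
    body = "".join(f"<tr>{cells('td', row)}</tr>" for row in rows)
    return f"<table><tr>{cells('th', header)}</tr>{body}</table>"
```

Adding a PDF or Excel renderer later means writing one new function, without touching the statistics code.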
Title: Pros and Cons of html-output for statistical data
Q_Id: 13,358,729 | A_Id: 13,359,571 | Tags: python,html,output,tabular | Created: 2012-11-13T10:03:00.000
Q_Score: 0 | ViewCount: 196 | AnswerCount: 2 | Users Score: 1 | Score: 1.2 | Accepted: true | Available Count: 2
Topics: Web Development
Question: Same as the previous entry (this is the second, accepted answer to Q_Id 13,358,729).
Answer: The question lists some benefits of the HTML format, and these alone are sufficient for using it as one of the output formats. Used that way, it does not matter much what you cannot easily do in HTML, as you can use other formats as needed. Benefits include reasonable default rendering, which can be fine-tuned in many ways using CSS, possibly with alternate style sheets (now supported even by IE); you can also include links. What you cannot do in HTML without scripting is computation, sorting, reordering, that kind of stuff; it can be added with JavaScript (not trivial, but doable). There is a technical difficulty with large tables: by default, a browser will start showing the content of a table only after having received, parsed, and processed the entire table, which may cause a delay of several seconds. A way to deal with this is to use fixed layout (table-layout: fixed) with specific widths set on the table columns (they need not be fixed in physical units; the em unit works OK, and on modern browsers you can use ch too). Another difficulty is bad line breaks. They are easily fixable with CSS (or HTML), but authors often miss the issue, causing e.g. cell contents like "10 m" to be split across two lines. Other common problems with formatting statistical data in HTML include:
- not aligning numeric fields to the right;
- using serif fonts;
- using fonts where not all digits have equal width;
- using the unnoticeable hyphen "-" instead of the proper Unicode minus "−" (U+2212, &minus;);
- not indicating missing values in some reasonable way, leaving some cells empty (browsers may treat empty cells in odd ways);
- insufficient horizontal padding, letting cell contents (almost) hit the cell border or the edge of the cell background.
There are good and fairly easy solutions to such problems, so this is just something to note when using HTML as an output format, not an argument against it.
Title: Asynchronous replacement for Celery
Q_Id: 13,360,145 | A_Id: 13,429,864 | Tags: python,django,asynchronous,celery,gevent | Created: 2012-11-13T11:42:00.000
Q_Score: 4 | ViewCount: 1,476 | AnswerCount: 2 | Users Score: 1 | Score: 0.099668 | Accepted: false | Available Count: 1
Topics: System Administration and DevOps, Web Development
Question: We're using Celery for background tasks in our Django project. Unfortunately, many of our tasks hold blocking sockets that can stay open for a long time, so Celery becomes fully loaded and stops responding. Gevent could help me with the sockets, but Celery has only experimental support for gevent (and in practice I found it doesn't work well). So I'm considering switching to another task-queue system. I can choose between two different ways: write my own task system (the least preferred choice, because it requires a lot of time), or find a good, well-tried replacement for Celery that will work after monkey patching. Is there any analogue of Celery that will guarantee execution of my tasks even after a sudden exit?
Answer: Have you tried Celery + eventlet? It works well in our project.
API Design - JSON or URL Parameters?
13,365,631
3
1
384
0
python,api,flask
The proper RESTful way of deleting a resource is to send a DELETE request, and put the scoping information in the URI (not the body), like /api/records?id=10 or /api/records/10. The method information should be in the HTTP method, not the URI. I suggest you read "RESTful web services" to learn the best practices on API design.
0
0
0
0
2012-11-13T17:21:00.000
1
1.2
true
13,365,521
0
0
1
1
I am just starting to learn how to design/write RESTful APIs. I have a general question: Assume I have some sort of simple SQL database and I'm writing an API that allows to create a new record, view records, delete a record or update a record. Assuming I want to delete a record, is it usually better to pass in the ID of the record in the URL, for example, /api/delete_record?id=10, or is it better to do something like: /api/record and have it accept GET, POST, PATCH and DELETE, and the data is handled through the JSON body in the request. I've written a small API using Flask in Python and what I have is just one URL: /record which accepts all the above HTTP methods. It looks at the method in the request and expects the request body in JSON accordingly. Is that considered good or bad practice? Any suggestions would be greatly appreciated. Please note that I am still very new to all of this. I've worked with APIs before but I've never developed any. Thanks!
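The routing convention recommended in this answer (the verb in the HTTP method, the scoping information in the URI) can be sketched without any framework; the route pattern, in-memory store, and status codes below are hypothetical stand-ins, not Flask's API:

```python
import re

# Hypothetical in-memory "records" store standing in for the SQL table.
RECORDS = {10: {"id": 10, "name": "example"}}

ROUTE = re.compile(r"^/api/records/(?P<id>\d+)$")

def dispatch(method, path):
    """Route a request: the HTTP method carries the verb, the URI the scope."""
    match = ROUTE.match(path)
    if match is None:
        return 404, None
    record_id = int(match.group("id"))
    if method == "GET":
        record = RECORDS.get(record_id)
        return (200, record) if record else (404, None)
    if method == "DELETE":
        # Scoping lives in the URI, so DELETE needs no request body at all.
        return (204, None) if RECORDS.pop(record_id, None) else (404, None)
    return 405, None  # method not allowed on this resource
```

With this layout, deleting record 10 is `DELETE /api/records/10` with no JSON body, and an unsupported method on the same URI is rejected with 405.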
503 error with urllib and flask
13,393,776
0
0
1,095
0
python,json,heroku,urllib2
Never mind, I fixed it. All I did was reference the local JSON file on my computer rather than the URL.
0
0
0
0
2012-11-14T07:09:00.000
1
0
false
13,374,423
0
0
1
1
When I run this Heroku app in debug mode, it works perfectly, but when I push the changes and visit the page, the page refuses to load and I get a 503 error. I can't figure out what is wrong (seeing as how debug says everything is fine :( ) [using Python 2.7] Fixed, see comment.
Where does node.js fit in a stack or enhance it
13,384,050
0
0
228
1
python,node.js,web-applications
It would replace Python (flask/werkzeug) in both your view server and your API server.
0
0
0
0
2012-11-14T15:51:00.000
1
1.2
true
13,382,262
0
0
1
1
I am interested in learning more about Node.js and utilizing it in a new project. The problem I am having is envisioning where I could enhance my web stack with it and what role it would play. All I have really done with it is follow a tutorial or two where you make something like a todo app in all JS. That is all fine and dandy, but where do I leverage this in a more complex web architecture? So here is how I plan on setting up my application: web server for serving views: Python (flask/werkzeug), Jinja, nginx, html/css/js. API server: Python (flask/werkzeug), SQLAlchemy (ORM), nginx, supervisor + gunicorn. DB server: Postgres. So is there any part of this stack that could be replaced or enhanced by introducing Node.js? I would assume it would be best used on the API server, but I'm not exactly sure how.
How to stop Django from caching dynamic templates?
13,393,855
0
1
1,291
0
python,django,caching
After changing your code, make sure you are restarting your server, e.g. Apache or FastCGI.
0
0
0
0
2012-11-15T05:37:00.000
2
0
false
13,392,095
0
0
1
1
I am really new to Django, and I'm trying to have my site display a server status as text. This text, however, is dynamic. I do not understand why, if I go into my model and change the server status function to return 'cats', I don't see 'cats' appear in my browser for like 5 minutes. From what I have learned so far, I suspect this has to do with Django caching templates on the server side. I have tried removing .pyc files, using @never_cache, editing settings.py to use DummyCache, and clearing the browser cache, all to no avail. Does anyone know what's going on, or what a possible fix might be? Thanks!
How to expose an NLTK based ML(machine learning) Python Script as a Web Service?
13,399,425
0
4
1,090
0
python,machine-learning,cherrypy
NLTK-based systems tend to be slow to respond per request, but good throughput can be achieved given enough RAM.
0
0
0
0
2012-11-15T09:51:00.000
2
0
false
13,394,969
0
1
1
1
Let me explain what I'm trying to achieve. In the past, while working on the Java platform, I used to write Java code (say, to push or pull data from a MySQL database etc.), then create a war file which essentially bundles all the class files, supporting files etc. and put it under a servlet container like Tomcat; this becomes a web service and can be invoked from any platform. In my current scenario, the majority of the work is done in Java; however, the Natural Language Processing (NLP) / Machine Learning (ML) part is done in Python using the NLTK, SciPy, NumPy etc. libraries. I'm trying to use the services of this Python engine in existing Java code. Integrating the Python code into Java through something like Jython is not that straightforward (as Jython does not support calling any Python module which has C-based extensions, as far as I know), so I thought the next option would be to make it a web service, similar to what I had done with Java web services in the past. Now comes the actual crux of the question: how do I run the ML engine as a web service and call it from any platform, which in my current scenario happens to be Java? I tried looking on the web for various options to achieve this and found things like CherryPy, Werkzeug etc., but was not able to find the right approach or any sample code that shows how to invoke an NLTK-Python script and serve the result over the web, eventually replicating the functionality a Java web service provides. In the Python-NLTK code, the ML engine trains on a large corpus (this takes 3-4 minutes) and we don't want the Python code to go through this step every time a method is invoked. If I make it a web service, the training will happen only once, when the service starts, and then the service is ready to be invoked using the already trained engine. Now coming back to the problem: I'm pretty new to this web service business in Python and would appreciate any pointers on how to achieve this. Also, any pointers on calling NLTK-based Python scripts from Java without the web services approach, in a way that can be deployed on production servers with good performance, would be helpful and appreciated. Thanks in advance. Just for a note, I'm currently running all my code on a Linux machine with Python 2.6 and JDK 1.6 installed.
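The train-once, serve-many requirement described in this question comes down to doing the expensive work a single time at process startup and reusing the result for every request. The sketch below is a toy stand-in (the dictionary "model", the class name, and the training step are invented), not actual NLTK code:

```python
class SentimentService:
    """Trains once when the process starts; every request reuses the model."""

    def __init__(self):
        self.train_count = 0          # exposed only to show training runs once
        self._model = self._train()   # the 3-4 minute step happens here, once

    def _train(self):
        # Stand-in for the expensive NLTK corpus training.
        self.train_count += 1
        return {"good": 1, "great": 1, "bad": -1}

    def predict(self, text):
        # Score a sentence with the already-trained model.
        score = sum(self._model.get(word, 0) for word in text.lower().split())
        return "pos" if score > 0 else "neg"

# Created at module import time, exactly once per server process.
SERVICE = SentimentService()
```

Whether the service is then exposed via CherryPy, a plain WSGI app, or anything else, the key point is that SERVICE is built once per process, so the multi-minute training cost is paid once, not per request.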
Python long running process
13,404,694
1
0
338
0
python
You should use an asynchronous approach to transfer data from the PHP script (or directly from the Python script) to an already rendered HTML page on the user's side. Check a JavaScript framework for whichever way is easiest for you to do that (for example, jQuery). Then return the HTML page, minus the results, to the user, with JavaScript code that shows a "calculating" animation and fetches the results, in XML or JSON, from the proper URL when they are done.
0
0
0
1
2012-11-15T18:22:00.000
1
1.2
true
13,403,741
0
0
1
1
I have a Python web application in which one function that can take up to 30 seconds to complete. I have been kicking off the process with a cURL request (inc. parameters) from PHP but I don't want the user staring at a blank screen the whole time the Python function is working. Is there a way to have it process the data 'in the background', e.g. close the http socket and allow the user to do other things while it continues to process the data? Thank you.
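The "process in the background, let the user poll" approach from the answer can be mocked up with the standard library alone; a real deployment would normally use a proper task queue, so treat the names and structure here as illustrative:

```python
import threading
import uuid

JOBS = {}  # job id -> {"done": bool, "result": ..., "_thread": Thread}

def slow_task(data):
    # Stand-in for the 30-second computation.
    return data.upper()

def start_job(data):
    """Kick off the work in the background; return a token immediately."""
    job_id = uuid.uuid4().hex
    JOBS[job_id] = {"done": False, "result": None, "_thread": None}

    def run():
        JOBS[job_id]["result"] = slow_task(data)
        JOBS[job_id]["done"] = True

    worker = threading.Thread(target=run, daemon=True)
    JOBS[job_id]["_thread"] = worker
    worker.start()
    return job_id

def poll(job_id):
    """What the polled endpoint would return as JSON on each request."""
    job = JOBS[job_id]
    return {"done": job["done"], "result": job["result"]}

def wait(job_id, timeout=5.0):
    # Convenience for tests; a browser would call poll() repeatedly instead.
    JOBS[job_id]["_thread"].join(timeout)
    return poll(job_id)
```

The PHP front end would call start_job() once, render the page immediately, and hit poll() from JavaScript until done is true.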
How can I debug python web site on amazon EC2?
13,407,676
5
3
743
0
python,django,amazon-s3,amazon-web-services,amazon-ec2
You just need to find out where your code is located on the server. SSH to one of the instances and then you can use the python interactive shell to run your django code for debugging, use the manage.py commands for database debugging, tests etc. Once you have connected to the instance, it's just an OS.
0
0
0
0
2012-11-15T22:44:00.000
1
1.2
true
13,407,554
0
0
1
1
I am new to web development. This is probably a dumb question, but I could not quite find an exact answer or tutorial that could help me. The company I am working at has its site (which is built in Python/Django) hosted on Amazon EC2. I want to know where to start with debugging this production site and checking the logs and databases that are stored there. I have the account information, but is there any way I can access all this stuff using a command line (like an Ubuntu shell), or a tutorial for the same?
How do I use a pre-made app with my project?
13,447,213
0
0
109
0
python,django,macos
Rule of thumb: if an app's documentation doesn't explain how to install (use, etc.) the app, then it's better to forget about using that app. How can you rely on an app that hasn't been updated in 5 months, isn't tested and isn't well documented? There should be a better solution.
0
0
0
0
2012-11-16T00:32:00.000
1
0
false
13,408,685
0
0
1
1
Sorry, I'm a total noob, but I can't find anywhere that actually explains this. I want to make a blog, and I figured instead of rolling my own I would use a pre-made one, so I picked the blog from the basic apps project (https://github.com/nathanborror/django-basic-apps). I installed everything fine, added the apps to my settings file, synced the DBs, etc. But now I don't know what to do. How do I actually use the blog? When I run the test server it says I have to do manage.py startapp, but I already have the app folder. What should I do? Again, sorry for the noob question. Best, Jake
LLVM IR to Python Compiler
13,416,469
2
6
1,835
0
python,compiler-construction,code-generation,llvm,converter
LLVM up to 3.0 provided a C backend (see lib/Target/CBackend) which should be a good starting point for implementing a simple Python code generator.
0
0
0
1
2012-11-16T11:26:00.000
1
1.2
true
13,415,660
0
0
1
1
Is there any tool to convert LLVM IR code to Python code? I know it is possible to convert it to JavaScript (https://github.com/kripken/emscripten/wiki) and to Java (http://da.vidr.cc/projects/lljvm/), and I would love to convert it to Python also. Additionally, if such a tool does not exist, could you provide any information on what would be the best tool to build on? (Maybe I should extend emscripten with another target language; JavaScript and Python are similar to each other in some ways.)
How to run multiple scrapyd servers?
15,516,604
0
2
1,381
0
python,web-scraping,scrapy,scrapyd
What about using the same SQLite database? The dbs_dir is set in scrapyd.script._get_config().
0
0
0
0
2012-11-16T15:38:00.000
2
0
false
13,419,734
0
0
1
1
I have been searching for documentation on the Scrapyd Service but it is very slim. I was wondering if anyone has any idea how to set up multiple Scrapyd servers that point to the same schedule queue?
List of methods and parameters available to Django's class based views?
13,425,827
0
0
90
0
python,django
You can use python manage.py shell, import the views you want, and then use dir(), for example dir(TemplateView). Alternatively, you can read the source code, or use help() for a quick overview, for example help(TemplateView).
0
0
0
0
2012-11-16T21:45:00.000
4
0
false
13,424,875
0
0
1
2
I read the documentation and it's kind of vague when it comes to outlining the methods/parameters/properties available to class-based views. Is there a list, or some website that provides such a list, anywhere?
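The dir()/help() advice in the answers generalizes to any class; a small helper that filters out underscore-prefixed names gives a readable method and attribute list, inherited members included. The sample classes below are stand-ins, not Django's actual TemplateView:

```python
def public_api(cls):
    """List the non-underscore attributes of a class, inherited ones included."""
    return sorted(name for name in dir(cls) if not name.startswith("_"))

# Toy hierarchy mimicking how class-based views inherit behaviour from mixins.
class ContextMixin:
    def get_context_data(self):
        return {}

class TemplateLikeView(ContextMixin):
    template_name = "index.html"

    def render_to_response(self, context):
        return context
```

Running public_api(TemplateLikeView) in manage.py shell against a real view class shows everything you can override, without digging through the docs.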
List of methods and parameters available to Django's class based views?
13,424,901
1
0
90
0
python,django
You should use python manage.py shell, simply import your views, and use dir(my_view) and help(my_view).
0
0
0
0
2012-11-16T21:45:00.000
4
0.049958
false
13,424,875
0
0
1
2
I read the documentation and it's kind of vague when it comes to outlining the methods/parameters/properties available to class-based views. Is there a list, or some website that provides such a list, anywhere?
Pass data in google app engine using POST
13,427,499
1
1
297
0
python,google-app-engine,http-post,http-get
Links inherently generate GET requests. If you want to generate a POST request, you'd need to either: Use a form with method="POST" and submit it, or Use AJAX to load the new page.
0
1
0
0
2012-11-17T03:45:00.000
1
1.2
true
13,427,477
0
0
1
1
I'm trying to pass a variable from one page to another using Google App Engine. I know how to pass it using GET by putting it in the URL, but I would like to keep the URL clean, and I might need to pass a larger amount of data, so how can I pass info using POST? To illustrate: I have a page with a series of links, each of which goes to /viewTaskGroup.html, and I want to pass the name of the group I want to view based on which link they click (so I can search for it and display it), but I'd rather not use GET if possible. I didn't think any code was required, but if you need any I'm happy to provide it.
Streaming audio and video
13,435,380
5
2
1,745
0
python,linux,streaming,video-streaming,audio-streaming
A good start for trying different options is to use vlc (http://www.videolan.org) Its file->transmit menu command opens a wizard with which you can play. Another good one is gstreamer, (http://www.gstreamer.net), the gst-launch program in particular, which allows you to build pipelines from the command line.
0
1
0
0
2012-11-17T18:42:00.000
2
1.2
true
13,433,597
0
0
1
1
I've been trying for a while but struggling. I have two projects: stream audio to a server for distribution over the web; stream audio and video from a webcam to a server for distribution over the web. I have thus far tried ffmpeg and ffserver, PulseAudio, mjpegstreamer (I got this working, but with no audio) and Icecast, all with little luck. While I'm sure this is likely my fault, I was wondering if there are any more options? I've spent a while experimenting with Linux options and was also wondering if there were options with Python, having recently played with OpenCV. If anyone can suggest more options to look into, Python- or Linux-based, or point me at some good tutorials or explanations of what I've already used, it would be much appreciated.
Web scraping - web login issue
13,437,094
0
2
684
0
python,web-scraping,casperjs
Because you mentioned CasperJS, I can assume that the web site generates some data using JavaScript. My suggestion would be to check out WebKit. It is a browser "engine" that will let you do whatever you want with a web site. You can use the PyQt4 framework, which is very good and has good documentation.
0
0
1
0
2012-11-17T20:52:00.000
5
0
false
13,434,664
0
0
1
1
So I am trying to scrape something that is behind a login system. I tried using CasperJS, but am having issues with the form, so maybe that is not the way to go. I checked the source code of the site and the form name is "theform", but I can never log in, so I must be doing something wrong. Does anyone have any tutorials on how to do this correctly using CasperJS? I've looked at the API and Google and nothing really works. Or does someone have any recommendations on how to do web scraping easily? I have to be able to check a simple conditional state and click a few buttons, that is all.
Understanding imports in views.py - Django
13,439,337
0
1
95
0
django,python-2.7,django-views
A Django process is loaded once and remains active to handle incoming requests. So if you define the list as a global variable, it stays in RAM and all is fine. It is discouraged to manipulate the list though.
0
0
0
0
2012-11-18T09:23:00.000
1
1.2
true
13,438,920
0
0
1
1
I have a very big Python list (~1M strings) defined in a .py file. I import it in my views.py to access the list in my views. My question is: does the list get loaded into RAM for every user coming to the web app, or is it loaded just once and used for all users?
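The answer's point rests on Python's module cache: a module body executes once per process, and every later import returns the same cached object from sys.modules, so the big list lives in RAM once per worker process, not once per request. A self-contained demonstration (the bigdata module here is fabricated on the fly just for the example):

```python
import sys
import types

# Build a throwaway module the same way "import bigdata" normally would.
bigdata_module = types.ModuleType("bigdata")
bigdata_module.HUGE_LIST = ["token"] * 1000  # stand-in for the ~1M strings
sys.modules["bigdata"] = bigdata_module

# Both of these imports hit the sys.modules cache; the module body is
# never executed a second time.
import bigdata as first_import
import bigdata as second_import

SAME_OBJECT = first_import.HUGE_LIST is second_import.HUGE_LIST
```

The same caching is why mutating such a module-level list from view code is discouraged: the change is shared by every request served by that process.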
Python/Django: how to get files fastest (based on path and name)
13,440,101
0
0
57
0
python,operating-system
The access time for an individual file is not affected by the quantity of files in the same directory. Running ls -l on a directory with more files in it will take longer, of course, and the same goes for viewing that directory in the file browser. It might be easier to work with these images if you store them in a subdirectory defined by the user's name, but that just depends on what you are going to do with them. There is no technical reason to do so. Think about it like this: the full path to the image file (/srv/site/images/my_pony.jpg) is the actual address of the file. Your web server process looks there, and returns any data it finds, or a 404 if there is nothing. What it doesn't do is list all the files in /srv/site/images and look through that list to see if it contains an item called my_pony.jpg.
0
0
0
0
2012-11-18T12:28:00.000
2
1.2
true
13,440,079
0
0
1
1
My website users can upload image files, which then need to be found whenever they are to be displayed on a page (using src = ""). Currently, I put all images into one directory. What if there are many files - is it slow to find the right file? Are they indexed? Should I create subdirectories instead? I use Python/Django. Everything is on webfaction.
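As the answer says, lookup by full path is unaffected by directory size, but if you do want subdirectories (to keep listings manageable), a common convention is sharding by a hash prefix of the filename; the layout below is one hypothetical scheme, not anything Django mandates:

```python
import hashlib
import os

def upload_path(media_root, filename):
    """Place files in media_root/ab/cd/filename, sharded by hash prefix.

    Hashing spreads files evenly, so no single directory grows huge even
    with millions of uploads.
    """
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    return os.path.join(media_root, digest[:2], digest[2:4], filename)
```

In Django, this kind of function is typically wired in through a FileField's upload_to argument, but the sharding idea itself is framework-independent.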
Python + Django on Android
21,954,459
1
14
34,933
0
android,python,django
Well, if your end goal is to develop web applications and host them on your Android, and since you mentioned Flask, why not give bottle.py a shot? It's just one file that you copy into your SL4A scripts folder, and voila. Bottle is minimalist and very similar to Flask. No rooting or Unix environments required.
0
0
0
0
2012-11-18T21:01:00.000
10
0.019997
false
13,444,534
0
0
1
1
I am a Django developer and wanted to know if anyone has any idea of the possibilities of installing and developing on Django using an Android tablet such as the nexus 7. This seems like a reasonably powerful device, can be hooked up with a bluetooth keyboard, and has linux at the core of the OS. So - is it possible to install Python and Django (or even Flask) on Android?
Scrapy Python Crawler - Different Spider for each?
13,451,254
0
1
252
0
python,search,scrapy,web-crawler
A different PROJECT for each site is the worst idea. A different SPIDER for each site is a good idea. If you can fit multiple sites into one SPIDER (based on their nature), that is the best idea. But again, it all depends on your requirements.
0
0
1
0
2012-11-18T23:08:00.000
1
1.2
true
13,445,585
0
0
1
1
I have a lot of different sites I want to scrape using scrapy. I was wondering what is the best way of doing this? Do you use a different "project" for each site you want to scrape, or do you use a different "spider", or neither? Any input would be appreciated
In practice, how eventual is the "eventual consistency" in HRD?
21,716,718
1
8
560
0
python,google-app-engine,app-engine-ndb
If you have a small app then your data probably lives on the same part of the same disk and you have one instance. You probably won't notice eventual consistency. As your app grows, you notice it more. Usually it takes milliseconds to reach consistency, but I've seen cases where it takes an hour or more. Generally, queries are where you notice it most. One way to reduce the impact is to query by keys only and then use ndb.get_multi() to load the entities. Fetching entities by key ensures that you get the latest version of each entity. It doesn't guarantee that the keys list is strongly consistent, though, so you might get entities that don't match the query conditions; loop through the entities and skip the ones that don't match. From what I've noticed, the pain of eventual consistency grows gradually as your app grows. At some point you do need to take it seriously and update the critical areas of your code to handle it.
0
0
0
0
2012-11-19T05:49:00.000
3
1.2
true
13,448,366
0
0
1
3
I am in the process of migrating an application from Master/Slave to HRD. I would like to hear some comments from anyone who has already been through the migration. I tried a simple example: just post a new entity without an ancestor and redirect to a page listing all entities of that model. I tried it several times and it was always consistent. Then I put 500 indexed properties and again, always consistent... I was also worried about claims of a limit of 1 put() per entity group per second. I put() 30 entities with the same ancestor (same HTTP request, but put() one by one) and there was basically no difference from putting 30 entities without an ancestor. (I am using NDB, could it be doing some kind of optimization?) I tested this with an empty app without any traffic, and I am wondering how much real traffic would affect the "eventual consistency". I am aware I can test "eventual consistency" on the local development server. My question is: do I really need to restructure my app to handle eventual consistency? Or would it be acceptable to leave it the way it is, because the eventual consistency is actually consistent in practice 99% of the time?
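The keys-only-query-plus-get_multi() workaround mentioned in one of the answers follows a general pattern: fetch candidate keys from a possibly stale index, reload each entity authoritatively by key, and drop entities that no longer match. NDB won't run outside App Engine, so here is a plain-Python simulation of that pattern, with the stale index faked explicitly:

```python
# Authoritative store: get-by-key is strongly consistent.
ENTITIES = {1: {"status": "open"}, 2: {"status": "closed"}, 3: {"status": "open"}}

# Stale secondary index (the eventually consistent query): entity 2 was
# just closed, but the index still lists it as open.
STALE_INDEX = {"open": [1, 2, 3]}

def query_open():
    """Fetch candidate keys from the index, reload each entity by key,
    and skip any entity that no longer satisfies the filter."""
    candidate_keys = STALE_INDEX["open"]                # eventually consistent
    fresh = {k: ENTITIES[k] for k in candidate_keys}    # like ndb.get_multi()
    return [k for k, entity in fresh.items() if entity["status"] == "open"]
```

The query may return a stale key set, but because each entity is re-read by key (strongly consistent) and re-checked, the caller never sees entity 2 as open.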
In practice, how eventual is the "eventual consistency" in HRD?
13,457,830
0
8
560
0
python,google-app-engine,app-engine-ndb
The replication speed is going to be primarily server-workload-dependent. Typically on an unloaded system the replication delay is going to be milliseconds. But the idea of "eventually consistent" is that you need to write your app so that you don't rely on that; any replication delay needs to be allowable within the constraints of your application.
0
0
0
0
2012-11-19T05:49:00.000
3
0
false
13,448,366
0
0
1
3
I am in the process of migrating an application from Master/Slave to HRD. I would like to hear some comments from anyone who has already been through the migration. I tried a simple example: just post a new entity without an ancestor and redirect to a page listing all entities of that model. I tried it several times and it was always consistent. Then I put 500 indexed properties and again, always consistent... I was also worried about claims of a limit of 1 put() per entity group per second. I put() 30 entities with the same ancestor (same HTTP request, but put() one by one) and there was basically no difference from putting 30 entities without an ancestor. (I am using NDB, could it be doing some kind of optimization?) I tested this with an empty app without any traffic, and I am wondering how much real traffic would affect the "eventual consistency". I am aware I can test "eventual consistency" on the local development server. My question is: do I really need to restructure my app to handle eventual consistency? Or would it be acceptable to leave it the way it is, because the eventual consistency is actually consistent in practice 99% of the time?
In practice, how eventual is the "eventual consistency" in HRD?
13,457,661
0
8
560
0
python,google-app-engine,app-engine-ndb
What's the worst case if you get inconsistent results? Does a user see some unimportant info that's out of date? That's probably OK. Will you miscalculate something important, like the price of something, or the number of items in stock in a store? In that case, you would want to avoid that chance occurrence. From observation only, it seems like eventually consistent results show up more as your dataset gets larger, I suspect because your data is split across more tablets. Also, if you're reading your entities back with get() requests by key/id, the result will always be consistent. Make sure you're doing a query if you want to see eventually consistent results.
0
0
0
0
2012-11-19T05:49:00.000
3
0
false
13,448,366
0
0
1
3
I am in the process of migrating an application from Master/Slave to HRD. I would like to hear some comments from anyone who has already been through the migration. I tried a simple example: just post a new entity without an ancestor and redirect to a page listing all entities of that model. I tried it several times and it was always consistent. Then I put 500 indexed properties and again, always consistent... I was also worried about claims of a limit of 1 put() per entity group per second. I put() 30 entities with the same ancestor (same HTTP request, but put() one by one) and there was basically no difference from putting 30 entities without an ancestor. (I am using NDB, could it be doing some kind of optimization?) I tested this with an empty app without any traffic, and I am wondering how much real traffic would affect the "eventual consistency". I am aware I can test "eventual consistency" on the local development server. My question is: do I really need to restructure my app to handle eventual consistency? Or would it be acceptable to leave it the way it is, because the eventual consistency is actually consistent in practice 99% of the time?
Beautifulsoup 4 prettify outputs XHTML, not HTML
13,456,177
2
4
1,423
0
python,html,beautifulsoup
No, there is no way to force the .prettify() method to not output XHTML-compliant HTML.
0
0
0
0
2012-11-19T14:37:00.000
1
1.2
true
13,455,988
0
0
1
1
I'm trying to parse and prettify a bunch of files made with Microsoft FrontPage. Beautifulsoup parses them with no problem, but when I try to print the output with prettify(), tags like <meta> or <br> are rewritten as <meta ... /> and <br/>. Is there a way to force HTML output?
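Given that prettify() cannot be told to emit plain HTML, one pragmatic workaround is post-processing its output to strip the trailing slash from void elements only; the element list and regex below are hand-rolled assumptions, not part of BeautifulSoup:

```python
import re

# HTML void elements, the ones XHTML serializers write as <tag ... />.
VOID = r"(?:area|base|br|col|embed|hr|img|input|link|meta|param|source|track|wbr)"
SELF_CLOSING = re.compile(r"<(" + VOID + r")((?:\s[^<>]*?)?)\s*/>", re.IGNORECASE)

def dexhtml(markup):
    """Rewrite <br/> as <br>, <meta .../> as <meta ...>, and so on."""
    return SELF_CLOSING.sub(r"<\1\2>", markup)
```

Non-void elements such as `<div/>` are left alone, since removing their slash would change the markup's meaning.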
Django hvad - Best practice to work with multi-lingual object in template
13,459,152
2
2
347
0
python,django,plugins,multilingual,django-hvad
You should do it like this: extract each translation into a single variable, put them in a list, and pass that list to the template. Then you can iterate through each translation in the template.
0
0
0
0
2012-11-19T16:36:00.000
1
1.2
true
13,458,191
0
0
1
1
I have an article object which has several languages. What's the best way to work with this object? I need to display all attributes in every language. Is is possible to get just my article object and iterate through the languages in the template? Thanks for you help! Ron
Can i use selenium with Scrapy without actual browser opening with python
16,050,387
8
2
2,960
0
python,selenium,scrapy
Updated: PhantomJS is abandoned, and you can now use headless browsers directly, such as Firefox and Chrome! Original answer: use PhantomJS instead. You can do browser = webdriver.PhantomJS() in Selenium v2.32.0.
0
0
1
0
2012-11-20T07:53:00.000
2
1
false
13,468,755
0
0
1
1
I want to do some web crawling with Scrapy and Python. I have found a few code examples on the internet where they use Selenium with Scrapy. I don't know much about Selenium, only that it automates some web tasks and that a browser actually opens and does stuff. But I don't want the actual browser to open; I want everything to happen from the command line. Can I do that with Selenium and Scrapy?
Scrapy why bother with Items when you can just directly insert?
13,469,554
7
1
1,529
0
python,scrapy
If you insert directly inside a spider, then your spider will block until the data is inserted. If you create an Item and pass it to the Pipeline, the spider can continue to crawl while the data is inserted. Also, there might be race conditions if multiple spiders try to insert data at the same time.
0
0
1
0
2012-11-20T08:38:00.000
2
1
false
13,469,321
0
0
1
1
I will be using scrapy to crawl a domain. I plan to store all that information into my db with sqlalchemy. It's pretty simple xpath selectors per page, and I plan to use HttpCacheMiddleware. In theory, I can just insert data into my db as soon as I have data from the spiders (this requires hxs to be instantiated at least). This will allow me to bypass instantiating any Item subclasses so there won't be any items to go through my pipelines. I see the advantages of doing so as: Less CPU intensive since there won't be any CPU processing for the pipelines Prevents memory leaks. Disk I/O is a lot faster than Network I/O so I don't think this will impact the spiders a lot. Is there a reason why I would want to use Scrapy's Item class?
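The blocking argument in the answer can be demonstrated without Scrapy at all: hand scraped items to a queue drained by a single writer thread, and the producer never waits on the database; a single consumer also serialises inserts, avoiding the race conditions mentioned. The item pipeline plays exactly this decoupling role in Scrapy; below is a generic stdlib sketch of the idea:

```python
import queue
import threading

items = queue.Queue()
stored = []  # stand-in for the SQLAlchemy-backed table

def writer():
    """Single consumer: serialises all inserts, so no races between writers."""
    while True:
        item = items.get()
        if item is None:          # sentinel: shut down
            break
        stored.append(item)       # stand-in for session.add() / commit()
        items.task_done()

def spider():
    """Producer: enqueues items and moves on without waiting for the DB."""
    for page in range(5):
        items.put({"page": page})

worker = threading.Thread(target=writer, daemon=True)
worker.start()
spider()
items.join()        # wait until everything queued so far has been written
items.put(None)     # tell the writer to stop
worker.join(timeout=5)
```

In Scrapy terms: yielding an Item hands it off like items.put() here, so the spider keeps crawling while the pipeline does the slow I/O.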
How do I make a web server to make timed multiple choice tests?
13,476,327
1
1
947
0
python,web-applications,haskell,clojure,lisp
When the server side creates the form, encode a hidden field with the timestamp of the request, so when the user POSTs the form you can see the time difference. How to implement that is up to you, and depends on which server you have available and several other factors.
0
0
0
1
2012-11-20T12:44:00.000
4
0.049958
false
13,473,489
0
0
1
1
I'd like to make a webapp that asks people multiple-choice questions and times how long they take to answer. I'd like those who want to to be able to make accounts, and to store data on how well they've done and how their performance is improving. I've never written any sort of web app before, although I'm a good programmer and understand how HTTP works. I'm assuming (without evidence) that it's better to use a 'framework' than to hack something together from scratch, and I'd appreciate advice on which framework people think would be most appropriate. I hope that it will prove popular, but would rather get something working than spend time at the start worrying about scaling. Is this sane? I'd also like to be able to develop and test this on my own machine, and then deploy it to a virtual server or some other hosting solution. I'd prefer to use a language like Clojure or Lisp or Haskell, but if the advantages of using, say, Python or Ruby would outweigh the fact that I'd enjoy a more maths-y language more, then I like both of those too. I probably draw the line at Perl, but if Perl, or even something like Java or C, has compelling advantages, then I'm quite happy with them too. They just don't seem appropriate for this sort of thing.
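The hidden-timestamp suggestion in the first answer needs one refinement for a timed test: the field must be tamper-proof, otherwise a test-taker could submit a forged later start time. Signing the timestamp with an HMAC covers that; the secret and token format below are invented for this sketch:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical; never shipped in the page source

def issue_token(now=None):
    """Value to embed in a hidden <input> when the question form is rendered."""
    ts = str(int(now if now is not None else time.time()))
    sig = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    return ts + ":" + sig

def elapsed_seconds(token, now=None):
    """On POST, verify the signature and return how long the user took."""
    ts, sig = token.split(":")
    expected = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("tampered token")
    current = now if now is not None else time.time()
    return current - int(ts)
```

Any framework from the question (Flask, a Clojure ring app, etc.) can wrap this: render issue_token() into the form, call elapsed_seconds() in the POST handler.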
How to change the post-login page using django-registration?
13,481,428
0
0
40
0
python,django,django-registration
LOGIN_REDIRECT_URL = '/' This redirects to home url.
0
0
0
0
2012-11-20T20:15:00.000
1
1.2
true
13,481,352
0
0
1
1
I am using django-registration and django-registration_defaults (for the templates) in my app. How do I change the page the user sees after he/she logs in? I looked through the documentation but was unable to find anything.
Django-admin.py not being recognized suddenly
66,409,423
1
1
4,143
0
python,django
You can try the following command: py -m django startproject add_your_project_name_here
0
0
0
0
2012-11-20T22:05:00.000
4
0.049958
false
13,483,004
0
0
1
2
I tried starting a new Django project yesterday but when I did "django-admin.py startproject projectname" I got an error stating: "django-admin.py is not recognized as an internal or external command." The strange thing is, when I first installed Django, I made a few projects and everything worked fine. But now after going back a few months later it has suddenly stopped working. I've tried looking around for an answer and all I could find is that this typically has to do with the system path settings, however, I know that I have the proper paths set up so I don't understand what's happening. Does anybody have any idea what's going on?
Django-admin.py not being recognized suddenly
59,301,923
1
1
4,143
0
python,django
I am totally new to coding, so pardon my amateur answer. I had a similar problem: I realized that while Django was installed on the C drive, my files were saved on the D drive, and I was trying to run django-admin from the D drive in the command prompt, which was giving the above error. What worked for me was the following: I located the django-admin.exe and django-admin.py files, which were in the path C:\Users\[Username]\AppData\Local\Programs\Python\Python38-32\Scripts, copied both of those files into the D drive folder where I was trying to create new projects, then on the command prompt (which was set to the D drive projects folder) ran django-admin startproject [filename], and it created a new project [filename] in that folder and the error was resolved.
0
0
0
0
2012-11-20T22:05:00.000
4
0.049958
false
13,483,004
0
0
1
2
I tried starting a new Django project yesterday but when I did "django-admin.py startproject projectname" I got an error stating: "django-admin.py is not recognized as an internal or external command." The strange thing is, when I first installed Django, I made a few projects and everything worked fine. But now after going back a few months later it has suddenly stopped working. I've tried looking around for an answer and all I could find is that this typically has to do with the system path settings, however, I know that I have the proper paths set up so I don't understand what's happening. Does anybody have any idea what's going on?
Do i need to pass all the hidden fields as well with the form in scrapy
13,490,825
2
0
341
0
python,forms,httpwebrequest,scrapy
If you are using FormRequest.from_response(), then all hidden values are already pre-populated automatically. But in most cases you will need to override some of them as well, depending on the website's functionality and behavior.
0
0
0
0
2012-11-21T07:25:00.000
2
0.197375
false
13,488,266
0
0
1
2
I want to know: if I need to perform a search on the job site, do I need to pass only those variables which are visible on the form, or all the variables, even some hidden fields? The form is here: http://www.example.com/search.php. There are two fields on the form, searchTerm and area, and there are 5 hidden fields. The form submits to http://www.example.com/submit.php. I have these doubts: Do I need to open the form page in Scrapy with the form page URL or with the post URL? Do I need to pass the hidden variables as well, or do they automatically get posted with the form?
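To illustrate the point about pre-populated hidden fields: Scrapy's FormRequest.from_response() reads every input default out of the page before merging in your own formdata. A rough stdlib sketch of that behavior (the form HTML and field names here are hypothetical, not taken from the real site):

```python
from html.parser import HTMLParser

class HiddenFieldCollector(HTMLParser):
    """Collect every <input> default so a POST mirrors the real form."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag != "input":
            return
        a = dict(attrs)
        if a.get("name"):
            # Hidden inputs carry a value attribute; visible text inputs
            # usually default to an empty string.
            self.fields[a["name"]] = a.get("value", "")

FORM = """
<form action="/submit.php" method="post">
  <input type="text" name="searchTerm">
  <input type="text" name="area">
  <input type="hidden" name="token" value="abc123">
  <input type="hidden" name="page" value="1">
</form>
"""

parser = HiddenFieldCollector()
parser.feed(FORM)
formdata = dict(parser.fields)                        # hidden defaults picked up
formdata.update(searchTerm="python", area="London")   # your visible fields
```

This is exactly why you normally fetch the form page first (not the post URL): the hidden defaults live in the form page's HTML.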
Do i need to pass all the hidden fields as well with the form in scrapy
13,492,597
1
0
341
0
python,forms,httpwebrequest,scrapy
Sometimes you can get by without some of the hidden fields, other times not. You cannot know the server-side logic; it's up to the website how it handles each of the form fields.
0
0
0
0
2012-11-21T07:25:00.000
2
0.099668
false
13,488,266
0
0
1
2
I want to know: if I need to perform a search on the job site, do I need to pass only those variables which are visible on the form, or all the variables, even some hidden fields? The form is here: http://www.example.com/search.php. There are two fields on the form, searchTerm and area, and there are 5 hidden fields. The form submits to http://www.example.com/submit.php. I have these doubts: Do I need to open the form page in Scrapy with the form page URL or with the post URL? Do I need to pass the hidden variables as well, or do they automatically get posted with the form?
postgres installation error on Mac 10.6.8
13,495,557
1
1
291
1
python,django,postgresql
Er, not sure how we can help you with that. One is for bash, one is for SQL. No, that's for running the development webserver, as the tutorial explains. There's no need to do that, that's what the virtualenv is for. This has nothing to do with Python versions, you simply don't seem to be in the right directory. Note that, again as the tutorial explains, manage.py isn't created until you've run django-admin.py startproject myprojectname. Have you done that? You presumably created the virtualenv using 3.2. Delete it and recreate it with 2.7. You shouldn't be "reading in a forum" about how to do the Django tutorial. You should just be following the tutorial.
0
0
0
0
2012-11-21T14:12:00.000
1
1.2
true
13,495,135
0
0
1
1
I'm new to web development and I'm trying to get my mac set up for doing Django tutorials and helping some developers with a project that uses postgres. I will try to specify my questions as much as possible. However, it seems that there are lots of floating parts to this question and I'm not quite understanding some parts of the connection between an SQL Shell, virtual environments, paths, databases, terminals (which seem to be necessary to get running on this web development project). I will detail what I did and the error messages that appear. If you could help me with the error messages or simply post links to tutorials that help me better understand how these floating parts work together, I would very much appreciate it. I installed postgres and pgAdmin III and set it up on the default port. I created a test database. Now when I try to open it on the local server, I get an error message: 'ERROR: column "datconfig" does not exist LINE1:...b.dattablespace AS spcoid, spcname, datallowconn, dataconfig,... Here is what I did before I closed pgAdmin and then reopened it: Installation: The Setup told me that an existing data directory was found at /Library/PostgreSQL/9.2/data set to use port 5433. I loaded an .sql file that I wanted to test (I saved it on my desktop and loaded it into the database from there). I'm not sure whether this is related to the problem or not, but I also have virtual environments in a folder ~/Sites/django_test (i.e. when I tell the bash Terminal to “activate” this folder, it puts me in an (env)). I read in a forum that I need to do the Django tutorials by running “python manage.py runserver" at the bash Terminal command line. When I do this, I get an error message saying “can't open file 'manage.py': [Errno 2] No such file or directory”. 
Even when I run the command in the (env), I get the error message: /Library/Frameworks/Python.framework/Versions/3.2/Resources/Python.app/Contents/MacOS/Python: can't open file 'manage.py': [Errno 2] No such file or directory (which I presume is telling me that the path is still set to an incorrect version of Python (3.2), even though I want to use version 2.7 and trashed the 3.2 version from my system). I think that there are a few gaps in my understanding here: I don’t understand the difference between typing commands into my bash Terminal versus my SQL shell. Is running “python manage.py runserver” the same as running Python programs with an IDE like IDLE? How and where do I adjust my $PATH environment variable so that the correct python occurs first on the path? I think that I installed the correct Python version into the virtual environment using pip install. Why am I still receiving a “No such file or directory” error? Why does Python version 3.2 still appear in the path indicated by my error message if I trashed it? If you could help me with these questions, or simply list links to any tutorials that explain this, that would be much appreciated. And again, sorry for not being more specific. But I thought that it would be more helpful to list the problems that I have with these different pieces rather than just one, since it's their interrelatedness that seems to be causing the error messages. Thanks!
Heroku Node.js + Python
13,785,484
2
3
2,109
0
python,node.js,heroku,cedar
After having played around a little, and also doing some reading, it seems like Heroku apps that need this have 2 main options: 1) Use some kind of back-end, that both apps can talk to. Examples would be a DB, Redis, 0mq, etc. 2) Use what I suggested above. I actually went ahead and implemented it, and it works. Just thought I'd share what I've found.
0
0
1
0
2012-11-21T17:31:00.000
1
1.2
true
13,498,828
0
0
1
1
I am trying to build a web-app that has both a Python part and a Node.js part. The Python part is a RESTful API server, and the Node.js will use sockets.io and act as a push server. Both will need to access the same DB instance (Heroku Postgres in my case). The Python part will need to talk to the Node.js part in order to send push messages to be delivered to clients. I have the Python and DB parts built and deployed, running under a "web" dyno. I am not sure how to build the Node part -- and especially how the Python part can talk to the Node.js part. I am assuming that the Node.js will need to be a new Heroku app, so that it too can run on a 'web' dyno, so that it benefits from the HTTP routing stack, and clients can connect to it. In such a case, will my Python dynos will be accessing it using just like regular clients? What are the alternatives? How is this usually done?
Scrapy Crawling Speed is Slow (60 pages / min)
13,585,472
2
8
4,117
0
python,http,scrapy,web-crawler
Are you sure you are allowed to crawl the destination site at high speed? Many sites implement download threshold and "after a while" start responding slowly.
0
0
1
0
2012-11-22T02:45:00.000
1
0.379949
false
13,505,194
0
0
1
1
I am experiencing slow crawl speeds with Scrapy (around 1 page/sec). I'm crawling a major website from AWS servers, so I don't think it's a network issue. CPU utilization is nowhere near 100%, and if I start multiple Scrapy processes the crawl speed is much faster. Scrapy seems to crawl a bunch of pages, then hang for several seconds, and then repeat. I've tried playing with: CONCURRENT_REQUESTS = CONCURRENT_REQUESTS_PER_DOMAIN = 500, but this doesn't really seem to move the needle past about 20.
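For reference, these are the Scrapy settings usually involved in this kind of tuning. The setting names are Scrapy's own; the values below are illustrative guesses rather than tested recommendations for any particular site:

```python
# settings.py sketch -- values are illustrative, not benchmark-backed.
CONCURRENT_REQUESTS = 100
CONCURRENT_REQUESTS_PER_DOMAIN = 16  # a single-site crawl is capped here,
                                     # not at CONCURRENT_REQUESTS
DOWNLOAD_DELAY = 0                   # any positive value inserts a pause
                                     # between requests to the same domain
AUTOTHROTTLE_ENABLED = True          # backs off automatically if the site
                                     # starts responding slowly
LOG_LEVEL = "INFO"                   # DEBUG logging itself can throttle
                                     # a fast crawl
```

When both concurrency settings are already high, a plateau like the one described often points at the remote site rate-limiting (as the answer suggests) rather than at the crawler.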
Why i see `u` in front of all the text in python text
13,507,452
2
0
170
0
python,scrapy
The u prefix is added when displaying the strings to indicate that the object is a Unicode string. Similarly, if you want a Unicode string in your own code, you write a Unicode literal by adding a u prefix to the string itself.
0
0
0
0
2012-11-22T07:07:00.000
3
0.132549
false
13,507,434
0
0
1
1
I am using Scrapy to scrape a website. My items are appearing like this: {'company': [u'Resource Agility'],. I am sick of this u. Is that normal? I want to know: if I store my value in the database, does the u also get in there? Is there any way to hide that u?
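A small demonstration that the u is only part of how Python displays the value, not part of the stored data. It runs on Python 3.3+ as well, where PEP 414 re-allowed the u'' literal (on Python 2, which Scrapy used at the time, repr showed the prefix; on Python 3 it does not):

```python
company = u'Resource Agility'

# The u belongs to the repr (how Python 2 *displays* a unicode object),
# not to the data: the stored characters contain no "u" and no quotes.
assert company == 'Resource Agility'
assert len(company) == len('Resource Agility')

# What actually reaches a database is the encoded byte string:
encoded = company.encode('utf-8')
assert encoded == b'Resource Agility'
```

So nothing needs to be "hidden": when the value is written to a database or printed with print(), only the characters themselves are emitted.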
Django model naming conventions
13,518,421
0
2
931
0
python,django,django-models,naming-conventions
Depending on the structure of other parts of your code, you could use the name of the Django app, a package, or a module to further scope the names.
0
0
0
0
2012-11-22T18:21:00.000
2
0
false
13,518,222
0
0
1
2
I have a pretty stupid question about model naming conventions in Django. Imagine a farmstead which has buildings which have rooms. Farmstead --> Buildings --> Rooms With Farmstead it is ok, let's call it a Farmstead. Next one: Building or FarmsteadBuilding? BuildingRoom, Room or FarmsteadBuildingRoom?
Django model naming conventions
13,518,370
6
2
931
0
python,django,django-models,naming-conventions
If all your instances of Room belong to a Building (and there is no other kind of model like Apartment) and all your instances of Building belong to a Farmstead (following the same idea), then just name your models Farmstead, Building and Room. It's not necessary to specify something that is already specified in your business logic.
0
0
0
0
2012-11-22T18:21:00.000
2
1.2
true
13,518,222
0
0
1
2
I have a pretty stupid question about model naming conventions in Django. Imagine a farmstead which has buildings which have rooms. Farmstead --> Buildings --> Rooms With Farmstead it is ok, let's call it a Farmstead. Next one: Building or FarmsteadBuilding? BuildingRoom, Room or FarmsteadBuildingRoom?
element playbin2 query_position always returns query failed
13,529,688
1
0
700
0
python,gstreamer,python-gstreamer
What does "you'll need to thread your own gst object" mean? And what does "wait until the query succeeds" mean? State changes from NULL to PAUSED or PLAYING are asynchronous. You will usually only be able to do a successful duration query once the pipeline is prerolled (so state >= PAUSED). When you get an ASYNC_DONE message on the pipeline's (playbin2's) GstBus, then you can query.
0
0
1
0
2012-11-22T19:38:00.000
3
0.066568
false
13,519,086
0
0
1
3
I'm developing a media player that streams mp3 files. I'm using the python gstreamer module to play the streams. my player is the playbin2 element When I want to query the position (with query_position(gst.FORMAT_TIME,None)), it always returns a gst.QueryError: Query failed. The song is definetly playing. (state is not NULL) Does anyone have any experience with this? PS: I also tried replacing gst.FORMAT_TIME with gst.Format(gst.FORMAT_TIME), but gives me the same error.
element playbin2 query_position always returns query failed
13,529,066
0
0
700
0
python,gstreamer,python-gstreamer
I found it on my own. Problem was with threading. Apparently, you'll need to thread your gst object and just wait until the query succeeds.
0
0
1
0
2012-11-22T19:38:00.000
3
0
false
13,519,086
0
0
1
3
I'm developing a media player that streams mp3 files. I'm using the python gstreamer module to play the streams. my player is the playbin2 element When I want to query the position (with query_position(gst.FORMAT_TIME,None)), it always returns a gst.QueryError: Query failed. The song is definetly playing. (state is not NULL) Does anyone have any experience with this? PS: I also tried replacing gst.FORMAT_TIME with gst.Format(gst.FORMAT_TIME), but gives me the same error.
element playbin2 query_position always returns query failed
13,525,709
0
0
700
0
python,gstreamer,python-gstreamer
From what source are you streaming? If you query the position from the playbin2, I'd say you are doing everything right. Can you file a bug for GStreamer, include a minimal Python snippet that exposes the problem, and say which source you stream from? Ideally it's public.
0
0
1
0
2012-11-22T19:38:00.000
3
0
false
13,519,086
0
0
1
3
I'm developing a media player that streams mp3 files. I'm using the python gstreamer module to play the streams. my player is the playbin2 element When I want to query the position (with query_position(gst.FORMAT_TIME,None)), it always returns a gst.QueryError: Query failed. The song is definetly playing. (state is not NULL) Does anyone have any experience with this? PS: I also tried replacing gst.FORMAT_TIME with gst.Format(gst.FORMAT_TIME), but gives me the same error.
Can the use of Beautiful Soup with Scrapy increase the performance
13,560,416
0
0
1,925
0
python,beautifulsoup,scrapy
Well, the answer is that you should try to parse a couple of pages with HtmlSelector and then with BeautifulSoup, and gather some stats. Secondly, most people use BeautifulSoup (or even lxml) for parsing because they are already used to it. Scrapy's basic purpose is crawling; if you are not comfortable with XPath you can go with BeautifulSoup or lxml (although the lxml package also supports XPath), or even plain regular expressions for parsing.
0
0
0
0
2012-11-23T04:29:00.000
2
0
false
13,523,115
0
0
1
1
I am doing all my crawling in Scrapy. I have seen that many people use BeautifulSoup for parsing. I just wanted to know whether there is any advantage in terms of speed, efficiency, or richer selectors, etc., that would help me in creating spiders and crawlers, or whether Scrapy alone should be enough for me.
How to check if file exists in Google Cloud Storage?
13,644,827
1
40
52,000
0
python,google-cloud-storage,file-exists
I guess there is no function to check directly if the file exists given its path. I have created a function that uses the files.listdir() API function to list all the files in the bucket and match it against the file name that we want. It returns true if found and false if not.
0
0
1
1
2012-11-23T08:39:00.000
13
1.2
true
13,525,482
0
0
1
1
I have a script where I want to check if a file exists in a bucket and if it doesn't then create one. I tried using os.path.exists(file_path) where file_path = "/gs/testbucket", but I got a file not found error. I know that I can use the files.listdir() API function to list all the files located at a path and then check if the file I want is one of them. But I was wondering whether there is another way to check whether the file exists.
Scrapy inside java?
13,584,941
0
0
1,443
0
java,python,scrapy
I doubt you can run Twisted under Jython, and Scrapy is based on Twisted. I'm not sure what you want to do, but I recommend running scrapyd and using its web service interface to communicate with Java. Can you give us more details on what you want to achieve?
0
0
0
0
2012-11-23T15:44:00.000
2
1.2
true
13,532,170
0
0
1
1
Is it possible to use Scrapy from within a Java project? With Jython for example, or maybe "indirect" solutions.
Web2py unable to access internet [connection refused]
13,545,399
0
0
412
0
python,linux,apache,web2py
Try testing the proxy theory by ssh -D tunneling to a server outside the proxy and seeing if that works for you.
0
0
1
0
2012-11-24T19:23:00.000
1
0
false
13,544,715
0
0
1
1
I have deployed a web2py application on a server that is running the Apache web server. All seems to be working fine, except for the fact that the web2py modules are not able to connect to an external website. In the web2py admin page, I get the following errors: 1. Unable to check for upgrades 2. Unable to download because: I am using web2py 1.9.9 on CentOS 5. I am also behind an institute proxy. I am guessing that the issue has something to do with the proxy configuration.
Detect background color of a website
13,548,265
-1
1
1,034
0
java,python,html
I don't know anything about Java or Python, but could you have it parse the html code and look for something like 'background-color: < color >'?
0
0
0
0
2012-11-25T04:20:00.000
3
-0.066568
false
13,548,239
0
0
1
1
I am trying to detect color of different elements in a webpage(saved on machine). Currently I am trying to write a code in python. The initial approach which I followed is: find color word in html file in different tags using regular expressions. try to read the hex value. But this approach is very stupid. I am new to website design, can you please help me with this.
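A minimal sketch of the regex approach suggested in the answer. Note its limits: it only finds colors written in inline style attributes or embedded CSS inside the saved file, not colors applied from external stylesheets or computed by a browser. The sample HTML is made up:

```python
import re

# Matches "background: <value>" or "background-color: <value>" where the
# value is a hex code or a named color.
BG_RE = re.compile(r'background(?:-color)?\s*:\s*(#[0-9a-fA-F]{3,6}|[a-zA-Z]+)')

def background_colors(html):
    """Return every background color declared inline in the given HTML."""
    return BG_RE.findall(html)

page = '<body style="background-color: #ffeedd"><div style="background: red">'
colors = background_colors(page)  # ["#ffeedd", "red"]
```

For anything beyond a quick check, a real HTML/CSS parser beats regex, since declarations can be split across stylesheets and shorthand properties.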
New tables created in web2py not seen when running in Google app Engine
13,551,914
0
1
100
1
python,google-app-engine,web2py
App Engine datastore doesn't really have tables. That said, if web2py is able to make use of the datastore (I'm not familiar with it), then Kinds (a bit like tables) will only show up in the admin-console (/_ah/admin locally) once an entity has been created (i.e. tables only show up once one row has been inserted, you'll never see empty tables).
0
1
0
0
2012-11-25T05:29:00.000
1
0
false
13,548,590
0
0
1
1
I have created an app using web2py and have declared certain new table in it using the syntax db.define_table() but the tables created are not visible when I run the app in Google App Engine even on my local server. The tables that web2py creates by itself like auth_user and others in auth are available. What am I missing here? I have declared the new table in db.py in my application. Thanks in advance
Python module issue
13,573,647
2
2
2,446
1
python,linux,mysql-python,bluehost
I think you upgraded your OS installation, which in turn upgraded libmysqlclient and broke the native extension. What you can do is reinstall libmysqlclient16 (how to do it depends on your particular OS) and that should fix your issue. Another approach would be to uninstall the MySQLdb module and reinstall it, forcing Python to compile it against the newer library.
0
0
0
0
2012-11-26T21:20:00.000
2
1.2
true
13,573,359
0
0
1
2
I have a shared hosting environment on Bluehost. I am running a custom installation of python(+ django) with a few installed modules. All has been working, until yesterday a change was made on the server(I assume) which gave me this django error: ... File "/****/****/.local/lib/python/django/utils/importlib.py", line 35, in import_module __import__(name) File "/****/****/.local/lib/python/django/db/backends/mysql/base.py", line 14, in raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) ImproperlyConfigured: Error loading MySQLdb module: libmysqlclient_r.so.16: cannot open shared object file: No such file or directory Of course, Bluehost support is not too helpful. They advised that 1) I use the default python install, because that has MySQLdb installed already. Or that 2) I somehow import the MySQLdb package installed on the default python, from my python(dont know if this can even be done). I am concerned that if I use the default install I wont have permission to install my other packages. Does anybody have any ideas how to get back to a working state, with as little infrastructure changes as possible?
Python module issue
13,591,200
0
2
2,446
1
python,linux,mysql-python,bluehost
You were right. Bluehost upgraded MySQL. Here is what I did: 1) remove the "build" directory in the "MySQL-python-1.2.3" directory 2) remove the egg 3) build the module again: "python setup.py build" 4) install the module again: "python setup.py install --prefix=$HOME/.local". Moral of the story for me: remove the old stuff when reinstalling a module.
0
0
0
0
2012-11-26T21:20:00.000
2
0
false
13,573,359
0
0
1
2
I have a shared hosting environment on Bluehost. I am running a custom installation of python(+ django) with a few installed modules. All has been working, until yesterday a change was made on the server(I assume) which gave me this django error: ... File "/****/****/.local/lib/python/django/utils/importlib.py", line 35, in import_module __import__(name) File "/****/****/.local/lib/python/django/db/backends/mysql/base.py", line 14, in raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) ImproperlyConfigured: Error loading MySQLdb module: libmysqlclient_r.so.16: cannot open shared object file: No such file or directory Of course, Bluehost support is not too helpful. They advised that 1) I use the default python install, because that has MySQLdb installed already. Or that 2) I somehow import the MySQLdb package installed on the default python, from my python(dont know if this can even be done). I am concerned that if I use the default install I wont have permission to install my other packages. Does anybody have any ideas how to get back to a working state, with as little infrastructure changes as possible?
Will I have more control over my spider if I use lxml over BeautifulSoup?
13,578,055
3
1
120
0
python,parsing,beautifulsoup,lxml
I don't really think this question makes a whole lot of sense. You need to give more explanation of what exactly your goals are. BeautifulSoup and lxml are two tools that in large part do the same things, but have different features and API philosophies and structure. It's not a matter of "which gives you more control," but rather "which is the right tool for the job?" I use both. I prefer the BeautifulSoup syntax, as I find it more natural, but I find that lxml is better when I'm trying to parse unknown quantities on the fly based on variables--e.g., generating XPath strings that include variable values, which I will then use to extract specific elements from varying pages. So really, it depends on what you're trying to do. TL;DR I find BeautifulSoup easier and more natural to use but lxml ultimately to be more powerful and versatile. Also, lxml wins the speed contest, no question.
0
0
1
0
2012-11-27T05:28:00.000
1
1.2
true
13,577,922
0
0
1
1
I am learning to make spiders and crawlers. This spidering is my passion and I am going to do that for a long time. For parsing I am thinking of using BeautifulSoup. But some people say that if I use lxml, I will have more control. Now I don't know much. But I am ready to work hard even if using lxml is harder. But if that gives me full control then I am ready for it. So what is your opinion?
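The "XPath strings generated from variable values" workflow mentioned in the answer can be sketched with the standard library's ElementTree, which supports a small XPath subset; lxml accepts the same query strings plus full XPath. The document and values below are invented for the demo:

```python
import xml.etree.ElementTree as ET

DOC = ET.fromstring(
    "<jobs>"
    "<job city='London'><title>Crawler dev</title></job>"
    "<job city='Pune'><title>Parser dev</title></job>"
    "</jobs>"
)

def titles_for_city(root, city):
    # The query string is built from a runtime value, the style of
    # querying the answer attributes to lxml.  Note that naive string
    # interpolation breaks if the value contains a quote character.
    return [el.findtext("title") for el in root.findall(f"./job[@city='{city}']")]

london = titles_for_city(DOC, "London")  # ["Crawler dev"]
```

With lxml the same idea extends to arbitrary XPath (axes, functions, text matching), which is where the extra "control" comes from.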
differences between scrapy.crawler and scrapy.spider?
13,584,851
2
3
1,470
0
python,scrapy
CrawlerSpider is a subclass of BaseSpider: this is the class you need to extend if you want your spider to follow links according to the "Rule" list. "Crawler" is the main crawler subclassed by CrawlerProcess. You will have to subclass CrawlerSpider in your spider, but I don't think you will have to touch Crawler.
0
0
1
0
2012-11-27T05:55:00.000
1
1.2
true
13,578,170
0
0
1
1
I am new to Scrapy and quite confused about crawler vs. spider. It seems that both of them can crawl a website and parse items. There is a Crawler class (/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py) and a CrawlerSpider class (/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spiders/crawl.py) in Scrapy. Could anyone tell me the differences between them? And which one should I use under what conditions? Thanks a lot in advance!
Minimal effort setup of Django for webdesigner
13,678,321
0
2
215
0
python,django,heroku
Why not store all of the assets on S3? It sounds to me that they don't really need to be part of the application at all, but external resources that the application references.
0
0
0
0
2012-11-27T12:43:00.000
2
0
false
13,584,524
0
0
1
2
In the old world I had a pretty ideal development setup going to work together with a webdesigner. Keep in mind we mostly do small/fast projects, so this is how it worked: I have a staging site on a server (Webfaction or other) Designer accesses that site and edits templates and assets to his satisfaction I SSH in regularly to checkin everything into source control, update files from upstream, resolve conflicts It works brilliantly because the designer does not need to learn git, python, package tools, syncdb, migrations etc. And there's only the one so we don't have any conflicts on staging either. Now the problem is in the new world under Heroku, this is not possible. Or is it? In any way, I would like your advice on a development setup that caters to those who are not technical.
Minimal effort setup of Django for webdesigner
13,584,919
0
2
215
0
python,django,heroku
How about a static 'showcase' site where all possible UI elements, templates, etc are shown using dummy content. The designer can connect, edit stuff and you merge in the changes in the end. Another option would be a test server with the full application running (kind of like you did it before) but with the option to connect via FTP or whatever the designer prefers.
0
0
0
0
2012-11-27T12:43:00.000
2
0
false
13,584,524
0
0
1
2
In the old world I had a pretty ideal development setup going to work together with a webdesigner. Keep in mind we mostly do small/fast projects, so this is how it worked: I have a staging site on a server (Webfaction or other) Designer accesses that site and edits templates and assets to his satisfaction I SSH in regularly to checkin everything into source control, update files from upstream, resolve conflicts It works brilliantly because the designer does not need to learn git, python, package tools, syncdb, migrations etc. And there's only the one so we don't have any conflicts on staging either. Now the problem is in the new world under Heroku, this is not possible. Or is it? In any way, I would like your advice on a development setup that caters to those who are not technical.
Deactivating needless properties in Django Admin
13,590,715
0
0
57
0
python,django
Another way to solve this is by adding JavaScript that disables the "other" fields when you set one. You can then enforce this in the form/model validation. But I should say that I think the best way to deal with this, if it can be applied to your problem, is the way PT114 proposes.
0
0
0
0
2012-11-27T17:12:00.000
4
0
false
13,589,417
0
0
1
1
We have three properties on my animal model: dog_name cat_name monkey_name One of them must be filled (no more! animal is a dog, a cat or a monkey) and if I set for example cat_name, I want dog_name and monkey_name to be deactivated (user shouldn't set more than one name). Is it possible to set this in django admin? This example is maybe stupid, but I tried to explain my intensions - deactivating needless properties.
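The validation half of the "enforce it in form/model validation" suggestion can be sketched without any framework. In a real Django admin the same check would live in a ModelForm.clean() method and raise forms.ValidationError instead; the field names follow the question:

```python
def clean_animal_names(dog_name, cat_name, monkey_name):
    """Enforce that exactly one of the three names is provided.

    In Django this logic would sit in ModelForm.clean(), raising
    forms.ValidationError rather than ValueError.
    """
    provided = [n for n in (dog_name, cat_name, monkey_name) if n]
    if len(provided) != 1:
        raise ValueError("Set exactly one of dog_name, cat_name, monkey_name.")
    return provided[0]
```

The JavaScript greying-out is then purely cosmetic; this server-side check is what actually guarantees the invariant.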
Django cannot import from app/models.py
13,591,321
1
0
426
0
python,django,django-models
Well, you already know it is because of mutual dependencies. The way around it would be to split the utils file into two, so that you can avoid circular imports by separating out the parts that need to reference the models. Also, as suggested by Mipadi, instead of using a global import statement you could simply make the import at method scope. Moreover, it really depends on how you are trying to use the models. For instance, you could access the models via "app_name.class_name", but it really depends on the context in which you want to use them.
0
0
0
0
2012-11-27T19:01:00.000
1
0.197375
false
13,591,170
0
0
1
1
In my Django app (let's call it app) I have a number of files: views.py, models.py and my own utils.py. Unfortunately, while I can include my models in my views.py file simply by saying from models import *, in my utils.py file, if I try the same thing and then work with a model, I get an exception: Global name: MyModel is not defined. models.py does indeed include utils.py, and I understand this may be a circular dependency, but it worked fine until I added a recent change. Is this the cause? If so, is the only solution to refactor my utils file?
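The deferred-import fix described in the answer can be demonstrated end to end with two throwaway modules. The module names here are invented for the demo; in the question they would be models.py and utils.py:

```python
import importlib
import os
import sys
import tempfile
import textwrap

# demo_utils defers its import of demo_models into the function body,
# which is the standard fix for the circular-import problem above.
MODELS_SRC = "import demo_utils\nMODEL_NAME = 'MyModel'\n"
UTILS_SRC = textwrap.dedent("""\
    def describe():
        # Deferred import: by the time this runs, demo_models has
        # finished loading, so MODEL_NAME is guaranteed to exist.
        import demo_models
        return 'using ' + demo_models.MODEL_NAME
""")

tmpdir = tempfile.mkdtemp()
for filename, source in (("demo_models.py", MODELS_SRC),
                         ("demo_utils.py", UTILS_SRC)):
    with open(os.path.join(tmpdir, filename), "w") as fh:
        fh.write(source)

sys.path.insert(0, tmpdir)
demo_models = importlib.import_module("demo_models")
result = demo_models.demo_utils.describe()
```

Had demo_utils instead done from demo_models import * at the top level, it would have seen a partially initialized demo_models during the import cycle, which is likely what produces the "Global name: MyModel is not defined" symptom.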
Change dependencies code on dotcloud. Django
13,594,735
1
1
67
0
python,django,dotcloud
If you are using a requirements.txt, no, there is not a way to do that from pypi, since Dotcloud is simply downloading the packages you've specified from pypi, and obviously your changes within your virtualenv are not going to be reflected by the canonical versions of the packages. In order to use the edited versions of your dependencies, you'll have to bundle them into your code like any other module you've written, and import them from there.
0
0
0
0
2012-11-27T22:17:00.000
2
0.099668
false
13,594,164
0
0
1
2
I'm deploying my Django app with Dotcloud. While developing locally, I had to make changes inside the code of some dependencies (that are in my virtualenv). So my question is: is there a way to make the same changes on the dependencies (for example django-registration or django_socketio) while deploying on dotcloud? Thank you for your help.
Change dependencies code on dotcloud. Django
13,594,763
1
1
67
0
python,django,dotcloud
There are many ways, but not all of them are clean/easy/possible. If those dependencies are on github, bitbucket, or a similar code repository, you can: fork the dependency, edit your fork, point to the fork in your requirements.txt file. This will allow you to track further changes to those dependencies, and easily merge your own modifications with future versions. Otherwise, you can include the (modified) dependencies with your code. It's not very clean and increases the size of your app, but that's fine too. Last but not least, you can write a very hackish postinstall script, to locate the .py file to be modified (e.g. import foo ; foopath = foo.__file__), then apply a patch on that file. This would probably cause most sysadmins to cringe in terror, but it's worth mentioning :-)
0
0
0
0
2012-11-27T22:17:00.000
2
1.2
true
13,594,164
0
0
1
2
I'm deploying my Django app with Dotcloud. While developing locally, I had to make changes inside the code of some dependencies (that are in my virtualenv). So my question is: is there a way to make the same changes on the dependencies (for example django-registration or django_socketio) while deploying on dotcloud? Thank you for your help.
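The import foo; foo.__file__ trick from the last answer has a stdlib equivalent that locates a module's source file without importing it first, which is handy inside a postinstall script. Sketched here against the stdlib json module as a stand-in for a real dependency:

```python
import importlib.util

def module_source_path(name):
    """Return the file a module would be loaded from, without importing it.

    This is the modern equivalent of `import foo; foo.__file__`, which
    the answer above suggests using to locate a dependency before
    applying a patch to it.
    """
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

path = module_source_path("json")
```

A postinstall script would then open that path (or its sibling files) and apply the patch, with all the fragility the answer warns about.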
Tracking online status?
13,610,314
0
1
302
0
javascript,python,html,django,web-applications
As Sanjay says, prefer in-memory solutions (online statuses are short-lived) like the Django cache (Redis or Memcache). If you want a simple way of updating the online status of a user on an already loaded web page, use any lib like jQuery to AJAX-poll a URL giving the status of a user, and then update the tiny bit of your page showing the desired status. Don't poll this page too often; once every 15 seconds seems reasonable.
0
0
0
0
2012-11-28T16:38:00.000
2
0
false
13,609,985
0
0
1
2
I am quite new to web development and am working on this social networking site. Now I want to add functionality to show if a person is online. Now one of the ways I figure out doing this is by keeping online status bit in the database. My question is how to do it dynamically. Say the page is loaded and a user (say connection) comes online. How do I dynamically change status of that connection on that page. I wanted to know if there are any tools(libraries) available for this type of tracking. My site is in python using django framework. I think something can be done using javascript/ jquery . I want to know if I am going in the right direction or is there anything else I should look into?
Tracking online status?
13,610,508
1
1
302
0
javascript,python,html,django,web-applications
Create a new model with a last_activity DateTimeField and a OneToOneField to User. Alternatively, if you are subclassing User, using a custom User in django 1.5, or using a user profile, just add the field to that model. Write a custom middleware that automatically updates the last_activity field for each user on every request. Write an is_online method in one of your models that uses a timedelta to determine a user's inactivity period to return a boolean for whether they are online. For example, if their last_activity was more than 15 minutes ago, return False. Write a view that is polled through jQuery ajax to return a particular user's online status.
0
0
0
0
2012-11-28T16:38:00.000
2
0.099668
false
13,609,985
0
0
1
2
I am quite new to web development and am working on a social networking site. I want to add functionality to show whether a person is online. One of the ways I can think of doing this is by keeping an online-status bit in the database. My question is how to do it dynamically. Say the page is loaded and a user (say, a connection) comes online. How do I dynamically change the status of that connection on that page? I wanted to know if there are any tools (libraries) available for this type of tracking. My site is in Python using the Django framework. I think something can be done using JavaScript/jQuery. I want to know if I am going in the right direction or if there is anything else I should look into.
How to set up a WSGI server to run similarly to Apache?
13,619,836
3
1
300
0
python,tornado,wsgi,cherrypy
What you are after would happen anyway with most WSGI servers. Any Python exception only affects the current request; the framework or WSGI server catches the exception, logs it and translates it to an HTTP 500 status page. The application stays in memory and continues to handle future requests. What it comes down to is what exactly you mean by 'crashes the Apache process'. It would be rare for your code to crash the whole process, as in causing it to exit completely due to something like a core dump. So you may be confusing terminology, equating an application-level language error with a full process crash. Even if you did find a way to crash a process, Apache/mod_wsgi handles that okay and the process will be replaced. The Gunicorn WSGI server will also do that. CherryPy will not, unless you have a process manager running which monitors it and restarts it. Tornado in its single-process mode has the same problem. Using Tornado as the worker in Gunicorn is one way around that; I also believe Tornado itself may now have a process manager for running multiple processes, which allows it to restart processes if they die. Do note that if the application bug which caused the Python exception is bad enough to corrupt state within the process, subsequent requests may have issues. This is the one difference from PHP. With PHP, after any request, successful or not, the application is effectively thrown away and doesn't persist, so buggy code cannot affect subsequent requests. In Python, because the process keeps its loaded code and state between requests, you could technically get into a state where you would have to restart the process to fix it. I don't know of any WSGI server that has a mechanism to automatically restart a process if one request returned an error response.
0
1
0
1
2012-11-29T04:49:00.000
2
0.291313
false
13,619,021
0
0
1
1
I'm coming from the PHP/Apache world where running an application is super easy. Whenever a PHP application crashes, the Apache process running that request will stop, but the server will still be running happily and responding to other clients. Is there a way to have a Python application work in a similar way? How would I set up a WSGI server like Tornado or CherryPy so it will work similarly? Also, how would I run several applications from one server with different domains?
Virtualenv and python - how to work outside the terminal?
13,619,252
2
3
2,074
0
python,virtualenv
Tell Eclipse or Idle that the python interpreter is django_venv/bin/python instead of /usr/bin/python
0
1
0
0
2012-11-29T04:55:00.000
2
1.2
true
13,619,088
1
0
1
1
When I enter my virtual environment (source django_venv/bin/activate), how do I make that environment transfer to apps run outside the terminal, such as Eclipse or even Idle? Even if I run Idle from the virtualenv terminal window command line (by typing idle), none of my pip installed frameworks are available within Idle, such as SQLAlchemy (which is found just fine when running a python script from within the virtual environment).
Sharding a Django Project
13,639,532
1
3
1,300
1
python,django,postgresql,sharding
I agree with @DanielRoseman. Also, how many is too many rows? If you are careful with indexing, you can handle a lot of rows with no performance problems. Keep your indexed values small (ints). I've got tables in excess of 400 million rows that produce sub-second responses even when joining with other many-million-row tables. It might make more sense to break the user up into multiple tables, so that the user object has a core of commonly used things and the "profile" info lives elsewhere (a standard Django setup). Copies would be a small table referencing books, which holds the bulk of the data. Considering how much RAM you can put into a DB server these days, sharding before you have to seems wrong.
0
0
0
0
2012-11-29T07:32:00.000
2
0.099668
false
13,620,867
0
0
1
1
I'm starting a Django project and need to shard multiple tables that are likely to all have too many rows. I've looked through threads here and elsewhere, and followed the Django multi-db documentation, but am still not sure how it all stitches together. My models have relationships that would be broken by sharding, so it seems the options are to either drop the foreign keys or forgo sharding the respective models. For argument's sake, consider the classic Author, Publisher and Book scenario, but throw in book copies and users that can own them. Say books and users had to be sharded. How would you approach that? A user may own a copy of a book that's not in the same database. In general, what are the best practices you have used for routing and the sharding itself? Did you use Django database routers, manually select a database inside commands based on your sharding logic, or override some parts of the ORM to achieve that? I'm using PostgreSQL on Ubuntu, if it matters. Many thanks.
Transactions in Web2Py over Google App Engine
13,892,159
1
0
192
0
python,google-app-engine,web2py
Mutual exclusion is already built into the DBMS, so we just have to use it. Let's take an example. First, the table in your model should be defined so that the room number is unique (use a UNIQUE constraint). When User1 and User2 both query for a room, they should get a response saying the room is vacant. When both users send the "BOOK" request for that room at the same time, the booking function should directly insert both users' "BOOK" requests into the db, but only one will actually succeed (because of the UNIQUE constraint) and the other will raise a DAL exception. Catch the exception and respond to the user whose "BOOK" request was unsuccessful, saying: you just missed this room by an instant :-) Hope this helped.
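The pattern is independent of web2py's DAL. Here is a minimal sketch using stdlib sqlite3, where a UNIQUE constraint plays the same role (the table and column names are invented for the example; web2py would raise its own DAL exception instead of sqlite3.IntegrityError):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE booking (room TEXT UNIQUE, user TEXT)")

def book(conn, room, user):
    """Insert the booking unconditionally and let the UNIQUE
    constraint arbitrate between concurrent attempts."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "INSERT INTO booking (room, user) VALUES (?, ?)",
                (room, user),
            )
        return True
    except sqlite3.IntegrityError:
        # Someone else booked the room a moment earlier.
        return False
```

Whoever inserts first wins; the loser gets a clean failure to report back to the user, with no check-then-insert race.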
0
1
0
0
2012-11-29T09:45:00.000
1
0.197375
false
13,622,895
0
0
1
1
I'm making a room reservation system in Web2Py over Google App Engine. When a user is booking a Room the system must be sure that this room is really available and no one else have reserved it just a moment before. To be sure I make a query to see if the room is available, then I make the reservation. The problem is how can I do this transaction in a kind of "Mutual exclusion" to be sure that this room is really for this user? Thank you!! :)
django site name needs to match domain name registered?
13,623,382
0
0
102
0
python,django,django-sites
No. The Django site name does not have anything to do with how it's hosted - it's purely used for internal stuff like displaying the name on the site itself and on emails.
0
0
0
0
2012-11-29T10:02:00.000
1
1.2
true
13,623,206
0
0
1
1
I am new to the Django framework. I created a site with the name "project" and it is working on my local machine. Now I am trying to move it to my test server ("ideometrics.se"). I created a subdomain ("project.ideometrics.se") to access this application from that subdomain. Do I have to change my Django site name to "project.ideometrics.se" to make it work on my server? Any help is appreciated.
How to use browser cookies programmatically
13,628,291
0
0
925
0
java,python,cookies,http-headers,httpwebrequest
When you send the login information (and usually in response to many other requests) the server will set some cookies on the client. You must keep track of them and send them back to the server with each subsequent request. A full implementation would also keep track of how long they are supposed to be stored.
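A sketch of both options with the Python standard library: let a cookie jar track Set-Cookie headers automatically, or paste the browser's cookies in as a raw header. The cookie names below are only examples; use whatever your target site actually sets:

```python
from http.cookiejar import CookieJar  # `cookielib` on Python 2
import urllib.request

# Option 1: a jar that stores cookies the server sets during login
# and sends them back automatically on every later request.
jar = CookieJar()
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(jar)
)
# opener.open(login_url, login_form_data)  # jar now holds the session

# Option 2: build a raw Cookie header from name/value pairs copied
# out of the browser's cookie store (e.g. via its developer tools).
def cookie_header(pairs):
    return "; ".join("%s=%s" % (name, value) for name, value in pairs)
```

Option 2 is what "feeding the browser cookie to the request" amounts to: attach the resulting string as the Cookie header of each request.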
0
0
1
0
2012-11-29T14:42:00.000
1
1.2
true
13,628,190
0
0
1
1
I have a crawler that automates the login and crawling for a website, but since the login was changed it is not working anymore. I am wondering: can I feed the browser cookie (i.e., I manually log in) to my HTTP request? Is there anything wrong in principle that would keep this from working? How do I find the browser cookies relevant to the website? If it works, how do I get the "raw" cookie strings I can stick into my HTTP request? I am quite new to this area, so forgive my ignorant questions. I can use either Python or Java.
last_accessed time in beaker session always None, but _accessed_time is changing
13,762,515
1
1
194
0
python,mod-wsgi,wsgi,beaker
It turns out this behaviour is down to multiprocessing via apache. It was resolved by using an external store to manage tracking when the session ID is first seen, and maintaining my own 'last_accessed_time' etc.
0
0
0
0
2012-11-30T12:19:00.000
1
0.197375
false
13,645,120
0
0
1
1
I'm using beakers WSGI SessionMiddleware to manage a session between browser and application. I am trying to differentiate between when a session is first accessed against any further requests. Fom the docs it appears there are two useful values made available in the WSGI environment, ["beaker.session"].last_accessed and ["beaker.session"]["_accessed_time"] However, on repeated requests ["beaker.session"].last_accessed is always returning None, while the timestamp value in ["beaker.session"]["_accessed_time"] can be seen to be increasing with each request. Each request performs a ["beaker.session"].save() - I have tried various combinations of setting auto=True in the session, and using .save() / .persist(), but no joy : .last_accessed is always None. I am not using the session to actually persist any data, only to manage the creation of and pass through the session.id. ( I am using a session type of 'cookie' )
How can I disable javascript in firefox with selenium?
29,955,598
0
3
5,735
0
python
You can disable JavaScript directly from the browser. Steps: type about:config in the URL bar; click "I'll be careful, I promise"; search for javascript.enabled; right click -> Toggle so the value becomes false.
0
0
1
0
2012-12-01T01:35:00.000
2
0
false
13,655,486
0
0
1
1
How can I add preferences to the browser so it launches without javascript?
When a web backend does more than simply reply to requests, how should my application be structured?
13,775,086
1
2
183
0
python,architecture,rabbitmq,web-frameworks,gevent
My first thought is that you could use a service oriented architecture to separate these tasks. Each of these services could run a Flask app on a separate port (or machine (or pool of machines)) and communicate to each other using simple HTTP. The breakdown might go something like this: GameService: Handles incoming connections from players and communicates with them through socketio. GameFinderService: Accepts POST requests from GameService to start looking for games for player X. Accepts GET requests from GameService to get the next best game for playerX. You could use Redis as a backing store for this short-lived queue of games per connected player that gets updated each time GameStatusService (below) notifies us of a change. GameStatusService: Monitors in-progress games via UDP and when a notable event occurs e.g. new game created, player disconnects, etc it notifies GameFinderService of the change. GameFinderService would then update its queues appropriately for each connected player. Redis is really nice because it serves as a data structure store that allows you to maintain both short and long lived data structures such as queues without too much overhead.
0
0
0
0
2012-12-01T05:34:00.000
1
0.197375
false
13,656,736
0
0
1
1
I'm creating a website that allows players to queue to find similarly skilled players for a multiplayer video game. Simple web backends only modify a database and create a response using a template, but in addition to that, my backend has to: Communicate with players in real-time (via gevent-socketio) while they queue or play Run calculations in the background to find balanced games, slowly compromising game quality as waiting time grows (and inform players via SocketIO when a game has been found) Monitor in progress games via a UDP socket (and if a player disconnects, ask the queue for a substitute) and eventually update the database with the results I know how I would do these things individually, but I'm wondering how I should separate these components and let them communicate. I imagine that my web framework (Flask) shouldn't be very involved at all in these other things. Since I already must use gevent, I'm currently planning to start separate greenlets for each of these tasks. This will work for all my tasks (with the possible exception of the calculations) because they will usually be waiting for something to happen. However, this won't scale at all because I can't run more Flask instances. Everything would be dependent on the greenlets running in just a single thread. So is this the best way? Is there another way to handle separating these tasks (especially with languages I might use in the future that don't have coroutines)? I've heard of RabbitMQ/ZeroMQ and Celery and other such tools, but I wasn't sure how and whether to use them to solve this problem.
New Django middleware not getting called
13,690,608
2
2
1,449
0
python,django,amazon-ec2,memcached,django-middleware
It was a silly glitch. I found out that I needed to reload the gunicorn server to make the new middleware work. Thanks everybody for the help.
0
0
0
0
2012-12-01T14:17:00.000
3
1.2
true
13,660,301
0
0
1
1
I am quite new to web development. I am working on a website hosted on an Amazon EC2 server. The site is in Python using the Django framework. I am using memcached to cache some client information. My site and caching work on my local machine but not on the EC2 server. I checked the memcached server and found out that it was not able to set the keys. Is there something I might need to change in settings.py so that keys are set appropriately on the server, or something else that I might be missing? EDIT: Found out the problem. I added a new middleware for setting keys in the memcache. It is not getting called. It works perfectly on the local machine. On the server I am using gunicorn as the app server and nginx as the reverse proxy. Can either of these cause the problem? Also, I tried to reload nginx but that didn't help either.
How to share the same model on another AWS Instance
13,664,767
2
0
33
0
python,django
Instead of writing the same data models twice, you can create a small Django app (which will contain the model definitions and logic) as a Python module and install it on both servers / apps.
0
0
0
0
2012-12-01T22:40:00.000
1
1.2
true
13,664,482
0
0
1
1
I want to write my first Python program using Django. The site will be hosted on Amazon. However my API will use Django and Piston sitting on another instance. I don’t want to have to replicate my Models across two servers. How can I get the API to share the same model as the main Django instance, or should I?
Where is the django admin media folder situated?
13,666,236
2
0
1,602
0
python,django
Make sure you have STATIC_ROOT defined in your settings. Define STATIC_URL. Use the python manage.py collectstatic command to collect every static file from every app (including contrib.admin) into your STATIC_ROOT folder.
0
0
0
0
2012-12-02T02:54:00.000
1
1.2
true
13,665,968
0
0
1
1
My admin css is not working. I tried to find it in folder: /usr/local/lib/python2.7/site-packages/django/contrib/admin There is no media folder there. I am using Django 1.5a.
Advice on which language to pursue for browser automation & scraping
13,672,402
1
2
155
0
c#,asp.net,python,web2py,browser-automation
Selenium is a pretty good library for automation if you want to scrape information off of javascript enabled pages. It has bindings for a number of languages. If you only want basic scraping though, I would go with Mechanize; no need to open a browser.
0
0
1
0
2012-12-02T18:28:00.000
1
0.197375
false
13,672,346
0
0
1
1
Novice to programming here. I have most of my experience in Python and am comparing it to C#. I have created small web apps using web2py and have read 'Learn Python the Hard Way'. I have little to no C# experience besides setting up and playing in VS. My end goal is to be able to develop web apps (so far I do like web2py), and even some web automation programs with GUIs. For example, an application that will allow me to put/get information in a database from my GUI, and then post it to my sites via a database connection, or post to other sites that are not mine, through automation. I really like Python so far, but I feel that since I want to work with GUI applications, C# may be the best bet... More specifically, does Python even compare, or have modules/libraries that will help me do GUI web & browser automation, versus C#? How about with just basic scraping, pulling data from numerous sites to display in a database? Does Python still have an edge? Thanks. I hope this question has some objectivity to it considering the different libraries and modules available. If it is too subjective, please accept my apologies.
How will my usage of manage.py and django-admin.py change with virtualenv?
13,683,513
0
0
72
0
python,django
No. The only thing virtualenv does is create an environment that has its own installation directories and doesn't share libraries with other virtualenv environments (and optionally doesn't access the globally installed libraries either). It just means that your project will use libraries and packages from the virtualenv, so you won't have to change your manage.py.
0
0
0
0
2012-12-03T12:29:00.000
2
0
false
13,683,289
1
0
1
1
I have completely shifted all my packages to a virtualenv, but my project files were generated by the global Django installation. I want to know what changes I need to make to the manage.py file, and whether I need to use the virtualenv's django-admin.py file now.
Application that uses Django models need to be a Django app?
13,689,659
2
0
56
0
python,django
No, importing your models is enough, as long as you have Django installed and correctly configured.
0
0
0
0
2012-12-03T18:42:00.000
1
1.2
true
13,689,617
0
0
1
1
What is the definition of a Django application? Any application that uses Django features, such as the ORM and URL-view mapping? I ask because I have a component which has two sub-components: a web service server and a standalone application. The web service server uses Django views to map URLs to request handlers. Both the web service server and the standalone application use Django models and a database managed by Django. The web service server obviously needs to be a Django application. Must the standalone application be a Django application as well? Thanks in advance.
Parsing data to Objective-C with XML or JSON with Python / Django backend
13,690,743
1
0
401
0
python,objective-c,xml,django,json
It really depends on the data you need to represent. If you need to represent programming-language objects, JSON is probably your best choice, being more lightweight and human-readable than XML. If you need to represent a complex data structure with its own custom schema, you will probably want to give XML a shot. That being said, Objective-C provides both XML and JSON parsers.
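A quick stdlib comparison on the Python/Django side, with a made-up recipe-like record: JSON round-trips straight to native dicts in one call each way, while XML requires building and walking an explicit tree:

```python
import json
import xml.etree.ElementTree as ET

recipe = {"name": "Dal", "servings": 4}

# JSON: maps directly onto dicts/lists.
payload = json.dumps(recipe)
restored = json.loads(payload)

# XML: the same record as an explicit element tree.
root = ET.Element("recipe")
ET.SubElement(root, "name").text = recipe["name"]
ET.SubElement(root, "servings").text = str(recipe["servings"])
xml_payload = ET.tostring(root)
```

On the Objective-C side the trade-off mirrors this: NSJSONSerialization hands back dictionaries and arrays, whereas an XML parser walks elements.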
0
0
0
0
2012-12-03T19:46:00.000
1
1.2
true
13,690,514
0
0
1
1
OK all, I have a question for you... We're currently looking at building an iPhone app with Objective-C, and we're going to use Python/Django as the back-end, as the website is already built, meaning all the content is already stored in the database. We're going to use an app called Tastypie as our API, which can expose the data in either JSON or XML format. However, I want to know which is going to be best for my needs: JSON or XML? The data that is going to be pulled is a directory list which will display a map within each property, then a page which will display a load of recipes. If you could give your thoughts on which of JSON or XML is going to be the best to use, that would be awesome! :) If you need to know any more information, please let me know. Thanks, Josh
upsert throws error when I pass Id as externalIdFieldName in Beatbox API while contacting Salesforce
13,695,617
3
0
565
0
python,api,salesforce
If you already know the salesforce Id of the record, just call update instead.
0
0
1
0
2012-12-04T02:48:00.000
2
1.2
true
13,695,322
0
0
1
1
I'm using beatbox API to update/insert data in to salesforce opportunity object. upsert() throws INVALID FIELD error when I pass Id as externalIDFieldName. Currently I'm using another unique external Id and it's working fine but I want to use the salesforce Id. Please shed some light on what I'm missing.
Can I have multiple virtualenvs on the same computer with the same name?
13,697,179
5
4
1,400
0
python,django,virtualenv
It is possible to create multiple virtualenvs with the same name; they must be in different parent directories, however. Alternately, you could create multiple virtualenvs in the same parent directory, but with different names.
0
0
0
0
2012-12-04T05:48:00.000
1
1.2
true
13,696,872
1
0
1
1
I am making the base skeleton of some Django project files so that I can put them on git, and whenever I need to make a new Django site I can grab the files from git and start a blank project. In my fabfile, I'm generating a virtualenv named virutalenv. I just want to know: if I need to make many sites on a single computer, they will all have the same name, but each will live in its own project directory. Is that OK?
Can I store a blob with a key_name with Google Appengine ndb?
13,714,228
1
1
337
0
python,google-app-engine,blob,blobstore
When you upload data to the blobstore you receive a blob_key and a file_name. The blob_key is unique. The file_name is NOT unique. When you do another upload with the same file_name a new version is stored in the blobstore with the same file_name and a new unique blob_key. The first blob is NOT deleted. You have to do it yourself. To administer these uploaded blobs, you create a datastore entity with your own key_name. You can use the file_name for this purpose. And you can use a BlobKeyProperty (NDB) or blobstore.BlobReferenceProperty (datastore) in this entity to reference your blob (to save your blob_key reference). In this way your key_name / file_name uniquely identifies your blob.
0
1
0
0
2012-12-04T16:54:00.000
1
1.2
true
13,707,922
0
0
1
1
I am building a service where you can upload images. On the blob creation I would like to supply a key_name, which will be used by the relevant entity to retrieve it later.
Can I divide the models into different files in Django?
13,718,988
0
8
2,427
0
python,django,django-models
You can separate the model file like this: create a models/ package containing __init__.py, usermodels.py and othermodel.py. In __init__.py put: from usermodels import * and from othermodel import *. And in each *models.py, add a Meta class with app_label = 'appName'.
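The re-export trick is plain Python, independent of Django. This runnable sketch builds such a package on disk (file and class names are invented stand-ins for real models) and shows that callers can keep importing from the single models package; in Django you would additionally set Meta.app_label as the answer notes:

```python
import os
import sys
import tempfile

# Lay out a `models` package split across two files, with the
# package __init__ re-exporting every model name.
base = tempfile.mkdtemp()
pkg = os.path.join(base, "models")
os.mkdir(pkg)

files = {
    "__init__.py": "from .usermodels import *\nfrom .othermodel import *\n",
    "usermodels.py": "class UserProfile(object):\n    pass\n",
    "othermodel.py": "class Book(object):\n    pass\n",
}
for name, body in files.items():
    with open(os.path.join(pkg, name), "w") as f:
        f.write(body)

sys.path.insert(0, base)
import models  # callers still just `import models`
```

Both classes are now reachable as models.UserProfile and models.Book, even though they live in separate files.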
0
0
0
0
2012-12-05T07:59:00.000
5
0
false
13,718,656
0
0
1
1
Currently all my models are in models.py. It's becoming very messy. Can I have a separate file like base_models.py, so that I can put my main models there, the ones I don't want to touch? The same question applies to views: can I put them in a separate folder rather than developing a new app?