Dataset columns (name · dtype · observed range):
Title · string · length 11 to 150
A_Id · int64 · 518 to 72.5M
Users Score · int64 · -42 to 283
Q_Score · int64 · 0 to 1.39k
ViewCount · int64 · 17 to 1.71M
Database and SQL · int64 · 0 to 1
Tags · string · length 6 to 105
Answer · string · length 14 to 4.78k
GUI and Desktop Applications · int64 · 0 to 1
System Administration and DevOps · int64 · 0 to 1
Networking and APIs · int64 · 0 to 1
Other · int64 · 0 to 1
CreationDate · string · length 23
AnswerCount · int64 · 1 to 55
Score · float64 · -1 to 1.2
is_accepted · bool · 2 classes
Q_Id · int64 · 469 to 42.4M
Python Basics and Environment · int64 · 0 to 1
Data Science and Machine Learning · int64 · 0 to 1
Web Development · int64 · 1 to 1
Available Count · int64 · 1 to 15
Question · string · length 17 to 21k
django admin inline with only one form without add another option
44,101,423
5
3
1,934
0
python,django,inline
Instead of using ForeignKey, use OneToOneField and the admin will display just one item, without the "Add another" link.
0
0
0
0
2015-08-06T03:41:00.000
1
0.761594
false
31,846,436
0
0
1
1
I have a foreign key in a model and I am creating an inline on the admin side. I passed extra=0 to display only one form, and that works, but I still get an "Add another model" link in the admin. I don't want to display "Add another model", just the one form. How can I remove the "Add another" option from the admin?
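The accepted approach can be sketched in admin.py. The model names here (Author, Profile) are hypothetical stand-ins, not from the original question:

```python
# admin.py -- a minimal sketch, assuming a hypothetical Profile model whose
# `author` field is a OneToOneField(Author). With OneToOneField the admin
# renders a single inline form and omits the "Add another" link.
from django.contrib import admin
from .models import Author, Profile  # hypothetical models

class ProfileInline(admin.StackedInline):
    model = Profile
    extra = 0
    max_num = 1        # belt-and-braces: never offer more than one form
    can_delete = False

class AuthorAdmin(admin.ModelAdmin):
    inlines = [ProfileInline]

admin.site.register(Author, AuthorAdmin)
```

max_num = 1 achieves the same visual result even with a ForeignKey, if switching the field type is not an option.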
Django runserver - page is loading in other PCs
31,848,062
0
0
51
0
django,python-2.7
If both computers are on the same network, you can use the local IP and the port you passed to the runserver command. For instance, if the computer running the Django app has an IP of 192.168.1.145, go to http://192.168.1.145:8000 to access your app from other computers on the same network. If it's about accessing the app from computers on different networks, that's what servers are for. If you have to serve the app from your own computer, you need to get a static IP (it's not recommended, though); call your ISP for a static IP.
0
0
0
0
2015-08-06T05:00:00.000
1
0
false
31,847,163
0
0
1
1
I am running the Django runserver from my MacBook at home. I'm able to load the page on my Mac, but when I copy the link and try to load the page on another PC, the page does not load. Why? Please help.
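One detail worth adding as a hedge: by default runserver binds only to 127.0.0.1, so other machines cannot reach it even on the same network. Binding to all interfaces is typically done like this (port 8000 assumed):

```shell
# Bind the development server to all interfaces so other PCs on the
# same network can reach it via this machine's LAN IP.
python manage.py runserver 0.0.0.0:8000

# Then, from another PC on the same network, browse to:
#   http://<your-LAN-IP>:8000/
# Depending on your Django version/settings you may also need to add
# that IP or hostname to ALLOWED_HOSTS in settings.py.
```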
Django (grappelli): how add my own css to all the pages or how to extend admin's base.html?
31,850,151
2
1
3,792
0
python,django,django-admin,django-grappelli
If you want to change the appearance of the admin in general, you should override the admin templates; this is covered in detail in the Django documentation under "Overriding admin templates". Sometimes you can just extend the original admin file and then override a block, for example {% block extrastyle %}{% endblock %} in django/contrib/admin/templates/admin/base.html. If your style is model-specific, you can add additional assets via the inner Media class in your admin.py, e.g.: class MyModelAdmin(admin.ModelAdmin): class Media: js = ('js/admin/my_own_admin.js',); css = {'all': ('css/admin/my_own_admin.css',)}
0
0
0
0
2015-08-06T07:50:00.000
4
0.099668
false
31,849,867
0
0
1
1
In Django grappelli, how can I add my own css files to all the admin pages? Or is there a way to extend admin's base.html template?
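The Media class from the answer above, reformatted as a runnable sketch (the asset paths are the answer's own examples, resolved relative to your static files configuration; MyModel is hypothetical):

```python
# admin.py -- attach model-specific CSS/JS to the admin via the inner
# Media class. Paths are looked up through Django's static files machinery.
from django.contrib import admin
from .models import MyModel  # hypothetical model

class MyModelAdmin(admin.ModelAdmin):
    class Media:
        js = ('js/admin/my_own_admin.js',)
        css = {
            'all': ('css/admin/my_own_admin.css',),
        }

admin.site.register(MyModel, MyModelAdmin)
```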
Celery worker stops consuming from a specific queue while it consumes from other queues
32,602,968
1
0
1,570
0
python,django,rabbitmq,celery
I found the problem in my code: in one of my tasks I was opening a connection using urllib3, and it was hanging. After moving that portion out into an async task, things are working fine now.
0
1
0
0
2015-08-06T19:12:00.000
1
1.2
true
31,863,996
0
0
1
1
I am using rabbitmq as the broker, and there is strange behaviour that happens in my production environment only. Randomly, my celery worker sometimes stops consuming messages from one queue while it keeps consuming from other queues. This leads to a pileup of messages in the queue; if I restart celeryd, everything starts to work fine again. "/var/logs/celeryd/worker" does not indicate any error. I am not even sure where to start looking, as I am new to python/django. Any help will be greatly appreciated.
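Since the root cause was a network call with no timeout hanging inside a task, a defensive sketch is to give every urllib3 request an explicit timeout so a hang becomes a catchable exception. The timeout and retry values here are illustrative assumptions, not from the original answer:

```python
# A hung request blocks the worker indefinitely; an explicit timeout
# turns the hang into an exception the task can handle and log.
import urllib3

http = urllib3.PoolManager(
    timeout=urllib3.Timeout(connect=2.0, read=10.0),  # illustrative values
    retries=urllib3.Retry(total=2),
)

def fetch(url):
    try:
        return http.request("GET", url).data
    except urllib3.exceptions.MaxRetryError:
        return None  # log it and let the task finish instead of hanging
```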
Installing Scrapy on Python VirtualEnv
33,439,385
0
1
2,964
0
python,scrapy,virtualenv
It's not possible to do what I wanted to do on the GoDaddy plan I had.
0
0
0
0
2015-08-06T21:51:00.000
3
1.2
true
31,866,429
0
0
1
1
Here's my problem: I have a shared hosting account (GoDaddy Linux Hosting package) and I'd like to create a .py file to do some scraping for me. To do this I need the scrapy module (scrapy.org). Because of the shared account I can't install new modules, so I installed VirtualEnv and created a new virtual env that has pip, wheel, etc. preinstalled. Running pip install scrapy does NOT complete successfully, because scrapy has lots of dependencies like libxml2 and it also needs the python-dev tools. If I had access to 'sudo apt-get ...' this would be easy, but I don't. I can only use pip and easy_install. So how do I install the python dev tools? And how do I install the dependencies? Is this even possible? Cheers
flask application deployment: rabbitmq and celery
31,885,764
1
0
603
0
python,deployment
I don't see why you couldn't deploy on the same node (that's essentially what I do when I'm developing locally), but if you want to be able to rapidly scale you'll probably want them to be separate. I haven't used rabbitmq in production with celery, but I use redis as the broker and it was easy for me to get redis as a service. The web app sends messages to the broker and worker nodes pick up the messages (and perhaps provide a result to the broker). You can scale the web app, broker service (or the underlying node it's running on), and the number of worker nodes as appropriate. Separating the components allows you to scale them individually and I find that it's easier to maintain.
0
1
0
0
2015-08-07T14:01:00.000
1
0.197375
false
31,879,606
0
0
1
1
My web app uses celery for async jobs and rabbitmq for messaging, etc. The standard stuff. When it comes to deployment, are rabbitmq and celery normally deployed on the same node where the web app is running, or separately? What are the differences?
Can't see permissions for new model in Django's admin interface
44,893,888
0
1
725
0
django,python-2.7,django-admin
I fixed a very similar issue today where I couldn't assign Users permissions concerning tables that were created in multiple databases because those tables didn't appear in the list of "available permissions." It appears that I accidentally migrated the model creation migrations to the default database before I correctly used the --database DATABASE flag with manage.py migrate. So I had the same table names in both the default and auxiliary databases. I dropped the tables in the default database, leaving only the tables in the auxiliary database, and then the tables appeared in the permissions list.
0
0
0
0
2015-08-07T14:52:00.000
1
0
false
31,880,734
0
0
1
1
I registered a new model in Django's admin interface but I can't see any permissions related to it that I can assign to users or groups. Could it be related to the fact that my models come from a different database?
How to I hide my secret_key using virtualenv and Django?
31,883,608
2
16
14,236
0
python,django,virtualenv
A common approach, if you want configurable settings but don't want to store sensitive information in the repo, is to pass it through environment variables. When you need the value, just read os.environ['SECRET'] (even in your settings.py), preferably with a fallback value via os.environ.get(); note that os.environ is a mapping, not a callable. Virtualenv does not help you hide anything; it just prevents your system-wide Python installation from being littered with packages required by a single project.
0
0
0
0
2015-08-07T17:30:00.000
4
0.099668
false
31,883,505
0
0
1
2
I am using Django, Python, virtualenv, virtualenvwrapper and Vagrant. So far I have simply left my secret_key inside of the settings.py file. This works fine locally, but I have already placed my files in Git, and I know this is not acceptable for production (Apache). What is the correct way to go about hiding my secret_key? Should I use virtualenv to hide it?
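The environment-variable approach can be sketched as below. The variable name DJANGO_SECRET_KEY and the dev-only fallback are assumptions for illustration, not part of the original answer:

```python
# settings.py-style sketch: read the secret from the environment and fail
# loudly when it is unset, rather than committing it to the repo.
import os

def get_secret_key():
    # os.environ is a mapping: use [] or .get(), not a call.
    key = os.environ.get("DJANGO_SECRET_KEY")
    if key is None:
        raise RuntimeError("Set the DJANGO_SECRET_KEY environment variable")
    return key

# Dev-only fallback so local runs work; production should set the real value.
os.environ.setdefault("DJANGO_SECRET_KEY", "dev-only-not-for-production")
SECRET_KEY = get_secret_key()
```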
How to I hide my secret_key using virtualenv and Django?
63,311,862
0
16
14,236
0
python,django,virtualenv
The solution I use is to create a file sec.py and place it next to my settings.py file. Then, at line 1 of settings.py, add from .sec import *. Be sure to include the period in front of the module name, and be sure to list sec.py in your .gitignore file.
0
0
0
0
2015-08-07T17:30:00.000
4
0
false
31,883,505
0
0
1
2
I am using Django, Python, virtualenv, virtualenvwrapper and Vagrant. So far I have simply left my secret_key inside of the settings.py file. This works fine locally, but I have already placed my files in Git, and I know this is not acceptable for production (Apache). What is the correct way to go about hiding my secret_key? Should I use virtualenv to hide it?
Django 1.7 Migrations hanging
48,024,289
-1
9
4,210
0
python,django,database-migration
Worth noting for future readers: migrations can hang when trying to apply a migration for an incorrectly sized CharField (this is DB-implementation dependent). I was trying to alter a CharField to be greater than 255 characters and it just hung. Even after terminating the connections as stated, it would not proceed, since a CharField greater than 255 was invalid for my setup (PostgreSQL). TL;DR: ensure your CharField is 255 characters or less; if it must be greater, change the CharField to a TextField and it could fix your problem!
0
0
0
0
2015-08-07T18:41:00.000
3
-0.066568
false
31,884,573
0
0
1
2
I have a Django migration I am trying to apply. It gets made fine (it's small; it only adds a CharField to two different models). However, when I run the actual migrate, it hangs (no failure, no success, it just sits). Through googling I've found that other open connections can interfere with it, so I restarted the DB. However, this DB is connected to continuously running jobs, and new queries sneak in right away. They are small, though, and the last time I tried restarting I THINK I was able to execute my migrate before anything else. Still nothing. Are there any other known issues that cause something like this?
Django 1.7 Migrations hanging
31,884,628
6
9
4,210
0
python,django,database-migration
At least in PostgreSQL you cannot modify tables (even if it's just adding new columns) while there are active transactions. The easiest workaround is usually to: (1) run the migration script (which will hang), then (2) restart your webserver/WSGI container. When restarting your webserver, all open transactions will be aborted (assuming you don't have background processes that also hold open transactions), so as soon as no transactions are blocking your table, the migration will finish.
0
0
0
0
2015-08-07T18:41:00.000
3
1.2
true
31,884,573
0
0
1
2
I have a Django migration I am trying to apply. It gets made fine (it's small; it only adds a CharField to two different models). However, when I run the actual migrate, it hangs (no failure, no success, it just sits). Through googling I've found that other open connections can interfere with it, so I restarted the DB. However, this DB is connected to continuously running jobs, and new queries sneak in right away. They are small, though, and the last time I tried restarting I THINK I was able to execute my migrate before anything else. Still nothing. Are there any other known issues that cause something like this?
How to update my production database in a django/heroku project with a script
31,888,017
1
1
673
0
python,django,postgresql,heroku,django-models
I suggest you update the data locally, make a fixture, commit and push it to your Heroku app, then load the data from the terminal: update the data (locally); make a fixture (manage.py dumpdata); commit and push to Heroku; log in via the terminal (heroku login); load the data (heroku run python manage.py loaddata <fixture>.json).
0
0
0
0
2015-08-07T21:13:00.000
1
1.2
true
31,886,734
0
0
1
1
I want to update a field in my users table in my Django project hosted on Heroku. Is there a way I can run a script (and if so, from where?) that updates a field in the database? I could do this manually in the Django admin, but it would take way too long, as there are a large number of users. Any advice is appreciated.
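The steps above, as a command-line sketch. The app label, model, and fixture filename are placeholders; the heroku CLI is assumed to be installed and authenticated:

```shell
# 1. Locally: update the data, then dump it to a fixture.
python manage.py dumpdata myapp.User --indent 2 > users_fixture.json

# 2. Commit the fixture and push it to Heroku.
git add users_fixture.json
git commit -m "Add users fixture"
git push heroku master

# 3. Load the fixture on the Heroku dyno.
heroku login
heroku run python manage.py loaddata users_fixture.json
```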
How to run a flask app on a remote server from a local system?
31,889,144
5
3
8,067
0
python,flask,remote-access,flask-restful
The problem is not with Flask. The IP the app ends up listening on must belong to the server you want to reach it at; running app.run(host='0.0.0.0') locally does not make the app appear on a remote machine. If you want to launch Flask on a remote server, deploy the code to that server (e.g. over SSH) and run it there in a remote session.
0
0
0
0
2015-08-08T02:47:00.000
1
1.2
true
31,889,124
0
0
1
1
I'm able to run the Flask app on the local system using app.run(). But when I try to run it on a remote server using app.run(host='0.0.0.0',port='81') or app.run(host='<remote ip>'), neither works. I want to know if something else has to be done.
What does "app.run(host='0.0.0.0') " mean in Flask
31,904,923
29
53
132,602
0
python,web,tcp,flask,server
To answer your second question: you can just hit the IP address of the machine your Flask app is running on, e.g. 192.168.1.100, in a browser on a different machine on the same network, and you are there. You will not, however, be able to access it if you are on a different network; firewalls or VLANs can prevent you from reaching your application. If that computer has a public IP, then you can hit that IP from anywhere on the planet and you will be able to reach the app. This usually requires some configuration, since most public servers sit behind some sort of router or firewall.
0
0
0
0
2015-08-09T13:32:00.000
1
1
false
31,904,761
0
0
1
1
I am reading the Flask documentation. I was told that with app.run(host='0.0.0.0') I could make the server publicly available. What does that mean? How can I visit the server from another computer (rather than just localhost:5000 on my own computer)?
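'0.0.0.0' is not an address you visit; it tells the server to accept connections on every network interface of the machine. A minimal stand-in using only the standard library (not Flask itself, which does the same bind under the hood) shows the semantics:

```python
# Binding to 0.0.0.0 accepts connections on all interfaces -- loopback
# (127.0.0.1) and the machine's LAN address alike. Flask's
# app.run(host='0.0.0.0') performs exactly this kind of bind.
import socket

def bind_all_interfaces(port=0):
    # port=0 asks the OS for any free port (Flask would default to 5000).
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", port))
    sock.listen(1)
    host, bound_port = sock.getsockname()
    return sock, host, bound_port

sock, host, port = bind_all_interfaces()
sock.close()
```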
Running ApScheduler in Gunicorn Without Duplicating Per Worker
31,929,832
0
7
1,182
0
python,uwsgi,gunicorn,apscheduler
I'm not aware of any way to do this with either, at least not without some sort of RPC. That is, run APScheduler in a separate process and then connect to it from each worker. You may want to look up projects like RPyC and Execnet to do that.
0
1
0
0
2015-08-10T02:22:00.000
1
0
false
31,910,812
0
0
1
1
The title basically says it all. I have gunicorn running my app with 5 workers. I have a data structure that all the workers need access to, and it is being updated on a schedule by APScheduler. Currently APScheduler runs once per worker, but I want it to run just once, period. Is there a way to do this? I've tried using the --preload option, which lets me load the shared data structure just once, but doesn't seem to let all the workers see it when it updates. I'm open to switching to uWSGI if that helps.
Django select_for_update when row doesn't exist yet
43,764,100
0
0
469
0
python,django,django-models,django-orm
When you make an insert from another thread, the database blocks other insert transactions as long as there is a conflict on a constraint (primary key, unique index, etc.).
0
0
0
0
2015-08-10T21:13:00.000
2
0
false
31,929,287
0
0
1
2
Can you use Django's select_for_update to lock a row that doesn't exist yet, so that other threads can't read it while you create and then save a model?
Django select_for_update when row doesn't exist yet
32,100,507
1
0
469
0
python,django,django-models,django-orm
Answer: no, you cannot do that.
0
0
0
0
2015-08-10T21:13:00.000
2
1.2
true
31,929,287
0
0
1
2
Can you use Django's select_for_update to lock a row that doesn't exist yet, so that other threads can't read it while you create and then save a model?
what is the IP address of my heroku application
31,932,292
3
18
40,929
0
python,django,heroku
To my knowledge you cannot get a fixed IP for a Heroku application. You could create a proxy with a known IP that serves as a middleman for the application. Otherwise, you might want to consider whether Heroku is still the right solution for you.
0
0
0
0
2015-08-11T02:25:00.000
2
0.291313
false
31,932,218
0
0
1
1
In my Django application I'm running a task that requests some data, in the form of JSON, from an API. In order to get this data, I need to give the IP address that the requests are going to come from (my Heroku app). How do I get the IP address from which my Heroku application will make its requests?
Django-admin creates wrong django version inside virtualenv
56,271,468
0
1
1,969
0
python,django,django-admin,virtualenv
I had the same problem. It could be related to your zsh/bash settings. I realized that using zsh (my default) I would get django-admin version 1.11 even though the installed Django version was 2.1! When I tried the same thing with bash, I would get django-admin version 2.1 (the correct version). Certainly a misconfiguration. So I strongly suggest you check your zsh or bash settings for stray paths you might have.
0
0
0
0
2015-08-11T10:48:00.000
4
0
false
31,939,714
0
0
1
2
I've created a new directory, a virtualenv, and installed django-toolbelt inside it. The Django version should be 1.8, but when I call 'django-admin.py version' it says 1.6, so when I start a new project it creates a 1.6 project. I thought virtualenv was supposed to prevent this. What am I doing wrong? Edit: I think it has to do with the PATH(?), as if it's calling the wrong django-admin version. I'm on Windows 7. Still don't know how to fix it.
Django-admin creates wrong django version inside virtualenv
47,748,881
3
1
1,969
0
python,django,django-admin,virtualenv
I came across this problem too. In the official documentation I found that, in a virtual environment, the command 'django-admin' is resolved via PATH (usually to /usr/local/bin on Linux), where 'django-admin.py' is a symlink to some other version of Django. That is ultimately the cause. So there are two ways to solve this problem: (1) re-symlink /usr/local/bin/django-admin (or /usr/local/bin/django-admin.py) to your current version's django-admin (site-packages/django/bin/django-admin.py); NOTE: this is a global change that will affect your other Django projects, so I recommend the second method. (2) cd to your_virtual_env/lib/python3.x/site-packages/django/bin/ (with your virtual environment activated), and then run 'python django-admin.py startproject project_name project_full_path' to create the Django project.
0
0
0
0
2015-08-11T10:48:00.000
4
0.148885
false
31,939,714
0
0
1
2
I've created a new directory, a virtualenv, and installed django-toolbelt inside it. The Django version should be 1.8, but when I call 'django-admin.py version' it says 1.6, so when I start a new project it creates a 1.6 project. I thought virtualenv was supposed to prevent this. What am I doing wrong? Edit: I think it has to do with the PATH(?), as if it's calling the wrong django-admin version. I'm on Windows 7. Still don't know how to fix it.
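A quick way to see which django-admin the shell is picking up versus what the virtualenv actually contains (a sketch: on Windows, use `where` instead of `which`, and `python -m django` requires a reasonably recent Django):

```shell
# With the virtualenv activated:
which django-admin          # should resolve inside the virtualenv, e.g. .../venv/bin/
django-admin --version      # version of whatever the shell found first on PATH
python -m django --version  # version importable by the virtualenv's python

# If the two versions differ, PATH is resolving to a global install;
# invoking `python -m django` sidesteps the stale symlink entirely.
```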
what is a robust way to execute long-running tasks/batches under Django?
31,952,520
1
1
1,698
0
python,django,batch-processing
I'm not sure how your celery configuration makes it unstable, but it sounds like celery is still the best fit for your problem. I'm using redis as the queue system and it works better than rabbitmq in my experience; maybe you can try it and see if it improves things. Otherwise, just use cron as a driver to run periodic tasks: let it run your script periodically and update the database, and your UI component will poll the database with no conflict.
0
1
0
0
2015-08-11T21:31:00.000
1
1.2
true
31,952,327
0
0
1
1
I have a Django app that is intended to be run on VirtualBox VMs on LANs. The typical user will be a savvy IT end-user, not a sysadmin. Part of the app's job is to connect to external databases on the LAN, run some Python batches against those databases and save the results in its local DB. The user can then explore the results using Django pages. Run time for the batches isn't all that long, but it runs to minutes, potentially tens of minutes, not seconds. Run frequency is infrequent at best; I think you could go days without needing a refresh. This is not celery's normal use case of long tasks which eventually push results back into the web UI via AJAX and/or polling. It is more similar to a dev's occasional use of django-admin commands, but this time intended for an end user. The user should be able to initiate a run of one or several of those batches when they want, in order to refresh the calculations for a given external database (the target DB is a parameter to the batch). Until the batches are done for a given DB, the app really isn't usable: you can access its pages, but many functions won't be available. It is very important, from a support point of view, that the batches remain easily runnable at all times. Dropping down to the VM's SSH would probably require frequent handholding, which wouldn't be good; it is best that they can be launched from the Django web pages. What I currently have: each batch is in its own script; I can run it on the command line (via if __name__ == "__main__":). The batches are also hooked up as celery tasks and work fine that way. Given the way I have written them, it would be relatively easy to allow running them from subprocess calls in Python. I haven't really looked into it, but I suppose I could make them into django-admin commands as well. The batches already have their own rudimentary status checks: for example, they can look at the calculated data, tell whether they have been run, and display that in Django pages without needing to look at celery task status backends. The batches themselves are relatively robust and I can make them more so; this is about their launch mechanism. What's not so great: in my Mac dev environment I find the celery/celerycam/rabbitmq stack to be somewhat unstable. It seems as if sometimes rabbitmq's daemon balloons in CPU/RAM use and then needs to be terminated. That mightily confuses the celery processes, and I find I have to kill -9 various tasks and relaunch them manually. Sometimes celery still works but celerycam doesn't, so there are no task updates. Some of these issues may be OS X specific, or may be due to the DEBUG flag being switched on for now, which celery warns about. So then I need to run the batches on the command line, which is what I was trying to avoid, until the whole celery stack has been reset. This might be acceptable on a normal website with an admin watching over it, but I can't have that happen on a remote VM to which only the user has access. Given that these are somewhat fire-and-forget batches, I am wondering if celery isn't overkill at this point. Some options I have thought about: writing a cleanup shell/Python script to restart rabbitmq/celery/celerycam and generally make the stack more robust, i.e. whatever is required to make celery and company more stable (I've already used psutil to figure out whether the rabbit/celery processes are running and display their status in Django); running the batches via subprocess instead and avoiding celery (what about django-admin commands here? does that make a difference? they still need to be run from the web pages); an alternative task/process manager to celery with less capability but also fewer moving parts; not using subprocess but relying on Python's multiprocessing module (to be honest, I have no idea how that compares to launching via subprocess). Environment: nginx, WSGI, Ubuntu on VirtualBox, Chef to build VMs.
why does elastic beanstalk not update?
31,955,222
2
2
2,185
0
python,amazon-web-services,amazon-elastic-beanstalk,pyramid
Are you committing your changes before deploying? eb deploy will deploy the HEAD commit. You can do eb deploy --staged to deploy staged changes.
0
1
0
1
2015-08-12T02:14:00.000
1
0.379949
false
31,954,968
0
0
1
1
I'm new to the world of AWS, and I just wrote and deployed a small Pyramid application. I ran into some problems getting set up, but after I got it working, everything seemed to be fine. However, now my deployments don't seem to be making any difference in the environment (I changed the index.pt file that my root URL routes to, and the change does not register on my-app.elasticbeanstalk.com). Is there some sort of delay to deployments that I am unaware of, or is there a problem with how I'm deploying (eb deploy using the awsebcli package) that's causing these updates not to show?
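The commit-before-deploy point from the answer, as a sketch of the awsebcli workflow (filename and commit message are illustrative):

```shell
# eb deploy ships the most recent *commit*, not your working tree,
# so uncommitted edits silently never reach the environment.
git add index.pt
git commit -m "Update landing page"
eb deploy

# Or, to deploy staged-but-uncommitted changes instead:
git add index.pt
eb deploy --staged
```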
Django: Menu Controller
31,969,174
0
0
101
0
python,django,dynamic,controller
One solution is to refactor your one application into multiple applications inside your project. Each would have its own urls.py & views.py.
0
0
0
0
2015-08-12T13:43:00.000
3
0
false
31,967,031
0
0
1
1
I have a question regarding Django. I created a site and everything works as it is intended to work. The only problem is that my urls.py and views.py files are getting quite bloated (I have one method for every page), and I have one template per page. I use {% extends "basetemplate.html" %} to make it at least a bit generic, but I find this approach not really nice. Creating a method inside urls.py and views.py, in addition to creating a template HTML file, seems like the wrong approach. I already thought about building a big controller and did some googling, but I could not find what I was looking for. Is there something like a best practice to achieve this? How do you handle the number of templates? Any advice would be more than welcome :)
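The refactoring suggested in the answer can be sketched at the project urls.py level. The app names are hypothetical, and the URL style shown is that of Django ~1.8, the era of the question:

```python
# project/urls.py -- delegate each section of the site to its own app,
# so each app keeps its own (smaller) urls.py and views.py.
from django.conf.urls import include, url

urlpatterns = [
    url(r'^blog/', include('blog.urls')),        # blog/urls.py, blog/views.py
    url(r'^shop/', include('shop.urls')),        # shop/urls.py, shop/views.py
    url(r'^accounts/', include('accounts.urls')),
]
```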
Increasing INSERT Performance in Django For Many Records of HUGE Data
31,977,326
1
2
2,130
1
python,mysql,django,database,insert
This is out of Django's scope, really: Django just translates your Python into an INSERT INTO statement. For the most performance at the Django layer, skipping it entirely (by writing raw SQL) might be best, even though Python processing is pretty fast compared to the IO of a SQL database. You should rather focus on the database. I'm a Postgres person, so I don't know what config options MySQL has, but there is probably some fine-tuning available. If you have done that and there is still no improvement, you should consider using SSDs, SSDs in RAID 0, or even an in-memory DB, to cut IO times. Sharding may be a solution too: splitting the tasks and executing them in parallel. If the inserts are not time-critical, i.e. can be done whenever but shouldn't block the page from loading, I recommend celery: there you can queue a task to be executed asynchronously whenever there is time.
0
0
0
0
2015-08-12T23:33:00.000
3
0.066568
false
31,977,138
0
0
1
3
So I've been trying to solve this for a while now and can't seem to find a way to speed up the performance of inserts with Django, despite the many suggestions and tips found on StackOverflow and many Google searches. Basically I need to insert a LOT of data records (~2 million) through Django into my MySQL DB, each record entry being a whopping 180KB. I've scaled my testing down to 2,000 inserts yet still can't get the running time down to a reasonable amount; 2,000 inserts currently take approximately 120 seconds. I've tried ALL of the following (and many combinations of each) to no avail: "classic" Django ORM model creation and .save(); a single transaction (transaction.atomic()); bulk_create; raw SQL INSERT in a for loop; raw SQL "executemany" (multiple value inserts in one query); setting SQL attributes like SET FOREIGN_KEY_CHECKS=0; SQL BEGIN ... COMMIT; dividing the mass insert into smaller batches. Apologies if I forgot to list something; I've tried so many different things at this point that I can't even keep track. I would greatly appreciate a little help in speeding up performance from someone who has had to perform a similar task with Django database insertions. Please let me know if I've left out any necessary information!
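Of the options the question lists, batched executemany inside a single transaction is usually the biggest single win at the driver level. A runnable sketch, using sqlite3 as a stand-in for MySQL (the DB-API shape is the same; with MySQLdb the placeholder style would be %s instead of ?, and the table is hypothetical):

```python
# Batched executemany inside one transaction: one round-trip per batch
# instead of one per row, and a single commit at the end.
import sqlite3

def bulk_insert(conn, rows, batch_size=1000):
    cur = conn.cursor()
    for start in range(0, len(rows), batch_size):
        cur.executemany(
            "INSERT INTO record (payload) VALUES (?)",
            rows[start:start + batch_size],
        )
    conn.commit()  # single commit avoids per-row transaction overhead

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE record (id INTEGER PRIMARY KEY, payload TEXT)")
bulk_insert(conn, [("x" * 100,) for _ in range(2500)])
```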
Increasing INSERT Performance in Django For Many Records of HUGE Data
31,977,679
0
2
2,130
1
python,mysql,django,database,insert
You can also try deleting any indexes on the tables (and any other constraints), then recreate the indexes and constraints after the insert. Updating indexes and checking constraints can slow down every insert.
0
0
0
0
2015-08-12T23:33:00.000
3
0
false
31,977,138
0
0
1
3
So I've been trying to solve this for a while now and can't seem to find a way to speed up the performance of inserts with Django, despite the many suggestions and tips found on StackOverflow and many Google searches. Basically I need to insert a LOT of data records (~2 million) through Django into my MySQL DB, each record entry being a whopping 180KB. I've scaled my testing down to 2,000 inserts yet still can't get the running time down to a reasonable amount; 2,000 inserts currently take approximately 120 seconds. I've tried ALL of the following (and many combinations of each) to no avail: "classic" Django ORM model creation and .save(); a single transaction (transaction.atomic()); bulk_create; raw SQL INSERT in a for loop; raw SQL "executemany" (multiple value inserts in one query); setting SQL attributes like SET FOREIGN_KEY_CHECKS=0; SQL BEGIN ... COMMIT; dividing the mass insert into smaller batches. Apologies if I forgot to list something; I've tried so many different things at this point that I can't even keep track. I would greatly appreciate a little help in speeding up performance from someone who has had to perform a similar task with Django database insertions. Please let me know if I've left out any necessary information!
Increasing INSERT Performance in Django For Many Records of HUGE Data
32,000,816
0
2
2,130
1
python,mysql,django,database,insert
So I found that editing the MySQL /etc/mysql/my.cnf file and configuring some of the InnoDB settings significantly increased performance. I set innodb_buffer_pool_size = 9000M (about 75% of your system RAM) and innodb_log_file_size = 2000M (20%-30% of the above value), restarted the MySQL server, and this cut 50 inserts down from ~3 seconds to ~0.8 seconds. Not too bad! Now I'm noticing the inserts gradually take longer for big data amounts: 50 inserts start at about 0.8 seconds, but after 100 or so batches the average is up to 1.4 seconds and keeps increasing. Will report back if solved.
0
0
0
0
2015-08-12T23:33:00.000
3
1.2
true
31,977,138
0
0
1
3
So I've been trying to solve this for a while now and can't seem to find a way to speed up the performance of inserts with Django, despite the many suggestions and tips found on StackOverflow and many Google searches. Basically I need to insert a LOT of data records (~2 million) through Django into my MySQL DB, each record entry being a whopping 180KB. I've scaled my testing down to 2,000 inserts yet still can't get the running time down to a reasonable amount; 2,000 inserts currently take approximately 120 seconds. I've tried ALL of the following (and many combinations of each) to no avail: "classic" Django ORM model creation and .save(); a single transaction (transaction.atomic()); bulk_create; raw SQL INSERT in a for loop; raw SQL "executemany" (multiple value inserts in one query); setting SQL attributes like SET FOREIGN_KEY_CHECKS=0; SQL BEGIN ... COMMIT; dividing the mass insert into smaller batches. Apologies if I forgot to list something; I've tried so many different things at this point that I can't even keep track. I would greatly appreciate a little help in speeding up performance from someone who has had to perform a similar task with Django database insertions. Please let me know if I've left out any necessary information!
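The settings from the accepted answer, as a my.cnf fragment. The sizes are the answerer's machine-specific values; scale them to your own RAM as described, and note that on older MySQL versions changing innodb_log_file_size may require removing the old ib_logfile* files before restart:

```ini
# /etc/mysql/my.cnf -- InnoDB tuning from the accepted answer.
[mysqld]
innodb_buffer_pool_size = 9000M   ; ~75% of system RAM
innodb_log_file_size    = 2000M   ; ~20-30% of the buffer pool size
```

Restart mysqld after editing for the settings to take effect.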
Secure Python client & server websockets
32,002,762
0
0
237
0
python,django,websocket,tastypie
Your main question, whether you should use HTTPS/WSS to increase security, can be answered easily: yes, you should. The other, somewhat hidden, part of the question, whether the API key will be compromised when sent in clear text, depends on how you use the key: secure APIs send a cryptographic hash over the key and the payload instead of the key in clear text.
0
0
0
0
2015-08-14T04:26:00.000
1
0
false
32,002,270
0
0
1
1
I'm creating an app store for robot applications. The setup is as follows: a robot running Linux and Python; an app-store API running Django Tastypie; a web site with the app store communicating with the API through jQuery. When a user installs an app on the web site, an API call to Django is made. I then need Django to emit a push notification to the robot informing it that new apps are ready to be installed, and the robot should make an API call to retrieve the new app and install it. I was thinking websockets would be the best way to solve this; I would need both the server and client to be written in Python. To complicate things a bit, the API uses key authentication. Is it possible and secure to send the key from the API to the robot using websockets? Should I be using HTTPS and WSS exclusively? Given the myriad of websocket implementations out there, is there one you can recommend for this scenario?
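The "hash over the key and the payload" idea from the answer is essentially an HMAC: the key never travels, only a signature over the message that the robot can recompute. A minimal sketch (the key and payload are illustrative):

```python
# Sign the payload with the shared API key; the receiver recomputes the
# HMAC and compares. The key itself never crosses the wire.
import hashlib
import hmac

def sign(api_key: bytes, payload: bytes) -> str:
    return hmac.new(api_key, payload, hashlib.sha256).hexdigest()

def verify(api_key: bytes, payload: bytes, signature: str) -> bool:
    # compare_digest avoids timing side channels in the comparison
    return hmac.compare_digest(sign(api_key, payload), signature)

sig = sign(b"shared-api-key", b'{"action": "install", "app": 42}')
ok = verify(b"shared-api-key", b'{"action": "install", "app": 42}', sig)
```

Note this protects the key and the payload's integrity, not its confidentiality, so TLS (HTTPS/WSS) is still needed on top.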
Keep Django runserver alive when SSH is closed
32,011,468
5
5
7,057
0
python,django,ssh,server
Since runserver isn't intended to be run in production and is only for development, there is no built-in way to do this. You will need to use a tool like tmux/screen or nohup to keep the process alive even after the spawning terminal closes.
0
1
0
0
2015-08-14T13:32:00.000
4
1.2
true
32,011,375
0
0
1
1
I haven't yet been able to get Apache working with my Django app, so until I do get it working, I'm using runserver on my Linux server in order to demo the app. The problem is that whenever I close the SSH connection to the server, runserver stops running. How can I keep runserver running when I say put my laptop to sleep, or lose internet connectivity? P.S. I'm aware that runserver isn't intended for production.
Pycharm Debugging Django runserver- Can't Connect from phone
32,078,355
0
0
491
0
python,django,visual-studio-2012,pycharm
I just noticed that my PyCharm configuration was using the loopback address 127.0.0.1 while the Visual Studio configuration was using the wildcard address 0.0.0.0. Switching my PyCharm configuration to use 0.0.0.0 solved my problem.
0
0
0
0
2015-08-14T15:46:00.000
1
1.2
true
32,013,929
0
0
1
1
I'm trying to connect to my development server (local django runserver) while running from pycharm. I have a custom build of an android app that points to my ip address and the phone is on the same network. When I use visual studio and python tools to run my django application my phone connects to my PC just fine. When I use Pycharm my phone can't make a connection to the server. Does anyone know what the difference might be between these two environments that might be causing this issue? I'm on windows 8 using the latest version of pycharm. Pycharm is working fine for testing locally I just can't get my phone to connect to django's web server. In pycharm I'm using a Django Server Run Configuration
How can I integrate Bootstrap with Python programming language and the Django framework?
32,014,505
0
1
7,722
0
python,django,twitter-bootstrap,web-applications
Bootstrap is mainly a set of CSS and JavaScript files that you can use from your web application's HTML. You can use whatever backend you want with it.
0
0
0
0
2015-08-14T16:16:00.000
3
0
false
32,014,455
0
0
1
1
I would like to know what is the exact process to follow when building a web application using bootstrap python and django. Is bootstrap simply a template of HTML and CSS files which should be manipulated.
Jinja video and image files
32,084,944
0
0
48
0
google-app-engine-python
Moving the image and video static file handlers above the main.app handler fixed the issue.
0
0
0
0
2015-08-15T01:19:00.000
1
1.2
true
32,020,591
0
0
1
1
I am new to Google App engine and I am using jinja templates for rendering html. My HTML page audio and video files and they are not getting rendered. Is there a way to get the images and video file working using jinja? Thanks for you help in advance.
Django code organisation
32,022,740
4
0
102
0
python,django
If you plan to scale your project, I would suggest moving it to a separate app. Generally speaking, generating PDFs directly on a URL hit is not the best thing to do performance-wise. Generating a PDF file is pretty heavy on your server, so if multiple people do it at the same time, the performance of your system will suffer. As a first step, just put it in a separate class and execute that code from the view. At some point you will probably want to do some permission checks etc. - that stays in the view, while generation of the PDF itself will be cleanly separated. Once you test your code, scale, etc., you can then substitute that one-line call in the view with putting the PDF generation in a queue and only pulling it once it's done - that will allow you to manage your computing power better.
0
0
0
0
2015-08-15T05:55:00.000
2
1.2
true
32,022,024
0
0
1
2
I've recently started working with Django. I'm working on an existing Django/Python based site. In particular I'm implementing some functionality to create and display a PDF document when a particular URL is hit. I have an entry in the app's urls file that routes to a function in the views file and the PDF generation is working fine. However, the view function is pretty big and I want to extract the code out somewhere to keep my view as thin as possible, but I'm not sure of the best/correct approach. I'll probably need to generate other PDFs in due course so would it make sense to create a 'pdfs' app and put code in there? If so, should it go in a model or view? In a PHP/CodeIgniter environment for example I would put the code into a model, but models seem to be closely linked to database tables in Django and I don't need any db functionality for this. Any pointers/advice from more experienced Django users would be appreciated. Thanks
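A minimal sketch of the split the answer suggests (the module layout and names like build_report_pdf are hypothetical, and the real generation code would use a PDF library such as ReportLab): the heavy generation logic lives in a plain module with no HTTP concerns, and the view stays thin.

```python
# services.py-style module: plain Python, easy to test and reuse.
def build_report_pdf(title, lines):
    """Assemble the document body. A real implementation would call a
    PDF library here; this stand-in just joins the text and encodes it."""
    body = "\n".join([title] + list(lines))
    return body.encode("utf-8")


# views.py-style function: only request handling and plumbing.
def report_view(request):
    # Permission checks etc. would go here, then delegate the heavy work.
    content = build_report_pdf("Invoice", ["line 1", "line 2"])
    # In Django this would be wrapped in an HttpResponse with
    # content_type="application/pdf"; returned raw here for illustration.
    return content
```

The one-line delegation in the view is also the natural seam for later moving the generation into a task queue, as the answer describes.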
Django code organisation
32,022,118
2
0
102
0
python,django
Yes you can in principle do it in an app (the concept of reusable apps is the basis for their existence) However not many people do it/not many applications require it. It depends on how/if the functionality will be shared. In other words there must be a real benefit. The code normally goes in both the view/s and in the models (to isolate code and for the model managers)
0
0
0
0
2015-08-15T05:55:00.000
2
0.197375
false
32,022,024
0
0
1
2
I've recently started working with Django. I'm working on an existing Django/Python based site. In particular I'm implementing some functionality to create and display a PDF document when a particular URL is hit. I have an entry in the app's urls file that routes to a function in the views file and the PDF generation is working fine. However, the view function is pretty big and I want to extract the code out somewhere to keep my view as thin as possible, but I'm not sure of the best/correct approach. I'll probably need to generate other PDFs in due course so would it make sense to create a 'pdfs' app and put code in there? If so, should it go in a model or view? In a PHP/CodeIgniter environment for example I would put the code into a model, but models seem to be closely linked to database tables in Django and I don't need any db functionality for this. Any pointers/advice from more experienced Django users would be appreciated. Thanks
How to fix: "RuntimeWarning: Model was already registered."
57,300,254
2
19
8,104
0
django,ipython
Check in your models whether you have a duplicate model class. Sometimes when we rebase or merge existing branches, code can end up duplicated. I had the same problem; it isn't a big deal.
0
0
0
0
2015-08-15T14:02:00.000
4
0.099668
false
32,025,480
0
0
1
4
Since upgrading Django, I've been getting this error in iPython when I do imports: RuntimeWarning: Model 'docket.search' was already registered. Reloading models is not advised as it can lead to inconsistencies, most notably with related models. I'm guessing this is some automatic feature of iPython, but is there an easy solution? Is this something I even need to solve?
How to fix: "RuntimeWarning: Model was already registered."
64,724,742
1
19
8,104
0
django,ipython
It is saying that you have already registered the model before. Hence, deleting the second model, or moving the code into the intended model, is the solution for this.
0
0
0
0
2015-08-15T14:02:00.000
4
0.049958
false
32,025,480
0
0
1
4
Since upgrading Django, I've been getting this error in iPython when I do imports: RuntimeWarning: Model 'docket.search' was already registered. Reloading models is not advised as it can lead to inconsistencies, most notably with related models. I'm guessing this is some automatic feature of iPython, but is there an easy solution? Is this something I even need to solve?
How to fix: "RuntimeWarning: Model was already registered."
57,749,526
26
19
8,104
0
django,ipython
Exactly the same problem had happened to me. The problem was that I had defined a model twice! Removing one of them solved the problem.
0
0
0
0
2015-08-15T14:02:00.000
4
1
false
32,025,480
0
0
1
4
Since upgrading Django, I've been getting this error in iPython when I do imports: RuntimeWarning: Model 'docket.search' was already registered. Reloading models is not advised as it can lead to inconsistencies, most notably with related models. I'm guessing this is some automatic feature of iPython, but is there an easy solution? Is this something I even need to solve?
How to fix: "RuntimeWarning: Model was already registered."
32,769,570
1
19
8,104
0
django,ipython
I have gotten this error because of automatic imports I had in my __init__.py. I had some old code that imported my signals there, and moving that import code to AppConfig instead fixed it.
0
0
0
0
2015-08-15T14:02:00.000
4
0.049958
false
32,025,480
0
0
1
4
Since upgrading Django, I've been getting this error in iPython when I do imports: RuntimeWarning: Model 'docket.search' was already registered. Reloading models is not advised as it can lead to inconsistencies, most notably with related models. I'm guessing this is some automatic feature of iPython, but is there an easy solution? Is this something I even need to solve?
BS4 and BeautifulSoup error from: can't read /var/mail/BeautifulSoup
32,027,116
1
4
4,920
0
python,beautifulsoup,bs4
The way to run a Python program is to type the code into a text file, or to use a Python IDE. The error message you are getting suggests that you are typing Python code at the shell prompt; but the shell understands shell commands, not Python. (Once you have code in a file, typing python filename.py in the shell will run the Python code in the file filename.py.)
0
0
0
0
2015-08-15T15:26:00.000
3
0.066568
false
32,026,201
0
0
1
1
Typing from BeautifulSoup import BeautifulSoup immediately responds with the error "from: can't read /var/mail/BeautifulSoup". Also tried with bs4, same result. Used the Synaptic package manager to uninstall and re-install bs4 and BeautifulSoup. Same result. Tried complete removal and had the same result. Used the terminal and it showed that bs4 and BeautifulSoup are not installed. Using Python 2.7.6. Reviewed questions but only 2 responses and they did not help. Any suggestions?
Django loaddata and 'invalid model identifier' error
32,041,096
0
0
869
0
python,django,django-orm
You should be able to simply change the app name in the JSON and be fine.
0
0
0
0
2015-08-15T21:51:00.000
1
0
false
32,029,739
0
0
1
1
I have this dump.json with data from a Django-project that I'm trying to load (manage.py loaddata) into another project where I've set up the same model/fields etc. The problem is the "app" is not named the same in the two projects. So I get this 'invalid model identifier' -error. What to do?
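Changing the app name in the fixture can be scripted rather than done by hand. A sketch (the labels oldapp/newapp are placeholders for your two app names) that rewrites the "model" field of every record in a Django JSON fixture:

```python
import json


def rename_app(fixture_text, old_label, new_label):
    """Rewrite 'oldapp.modelname' identifiers to 'newapp.modelname'
    in a serialized Django fixture, leaving other apps untouched."""
    records = json.loads(fixture_text)
    for record in records:
        app, _, model = record["model"].partition(".")
        if app == old_label:
            record["model"] = "%s.%s" % (new_label, model)
    return json.dumps(records)
```

The rewritten text can then be saved and loaded with manage.py loaddata as usual.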
How can I convert a python program into a web service?
32,031,097
0
0
1,455
0
php,python,django,web-services
You could use raw sockets in your Python and PHP programs to make them communicate through TCP locally. Make your Python program a TCP server with address 'localhost' and a port number, for example, 5555. Then, in your PHP script, also using sockets, create client code that sends the text to be processed as a TCP request to your Python script.
0
0
0
1
2015-08-16T01:12:00.000
2
0
false
32,030,932
0
0
1
1
I have a python program. It takes some text from a text file (A) as an input, do some text annotation and stores the annotated text as an output in another file (B). Now, my plan was to make this like a web service. I could do this using php and calling the python program from php. Specifically, my php code does this- --Takes text from a HTML textarea. --Saves the text into file A. --Runs the python program. --Load the output from file B and show the annotated text in the HTML textarea. Now, to do the text annotation, python program needs to load a model from another big file (C). I would say, loading time is 10 sec and annotating takes 2 sec. Each time, I have a new text in the HTML textarea, I need 12 sec to show the output. I want to minimize the overall time. I was thinking, if I could communicate from PHP with an already running python program, I could actually save 10 sec. Because, then python would just need to load the model file C once and it could apply the model on any text that PHP sends him and it could send the output to PHP too. Is there a way I can achieve this? Can django help here in anyway? Thank you for reading so much.
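A minimal sketch of the long-running Python side of that idea: load the model once at startup, then answer annotation requests over a local TCP socket that the PHP script connects to. Everything here is illustrative - the stand-in load_model/annotate functions and port 5555 are assumptions, and a real server would loop over connections instead of handling one.

```python
import socket

MODEL = None  # loaded once at startup instead of on every request


def load_model():
    # Stand-in for the slow ~10s model load from file C.
    return {"prefix": "ANNOTATED: "}


def annotate(text):
    # Stand-in for applying the loaded model to the text.
    return MODEL["prefix"] + text


def serve_once(host="127.0.0.1", port=5555):
    """Accept a single connection, annotate the received text, reply."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, port))
    server.listen(1)
    conn, _ = server.accept()
    text = conn.recv(4096).decode("utf-8")
    conn.sendall(annotate(text).encode("utf-8"))
    conn.close()
    server.close()


MODEL = load_model()
```

With this shape, each request costs only the ~2s annotation, not the 10s model load.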
Does Dynamodb support Groupby/ Aggregations directly?
32,045,125
1
0
582
1
python,amazon-web-services,amazon-dynamodb
Amazon DynamoDB is a NoSQL database, so you won't find standard SQL capabilities like group by and average(). There is, however, the ability to filter results, so you will only receive results that match your criteria. It is then the responsibility of the calling app to perform grouping and aggregations. It's really a trade-off between the flexibility of SQL and the sheer speed of NoSQL. Plus, in the case of DynamoDB, the benefit of data being stored in three facilities to improve durability and availability.
0
0
0
0
2015-08-17T06:52:00.000
1
0.197375
false
32,044,338
0
0
1
1
I am new to Dynamodb and have a requirement of grouping the documents on the basis of a certain condition before performing other operations. From what i could read on the internet, i figured out that there is no direct way to group dynamodb documents. Can anyone confirm if thats true of help out with a solution if that is not the case?
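Since the grouping and aggregation have to happen in the calling application, here is a sketch of doing it client-side over the items returned by a DynamoDB scan/query (the item attribute names are made up for illustration):

```python
from collections import defaultdict


def group_and_average(items, group_key, value_key):
    """Group a list of DynamoDB items (dicts) by one attribute and
    average another - the app-side equivalent of GROUP BY ... AVG()."""
    groups = defaultdict(list)
    for item in items:
        groups[item[group_key]].append(item[value_key])
    return {key: sum(values) / len(values) for key, values in groups.items()}
```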
list of tools used to do the Django development
32,064,980
0
0
774
0
python,django
Personally, as an IDE I use PyCharm; it's just great and has really extensive support for Django. Besides that, I use Fabric for automated deployments. Not sure if that is exactly what you are asking about. In case you do not know, Django itself provides a lot of "tooling commands" available via manage.py.
0
0
0
0
2015-08-18T05:14:00.000
2
1.2
true
32,064,022
0
0
1
1
I am new to Django and python. Please if you can help me in tools which are used for Django development work, it will be great help. Thank you in Advance
Is there an AWS solution of an alarm system for failed python scripts?
32,070,794
1
0
45
0
php,python,amazon-web-services,logging,alarm
You can schedule a cron job to achieve that.
0
0
0
1
2015-08-18T11:06:00.000
2
0.099668
false
32,070,697
0
0
1
1
I need an system that could check if many python scripts have run comletely. The scripts would scrape data and output it to a corresponding xml file. If the script fails there might be no xml file or error messages in the logs files. PHP files run the python scripts. Is there a simple solution using an AWS service that would trigger alarms when a python script is not functioning fully?
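Whichever scheduler triggers it (cron on the instance, or a scheduled AWS Lambda), the check itself can be a few lines of Python: verify that the expected XML output file exists and actually parses. The file path is a placeholder; on failure this function could publish to an SNS topic or push a CloudWatch metric to raise the alarm.

```python
import os
import xml.etree.ElementTree as ET


def scrape_succeeded(xml_path):
    """Return True only if the scraper produced a parseable XML file."""
    if not os.path.exists(xml_path):
        return False
    try:
        ET.parse(xml_path)
    except ET.ParseError:
        return False
    return True
```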
Where to place external python scripts in a Django project?
32,085,949
1
8
10,379
0
python,django,django-views
You can have additional files containing generic code in their own files and just import them when you need them for your views, or whatever. If they are only going to be used in one app, then just put them in that app's folder. If they are more generic, then you can put them in a file or folder that isn't part of any particular app. Django does some magic with files and folders it expects to exist, but you can put other files wherever you want; it works like any other Python project.
0
0
0
0
2015-08-19T03:06:00.000
3
0.066568
false
32,085,888
1
0
1
1
Where someone should place any external python script for her Django project? What is the more appropriate location (if any)? Shall she create a folder in the main Django project and put it there and add this to the python path or is there a better way to deal with this issue? The reason for external scripts is not to overload the views with code that can be better organized in script files and can serve more than one views.
Connecting to web app running on localhost on an Amazon EC2 from another computer
32,092,809
2
6
14,127
0
python,amazon-ec2,flask,web,localhost
You cannot connect to localhost on a remote machine without a proxy. If you want to test it you will need to change the binding to the public IP address or 0.0.0.0. You will then have to lock down access to your own IP address through the security settings in AWS.
0
1
0
0
2015-08-19T08:29:00.000
2
0.197375
false
32,090,306
0
0
1
1
currently I am working on a web app development and I am running my server on an Amazon ec2 instance. I am testing my (web app which uses Flask) by running the server on localhost:5000 as usual. However I don't have access to the gui hence I don't see my app and test it like I would do on a browser. I have a Mac OS X computer so my question is how can I see the localhost of Amazon EC2 from my mac's browser ?
ArrayList sent to server: Data retrieval
32,095,252
0
0
62
0
java,android,python,django,arraylist
When you convert the ArrayList values to a string, the '[' will also be treated as part of the string in the conversion. You may build a JSON object using JavaScript and append it to the request parameters. In Django, we can then use dictionaries to parse the JSON data as key-value pairs.
0
0
0
0
2015-08-19T12:02:00.000
1
0
false
32,094,988
0
0
1
1
In my android application I have an ArrayList which is: [1, 2, 8] I am sending this array list in a job to the backed django view, where I need to process it further. So I am calling the toString() method to convert it to a string and then send it to the server. Inside the view on getting the parameter from the request and on trying to print what I have received I get: [1, 2, 8]. But on trying to get the 1st element, basically calling varialble[0] I am getting: [ and on calling variable[1] I am getting 1. I just want to extract all the numbers from the variable and use them for further processing. Where am I going wrong?
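On the Django side, the "[1, 2, 8]" arriving from toString() is just a string, so indexing it yields individual characters ('[', '1', ...). It can be parsed back into integers, for example with ast.literal_eval (json.loads would work equally well for this shape):

```python
import ast


def parse_id_list(raw):
    """Turn the string '[1, 2, 8]' sent by the Android client
    back into a Python list of ints."""
    values = ast.literal_eval(raw)
    return [int(v) for v in values]
```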
python requests module - set key to null
32,097,869
5
1
4,509
0
python,python-requests
There is no session-level JSON parameter, so the merging rules don't apply. In other words, the json keyword argument to the session.request() method is passed through unchanged, None values in that structure do not result in keys being removed. The same applies to data, there is no session-level version of that parameter, no merging takes place. If data is set to a dictionary, any keys whose value is set to None are ignored. Set the value to '' if you need those keys included with an empty value. The rule does apply when merging headers, params, hooks and proxies.
0
0
1
0
2015-08-19T14:03:00.000
1
1.2
true
32,097,768
1
0
1
1
from the requests documentation : Remove a Value From a Dict Parameter Sometimes you’ll want to omit session-level keys from a dict parameter. To do this, you simply set that key’s value to None in the method-level parameter. It will automatically be omitted. I need the data with key's value as None to take the Json value null instead of being removed. Is it possible ? edit : This seems to happen with my request data keys. While they are not session-level the behaviour of removing is still the same.
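The distinction can be demonstrated with the standard json module: in a JSON payload a None value survives as null, whereas for form data requests omits None-valued keys entirely (mimicked by the small helper below, which is an illustration of the behavior, not requests' actual code):

```python
import json


def drop_none_keys(data):
    """Mimic how requests treats form 'data': keys whose value is None
    are omitted from the encoded body."""
    return {k: v for k, v in data.items() if v is not None}


payload = {"name": "test", "comment": None}
```

So to get null into the request body, send the dict via the json= argument (or json.dumps it yourself); to keep a form key with no value, set it to '' instead of None.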
Django autoincrement IntergerField by rule
32,108,208
1
1
53
0
python,django,database
The best and most reliable way to do this is with a SQL trigger. That would completely eliminate the worries about simultaneous inserts. But overriding the save method is also perfectly workable. Explicitly declare a primary key field and choose integer for it. In your save method, if the primary key is None, that means you are saving a new record: query the database to determine what the new primary key should be, assign it and save. Wherever you call your save method you would need to have an atomic transaction and retry the save if it fails. BTW, you are starting from 0 each year. That's obviously going to lead to conflicts. So you will have to prefix your primary key with the year and strip it out at the time you display it. (Believe me, you don't want to mess with composite primary keys in Django.)
0
0
0
0
2015-08-19T17:14:00.000
1
0.197375
false
32,101,737
0
0
1
1
I have this special case, where a customer requires a specific (legacy) format of booking numbers, the first one starts with the current year: 2015-12345 So basically every year I would have to start from 0 The other one is starting with a foreign-key: 7-123 So the first document created by every the user gets number 1, and so on. Unfortunately there will be long lists starting with this booking number, so fetching all the records and calculating the booking number is not really an option. I have also thought about overriding the save() method, reading and auto-incrementing manually, but what about simultaneous inserts?
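A sketch of the counter logic for the year-prefixed format, as a pure function (the concurrency concern still stands: this would run inside an atomic transaction with a retry, or be replaced by a database trigger, as the answer notes):

```python
def next_booking_number(latest, year):
    """Given the latest existing booking number like '2015-12345'
    (or None for the first booking ever), produce the next one.
    The counter restarts at 1 when the year changes."""
    if latest is None:
        return "%d-%d" % (year, 1)
    latest_year, _, counter = latest.partition("-")
    if int(latest_year) != year:
        return "%d-%d" % (year, 1)
    return "%d-%d" % (year, int(counter) + 1)
```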
django template inbuilt tag search
32,105,083
0
1
29
0
python,django
I did this [isinstance(node, ExtendsNode) for node in template.nodelist]
0
0
0
0
2015-08-19T20:21:00.000
1
0
false
32,104,943
0
0
1
1
I want to check wether my django template has an extends block or not. Is there an inbuilt function in django that can tell whether the template contains a particular tag or not ?
getting href-data using selenium
32,117,309
0
0
206
0
python-2.7,selenium
Use the getText method if you want to print 0.94.
0
0
1
0
2015-08-20T08:30:00.000
1
0
false
32,113,290
0
0
1
1
I have the following html extract 0.94 I am trying to read the href value ie 0.94.I tried the following : answer = browser.find_element_by_class_name("res") print answer output = answer.get_attribute('data-href') print output The Result is as follows: None I tried various other methods, using find_element_by_xpath etc,but not able to get the desired value ie. 0.94 (as in this example). How can I get this value in the shortest way? Thanks in advance
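If get_attribute('data-href') keeps returning None, it is worth checking the raw markup: the value may live in a differently named attribute or in the element text. As a browser-free sanity check, the snippet can be inspected with the stdlib HTML parser (the markup below is a guess at the structure the question describes, since the original extract was lost):

```python
from html.parser import HTMLParser


class AttrGrabber(HTMLParser):
    """Collect the attributes of the element whose class is 'res'."""

    def __init__(self):
        HTMLParser.__init__(self)
        self.attrs = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if attrs.get("class") == "res":
            self.attrs = attrs


parser = AttrGrabber()
parser.feed('<a class="res" data-href="0.94">0.94</a>')
```

If the attribute shows up here but not via Selenium, the element found by find_element_by_class_name is likely a different one than expected.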
Django Rest Framework __new__ missing 1 required positional argument
33,711,333
0
0
575
0
python,django,django-rest-framework
Turns out this was a local caching issue. It occurred when reloading a page with a GET request to my API, and I guess the headers weren't in sync. The error went away when setting the max_age to 0, which is something I needed to do anyway.
0
0
0
0
2015-08-20T17:59:00.000
1
0
false
32,125,200
0
0
1
1
When programmatically accessing certain data from my Django Rest Framework, I occasionally get an error: new() missing 1 required positional argument: 'argument name' What's odd is that the error is not predictable, in that I may refresh and everything loads fine. So this leads me to believe it may be some kind of data race type situation, but I'll be honest in saying I don't really know where this new constructor is coming from. Can someone shed some light on how Django Rest Framework might be using the new constructor so I might have a better idea on where to track down the bug? (I assume it's a DRF issue since that's what I'm using to access the data, but if it's not that then I'm really lost)
How should I model large bodies of textual content in a django app?
32,128,663
1
0
133
0
python,django,search,model,storage
You can definitely store relatively large bodies of text in a database. As for performance, there are two angles: Searching. You should not do free-form searches in the database itself; you may use the specific features of ElasticSearch-like tools instead. Serving large bodies of text. Unavoidable, naturally, if you want to present it; however, you can use GZIP compression, which will reduce the bandwidth drastically.
0
0
0
0
2015-08-20T21:15:00.000
2
0.099668
false
32,128,518
0
0
1
2
This is a question about best-practice for modelling a Django app. The project is a blog which will present articles written in something similar to Markdown or RST. I've checked out a few tutorials to give me some sort of starting point and so far they all store the body of the article in the model. This seems wrong: my understanding of modern database engines isn't the best but storing textfields of arbitrary lengths can't be good for performance. Three alternatives present themselves: Limit the article model to metadata and create a separate model to store the body of an article. At least only one table is a mess! Limit the article model to metadata and store the article body as a static file. Ask someone with more experience. Maybe storing the body in the model isn't so bad after all... at least it's easily searchable! How should I model this app? How would you make your solution searchable?
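The GZIP point from the first answer in one sketch: a long article body compresses well and round-trips losslessly, so serving it compressed costs far less bandwidth than its raw size suggests (in practice the web server or Django's GZipMiddleware would do this transparently; this just demonstrates the effect):

```python
import gzip


def compress_body(text):
    return gzip.compress(text.encode("utf-8"))


def decompress_body(blob):
    return gzip.decompress(blob).decode("utf-8")
```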
How should I model large bodies of textual content in a django app?
32,128,552
1
0
133
0
python,django,search,model,storage
Storing the body of the post in the database shouldn't be an issue. Most blog engines take this approach. It'll be faster than storing it in a separate model (if it's in a separate model, you'd have to do a JOIN to get the body), and likely faster than storing it as a file on the file system. You're not using the body as a primary key, so the length doesn't really matter.
0
0
0
0
2015-08-20T21:15:00.000
2
1.2
true
32,128,518
0
0
1
2
This is a question about best-practice for modelling a Django app. The project is a blog which will present articles written in something similar to Markdown or RST. I've checked out a few tutorials to give me some sort of starting point and so far they all store the body of the article in the model. This seems wrong: my understanding of modern database engines isn't the best but storing textfields of arbitrary lengths can't be good for performance. Three alternatives present themselves: Limit the article model to metadata and create a separate model to store the body of an article. At least only one table is a mess! Limit the article model to metadata and store the article body as a static file. Ask someone with more experience. Maybe storing the body in the model isn't so bad after all... at least it's easily searchable! How should I model this app? How would you make your solution searchable?
Python Blueprint duplicate Logs
32,181,469
1
0
291
0
python,flask
Your problem is almost certainly that you have set up 5 different logging handlers as well as 5 different loggers. Python's built-in logging system is a hierarchical logging system (unlike the normal loggers built for NodeJS, for example). All the loggers form a tree at run time, and the log messages bubble up the tree and are handled by the handlers attached to the tree. The normal handler registration registers the handlers at the root of the tree, so each handler sees messages from every logger (which is why your messages are created five times). The solution is to create one logger per blueprint, but not register any handler for the blueprint's logger. Instead, register one handler at the application level.
0
0
0
0
2015-08-21T07:22:00.000
1
1.2
true
32,134,565
0
0
1
1
I have 5 Flask apps which are running under Blueprint. Each app has its independent logger which writes to stdout. The problem is whenever any HTTP API is invoked, the log in that API is printed on screen 5 times, but the request is executed only once. How do I fix logger, so that each requested is printed only once ? Python 2.7.10 Flask 0.10.1
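A sketch of the fix with the standard logging module: each "blueprint" gets its own logger with no handlers of its own, and a single handler is attached once at the root, so every message is emitted exactly once regardless of how many loggers exist. (The list-collecting handler is only there to make the count observable.)

```python
import logging

records = []


class ListHandler(logging.Handler):
    """Collects emitted messages so we can count them."""

    def emit(self, record):
        records.append(record.getMessage())


root = logging.getLogger()
root.setLevel(logging.INFO)
root.handlers = []               # start clean for the demo
root.addHandler(ListHandler())   # ONE handler, attached once

# Five blueprint loggers: no handlers attached to them; their messages
# propagate up the logger tree to the root, where the single handler lives.
blueprint_loggers = [logging.getLogger("bp%d" % i) for i in range(5)]

blueprint_loggers[0].info("request handled")
```

The original symptom (each message printed five times) comes from attaching a handler per blueprint: all five handlers sit on the tree and each one sees the message.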
get a list of a specific attribute from a list of model objects in a Django template
32,151,156
1
0
43
0
python,django
You can't really do list comprehensions inside Django templates. You should do this in your view and pass the list in your context to the template.
0
0
0
0
2015-08-22T00:11:00.000
1
0.197375
false
32,150,986
0
0
1
1
Is there a way to get a list of a specific attribute from a list of model objects, {{ object_list }} using the Django Template Language? Similar to this in Python? [o.my_attr for o in object_list]
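A sketch of doing the extraction in the view and passing it through the context, as the answer suggests (a plain class stands in for model instances, and the context-dict shape mirrors what a Django view would hand to its template):

```python
class Item(object):
    def __init__(self, my_attr):
        self.my_attr = my_attr


def my_view_context(object_list):
    """Build the template context in the view, where list
    comprehensions are available; the template then just renders
    attr_list instead of trying to compute it."""
    return {"object_list": object_list,
            "attr_list": [o.my_attr for o in object_list]}
```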
No post request after submitting a form when I want to download a PDF
32,155,740
0
0
44
0
python,selenium,urllib2
Is there no request at all, or a GET request? I suspect there is a GET request. In that case, did you turn Persist on in Firebug's Net tab? Possibly the POST request was hidden after redirects.
0
0
1
0
2015-08-22T08:30:00.000
1
0
false
32,154,052
0
0
1
1
I'm writing a script to download a pdf automatically. Firstly, I open the url manually, it will redirect to a login website. and I type my username and password, and click "submit". Then download will start directly. During this procedure, I check the firebug, I find there is no post while I click "submit". I'm not familiar with this behavior, that means the pdf(300K) is saved before I submit? If there is no post, then I must use some tool like selenium to simulate this "click"?
Python on Electron framework
62,152,039
3
54
77,344
0
python,frameworks,cross-platform,desktop-application,electron
With the Electron-Django app I am developing, I used PyInstaller to get my Django app compiled, then just spawn it as a child process and it works. Please note PyInstaller may not recognize all modules or the dist folder; there are plenty of examples online on how to work around that by filling in the .spec file and amending the dist folder, adding the files you may need. PyInstaller usually tells you what went wrong in the terminal. Hope it helps.
0
0
0
0
2015-08-22T17:12:00.000
4
0.148885
false
32,158,738
1
0
1
1
I am trying to write a cross-platform desktop app using web technologies (HTML5, CSS, and JS). I took a look at some frameworks and decided to use the Electron framework. I've already done the app in Python, so I want to know if is possible to write cross-platform desktop applications using Python on the Electron framework?
Resource interpreted as Stylesheet but transferred with MIME type application/x-css:
40,956,230
0
2
2,551
0
django,python-2.7,django-templates,django-views
You need to set the STATIC_URL and STATIC_ROOT values in the settings file of your Django project, and then create a static folder inside your app folder. Place all your CSS and JS files inside this folder. Now add the {% load static %} tag in your template and reference the CSS like <link rel="stylesheet" type="text/css" href="{% static 'style.css' %}" /> Example STATIC_URL = '/static/' STATIC_ROOT = os.path.join(BASE_DIR, "static") STATICFILES_DIRS = (os.path.join(BASE_DIR, 'Get_Report', 'static'),)
0
0
0
0
2015-08-23T09:21:00.000
1
0
false
32,165,061
0
0
1
1
I am trying to load css file in HTML template in python Django and getting this error Resource interpreted as Stylesheet but transferred with MIME type application/x-css: I am using Pydev , Eclipse, Django framework.
Trigger python script on raspberry pi from a rails application
32,172,061
1
2
61
0
python,ruby-on-rails,postgresql,raspberry-pi
There are many "easy" ways, depending on your skills. Maybe "write triggers which send the NOTIFY on insert/update" is the hint you need?
0
0
0
1
2015-08-23T19:48:00.000
1
1.2
true
32,170,818
0
0
1
1
I have a rails application, and when there is an update to one of the rows in my database, I want to run a python script which is on a raspberry pi (example: lights up a LED when a user is created). I'm using PostgreSQL and have looked into NOTIFY/LISTEN channels, but can't quite figure that out. Is there an easy way to do this? The raspberry pi will not be on the same network as the rails application.
Django: best practise when heavily customizing 3rd party app
32,178,407
0
2
129
0
python,django,django-apps
I would consider leaving the 3rd party app and try to do customizing inside the project. If that isn't possible, and it requires lots of customizing, maybe there is an alternative to the app you are using? Other than that, I would go with your 1st option. But your worries are there for a reason. If you decide to make your own fork, you need to take care of bugs and fixes as well. However, with the 1st option I think it will be easier to merge the original into your fork. But don't forget about separation of concerns. Otherwise it will be very hard to maintain.
0
0
0
0
2015-08-24T08:10:00.000
2
0
false
32,177,366
0
0
1
2
Let's say I want to heavily customize a third-party Django app, such as django-postman (Add lots of new models, views as well as modifying those existing etc). What would be the best way to do this? Options I've considered: Fork the 3rd party repo. Clone locally outside of my django project. Do the updates, push them to the forked repo. Install my own fork into my venv (and add to my requirements.txt) for my django project. Just clone into a vendors folder of my django project, update the 3rd party app there, and then keep it in the same git repo as the django project. Either way, I am worried that will no longer be getting updates from the main 3rd party repo (bug fixes, new features etc), or if I merge into the fork (after changing lots) it could be a big headache. Am I thinking about this in the best way? Is there a smarter way? What do others typically do?
Django: best practise when heavily customizing 3rd party app
32,178,828
-1
2
129
0
python,django,django-apps
If the changes you are making don't alter how the 3rd party app functions, but rather add new features or capabilities to it, consider contacting the author of that app about merging your changes upstream. That way you will have a lot less work when updating the application.
0
0
0
0
2015-08-24T08:10:00.000
2
-0.099668
false
32,177,366
0
0
1
2
Let's say I want to heavily customize a third-party Django app, such as django-postman (Add lots of new models, views as well as modifying those existing etc). What would be the best way to do this? Options I've considered: Fork the 3rd party repo. Clone locally outside of my django project. Do the updates, push them to the forked repo. Install my own fork into my venv (and add to my requirements.txt) for my django project. Just clone into a vendors folder of my django project, update the 3rd party app there, and then keep it in the same git repo as the django project. Either way, I am worried that will no longer be getting updates from the main 3rd party repo (bug fixes, new features etc), or if I merge into the fork (after changing lots) it could be a big headache. Am I thinking about this in the best way? Is there a smarter way? What do others typically do?
regarding Django philosphy of implementing project as reusable applications
32,213,209
1
0
49
0
python,django,django-apps
My suggestion is to create a third model, ArtEvent, that points to both Art and Event. That way you can create a specific app to manage events and then link everything together. For example, when creating a new ArtEvent you redirect the user to the Event app so they can create a new event, then redirect back to the Art app with the created event, create a new ArtEvent and link those objects. In the future, suppose you want to add events to another model, like User: if you follow the same strategy you can separate what is UserEvent-specific and keep what is common between ArtEvent and UserEvent.
0
0
0
0
2015-08-24T18:14:00.000
1
1.2
true
32,188,979
0
0
1
1
I am implementing a project using Django. It's a site where people can view different Art courses and register. I am having trouble implementing the app as reusable applications. I already have a standalone App which takes care of all the aspect of Arts. Now I want to create another application where an admin create various events for the Arts in the system. conceptually these two should be a standalone apps. Event scheduling is pretty general use case and I want to implement in a way where it can be used for scheduling any kind of Event. In my case, those Events are Art related events. I don't want to put a foreign key to Art model in my Event model. how can I make it reusable so that it would work for scheduling Events related to any kind of objects.
Should a json file that changes every 5 minutes be hosted by Flask or nginx?
32,193,416
2
0
98
0
python,nginx,flask
Since generating the file appears to have no relation to the request / response cycle of a Flask app, don't use Flask to serve it. If it does require the Flask app to actively do something to it for every request, then do use Flask to serve it.
0
0
0
0
2015-08-24T23:25:00.000
1
1.2
true
32,193,277
0
0
1
1
I understand the concept that nginx should host my static files and I should leave Flask to serving the routes that dynamically build content. I don't quite understand where one draws the line of a static vs dynamic file, though. Specifically, I have some json files that are updated every 5 minutes by a background routine that Flask runs via @cron.interval_schedule and writes the .json to a file on the server. Should I be building routes in flask to return this content (simply return the raw .json file) since the content changes every five minutes, or should have nginx host the json files? Can nginx handle a file that changes every five minutes with it's caching logic?
Twisted + Django as a daemon process plus Django + Apache
32,235,411
1
0
143
1
python,django,sqlite,twisted,daemon
No, there is nothing inherently wrong with that approach. We currently use a similar approach for a lot of our work.
0
1
0
0
2015-08-25T20:49:00.000
2
0.099668
false
32,213,796
0
0
1
1
I'm working on a distributed system where one process is controlling a hardware piece and I want it to be running as a service. My app is Django + Twisted based, so Twisted maintains the main loop and I access the database (SQLite) through Django, the entry point being a Django Management Command. On the other hand, for user interface, I am writing a web application on the same Django project on the same database (also using Crossbar as websockets and WAMP server). This is a second Django process accessing the same database. I'm looking for some validation here. Is anything fundamentally wrong to this approach? I'm particularly scared of issues with database (two different processes accessing it via Django ORM).
Python Web Scraping HTTP 400
32,217,810
0
0
611
0
python,http,web-scraping,scrapy
It could be a rate limiter. However a 400 error generally means that the client request was malformed and therefore rejected by the server. You should start investigating this first. When your requests start failing, exit your program and immediately start it again. If it starts working, you know that you aren't being rate-limited and that there is in fact something wrong with how your requests are formed later on.
0
0
1
0
2015-08-26T04:01:00.000
2
0
false
32,217,773
0
0
1
1
I'm doing a web scrape with Python (using the Scrapy framework). The scrape works successfully until it gets about an hour into the process and then every request comes back with a HTTP400 error code. Is this just likely to be a IP based rate limiter or scrape detection tool? Any advice on how I might investigate the root cause further?
Call python script from Jira while creating an issue
32,234,002
1
1
1,311
0
python,jira
Take a look at JIRA webhooks calling a small Python-based web server.
0
0
0
1
2015-08-26T15:07:00.000
2
0.099668
false
32,230,294
0
0
1
1
Let say I'm creating an issue in Jira and write the summary and the description. Is it possible to call a python script after these are written that sets the value for another field, depending on the values of the summary and the description? I know how to create an issue and change fields from a python script using the jira-python module. But I have not find a solution for using a python script while editing/creating the issue manually in Jira. Does anyone have an idea of how I manage that?
Change some static files' location from /static/file.js to /file.js
32,244,567
3
1
163
0
python,django,django-staticfiles
My first choice in this situation would be to fix whatever is stopping you from putting it into /static/. I can't imagine any half-decent third-party plugin would demand that the files be in the root; there must be some way to configure it to work from a subdirectory. If there isn't, I'd fork the project and add the option, then try to get them to merge it back. I realise you've probably already explored this option, but can you give us some more details about the plugin you're trying to use, and the reason it needs to go into the root? This really would be the best solution. If you really must have the file in the root, and want to keep it as part of your django project, I'd try symlinking the files into the public root. This would mean it would be available in both locations; I can't see why that would be a problem, but you do specify "ONLY" in the root and I'm sure you have your reasons; in that case, perhaps you could configure your web server to redirect from /static/filename.js to /filename.js? Lastly, you technically could change the settings STATIC_URL and STATIC_ROOT to point at the root directory, but that sounds like a pretty terrible idea to me. If you've got this far and still need to do it, it would be far better to take the file out of your django project altogether and just manually place it in your web root.
0
0
0
0
2015-08-26T18:19:00.000
2
1.2
true
32,233,938
0
0
1
1
I've a website running on Django, Heroku. I need to add few static JavaScript files for a third-party plugin. My newly added files are available at domain.com/static/filename.js. I need them to be available at domain.com/filename.js. How to make ONLY the newly added Javascript files available at domain.com/filename.js? If the info is not sufficient please ask which code is needed in the comments.
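The symlink idea from the accepted answer can be sketched as below. The paths (`webroot/`, `filename.js`) are hypothetical stand-ins for your project layout, and note that on Heroku a symlink may not survive the slug build, so treat this as the general shape of the approach rather than a drop-in fix:

```shell
# Make static/filename.js reachable at the web root as well.
mkdir -p webroot/static
echo "console.log('plugin');" > webroot/static/filename.js
# The link target is resolved relative to the link's own directory (webroot/).
ln -sf static/filename.js webroot/filename.js
```

After this, the same file is served at both /static/filename.js and /filename.js; if it must be available ONLY at the root, a web-server redirect is the cleaner option.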
arbitrary gql filters and sorts without huge index.yaml
32,281,354
0
0
57
0
python,google-app-engine,google-cloud-datastore,gql
It seems like Google Cloud SQL would do what I need, but since I'm trying not to spend any money on this project and GCS doesn't have a free unlimited tier, I've resorted to querying by my filter and then sorting the results myself.
0
1
0
0
2015-08-27T00:40:00.000
1
1.2
true
32,238,896
0
0
1
1
I've written a tiny app on Google App Engine that lets users upload files which have about 10 or so string and numeric fields associated with them. I store the files and these associated fields in an ndb model. I then allow users to filter and sort through these files, using arbitrary fields for sorting and arbitrary fields or collections of fields for filtering. However, whenever I run a sort/filter combination on my app that I didn't run on the dev_appserver before uploading, I get a NeedIndexError along with a suggested index, which seems to be unique for every combination of sort and filter fields. I tried running through every combination of sort/filter field on the appserver, generating a large index.yaml file, but at some point the app stopped loading altogether (I wasn't monitoring whether this was a gradual slowdown or a sudden breaking). My questions are as follows. Is this typical behavior for the GAE datastore, and if not what parts of my code would be relevant for troubleshooting this? If this is typical behavior, is there an alternative to the datastore on GAE that would let me do what I want?
Tornado websocket pings
32,245,768
0
0
1,215
0
python,websocket,tornado
The on_close event can only be triggered when the connection is closed. You can send a ping and wait for an on_pong event. Timeouts are typically hard to detect, since you won't even get a message that the socket is closed.
0
1
1
0
2015-08-27T09:13:00.000
1
0
false
32,245,227
0
0
1
1
I'm running a Python Tornado server with a WebSocket handler. We've noticed that if we abruptly disconnect the a client (disconnect a cable for example) the server has no indication the connection was broken. No on_close event is raised. Is there a workaround? I've read there's an option to send a ping, but didn't see anyone use it in the examples online and not sure how to use it and if it will address this issue.
Django: MongoDB engine for Django 1.8
32,260,215
0
0
210
1
python,django,mongodb
If you are using mongoengine, there is no need for django-nonrel. You can use the latest Django versions directly.
0
0
0
0
2015-08-27T21:50:00.000
1
0
false
32,260,031
0
0
1
1
Anybody know of any currently worked on projects that wire up MongoDB to the most recent version of Django? mongoengine's Django module github hasn't been updated in 2 years (and I don't know if I can use its regular module with Django) and django-nonrel uses Django 1.6. Anybody tried using django-nonrel with Django 1.8?
Can't find a way to deal with Google Drive API 403 Rate Limit Exceeded
32,260,966
0
1
884
0
google-api,google-drive-api,google-api-python-client
I believe that this is a limit that Google sets to stop people spamming the service and tying it up. It doesn't have anything to do with your app itself but is set on the Google server side. If the Google server receives over a particular number of requests within a certain time, this is the error you get. There is nothing you can do in your app to overcome this. You can talk to Google about it; usually paying for Google licenses etc. can allow you much higher limits before being restricted.
0
1
0
0
2015-08-27T23:13:00.000
1
0
false
32,260,884
0
0
1
1
I have a huge amount of users and files in a Google Drive domain. +100k users, +10M of files. I need to fetch all the permissions for these files every month. Each user have files owned by themselves, and files shared by other domain users and/or external users (users that don't belong to the domain). Most of the files are owned by domain users. There is more than 7 millions of unique files owned by domain users. My app is a backend app, which runs with a token granted by the domain admin user. I think that doing batch requests is the best way to do this. Then, I configured my app to 1000 requests per user, in google developer console. I tried the following cases: 1000 requests per batch, up to 1000 per user -> lots of user rate limits 1000 requests per batch, up to 100 per user -> lots of rate limit errors 100 requests per batch, up to 100 per user -> lots of rate limit errors 100 requests per batch, up to 50 per user -> lots of rate limits errors 100 requests per batch, up to 10 per user -> not errors anymore I'm using quotaUser parameter to uniquely identify each user in batch requests. I checked my app to confirm that each batch was not going to google out of its time. I checked also to see if each batch have no more than the limit of file_id configured to fetch. Everything was right. I also wait each batch to finish before sending the next one. Every time I see a 403 Rate Limit Exceeded, I do an exponential backoff. Sometimes I have to retry after 9 steps, which is 2**9 seconds waiting. So, I can't see the point of Google Drive API limits. I'm sure my app is doing everything right, but I can't increase the limits to fetch more permissions per second.
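The exponential backoff the asker describes (waiting 2**n seconds after the n-th consecutive 403, up to 2**9 = 512 seconds) can be sketched as follows. This is a generic sketch, not Drive API client code: request_fn and is_rate_limited are hypothetical stand-ins for your batch call and its error check.

```python
import random
import time

def backoff_delays(max_retries=9, base=1.0, jitter=False):
    """Yield exponentially growing delays: base * 2**attempt seconds.

    With max_retries=9 the final wait is 2**9 = 512 seconds, matching
    the worst case described in the question.
    """
    for attempt in range(max_retries + 1):
        delay = base * (2 ** attempt)
        if jitter:
            delay += random.uniform(0, 1)  # spread out synchronized clients
        yield delay

def call_with_backoff(request_fn, is_rate_limited, max_retries=9):
    """Call request_fn(), retrying with exponential backoff while rate limited."""
    for delay in backoff_delays(max_retries):
        result = request_fn()
        if not is_rate_limited(result):
            return result
        time.sleep(delay)
    raise RuntimeError("still rate limited after %d retries" % max_retries)
```

Adding jitter is often recommended so that many workers backing off at once don't all retry at the same instant.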
How to know what page or call is being made when a JSP page is submitted
32,278,473
0
1
49
0
javascript,python,ajax,jsp,jsp-tags
The developer console (F12 in Chrome and Firefox) is a wonderful thing. Check the Network or Net tab. There you can see all the requests between your browser and your server.
0
0
1
0
2015-08-28T19:24:00.000
1
0
false
32,278,459
0
0
1
1
I have a JSP page called X.JSP (containing a few radio buttons and a submit button). When I hit the submit button in X.JSP, the next page displayed is Y.JSP?xxxx=1111&yyyy=2222&zzzz=3333. How do I know what page or service or ajax call is being made when I hit the submit button on the X.JSP page? The parameters xxxx=1111&yyyy=2222&zzzz=3333 are generated after I click the submit button in X.JSP. Currently I am using Python to script this: I select a radio button and post the form, but I am not able to get the desired output. How do I find what page or service or ajax call is being made when I hit the submit button on the X.JSP page, so that I can hit that page directly? Or is there a better way to solve this?
Scrapy/Python: How to get JOBDIR setting from inside of spider?
36,967,238
0
0
562
0
python,scrapy
From inside the spider: self.crawler.settings.get("JOBDIR")
0
0
0
0
2015-08-29T09:59:00.000
1
0
false
32,284,815
0
0
1
1
I'm running scrapy like this scrapy crawl somespider -s JOBDIR=crawls/somespider-1 -a input_data=data (For maintaining the Job state) When something unexpected happens (eg. Connection lost) A CloseSpider exception is raised and the spider is later scheduled to run as a cron job I usually pass **kwargs inside __init__ to the new spider crawl However JOBDIR is'nt found inside **kwargs Is there any way i can access this value from inside the spider?
Crawling and finding keywords for images without any "alt" attribute
32,286,161
0
0
278
0
python,image,web-crawler
If there is no alt attribute in an img tag, or it is empty, check for the name attribute; if there is no name, check for id. Note that an id (on .asp or .aspx pages, for instance) doesn't have to be meaningful. As a last resort, use the src attribute and take just the filename without its extension. Sometimes the class attribute can also be used, but I don't recommend it; even id can be very deceiving. You will have trouble with JS-injected images, of course, but even that can be solved with a lot of time and will. As for precautions, what exactly do you mean? Checking whether src is really an image, or something else?
0
0
1
0
2015-08-29T12:30:00.000
1
1.2
true
32,286,059
0
0
1
1
I am writing an image crawler that scrapes the images from a Web page. This done by finding the img tag on the Web page. But recently I noticed, some img tags don't have an alt attribute in it. Is there any way to find the keywords for that particular image? And are there any precautions for crawling the websites for images?
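The fallback chain from the answer (alt, then name, then id, then the src filename without its extension) can be sketched as a small helper. The attrs dict here stands in for whatever mapping of tag attributes your parser returns for an img tag; adapt it to your actual scraping library.

```python
import os

def image_keyword(attrs):
    """Pick a keyword for an <img> tag from its attribute mapping.

    Tries alt, then name, then id; as a last resort falls back to the
    src filename without its extension. Returns None if nothing usable
    is found.
    """
    for key in ("alt", "name", "id"):
        value = (attrs.get(key) or "").strip()
        if value:
            return value
    src = attrs.get("src")
    if src:
        # e.g. "/images/red-sunset.jpg" -> "red-sunset"
        return os.path.splitext(os.path.basename(src))[0]
    return None
```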
Embed one html file inside other report html file as hyperlink in python
32,305,050
0
0
78
0
python,html,hyperlink
You need to do one of: attach A.html as well as report.html; post A.html to a shared location such as Google Drive and modify the link to point to it; or put the content of A.html into a hidden <div> with a show method.
0
0
1
1
2015-08-31T05:57:00.000
1
0
false
32,304,781
0
0
1
1
I am trying to embed a A.html file inside B.html(report) file as a hyperlink. Just to be clear, both html files are offline and available only on my local machine. and at last the report html file will be sent over email. The email recipient will not be having access to my local machine. So if they click on hyperlink in report.html, they will get "404 - File or directory not found". Is there any way to embed A.html inside report.html, so that email recipient can open A.html from report.html on their machine
Access my web app with CNAME
32,329,235
1
0
1,566
0
pythonanywhere
The most likely issue is that you don't have a web app at the domain that you're trying to access. For instance, if you've added the CNAME to www.mydomain.com, you must have a web app at www.mydomain.com. The fact that you're getting a "coming soon" page suggests that the CNAME is correctly set up to go to PythonAnywhere.
0
0
0
0
2015-08-31T09:47:00.000
2
0.099668
false
32,308,397
0
0
1
1
I have an app deployed on pythonanywhere and setup to use a custom domain. I'm in the process of getting the domain and I wanted to ask if there is a way to access my application via the CNAME webapp-xxxxxx.pythonanywhere.com which has been provided by pythonanywhere. Currently trying to access it takes me to the coming soon page. Thank you.
Is there a way to fetch multiple urls(chunks) from a web server with one GET request?
32,314,458
0
0
96
0
python,webserver,python-requests
I don't know the Python APIs specifically, but the HTTP specification definitely does not allow a single GET request to fetch multiple resources. Each request always corresponds to one and only one resource in the response. This is intrinsic to the protocol. In some situations you have to make many requests to obtain a single resource, or a part of it, as happens with range requests. But even in that case every request has exactly one response, which the client finally uses to assemble the complete resource.
0
0
1
0
2015-08-31T12:33:00.000
1
0
false
32,311,385
0
0
1
1
I am using python requests right now but if there is a way to do this then it would be a game changer... Specifically i want to download a bunch of pdf's from one web site. I have the urls to the pages i want. Can i grab more then one at a time?
WebSockets connection refused
32,324,682
0
0
2,943
0
python,django,nginx,websocket
Just needed to change the port... Maybe this will help somebody.
0
1
0
0
2015-08-31T12:37:00.000
2
0
false
32,311,470
0
0
1
2
I have a Django app, with a real-time chat using tornado, redis and WebSockets. Project is running on the ubuntu server. On my local server everything is working good, but doesn't work at all on production server. I get an error WebSocket connection to 'ws://mysite.com:8888/dialogs/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED privatemessages.js:232 close dialog ws I have tried to change nginx configuration, settings.py, tried to open the 8888 port, but still no result.
WebSockets connection refused
32,313,073
0
0
2,943
0
python,django,nginx,websocket
It seems you are using WebSockets as a separate service, so try adding the access-control origin header: add_header Access-Control-Allow-Origin *;
0
1
0
0
2015-08-31T12:37:00.000
2
0
false
32,311,470
0
0
1
2
I have a Django app, with a real-time chat using tornado, redis and WebSockets. Project is running on the ubuntu server. On my local server everything is working good, but doesn't work at all on production server. I get an error WebSocket connection to 'ws://mysite.com:8888/dialogs/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED privatemessages.js:232 close dialog ws I have tried to change nginx configuration, settings.py, tried to open the 8888 port, but still no result.
Not receiving string messages from SQS
32,312,184
0
0
34
0
java,python,amazon-sqs
You can manually check in your AWS console whether your messages are in the proper format. Verify that first.
0
0
0
0
2015-08-31T13:04:00.000
1
0
false
32,311,986
0
0
1
1
I am working with SQS. I am sending messages from a java code and receiving it from a python script. I am sending json object in string form using JSONObject.toString(). Sometimes python script receive the proper string but sometimes it get the message in following format: ���'��eq��z��߭��N��n6�N��~��~��m=���v+���Myӟ=���e�M�ߟv׎�۽y�����8��w��;�M��N�۞�㾹뾷�n���7�}7�o=��4۽����߾v��6��<�}7�}4�ν��=���߾{��}�n6���߭��^������~���|�]��N��~��κ�����y�������^��}��M��θ��:�^�����_|߮6��5�^�q��z�ږǫiخ�����n�Wږǭʗ�9�������F���8�����4�N�u��q�������_o�<���o�Zo�<�n�뗷
Is it possible to Bulk Insert using Google Cloud Datastore
33,367,328
7
6
3,726
1
python,mysql,google-cloud-datastore
There is no "bulk-loading" feature for Cloud Datastore that I know of today, so if you're expecting something like "upload a file with all your data and it'll appear in Datastore", I don't think you'll find anything. You could always write a quick script using a local queue that parallelizes the work. The basic gist would be: Queuing script pulls data out of your MySQL instance and puts it on a queue. (Many) Workers pull from this queue, and try to write the item to Datastore. On failure, push the item back on the queue. Datastore is massively parallelizable, so if you can write a script that will send off thousands of writes per second, it should work just fine. Further, your big bottleneck here will be network IO (after you send a request, you have to wait a bit to get a response), so lots of threads should get a pretty good overall write rate. However, it'll be up to you to make sure you split the work up appropriately among those threads. Now, that said, you should investigate whether Cloud Datastore is the right fit for your data and durability/availability needs. If you're taking 120m rows and loading it into Cloud Datastore for key-value style querying (aka, you have a key and an unindexed value property which is just JSON data), then this might make sense, but loading your data will cost you ~$70 in this case (120m * $0.06/100k). If you have properties (which will be indexed by default), this cost goes up substantially. The cost of operations is $0.06 per 100k, but a single "write" may contain several "operations". For example, let's assume you have 120m rows in a table that has 5 columns (which equates to one Kind with 5 properties). A single "new entity write" is equivalent to: + 2 (1 x 2 write ops fixed cost per new entity) + 10 (5 x 2 write ops per indexed property) = 12 "operations" per entity. So your actual cost to load this data is: 120m entities * 12 ops/entity * ($0.06/100k ops) = $864.00
0
1
0
0
2015-08-31T16:47:00.000
3
1.2
true
32,316,088
0
0
1
1
We are migrating some data from our production database and would like to archive most of this data in the Cloud Datastore. Eventually we would move all our data there, however initially focusing on the archived data as a test. Our language of choice is Python, and have been able to transfer data from mysql to the datastore row by row. We have approximately 120 million rows to transfer and at a one row at a time method will take a very long time. Has anyone found some documentation or examples on how to bulk insert data into cloud datastore using python? Any comments, suggestions is appreciated thank you in advanced.
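The queue-plus-workers pattern from the answer can be sketched with the standard library. write_entity below is a hypothetical stand-in for the actual Datastore write (which would go over the network); any item whose write returns False is pushed back on the queue, mirroring the retry step. The answer's cost arithmetic is also spelled out at the end.

```python
import queue
import threading

def run_workers(items, write_entity, num_workers=4):
    """Drain a work queue with several writer threads.

    write_entity(item) should return True on success; failed items are
    pushed back on the queue and retried, as in step 2 of the answer.
    """
    work = queue.Queue()
    for item in items:
        work.put(item)

    def worker():
        while True:
            try:
                item = work.get_nowait()
            except queue.Empty:
                return
            if not write_entity(item):
                work.put(item)  # push the failed item back for a retry

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# The cost arithmetic from the answer, spelled out:
ops = 120_000_000 * 12        # 12 write ops per new 5-property entity
cost = ops / 100_000 * 0.06   # $0.06 per 100k ops -> 864.0 dollars
```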
Text Encoding for Kindle with Python
64,088,459
0
1
171
0
python,encoding,kindle,latin1
Use <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> I previously used <meta charset="UTF-8" />, which did not seem to work.
0
0
1
1
2015-08-31T17:13:00.000
1
0
false
32,316,480
0
0
1
1
Basicly, I'm crawling text from a webpage with python using Beautifulsoup, then save it as an HTML and send it to my Kindle as a mail attachement. The problem is; Kindle supports Latin1(ISO-8859-1) encoding, however the text I'm parsing includes characters that are not a part of Latin1. So when I try to encode text as Latin1 python gives following error because of the illegal characters: UnicodeEncodeError: 'latin-1' codec can't encode character u'\u2019' in position 17: ordinal not in range(256) When I try to encode it as UTF-8, this time script runs perfectly but Kindle replaces some incompatible characters with gibberish.
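One way around the UnicodeEncodeError, if you'd rather transliterate than drop characters: map the common "smart punctuation" code points (u'\u2019' from the traceback is a right single quotation mark) to ASCII before encoding, and fall back to errors='replace' for anything left over. Sketched in Python 3; the same translate idea applies to Python 2 unicode strings. The table below covers only a handful of characters and would need extending for real pages.

```python
# Minimal transliteration table for characters Latin-1 can't represent.
SMART_PUNCT = {
    0x2018: "'",    # left single quote
    0x2019: "'",    # right single quote (the character from the traceback)
    0x201C: '"',    # left double quote
    0x201D: '"',    # right double quote
    0x2013: "-",    # en dash
    0x2014: "--",   # em dash
    0x2026: "...",  # ellipsis
}

def to_latin1(text):
    """Encode text as Latin-1, transliterating smart punctuation first."""
    cleaned = text.translate(SMART_PUNCT)
    # Anything still outside Latin-1 becomes '?' instead of raising.
    return cleaned.encode("latin-1", errors="replace")
```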
Multiple authentication app support
34,567,070
0
0
308
0
python,django,django-rest-framework,django-rest-auth
djoser supports only Basic Auth and the token auth from Django Rest Framework. What you can do is use login and logout from Django OAuth Toolkit, and then use djoser views for actions such as registration and password reset.
0
0
0
0
2015-09-02T01:10:00.000
1
0
false
32,343,180
0
0
1
1
When developing a Django project, many third party authentication packages are available, for example: Django OAuth Toolkit, OAuth 2.0 support. Djoser, provides a set of views to handle basic actions such as registration, login, logout, password reset and account activation. Currently, I just want to support basic actions registration, login and so on. So Djoser could be my best choice. But if I want to support OAuth 2.0 later, I will have two tokens, one is from Djoser, and another is from Django OAuth Toolkit. I just got confused here, how to handler two tokens at the same time? Or should I just replace Djoser with Django OAuth Toolkit, if so, how to support basic actions such as registration?
How to use htpasswd For CKAN Instance
32,395,525
0
0
75
0
python,apache,tomcat6,.htpasswd,ckan
You are using nginx, aren't you? If so, you can handle authentication with nginx by just adding two lines to one file and creating a password file. In /etc/nginx/sites-available/ckan add the following lines: auth_basic "Restricted"; auth_basic_user_file /filedestination; then create a file at your filedestination with the following content: USERNAME:PASSWORD The password must be in md5. Have fun with ckan!
0
0
0
1
2015-09-02T06:44:00.000
1
0
false
32,346,261
0
0
1
1
I have included Location header in My Virtual Host file. AuthType Basic AuthName "Restricted" AuthUserFile /etc/httpd/.htpasswd require valid-user Also created user to access the domain. but the user i have created using htpasswd is not allow other user to make any activity in CKAN Instance. anyone Have an idea..Please let me know
How to read a part of amazon s3 key, assuming that "multipart upload complete" is yet to happen for that key?
32,352,584
3
1
1,069
1
python,amazon-web-services,file-upload,amazon-s3,boto
There is no API in S3 to retrieve a part of a multi-part upload. You can list the parts, but I don't believe there is any way to retrieve an individual part once it has been uploaded. You can re-upload a part; S3 will just throw away the previous part and use the new one in its place. So, if you had the old and new versions of the file locally and were keeping track of the parts yourself, I suppose you could, in theory, replace individual parts that had been modified after the multipart upload was initiated. However, it seems to me that this would be a very complicated and error-prone process. What if the change made to a file was to add several MBs of data to it? Wouldn't that change your boundaries? Would that potentially affect other parts, as well? I'm not saying it can't be done, but I am saying it seems complicated and would require you to do a lot of bookkeeping on the client side.
0
0
1
0
2015-09-02T08:59:00.000
1
1.2
true
32,348,812
0
0
1
1
I'm working on aws S3 multipart upload, And I am facing following issue. Basically I am uploading a file chunk by chunk to s3, And during the time if any write happens to the file locally, I would like to reflect that change to the s3 object which is in current upload process. Here is the procedure that I am following, Initiate multipart upload operation. upload the parts one by one [5 mb chunk size.] [do not complete that operation yet.] During the time if a write goes to that file, [assuming i have the details for the write [offset, no_bytes_written] ]. I will calculate the part no for that write happen locally, And read that chunk from the s3 uploaded object. Read the same chunk from the local file and write to read part from s3. Upload the same part to s3 object. This will be an a-sync operation. I will complete the multipart operation at the end. I am facing an issue in reading the uploaded part that is in multipart uploading process. Is there any API available for the same? Any help would be greatly appreciated.
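Step 3 of the asker's procedure, working out which 5 MB part(s) a local write at (offset, no_bytes_written) touches, is plain integer arithmetic. The helper below is a sketch of that calculation, not part of the S3 API; S3 part numbers are 1-based.

```python
PART_SIZE = 5 * 1024 * 1024  # 5 MB chunks, as in the question

def parts_touched(offset, n_bytes, part_size=PART_SIZE):
    """Return the 1-based S3 part numbers a write at `offset` spans."""
    if n_bytes <= 0:
        return []
    first = offset // part_size + 1
    last = (offset + n_bytes - 1) // part_size + 1
    return list(range(first, last + 1))
```

A write that straddles a part boundary returns two (or more) part numbers, each of which would need to be re-uploaded.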
Why can't I import modules in a Django project?
32,362,402
0
1
2,926
0
python,django,import,django-dev-server
I found the culprit, or at least a culprit. I had omitted (in my .bashrc) the "export ", and now I'm on to another problem.
0
0
0
0
2015-09-02T19:41:00.000
3
0
false
32,361,764
0
0
1
1
I am trying to pick up an old Django project and my immediate goal is to see what I can get running on my computer on the development server. I get: Inner Sanctum ~/pragmatometer $ python manage.py runserver Traceback (most recent call last): File "manage.py", line 10, in execute_from_command_line(sys.argv) File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_line utility.execute() File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 392, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 261, in fetch_command commands = get_commands() File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 107, in get_commands apps = settings.INSTALLED_APPS File "/Library/Python/2.7/site-packages/django/conf/__init__.py", line 54, in __getattr__ self._setup(name) File "/Library/Python/2.7/site-packages/django/conf/__init__.py", line 49, in _setup self._wrapped = Settings(settings_module) File "/Library/Python/2.7/site-packages/django/conf/__init__.py", line 132, in __init__ % (self.SETTINGS_MODULE, e) ImportError: Could not import settings 'pragmatometer.settings' (Is it on sys.path? Is there an import error in the settings file?): No module named pragmatometer.settings Here is some command line output: Inner Sanctum ~/pragmatometer $ /bin/pwd /Users/jonathan/pragmatometer Inner Sanctum ~/pragmatometer $ echo $PYTHONPATH /Users/jonathan Inner Sanctum ~/pragmatometer $ python Python 2.7.10 (default, Jul 14 2015, 19:46:27) [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> import pragmatometer Traceback (most recent call last): File "", line 1, in ImportError: No module named pragmatometer >>> import pragmatometer.settings Traceback (most recent call last): File "", line 1, in ImportError: No module named pragmatometer.settings >>> What should I be doing that I'm not? (Or, as it was an older project, should I just start with a fresh new project?) Thanks,
manage.py loaddata hangs when loading to remote postgres
32,385,965
1
2
353
1
python,django,django-south
I found no other solution, so I ran loaddata locally, dumped the database with pg_dump, and restored the dump on the remote server with psql -f.
0
0
0
0
2015-09-02T20:19:00.000
1
0.197375
false
32,362,384
0
0
1
1
I am trying to migrate Django models from SQLite to Postgres. I tested it locally and am now trying to do the same thing with the remote database. I dumped the data first, then started the application, which created the tables in the remote database. Finally I am running loaddata, but it appears to hang, with no errors. Is there a way to get verbose output? I am not sure how to diagnose this issue. It is only a 199M file, and when I test locally loaddata finishes in a few minutes.
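A note on the question above: loaddata accepts a --verbosity flag (python manage.py loaddata dump.json -v 3), and a very large fixture can be split into smaller chunks so progress is visible between loads. A minimal sketch of the chunking step, assuming the fixture has already been parsed into the standard JSON list of serialized objects (the function name is made up):

```python
def split_fixture(objects, chunk_size=1000):
    # objects: the parsed JSON list from a Django fixture file.
    # Each returned chunk can be dumped to its own file with json.dump()
    # and loaded separately via `manage.py loaddata`.
    return [objects[i:i + chunk_size] for i in range(0, len(objects), chunk_size)]
```

Loading the chunks one by one makes it obvious whether the process is making progress or genuinely stuck.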
Django server is still running after CONTROL-C
32,367,413
3
2
6,725
0
python,django
Without seeing your script, I would have to say that you have blocking calls, such as socket.recv() or os.system(executable), running at the time of the CTRL+C. Your script is stuck after the CTRL+C because Python raises the KeyboardInterrupt AFTER the current command is completed, but before the next one. If there is a blocking function waiting for a response, such as an exit code, packet, or URL, you are stuck until it times out — unless you abort it with the task manager or by closing the console. In the case of threading, it kills all threads after the current command completes. Again, if you have a blocking call, the thread will not exit until it receives its response.
0
0
0
0
2015-09-03T04:58:00.000
3
0.197375
false
32,367,279
0
0
1
2
I start Django server with python manage.py runserver and then quit with CONTROL-C, but I can still access urls in ROOT_URLCONF, why?
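The blocking-call explanation in the answer above can be demonstrated without Django: an accept() with no timeout blocks indefinitely and delays KeyboardInterrupt delivery, while a timeout returns control to the interpreter regularly so CTRL-C is processed promptly. A minimal sketch (port 0 lets the OS pick a free port):

```python
import socket

def accept_with_timeout(timeout=0.05):
    # A timeout makes the otherwise-blocking accept() return periodically,
    # giving the interpreter a chance to deliver KeyboardInterrupt.
    srv = socket.socket()
    srv.settimeout(timeout)
    srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    srv.listen(1)
    try:
        conn, _ = srv.accept()
        conn.close()
        return "connected"
    except socket.timeout:
        return "no client yet; CTRL-C would be handled here"
    finally:
        srv.close()
```

With timeout=None the same call would block until a client connects, which is exactly the situation where CTRL-C appears to be ignored.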
Django server is still running after CONTROL-C
53,162,008
-1
2
6,725
0
python,django
Just type exit(); that is what I did and it worked.
0
0
0
0
2015-09-03T04:58:00.000
3
-0.066568
false
32,367,279
0
0
1
2
I start Django server with python manage.py runserver and then quit with CONTROL-C, but I can still access urls in ROOT_URLCONF, why?
Debugging django remotely
32,372,423
0
0
689
0
python,django,remote-debugging
You cannot output it to the console. Since the process was not started from a console, you cannot see its stdout in one. You can only redirect the output to a file and read the file. If you really want the logs in a console, you have to start the Django server from a console, i.e. python manage.py runserver — which should only be used during development, as this server is not suitable for production.
0
0
0
0
2015-09-03T09:30:00.000
2
0
false
32,371,896
0
0
1
2
I'm developing a Django server on Ubuntu. Since there is no browser on that machine, I can only debug the server remotely, so I configured it with Apache and WSGI, and now I can access it through the machine's public IP. I want to record logs in some views for debugging. If I write the log to a file, I can see it there, but if I want to output it to the console I get confused — where is the console? I didn't launch the server with python manage.py runserver manually; the currently running server process was launched by WSGI automatically. Of course, I could stop the WSGI-launched process and re-launch it manually with python manage.py runserver, but then I can't access it through the machine's public IP. So how can I see the logs in the console in PuTTY?
Debugging django remotely
32,372,593
3
0
689
0
python,django,remote-debugging
Firstly, you shouldn't be developing on the server. Do that locally and debug in the usual way there. If you're debugging production issues, you will indeed need to use the log files. But it's pretty simple to see those in the console; you can do tail -f /var/log/my_log_file.log and the console will show the log as it is being written.
0
0
0
0
2015-09-03T09:30:00.000
2
1.2
true
32,371,896
0
0
1
2
I'm developing a Django server on Ubuntu. Since there is no browser on that machine, I can only debug the server remotely, so I configured it with Apache and WSGI, and now I can access it through the machine's public IP. I want to record logs in some views for debugging. If I write the log to a file, I can see it there, but if I want to output it to the console I get confused — where is the console? I didn't launch the server with python manage.py runserver manually; the currently running server process was launched by WSGI automatically. Of course, I could stop the WSGI-launched process and re-launch it manually with python manage.py runserver, but then I can't access it through the machine's public IP. So how can I see the logs in the console in PuTTY?
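For the file-based approach the answers above describe, the WSGI process just needs a file handler so that tail -f has something to follow. A minimal sketch with the stdlib logging module (the logger name, path, and format here are placeholders — in a real project this would live in Django's LOGGING setting):

```python
import logging

def make_file_logger(path, name="myapp"):
    # Writes records to `path`; watch it live with: tail -f <path>
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    handler = logging.FileHandler(path)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger
```

Views then call logging.getLogger("myapp").debug(...) and the messages appear in the file as the WSGI workers handle requests.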
Why there is need to push django migrations to version control system
32,376,288
7
6
1,373
0
python,django,migration,django-migrations
Migrations synchronize the state of your database with the state of your code. If you don't check in the migrations into version control, you lose the intermediate steps. You won't be able to go back in the version control history and just run the code, as the database won't match the models at that point in time. Migrations, like any code, should be tested, at the very least on a basic level. Even though they are auto-generated, that's not a guarantee that they will work 100% of the time. So the safe path is to create the migrations in your development environment, test them, and then push them to the production environment to apply them there.
0
0
0
0
2015-09-03T12:46:00.000
3
1
false
32,376,092
0
0
1
3
It is common practice for people working on a Django project to push migrations to the version control system along with the rest of the code. My question is: why is this practice so common? Why not just push the updated models and have everyone generate migrations locally? That approach could reduce the effort of resolving migration conflicts too.
Why there is need to push django migrations to version control system
32,376,183
2
6
1,373
0
python,django,migration,django-migrations
Firstly, having migrations in version control allows you to run them in production. Secondly, migrations are not always automatically generated. For example, if you add a new field to a model, you might write a migration to populate that field. Such a migration cannot be re-created from the models, and if it is not in version control, no one else will be able to run it.
0
0
0
0
2015-09-03T12:46:00.000
3
0.132549
false
32,376,092
0
0
1
3
It is common practice for people working on a Django project to push migrations to the version control system along with the rest of the code. My question is: why is this practice so common? Why not just push the updated models and have everyone generate migrations locally? That approach could reduce the effort of resolving migration conflicts too.
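The hand-written "populate the field" migration mentioned in the answer above is the key case: its backfill logic lives only in the migration file and cannot be regenerated from the models, so it must be committed. A simplified sketch of what such a forward function does, using plain dicts in place of a real apps.get_model(...) queryset (the model and field names are invented):

```python
def backfill_display_name(rows):
    # In a real RunPython data migration, `rows` would be
    # apps.get_model("app", "Profile").objects.all(), and each row
    # would be saved after the field is filled in.
    for row in rows:
        if not row.get("display_name"):
            row["display_name"] = "{} {}".format(row["first"], row["last"])
    return rows
```

If this file is not in the repository, teammates and the production server get the schema change but never the data fix.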
Why there is need to push django migrations to version control system
32,376,337
9
6
1,373
0
python,django,migration,django-migrations
If you didn't commit them to a VCS, then people would make potentially conflicting changes to the models. When finally ready to deploy, you would still need Django to create new migrations merging everybody's changes together — an additional, unnecessary step that can introduce bugs. You are also assuming everybody will always be able to work on an up-to-date version of the code, which isn't always possible once you start working on branches that are not ready to be merged into mainline.
0
0
0
0
2015-09-03T12:46:00.000
3
1.2
true
32,376,092
0
0
1
3
It is common practice for people working on a Django project to push migrations to the version control system along with the rest of the code. My question is: why is this practice so common? Why not just push the updated models and have everyone generate migrations locally? That approach could reduce the effort of resolving migration conflicts too.
GAE/python site is no longer handling requests prefaced with 'www.'
32,404,416
0
0
26
0
python,google-app-engine,google-cloud-datastore
Here's how I remedied this: I went to console.developers.google.com > project > hockeybias-hrd > appengine > settings > domains > add In 'step 2' on that page I put the 'www' for the subdomain in the textbox which enabled the Add button. I clicked on the 'Add' button and the issue was solved. I will note that this was the second time I have been 'head-faked' by google's use of greyed-out text to mean something other than 'disabled'... 'www' was the default value in the subdomain textbox - BUT it was greyed-out AND the 'Add'button was disabled right next to it. So, I did not initially think I could enter a value there.
0
1
0
0
2015-09-03T16:16:00.000
1
0
false
32,380,760
0
0
1
1
If a user enters ‘hockeybias.com’ into his/her browser as a URL to get to my hockey news aggregation site, the default page comes up correctly. It has in the past and does so today. However, as of this summer, if someone uses ‘www.hockeybias.com’ the user will get the following error message: Error: Not Found The requested URL / was not found on this server. This is a relatively new issue for me as ‘www.hockeybias.com’ worked fine in the past. The issue seems to have come up after I migrated from the ‘Master/Slave Datastore’ version of GAE to the ‘High Replication Datastore’ (HRD) version of GAE earlier this summer. The issue occurred while the site used python2.5. And I migrated the site to python2.7 this morning and am still having the issue.
Consume web service with DJANGO
32,386,137
0
0
2,079
0
python,django,web-services,soap
You don't need to build a web API if all you want to do is consume someone else's API. Just retrieve the data - with something like requests - and use it in your Django app.
0
0
0
0
2015-09-03T21:40:00.000
1
0
false
32,385,881
0
0
1
1
I have a Django application that shows some data from my database (Oracle), but now I need to show some data from a web service. I need to build a form based on the web service's request format and show the web service's response. I have been googling how to expose my app as a web service and send and retrieve XML data, but I am confused and don't know where to start or which Django package to use (PyXML, Django REST). I am not sure whether I need to build a web API or whether I can consume the web service without one. Can someone give me some advice on how to achieve this task?
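Once the XML response is fetched (e.g. with urllib or the requests library, as the answer suggests), parsing it for a template needs no extra Django package — the stdlib is enough. A minimal parsing sketch, assuming a flat response element (the tag names here are invented):

```python
import xml.etree.ElementTree as ET

def parse_flat_response(xml_text):
    # Turns <response><name>..</name><price>..</price></response>
    # into a dict a Django template can render directly.
    root = ET.fromstring(xml_text)
    return {child.tag: child.text for child in root}
```

A view would fetch the XML, call this parser, and pass the resulting dict into the template context like any other data.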
Can we make many views.py in Django as a Controller?
32,390,343
4
1
602
0
python,django
Instead of multiple views.py files, you can divide your project into individual applications — for example, separate applications for user registration, contact, and article control. This way your code will look much cleaner, and in case of a bug you will be able to debug the specific application easily.
0
0
0
0
2015-09-04T04:40:00.000
1
0.664037
false
32,390,195
0
0
1
1
I am new to the Django framework and just started learning Django 1.8. In other frameworks like Laravel and Rails, we can make different controller files, for example UserController.php, ContactController.php, etc. I think that in Django views.py is similar to a controller. My views.py has more than 300 lines of code, and I want to clean it up by making separate views modules for user registration, contact, article control, etc. My question is how I can achieve this, i.e. making many views files that act like controllers.
Pip freeze for only project requirements
53,797,785
22
67
48,165
0
python,pip
I have tried both pipreqs and pigar and found pigar is better because it also generates information about where it is used, it also has more options.
0
0
0
0
2015-09-04T04:49:00.000
10
1
false
32,390,291
1
0
1
3
When I run pip freeze > requirements.txt it seems to include all installed packages. This appears to be the documented behavior. I have, however, done something wrong as this now includes things like Django in projects that have no business with Django. How do I get requirements for just this project? or in the future how do I install a package with pip to be used for this project. I think I missed something about a virtualenv.
Pip freeze for only project requirements
70,058,507
0
67
48,165
0
python,pip
I just had the same issue; here's what I found to solve the problem. First create the venv in the directory of your project, then activate it. For Linux/macOS: python3 -m venv ./venv then source venv/bin/activate. For Windows: python3 -m venv .\venv then venv\Scripts\activate.bat. Now pip freeze > requirements.txt should only capture the libraries used in the project. NB: if you had already begun your project before creating the venv, you will have to reinstall the libraries inside it for them to show up in pip freeze.
0
0
0
0
2015-09-04T04:49:00.000
10
0
false
32,390,291
1
0
1
3
When I run pip freeze > requirements.txt it seems to include all installed packages. This appears to be the documented behavior. I have, however, done something wrong as this now includes things like Django in projects that have no business with Django. How do I get requirements for just this project? or in the future how do I install a package with pip to be used for this project. I think I missed something about a virtualenv.
Pip freeze for only project requirements
65,285,557
1
67
48,165
0
python,pip
If you are using Linux, you can do it with sed: pip freeze | sed 's/==.*$//' > requirements.txt
0
0
0
0
2015-09-04T04:49:00.000
10
0.019997
false
32,390,291
1
0
1
3
When I run pip freeze > requirements.txt it seems to include all installed packages. This appears to be the documented behavior. I have, however, done something wrong as this now includes things like Django in projects that have no business with Django. How do I get requirements for just this project? or in the future how do I install a package with pip to be used for this project. I think I missed something about a virtualenv.
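The same pin-stripping the sed answer performs can be done portably (e.g. on Windows, where sed is usually absent) in a few lines of Python; this is a sketch, not a pip feature:

```python
import re

def strip_pins(freeze_output):
    # Drop the `==version` pin from each non-empty `pip freeze` line.
    return [re.sub(r"==.*$", "", line) for line in freeze_output.splitlines() if line]
```

Note that unpinned requirements install whatever version is latest, so for reproducible deploys the pinned form is usually what you want in requirements.txt.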