Dataset schema (22 fields per record, in the order shown):

Field                               Type      Min        Max
Title                               string    11 chars   150 chars
A_Id                                int64     518        72.5M
Users Score                         int64     -42        283
Q_Score                             int64     0          1.39k
ViewCount                           int64     17         1.71M
Database and SQL                    int64     0          1
Tags                                string    6 chars    105 chars
Answer                              string    14 chars   4.78k chars
GUI and Desktop Applications        int64     0          1
System Administration and DevOps    int64     0          1
Networking and APIs                 int64     0          1
Other                               int64     0          1
CreationDate                        string    23 chars   23 chars
AnswerCount                         int64     1          55
Score                               float64   -1         1.2
is_accepted                         bool      (2 classes)
Q_Id                                int64     469        42.4M
Python Basics and Environment       int64     0          1
Data Science and Machine Learning   int64     0          1
Web Development                     int64     1          1
Available Count                     int64     1          15
Question                            string    17 chars   21k chars

Each record below lists its values in this field order.
Gallery in Photologue can have only one Photo
31,394,483
0
0
184
1
python,django,python-3.4,django-1.7,photologue
I had exactly the same problem. I suspected a problem with the django-sortedm2m package. To associate photos with a gallery, it was using SortedManyToMany() from the sortedm2m package. For some reason, the admin widget associated with this package did not function well (I tried the Firefox, Chrome and Safari browsers). I did not actually care about the order of photos uploaded to the Gallery, so I simply replaced that function call with Django's ManyToManyField(). Also, I noticed that SortedManyToMany('Photo') was called with the constant string 'Photo'; it should instead be called as SortedManyToMany(Photo) to identify the Photo class, although that did not resolve my problem entirely. So I used the default ManyToMany field, and it now shows all the photos in the Gallery.
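As a rough sketch of the swap described above (model and field names here are illustrative, not photologue's actual source):

    # Hypothetical sketch of replacing the sortedm2m field with Django's plain M2M.
    from django.db import models

    class Photo(models.Model):
        title = models.CharField(max_length=100)

    class Gallery(models.Model):
        title = models.CharField(max_length=100)
        # was: photos = SortedManyToMany(Photo) from django-sortedm2m
        photos = models.ManyToManyField(Photo, related_name='galleries')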
0
0
0
0
2015-03-08T13:57:00.000
3
0
false
28,927,247
0
0
1
2
After I put photologue on the server, I have no issue with uploading photos. The issue is that when I am creating a Gallery from the admin site, I can choose only one photo to attach to the Gallery; even if I select many photos, only one of them will be linked to the Gallery. The only way to add photos to a gallery is by adding them manually to the photologue_gallery_photos table in the database :( Does anyone know how to solve it?
Gallery in Photologue can have only one Photo
32,932,624
0
0
184
1
python,django,python-3.4,django-1.7,photologue
I guess your problem is solved by now, but just in case: I had the same problem. Looking around in the logs, I found it was caused by me not having consolidated the static files from sortedm2m with the rest of my static files (hence the widget was not working properly).
0
0
0
0
2015-03-08T13:57:00.000
3
0
false
28,927,247
0
0
1
2
After I put photologue on the server, I have no issue with uploading photos. The issue is that when I am creating a Gallery from the admin site, I can choose only one photo to attach to the Gallery; even if I select many photos, only one of them will be linked to the Gallery. The only way to add photos to a gallery is by adding them manually to the photologue_gallery_photos table in the database :( Does anyone know how to solve it?
Asana API querying by assignee_status
28,948,207
1
2
269
0
python,asana
You actually can't filter by assignee_status at all - if you pass the parameter it is silently ignored. We could change it so that unrecognized parameters result in errors, which would help make this clearer.
0
0
1
0
2015-03-09T17:11:00.000
1
0.197375
false
28,947,894
0
0
1
1
When querying just for tasks that are marked for today in Python: client.tasks.find_all({'assignee_status': 'upcoming', 'workspace': 000000000, 'assignee': 'me', 'completed_since': 'now'}, page_size=100) I get a response with all tasks, the same as if I had not included assignee_status: client.tasks.find_all({'workspace': 000000000, 'assignee': 'me', 'completed_since': 'now'}, page_size=100). The workspace has around 5 tasks that are marked for today. Thank you, Greg
Turning off IntelliJ Auto-save
66,222,468
1
78
59,393
0
java,python,intellij-idea,pycharm,ide
If there are any file watchers active (Preferences > Tools > File Watchers), make sure to check their Advanced Options and disable any "Auto-save edited files to trigger the watcher" toggles. This option supersedes the autosave options under Preferences > Appearance & Behavior > System Settings.
0
0
0
1
2015-03-09T18:31:00.000
8
0.024995
false
28,949,290
0
0
1
2
I have done a fair amount of googling about this question, and most of the threads I've found are 2+ years old, so I am wondering if anything has changed, or if there is a new method to solve this issue. As you might know, IntelliJ (I use 14.0.2) often autosaves files; for me, when making a change in Java or JavaScript files, it's about 2 seconds before the changes are saved. There are options that one would think should have an effect on this, such as Settings > Appearance & Behavior > System Settings > Synchronization > "Save files automatically if application is idle for X sec". These settings seem to have no effect for me, though, and IntelliJ still autosaves, for example if I need to scroll up to remember how a method I am referencing does something. This is really frustrating when I have auto-makes, which mess up Tomcat, or watches on files via Grunt, Karma, etc. when doing JS development. Is there a magic setting that they've put in recently? Has anyone figured out how to turn off auto-save, or actually delay it?
Turning off IntelliJ Auto-save
37,813,276
8
78
59,393
0
java,python,intellij-idea,pycharm,ide
I think the correct answer was given as a comment by ryanlutgen above: the behaviour of "auto-saving" your file is not due to the auto-save options mentioned. IJ saves all changes to your build sources in order to automatically build the target. This can be turned off in: Preferences -> Build,Execution,Deployment -> Compiler -> Make project automatically. Note: you now have to initiate the project build manually (e.g. by using an appropriate key shortcut). (All other "auto-save" options just fine-tune the built-in auto-save behaviour.)
0
0
0
1
2015-03-09T18:31:00.000
8
1
false
28,949,290
0
0
1
2
I have done a fair amount of googling about this question, and most of the threads I've found are 2+ years old, so I am wondering if anything has changed, or if there is a new method to solve this issue. As you might know, IntelliJ (I use 14.0.2) often autosaves files; for me, when making a change in Java or JavaScript files, it's about 2 seconds before the changes are saved. There are options that one would think should have an effect on this, such as Settings > Appearance & Behavior > System Settings > Synchronization > "Save files automatically if application is idle for X sec". These settings seem to have no effect for me, though, and IntelliJ still autosaves, for example if I need to scroll up to remember how a method I am referencing does something. This is really frustrating when I have auto-makes, which mess up Tomcat, or watches on files via Grunt, Karma, etc. when doing JS development. Is there a magic setting that they've put in recently? Has anyone figured out how to turn off auto-save, or actually delay it?
Freezing application-specific dependencies in Python, virtualenv and pip
28,953,236
1
0
509
0
python,flask,virtualenv
1. Make a blank virtualenv.
2. Try to run your program.
3. If there is an import error, install the relevant package, then go to (2) again.
You now have a virtualenv with just the packages that are required. Freeze that.
0
0
0
0
2015-03-09T21:41:00.000
2
1.2
true
28,952,285
1
0
1
2
How can I have a clean virtualenv for my Flask application that contains nothing other than the dependencies the application needs? I am using Ubuntu, and I have a Flask application; when I run the command pip freeze > requirements.txt, the requirements file gets unnecessary entries as well. This leads to a problem when uploading it to Heroku. How do I resolve this?
Freezing application-specific dependencies in Python, virtualenv and pip
44,601,365
0
0
509
0
python,flask,virtualenv
Another easy way of doing this would be to use pipreqs, which generates a pip requirements.txt file based on the imports of any project. Install it with pip install pipreqs, then run pipreqs /path/to/project. You will have your requirements.txt file generated in your project path.
0
0
0
0
2015-03-09T21:41:00.000
2
0
false
28,952,285
1
0
1
2
How can I have a clean virtualenv for my Flask application that contains nothing other than the dependencies the application needs? I am using Ubuntu, and I have a Flask application; when I run the command pip freeze > requirements.txt, the requirements file gets unnecessary entries as well. This leads to a problem when uploading it to Heroku. How do I resolve this?
Using `django-allauth` to let users sign in with Twitter and tweet through the app
28,975,387
0
0
120
0
python,django,twitter,django-allauth
I see now that I simply needed to define my app as "read & write" in the Twitter admin UI.
0
0
0
1
2015-03-09T21:49:00.000
1
0
false
28,952,404
0
0
1
1
I'm trying to use django-allauth for Twitter sign-in in my Django app. I notice that when I do the sign-in process, Twitter says that this app will NOT be able to post tweets. I do want to be able to post tweets. How do I add this permission?
How to change web2py connection timeout on pythonanywhere?
28,963,419
3
2
387
0
web2py,pythonanywhere
We don't have a good general solution for this. Our timeout is pretty long (3 minutes, I think). In general it's not a good idea to keep your users waiting with a loading page for minutes because they're going to assume that something went wrong. Your best bet is probably to break the big task into smaller chunks and do each of the chunks in a separate request, then you can show your users a progress meter that updates as each request completes.
0
0
0
0
2015-03-10T03:50:00.000
1
0.53705
false
28,955,873
0
0
1
1
I am hosting a web2py application on PythonAnywhere. My problem is that the application can take a few minutes to respond (because of data processing or a non-optimized implementation). During this time the page times out, and I get a message from PythonAnywhere that something went wrong and my application is taking more time than usual. I want the framework to wait until the web2py function finishes (even if it takes minutes). Is it a setting I need to change in web2py, or is it something that I need to change in PythonAnywhere? Thanks and regards!
Getting or deleting cache entries in Flask with the key starting with (or containing) a substring
28,968,025
1
2
811
0
python,caching,flask,memcached,flask-cache
It's not supported because Memcache is designed to be a distributed hash; there is no index of stored keys to search in. Ideally you should know what suffixes a key may have. If not, you could maintain an index yourself in a special key for the user, like user_id + '_keys', which contains a list of keys. This way you can cycle key by key and delete all the cache entries for the user. You can override the .set function to maintain this new key.
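A minimal sketch of that per-user key index, assuming a werkzeug/Flask-Cache style cache client; the function names and the no-expiry timeout convention are assumptions:

    # Sketch of maintaining a per-user key index next to the cached entries.
    def set_with_index(cache, user_id, key, value, timeout=300):
        cache.set(key, value, timeout)
        index_key = '%s_keys' % user_id
        keys = cache.get(index_key) or []
        if key not in keys:
            keys.append(key)
            cache.set(index_key, keys, 0)  # memcached treats 0 as "no expiry"

    def delete_user_keys(cache, user_id):
        index_key = '%s_keys' % user_id
        for key in cache.get(index_key) or []:
            cache.delete(key)
        cache.delete(index_key)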
0
0
0
0
2015-03-10T10:14:00.000
1
1.2
true
28,960,995
0
0
1
1
I'm trying to delete all entries in the cache store that contain (in this case start with) a substring of the cache key, but I don't see any easy way of doing this. I'm using Memcache as backend. If I understand the code correctly, I need to pass the full cache key when calling delete or delete_many. Is there any other way of doing this? I'll explain what I'm trying to do in case there is a better way: I need to clear the cache for certain users when they modify their settings. Clearing the cache with clear() will remove the cache entries for all the users, which are some 110K, so I don't want to use that. I am generating key_prefix with the ID of the user, the request's path, and other variables. The cache keys always start with the ID of the authenticated user. So ideally I would use something like delete_many(user_id + ".*")
celery beat schedule: run task instantly when starting celery beat?
30,854,981
-1
12
5,063
0
python,celery,celerybeat
The best idea is to create an implementation in which the task schedules itself after completing. Also, create an entrance lock so the task cannot be executed multiple times at once, and trigger the execution once. In this case you don't need a celery beat process, and the task is guaranteed to execute.
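A minimal sketch of such a self-rescheduling task with a cache-based entrance lock; the task body, lock key and interval are illustrative:

    from celery import shared_task
    from django.core.cache import cache

    def do_work():
        pass  # the actual job goes here

    @shared_task(bind=True)
    def refresh(self):
        if not cache.add('refresh-lock', '1', timeout=60):  # entrance lock
            return  # another run is already executing
        try:
            do_work()
        finally:
            cache.delete('refresh-lock')
        self.apply_async(countdown=30)  # schedule the next run

    # Trigger the first run once, e.g. refresh.delay(); no celery beat needed.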
0
1
0
0
2015-03-10T10:39:00.000
3
-0.066568
false
28,961,517
0
0
1
1
If I create a celery beat schedule using timedelta(days=1), the first task will be carried out after 24 hours. To quote the celery beat documentation: "Using a timedelta for the schedule means the task will be sent in 30 second intervals (the first task will be sent 30 seconds after celery beat starts, and then every 30 seconds after the last run)." But the fact is that in a lot of situations it's actually important that the scheduler run the task at launch. I didn't find an option that allows me to run the task immediately after celery starts. Am I not reading carefully, or is celery missing this feature?
Create a dynamic admin site
29,432,774
0
1
84
0
python,django,django-models,django-admin
I was using Django 1.6, which did not support overriding the get_fields method. I updated to 1.7 and this method worked perfectly.
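For reference, a sketch of the get_fields() override being discussed (available from Django 1.7 on); the blank-field filtering logic is an assumption about the poster's model:

    from django.contrib import admin

    class BridgeAdmin(admin.ModelAdmin):
        # obj is None on the "add" form and the instance on the "change" form
        def get_fields(self, request, obj=None):
            fields = super(BridgeAdmin, self).get_fields(request, obj)
            if obj is None:
                return fields
            # hide fields left blank on this particular bridge
            return [f for f in fields if getattr(obj, f, None) not in (None, '')]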
0
0
0
0
2015-03-10T15:31:00.000
2
1.2
true
28,967,747
0
0
1
1
I want to create a dynamic admin site that, based on whether a field is blank or not, will show that field. So I have a model that has a set number of fields, but each individual entry will not contain all of the fields in my model, and I want to exclude fields based on whether they are blank. I have a unique bridge identifier that correlates to each bridge, and then all of the various different variables that describe the bridge. I have it set up now so that the user goes to a URL with the unique bridge key, and this creates an entry for that bridge. So (as I am testing on my local machine) it would be like localhost/home/brkey, and that code in my views.py that corresponds to that url is However, not every bridge is the same, and I have a lot more variables that I would like to include in my model, but for now I am just testing on two: prestressed_concrete_deck and reinforced_concrete_coated_bars. What I want is to dynamically create the admin site to not display the prestressed_concrete_deck variable if that field is blank. So instead of displaying all of the variables on the admin site, I want to only display those variables if that bridge has that part, and to not display anything if the field is blank. Another possible solution to the problem would be to get that unique identifier over to my admin.py. I can't figure out how to get that individual key over, as then I could query in admin.py. If I knew how to access the bridgekey, I could just query in my admin.py dynamically. So how would I access the brkey for that entry in my admin.py (something like BridgeModel.brkey?) I have tried several different things in my admin.py and have tried the comments' suggestion of overwriting the get_fields() method in my admin class, but I am probably syntactically wrong, and I am kind of confused about what the object it takes exactly is. Is that the actual entry? Or is that the individual field?
How to implement a priority queue using SQS (Amazon Simple Queue Service)
57,454,595
2
14
27,088
0
python,boto,priority-queue,amazon-sqs
By "when a msg fails", if you mean "processing failure", then you could look into the Dead Letter Queue (DLQ) feature that comes with SQS. You can set the receive count threshold to move failed messages to the DLQ. Each DLQ is associated with an SQS queue. In your case, you could set "max receive count" = 1 and deal with those messages separately.
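A sketch of wiring up the redrive policy, using boto3 rather than the boto 2 package from the question; the queue URL and ARN are placeholders:

    import json
    import boto3

    sqs = boto3.client('sqs')
    sqs.set_queue_attributes(
        QueueUrl='https://queue.amazonaws.com/123456789012/work',
        Attributes={'RedrivePolicy': json.dumps({
            'deadLetterTargetArn': 'arn:aws:sqs:us-east-1:123456789012:work-dlq',
            'maxReceiveCount': '1',  # move to the DLQ after one failed receive
        })},
    )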
0
0
1
0
2015-03-10T17:29:00.000
4
0.099668
false
28,970,289
0
0
1
2
I have a situation where a msg fails and I would like to replay that msg with the highest priority, using the python boto package, so it will be picked up first. If I'm not wrong, SQS does not support priority queues, so I would like to implement something simple. Important note: when a msg fails I no longer have the message object; I only persist the receipt_handle, so I can delete the message (if there were more than x retries) or change its visibility timeout in order to push it back to the queue. Thanks!
How to implement a priority queue using SQS (Amazon Simple Queue Service)
28,973,859
20
14
27,088
0
python,boto,priority-queue,amazon-sqs
I don't think there is any way to do this with a single SQS queue. You have no control over delivery of messages and, therefore, no way to impose a priority on them. If you find a way, I would love to hear about it. I think you could possibly use two queues (or more generally N queues, where N is the number of levels of priority), but even this seems impossible if you don't actually have the message object at the time you determine that it has failed: you would need the message object so that the data could be written to the high-priority queue. I'm not sure this actually qualifies as an answer 8^)
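A sketch of the N-queue idea, polling higher-priority queues first (boto3, placeholder queue URLs):

    import boto3

    sqs = boto3.client('sqs')
    QUEUES = ['https://queue.amazonaws.com/123456789012/high',
              'https://queue.amazonaws.com/123456789012/low']

    def next_message():
        for url in QUEUES:  # always drain higher-priority queues first
            resp = sqs.receive_message(QueueUrl=url, MaxNumberOfMessages=1,
                                       WaitTimeSeconds=0)
            for msg in resp.get('Messages', []):
                return url, msg
        return None, None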
0
0
1
0
2015-03-10T17:29:00.000
4
1.2
true
28,970,289
0
0
1
2
I have a situation where a msg fails and I would like to replay that msg with the highest priority, using the python boto package, so it will be picked up first. If I'm not wrong, SQS does not support priority queues, so I would like to implement something simple. Important note: when a msg fails I no longer have the message object; I only persist the receipt_handle, so I can delete the message (if there were more than x retries) or change its visibility timeout in order to push it back to the queue. Thanks!
Best way to import a Python list into an HTML table?
28,973,712
2
1
1,914
0
python,html,python-2.7
A few options:
- Store everything in a database (e.g. SQLite, MySQL, MongoDB, Redis) and query the DB every time you want to display the data. This is good for changing the data later or feeding it from multiple sources.
- Store everything in a "flat file" (SQLite, XML, JSON, msgpack); open and read the file whenever you want to use the data, or read it in completely on startup. Simple and often fast enough.
- Generate an HTML file from your list with a template engine (e.g. Jinja) and save it as an HTML file. Good for simple hosters.
There are some good Python web frameworks out there; some I have used are Flask, Bottle, Django, Twisted and Tornado. They all more or less output HTML. Feel free to use HTML5/DHTML/JavaScript. You could also use a web framework to create an "api" on the backend which serves JSON or XML; then your JavaScript callback will display it on your site.
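A minimal sketch of the template-engine option using Jinja2; the sample rows and output file name are made up:

    from jinja2 import Template

    rows = [['Alpha', 1, 2.5], ['Beta', 3, 4.0]]  # your scraped 25x9 list
    template = Template("""
    <table>
    {% for row in rows %}
      <tr>{% for cell in row %}<td>{{ cell }}</td>{% endfor %}</tr>
    {% endfor %}
    </table>
    """)
    with open('table.html', 'w') as fh:
        fh.write(template.render(rows=rows))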
0
0
1
0
2015-03-10T19:11:00.000
2
1.2
true
28,972,157
0
0
1
1
I wrote a script that scrapes various things from around the web and stores them in a Python list, and I have a few questions about the best way to get it into an HTML table to display on a web page. First off, should my data be in a list? It will be at most a 25 by 9 list. I'm assuming I should write the list to a file for the web site to import? Is a text file preferred, or something like a CSV or XML file? What's the standard way to import a file into a table? In my quick look around the web I didn't see an obvious answer (major web design beginner). Is JavaScript the best thing to use? Or can Python write out something that can easily be read by HTML? Thanks
Django context processors and a Django project with multiple applications
28,981,950
2
2
168
0
python,django
Hm, when a context processor introduces a variable into the context, that variable is available in all of the project's templates. So you don't need to add variables to each of the context processors; a single processor will do the job.
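A minimal sketch of one project-wide context processor; module path, function and variable names are illustrative:

    # myproject/context_processors.py
    def common(request):
        return {'SITE_NAME': 'My Site'}  # available in every template

    # settings.py: add 'myproject.context_processors.common' to
    # TEMPLATE_CONTEXT_PROCESSORS (Django <= 1.7) or to the
    # 'context_processors' list in TEMPLATES['OPTIONS'] (Django >= 1.8).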
0
0
0
0
2015-03-11T08:41:00.000
1
1.2
true
28,981,883
0
0
1
1
I have a Django project in which I have 4 applications. I am able to use custom context processors (of course, each app has its own context processor) to pass application-level common variables to templates. But when I need to pass the same context variable to all templates in all apps (variables common to all applications), I am just adding these context variables to each of the context processors individually. Is there any other way to pass a context variable to all templates in all apps, without having to add it to each context processor?
OpenERP, Aptana - debugging Python code, breakpoint not working
29,017,479
1
0
371
0
python,openerp,aptana,odoo
It could happen if you run your server in "run" mode and not "debug" mode. If you are in "run" mode the breakpoints will be skipped. In Aptana, go to "Run" -> "Debug" to run it in debug mode.
0
0
0
0
2015-03-12T18:05:00.000
1
1.2
true
29,017,156
0
0
1
1
I have created an Odoo v8 PyDev project in Aptana. When I run the OpenERP server from Aptana and set a breakpoint in my file product_nk.py, the program does not stop at this breakpoint, although I navigated to the Odoo web pages whose functionality is linked to the code with the breakpoint. What am I possibly missing in the setup, and what do I need to do to have the program stop at the set breakpoint in the Python code?
Two-factor authentication doesn't work when it is accessed from the application
29,021,959
0
0
88
0
python,session,authentication,two-factor-authentication
The fact is that I forgot REST is stateless: you can't share a session across two web service calls.
0
0
1
0
2015-03-12T19:45:00.000
1
0
false
29,018,927
0
0
1
1
I have two applications: one has the web API, and the other application uses it to authenticate itself. How 2FA is implemented in my application: first get the username and password, then authenticate them. After authenticating, I send the username and session key. If I get the correct mobile passcode, username and session key back, the application authenticates a second time. Now the problem: it works when I use the Postman Chrome plugin to test the 2FA, but if I use the second application to authenticate, it fails. When I debugged through the code I found it breaks at the session variables; I get a KeyError. I assume that the session is empty when I try to authenticate the second time from the application. I am confused why it works from the Postman plugin but not from the second application.
Python vs NodeJS web applications from a security point of view
29,039,868
1
3
2,173
0
python,django,node.js,security,express
Try using Python Flask for small projects. I'm assuming yours is small, because node.js is usually used for real-time updates, realtime chatting and other JavaScript-based apps; the advantage there is polling/broadcasts from server to clients rather than hundreds of individual requests being handled. Security-wise, JavaScript apps can be abused, even without special tools, if users spam your servers, but that is something you should handle on the web server: simply make sure spam is controlled and blocked if repeated at an inhuman speed.
0
0
0
0
2015-03-13T18:49:00.000
2
0.099668
false
29,039,798
0
0
1
2
I am planning to build a web application which is security critical and I need to decide what technology to use on the back end. The options I am considering are Python (mostly with the Django framework) and NodeJS (possibly with express.js). From the security point of view I would like to know the pros and cons of using each of these technologies.
Python vs NodeJS web applications from a security point of view
29,040,520
1
3
2,173
0
python,django,node.js,security,express
Disclaimer: I'm not a super expert on the topic, but I have worked a bit with both Node and Django. I'd say that it pretty much depends on what you're doing and how you set everything up, but with Django you're pretty much forced to set it up on Apache/Gunicorn (with Nginx), so you have that extra layer there that you can use as an additional layer of security, and Django has a lot of built-in packages to help with authentication, users, etc. But honestly, it boils down to how well structured your application is. I'd personally prefer Python for building a secure application, as for me it's easier to wrap my head around OOP logic in Python than to structure all your callbacks correctly in Node.
0
0
0
0
2015-03-13T18:49:00.000
2
0.099668
false
29,039,798
0
0
1
2
I am planning to build a web application which is security critical and I need to decide what technology to use on the back end. The options I am considering are Python (mostly with the Django framework) and NodeJS (possibly with express.js). From the security point of view I would like to know the pros and cons of using each of these technologies.
Simple REST API not based on a particular predefined model
29,043,714
0
0
27
0
python,django,rest
I think the key is to use the models differently. If you use one-to-many (ForeignKey) references in your model construction, you can more dynamically link different types of data together, then access that data from the parent object. For example, for your user you could create a basic user model and reference it from many other models, such as interests or occupation, and have those models store very dynamic data. When you have the root user model object, you can access its foreign-key objects either by iterating through the dictionary of fields returned by the object or by accessing the foreign-key references directly with model.reference_set.all().
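A sketch of that one-to-many layout; the models and the related-manager name are illustrative:

    from django.db import models

    class Profile(models.Model):
        name = models.CharField(max_length=100)

    class Interest(models.Model):
        profile = models.ForeignKey(Profile)  # add on_delete=... on Django >= 2.0
        label = models.CharField(max_length=100)

    # profile.interest_set.all() then returns that user's Interest rows.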
0
0
0
0
2015-03-13T22:07:00.000
1
0
false
29,042,787
0
0
1
1
Well, I am taking my first steps with Django and Django REST framework. The problem I face is that all the examples throughout the whole Internet are based on hard-coded models. But the whole concept of models frustrates me a little bit, because I'm used to dealing with different data which comes from numerous sources (various relational databases and NoSQL, all that stuff). So I do not want to stick to a particular model with a fixed number of predefined fields; instead I want to specify them just at the moment when a user goes to a particular page of my app. Let's say I have a table or a collection in one of my databases which stores information about users, and it has all kinds of fields (not just email, name and the like, as in all those examples throughout the web). So when a user goes to /users/, I connect to my database, get my table, set my cursor and populate my resulting dictionary with all the rows and all the fields I need, and the REST API does all the rest. So I need a "first-step" example which starts from the data, not from a model: you have a table "items" in your favorite database; when a user goes to /items/, he or she gets all the data from that table. To make such a simplistic API, you should do this and this... I need this kind of example.
Cron job on google cloud managed virtual machine
29,050,842
1
0
1,243
1
python,google-app-engine,cron,virtual-machine,google-compute-engine
The finest resolution of a cron job is 1 minute, so you cannot run a cron job once every 10 seconds. In your place, I'd run a Python script that starts a new thread every 10 seconds to do your MySQL work, accompanied by a cron job that runs every minute. If the cron job finds that the Python script is not running, it restarts it (i.e., the crontab line would look like * * * * * /command/to/restart/Python/script). Worst-case scenario, you'd miss 5 runs of your MySQL worker threads (about 50 seconds' worth).
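A minimal sketch of the 10-second worker loop; the MySQL work itself is stubbed out:

    import threading
    import time

    def do_mysql_work():
        pass  # query the web APIs and write the results to MySQL

    while True:
        # fire-and-forget so a slow run doesn't delay the next one
        threading.Thread(target=do_mysql_work).start()
        time.sleep(10)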
0
1
0
0
2015-03-14T01:06:00.000
2
1.2
true
29,044,322
0
0
1
1
I have a Python script that queries some data from several web APIs and, after some processing, writes it to MySQL. This process must be repeated every 10 seconds. The data needs to be available to Google Compute instances that read MySQL and perform CPU-intensive work. For this workflow I thought about using Google Cloud SQL and running App Engine to query the data. NOTE: the Python script does not run on GAE directly (it imports pandas and scipy) but should run on a properly set up App Engine Managed VM. Finally, the question: is it possible, and would it be reasonable, to schedule a cron job on an App Engine Managed VM to run a command invoking my data collection script every 10 seconds? Any alternatives to this approach?
Best way to store auth credentials on fabric deploys?
29,063,107
0
0
75
0
python,fabric
We have a local credentials YAML file that contains all of these; fab reads the credentials from it and uses them during the deployment only.
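A sketch of that pattern; the file name and key layout are assumptions:

    import yaml  # PyYAML

    with open('credentials.yml') as fh:
        creds = yaml.safe_load(fh)

    db_user = creds['database']['user']
    db_password = creds['database']['password']
    # keep credentials.yml out of version control, e.g. via .gitignore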
0
0
0
1
2015-03-15T15:04:00.000
1
0
false
29,062,125
0
0
1
1
I am trying to learn how to quickly spin up a DigitalOcean/EC2 server to temporarily run a Python worker script (for parallel performance gains). I can conceptually grasp how to do everything except how/where to store certain auth credentials. These would be things like:
- git username/password to access private repos
- AWS auth credentials to access an SQS queue
- database credentials
- etc.
Where do I store this stuff when I deploy via a fabric script? A link to a good tutorial would be very helpful.
Why does my web2py app keep logging me out
29,313,802
0
0
120
0
python,web2py
Your comment hints at the answer: when you log into the admin interface and then refresh your website, the site is now accessed through the admin session, which has no client user logged in. One solution is to use one browser for admin and a different browser for the client site.
0
0
0
0
2015-03-16T12:05:00.000
1
0
false
29,076,384
0
0
1
1
I recently deployed a web2py app, and am going through the debugging phase. Part of the app includes an auth.wiki, which mostly works great. Last night I added several pages to the wiki with no problems. However, today, whenever I navigate to the wiki or try to edit a page, I'm immediately logged out. Any suggestions? I can't interact with the wiki if I'm not logged in... EDIT: It's not just the wiki, I keep getting logged out of the whole site. Other users do not have this problem. It continues even when I select "remember me for 30 days" on login.
How can I get user input from the browser using Python
37,997,045
0
0
754
0
python,web-development-server
For that you would need a web framework like Bottle or Flask; Bottle is a simple WSGI-based web framework for Python. Using either of these, you can write simple REST-based APIs: one to set and the other to get. The "set" one could accept data from your client side and store it in your database, whereas your "get" API should return the data by reading it from your DB. Hope it helps.
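A minimal Flask sketch of such a set/get pair; the in-memory list stands in for real database access:

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    comments = []  # replace with real database reads/writes

    @app.route('/comments', methods=['POST'])
    def add_comment():
        comments.append(request.form['text'])
        return jsonify(ok=True)

    @app.route('/comments')
    def list_comments():
        return jsonify(comments=comments)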
0
0
1
0
2015-03-16T22:50:00.000
1
0
false
29,088,344
0
0
1
1
I am in the middle of my personal website development, and I am using Python to create a "comment section" where my visitors can leave comments in public (which means everybody can see them, so don't worry about user name registration). I have already set up the SQL database to store that data, but the one thing I haven't figured out yet is how to get the user input (their comments) from the browser. So, are there any modules in Python that could do that? (Like the "CharField" things in Django; unfortunately I don't use Django.)
How can we get a list of URLs after crawling a website with Scrapy in a custom Python script?
29,092,568
0
0
343
0
python,python-2.7,web-crawler,scrapy
You can use a file to pass the URLs from Scrapy to your Python script. Or you can print the URLs with a marker in your Scrapy code, have your Python script capture Scrapy's stdout, and then parse it into a list.
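A sketch of the file-based option: a spider that appends each crawled URL to a file the outer script reads afterwards (domain and selectors are illustrative):

    import scrapy

    class LinkSpider(scrapy.Spider):
        name = 'links'
        start_urls = ['http://example.com']

        def parse(self, response):
            with open('urls.txt', 'a') as fh:
                fh.write(response.url + '\n')  # record every visited URL
            for href in response.css('a::attr(href)').extract():
                yield scrapy.Request(response.urljoin(href), callback=self.parse)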
0
0
1
0
2015-03-17T06:01:00.000
2
0
false
29,092,291
0
0
1
1
I am working on a script where I need to crawl websites, and I need to crawl only the base_url site. Does anyone have a good idea of how to launch Scrapy from a custom Python script and get the crawled URL links as a list?
Connect MySQL Workbench with Django in Eclipse on a Mac
29,493,720
1
0
1,235
1
python,mysql,django,eclipse,pydev
I am a Mac user, and I have luckily overcome the issue of connecting Django to MySQL Workbench. I assume that you have already installed the Django package and created your project directory, e.g. mysite. Initially, after installing MySQL Workbench, I created a database:

    create database djo;

Go to mysite/settings.py and edit the following block. NOTE: keep the engine name "django.db.backends.mysql" while using a MySQL server, and STOP any other Django MySQL service which might be running.

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',  # Or 'postgresql_psycopg2', 'sqlite3', 'oracle'.
            'NAME': 'djo',          # Or path to database file if using sqlite3.
            # The following settings are not used with sqlite3:
            'USER': 'root',
            'PASSWORD': '****',     # Replace **** with your set password.
            'HOST': '127.0.0.1',    # Empty for localhost through domain sockets, or '127.0.0.1' for localhost through TCP.
            'PORT': '3306',         # Set to empty string for default.
        }
    }

Now run manage.py to sync your database:

    $ python mysite/manage.py syncdb
    Creating tables ...
    Creating table auth_permission
    Creating table auth_group_permissions
    Creating table auth_group
    Creating table auth_user_groups
    Creating table auth_user_user_permissions
    Creating table auth_user
    Creating table django_content_type
    Creating table django_session
    Creating table django_site

    You just installed Django's auth system, which means you don't have any superusers defined.
    Would you like to create one now? (yes/no): yes
    Username (leave blank to use 'ambershe'): root
    Email address: [email protected]
    /Users/ambershe/Library/Containers/com.bitnami.django/Data/app/python/lib/python2.7/getpass.py:83: GetPassWarning: Can not control echo on the terminal.
      passwd = fallback_getpass(prompt, stream)
    Warning: Password input may be echoed.
    Password: ****
    Warning: Password input may be echoed.
    Password (again): ****
    Superuser created successfully.
    Installing custom SQL ...
    Installing indexes ...
    Installed 0 object(s) from 0 fixture(s)
0
0
0
0
2015-03-17T14:56:00.000
1
1.2
true
29,102,422
0
0
1
1
I am new to this, so this is a silly question. I am trying to make a demo website using Django, and for that I need a database. I have downloaded and installed MySQL Workbench for this, but I don't know how to set it up. Thank you in advance :) I tried googling stuff but didn't find an exact solution. Please help.
Commands not working in ScrapyProject
29,121,164
0
0
40
0
python-2.7,scrapy,virtualenv,virtualenvwrapper
You need to pip install all the packages for your setup inside the virtualenv.
0
0
0
0
2015-03-18T05:02:00.000
1
0
false
29,114,480
0
0
1
1
I created a virtualenv named ScrapyProject. When I use the scrapy command or the pip command it does not work, but when I enter the python command it works. Here is what it shows me: (ScrapyProject) C:\Users\Jake\ScrapyProject>scrapy (ScrapyProject) C:\Users\Jake\ScrapyProject>pip (ScrapyProject) C:\Users\Jake\ScrapyProject>python python2.7.6 etc. Here is how the paths are in the virtualenv: C:\Users\Jake\ScrapyProject\Scripts; then some Windows paths, then some Python paths. There are no extra spaces between them, I am sure! And here is how my Python paths look: C:\python27;C:\Python27\Lib\site-packages. Can anybody help me? If anyone needs some extra information, I will provide it; I totally did not understand this!
nginx + uwsgi + virtual environment. What goes inside?
29,134,999
3
2
445
0
python,nginx,flask,virtualenv,uwsgi
First of all, Nginx never goes in the virtualenv. It is an OS service and has nothing to do with Python; it only serves web requests and knows how to pass them to an upstream service (like uWSGI). Second, don't put things in the virtualenv that don't need separate versions. uWSGI is quite stable now, so you will almost never need separate versions; don't put it in the venv. Third, when you plan for production deployment, keep things as simple as possible: any added complexity will only make the chance of failure higher. So do not put a venv on your prod servers until you absolutely need it, and even then you are probably putting too much stuff on that server. Keep your servers single-minded. I find it easier to use multiple machines (especially with cloud services like AWS) that each have one purpose than to cram everything onto one big machine (where one screwball process can eat all the memory from everybody else). Fourth, when you do need more Python projects/services, it is better to separate them with something like Docker, since then they are more maintainable and better isolated from the rest.
0
0
0
0
2015-03-18T22:33:00.000
1
1.2
true
29,133,963
0
0
1
1
Seems like a simple question but I cannot find it addressed anywhere. Every tutorial I look at does things slightly differently, and I'm pretty sure I've seen it done both ways. In my development environment, python, flask, and all other dependencies of my application go inside the Virtual Environment. When configuring a production environment, do Nginx and uWSGI go inside the virtual environment? Thanks!
Auto update cache in Django
29,141,865
2
1
537
0
python,django,caching
You can write a so-called warm-up script. This is just a script that opens the URLs you want to have in the cache. Run this script as a periodic task; the simplest version would be a shell script with curl statements in it that is periodically executed by cron. The interval at which you call it depends on your cache settings: if you configured pages to stay 10 minutes in the cache, calling the script every 10 minutes makes sure everything is always in the cache.
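A Python variant of such a warm-up script, shown here with the requests library instead of curl; the URLs and the cron line are illustrative:

    # Schedule with cron, e.g.: */10 * * * * /usr/bin/python warmup.py
    import requests

    URLS = ['https://example.com/', 'https://example.com/products/']

    for url in URLS:
        requests.get(url)  # primes the cache; the response body is discarded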
0
0
0
0
2015-03-19T09:57:00.000
1
0.379949
false
29,141,492
0
0
1
1
I have a problem when using the cache in Django: is it possible to load a page and update the cache automatically? I don't want my first user to wait while the cache is being updated. Thank you.
Registering output to stdout through wcf in IronPython?
29,182,774
0
0
36
0
wcf,ironpython
Resolved like this: I have a memory stream on the server which compiles and executes the code, then reads the data from stdout and sends it back to the client.
0
0
0
0
2015-03-20T09:40:00.000
1
1.2
true
29,163,495
0
0
1
1
Is it possible to register the console output of a client through a server? I'm assuming this can be done through a NetworkStream? Right now, I register the output of a desktop app to stdout through the SetOutput method provided inside Runtime.IO of IronPython. This method accepts a Stream as an argument, but the problem is: how can I send that data back to the client through a stream from WCF?
Orientation error for images uploaded to GAE (GCS + get_serving_url)
29,222,762
-1
1
218
0
google-app-engine,google-cloud-storage,google-app-engine-python
I think it happens because when the get_serving_url service resizes the image, it always resizes from the longest side of the image, keeping the aspect ratio the same. If you have an image of 1600x2400, the resized image is 106x160 to keep the aspect ratio the same. In your case, one of the images is 306x408 (which is correct), as it was resized from the height, and the other image is 360x270 (in which the orientation changed), resized from the width. I think in the latter case the orientation is changed just to keep the aspect ratio the same.
0
1
0
0
2015-03-23T04:31:00.000
1
-0.197375
false
29,203,302
0
0
1
1
We are developing an image-sharing service using GAE. Many users have reported since last week that "portrait images are oriented in landscape". We found out that from a specific time, the handling of images uploaded to and distributed through GAE changed; the change seems to have happened around 3/18 03:25 (UTC). The "Orientation" Exif tag is no longer properly applied. We are using GAE/Python. We save images uploaded by the users to Google Cloud Storage, then use the URL we get with get_serving_url to distribute them. Is this problem temporary? Also, is it possible to return to the behavior from before 3/18 03:22 (UTC)?
How to make most common OpenID logins look the same?
30,144,643
0
0
21
0
oauth,openid,python-social-auth
You could simply use the yahoo site's favicon. It's already squared and gets as close to official as it can get.
0
0
0
1
2015-03-23T11:20:00.000
1
0
false
29,208,955
0
0
1
1
I want my users to be able to log in using all the common OpenIDs, but there seems to be a forest of clauses on how to use logos from Google, Facebook, Yahoo and Twitter. Actually, I'd prefer a wider button with the text on it, but it seems that these buttons don't all have the same aspect ratio, and some of them I am not allowed to scale. Also, some pages have switched to JavaScript buttons, which seems incredibly stupid, because it causes the browser to download all the JavaScript, and the page looks like 1990, where you can watch individual elements loading. So finally I surrendered and went for the Stack Overflow approach of using square logos with the text "login with" next to them. But now I can't find an official source for a square Yahoo icon. The question: can you recommend any articles, or do you have any tips on how to make OpenID login look uniform? And: have you seen an official source for a square icon that I'm allowed to use? PS: I'm using python-social-auth for Django.
django translation doesn't work but translation in templates works
33,277,961
0
1
3,112
0
python,django,internationalization,translation,gettext
I think the problem lies in your MIDDLEWARE_CLASSES. The thing is, there are some middlewares that might change your request, for example by adding a language prefix, especially when you use AJAX calls to query extra template data translated by ugettext, gettext, etc.
0
0
0
0
2015-03-24T10:08:00.000
3
0
false
29,229,735
0
0
1
2
I am trying to translate my Django site into other languages, but translation in Python code doesn't work, while translation in templates using the trans tag works as expected. I have tried ugettext, gettext, gettext_lazy and ugettext_lazy, and every time I got the original untranslated strings. My sources are all in UTF-8 encoding; the original strings are in Ukrainian.
django translation doesn't work but translation in templates works
33,302,047
0
1
3,112
0
python,django,internationalization,translation,gettext
ugettext_lazy will not work if the string contains non-Latin symbols; in my case the original strings had to be unicode objects.
0
0
0
0
2015-03-24T10:08:00.000
3
0
false
29,229,735
0
0
1
2
I am trying to translate my Django site into other languages, but translation in Python code doesn't work, while translation in templates using the trans tag works as expected. I have tried ugettext, gettext, gettext_lazy and ugettext_lazy, and every time I got the original untranslated strings. My sources are all in UTF-8 encoding; the original strings are in Ukrainian.
How to write a stress test script for asynchronous processes in python
29,233,194
0
0
988
0
python,django,web-applications,stress-testing,djcelery
I think there is no need to write your own stress-testing script. I have used www.blitz.io for stress testing. It is set up in minutes, easy to use, and it makes beautiful graphs. It has a 14-day trial, so you can just test the heck out of your system for 14 days for free. This should be enough to find all your bottlenecks.
0
1
0
0
2015-03-24T10:36:00.000
1
0
false
29,230,333
0
0
1
1
I have a web application running on Django, wherein an end user can enter a URL to process. All the processing tasks are offloaded to a celery queue, which sends a notification to the user when the task is completed. I need to stress test this app with the following goals:
- to determine breaking points or safe usage limits
- to confirm intended specifications are being met
- to determine modes of failure (how exactly the system fails)
- to test stable operation of a part or system outside standard usage
How do I go about writing my script in Python, given that I also need to take the offloaded celery tasks into account?
How does django call the .py files at root directory?
29,238,435
2
0
73
0
python,django
That is not a part of a standard Django project and is not automatically called by Django itself. I guess it's just a standalone file that the developer created, to run from the command line to populate the db.
0
0
0
0
2015-03-24T16:38:00.000
1
1.2
true
29,238,171
0
0
1
1
I am working on a Django project that someone else started, and I'm new to Django, so I'm not sure about the workflow. I see that there is a file called load_fixtures.py in the root directory (so the file is a sibling of manage.py), but I don't understand why (and when) that file is called. When? Does it get called only during syncdb? Why, i.e. which file includes/calls it? Is it called just because it's in the root directory? Or does a line in settings.py include it? Thanks in advance!
IE11 stuck at initial start page for webdriver server only when connection to the URL is slow
39,231,646
0
0
2,136
0
python,selenium,selenium-webdriver,webdriver,internet-explorer-11
Use a remote driver with the desired capability (pageLoadStrategy). From the release notes on seleniumhq.org: note that we had to use version 2.46 for the jar, IEDriverServer.exe and the Python client driver in order to have things work correctly; it is unclear why 2.45 does not work, given the release notes below.

v2.45.0.2
Updates to JavaScript automation atoms.
Added pageLoadStrategy to IE driver. Setting a capability named pageLoadStrategy when creating a session with the IE driver will now change the wait behavior when navigating to a new page. The valid values are:
- "normal": waits for document.readyState to be 'complete'. This is the default, and is the same behavior as all previous versions of the IE driver.
- "eager": will abort the wait when document.readyState is 'interactive' instead of waiting for 'complete'.
- "none": will abort the wait immediately, without waiting for any of the page to load.
Setting the capability to an invalid value will result in use of the "normal" page load strategy.
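A sketch of setting that capability from the Python bindings; exact constructor keywords vary a little between Selenium client versions:

    from selenium import webdriver
    from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

    caps = DesiredCapabilities.INTERNETEXPLORER.copy()
    caps['pageLoadStrategy'] = 'eager'  # or 'normal' / 'none'
    driver = webdriver.Ie(capabilities=caps)
    driver.get('http://localhost/app')  # placeholder URL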
0
0
1
0
2015-03-24T19:35:00.000
3
0
false
29,241,353
0
0
1
2
With the IE webdriver, it opens the IE browser, which starts to load the local host and then stops (i.e., it never finishes loading). When the browser stops loading, it shows the message 'Initial start page for webdriver server'. The problem is that this does not occur every time I execute the test case, making it difficult to identify the cause of the issue. What I have noticed is that when the issue occurs, the URL takes ~25 secs to load manually on the same machine; when the issue does not occur, the URL loads within 3 secs. All security settings are the same (Protected Mode enabled across all zones), Enhanced Protected Mode is disabled, the IE version is 11, and the URL is added as a trusted site. Any clue why it does not load the URL sometimes?
IE11 stuck at initial start page for webdriver server only when connection to the URL is slow
62,954,013
0
0
2,136
0
python,selenium,selenium-webdriver,webdriver,internet-explorer-11
This hasn't been updated for a while, but recently I had a very similar issue: IEDriverServer would eventually open the page under test, but in most cases just got stuck on the initial page of WebDriver. What I found to be the root cause (in my case) was the startup setting of IE. I had "Start with tabs from the last session" enabled; when I changed back to "Start with home page", the driver started to work like a charm, opening the page under test in 100% of tries.
0
0
1
0
2015-03-24T19:35:00.000
3
0
false
29,241,353
0
0
1
2
With the IE webdriver, it opens the IE browser, which starts to load the local host and then stops (i.e., it never finishes loading). When the browser stops loading, it shows the message 'Initial start page for webdriver server'. The problem is that this does not occur every time I execute the test case, making it difficult to identify the cause of the issue. What I have noticed is that when the issue occurs, the URL takes ~25 secs to load manually on the same machine; when the issue does not occur, the URL loads within 3 secs. All security settings are the same (Protected Mode enabled across all zones), Enhanced Protected Mode is disabled, the IE version is 11, and the URL is added as a trusted site. Any clue why it does not load the URL sometimes?
Google App Engine Faceted Search in production has to be enabled / activated?
29,302,041
1
1
223
0
python,google-app-engine,faceted-search
A facet value cannot be an empty string. You can work around it by not including facets with empty values, or by using a special value for your empty facets. The local implementation of faceted search (Python) currently accepts empty facets; that is a bug and will be fixed.
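A sketch of the first workaround, attaching the facet only when the value is non-empty; doc_id and fields are placeholders:

    from google.appengine.api import search

    def build_document(doc_id, fields, prop_value):
        facets = []
        if prop_value:  # skip the facet entirely when the value is empty
            facets.append(search.AtomFacet(name='propName', value=prop_value))
        return search.Document(doc_id=doc_id, fields=fields, facets=facets)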
0
1
0
0
2015-03-24T20:01:00.000
2
1.2
true
29,241,797
0
0
1
1
I've just created, locally on my machine, a perfectly running faceted search service using Google App Engine Faceted Search, written in Python. As soon as I deploy to our production server, it throws an error during index creation, specifically when the code tries to execute index.put(docs), where docs is an array of (at most 100) search.Document. The error is: "PutError: one or more put document operations failed: Value is empty". I then tried to step back to the previous version of my service, which was working like a charm until then: I removed all the newly added search.TextField entries and removed facets=[search.AtomFacet(...)] from the search.Document constructor keywords, and it started working again. Then, a baby step forward, I added all the fields I needed, but still no facets=[] in the constructor; it worked. As soon as I added facets=[search.AtomFacet(name='propName', value=doc.propName if doc.propName else '')] again, the error appeared again, whereas locally on my machine it works perfectly. Is there any setting or configuration we need to enable on the production server to have this feature? Thank you
Why is Java ProcessBuilder 4000 times slower at running commands than Python subprocess.check_output
29,289,687
0
2
1,183
0
java,python,performance,subprocess,processbuilder
It seems like Python wasn't actually spawning a new subprocess, which is why it was faster. I am sorry for the confusion. Thank you.
0
1
0
1
2015-03-24T22:03:00.000
1
0
false
29,243,748
0
0
1
1
I was trying to write a wrapper for a third-party C tool using Java's ProcessBuilder. I need to run this process builder millions of times. But I found something weird about the speed. I already have a wrapper for this third-party C tool in Python; that wrapper uses Python's subprocess.check_output. So I ran the Java wrapper 10000 times with the same command, and also ran the Python wrapper 10000 times with the same command. With Python, my 10000 tests ran in about 0.01 second; with Java's ProcessBuilder, they ran in 40 seconds. Can someone explain why I am getting such a large difference in speed between the two languages? You can try this experiment with a simple command like "time".
Django: Run process on server that updates data periodically
29,244,430
1
0
868
0
python,django
If this script is for a specific app, you could make an [app]/scripts/ directory. If it's for the project as a whole, you could make a scripts directory in the project root. Then you would use a task scheduler, like cron if you're on *nix, to run that script however often you'd like.
0
0
0
0
2015-03-24T22:48:00.000
3
0.066568
false
29,244,337
0
0
1
1
I have a Django web server; on this server, I'd like to run a process (say, every couple of hours) that updates an app's database. To be more specific, my Django site hosts a large list of words that are combed from Google Trends, and I want to run a process on the server that updates that data periodically. I have already created this process and can run it off my machine; I just don't have it on the server yet. Where in a Django project may I integrate this process?
Matplotlib & Dynamic Web Page
29,267,045
0
0
424
0
javascript,jquery,python,django,matplotlib
For providing a web service, you will probably want to reorganize your application code such that you can share the matplotlib parts between a wxPython application and a separate web service application. (In other words, move the non-wx application code to a separate set of packages that can be shared with your new web application code.) Once that is done, you can do things like have your new web application use the shared code, have matplotlib generate plots as PNG files, and serve those PNGs to the browser.
0
0
0
0
2015-03-25T06:39:00.000
1
0
false
29,249,041
0
0
1
1
I have been using wxPython, Matplotlib, NumPy and SciPy for quite a time and am familiar with developing web pages. Is it possible, using wxPython and Matplotlib, to create or render a dynamic web page via Apache? wxPython has its own output window, which I do not want to use. My aim is to create a dynamic web page using JS/jQuery where users can enter values in text boxes, select via drop-down lists, etc. (all HTML UI items); then a plot is rendered in the same web page. The user selects or modifies the values over and over again, and the graph gets modified accordingly.
Best way to integrate your Django project homepage
29,264,294
0
0
43
0
python,django
Along with the homepage, you're going to encounter other things that don't really fit into an application, such as shared utils, base templates, an about page, a contact page, etc. For these things it's generally best to put them into an application that is general to the project; I name my general application "main".
0
0
0
0
2015-03-25T18:52:00.000
1
1.2
true
29,264,142
0
0
1
1
What would be considered the best practice for integrating your website's homepage in your Django project? Should you make a new application, naming it "homepage" and place the view for it in there? Or is it considered a waste of space to create an entire application just for that? Or should you just stick it somewhere in one of your applications randomly? Or is there some other better option I'm not seeing?
Flask with HBase
32,148,555
1
1
1,213
0
python,hadoop,solr,flask,hbase
I'm looking into this as well. I've found the "happybase" Python module; that should help connect Python Flask with HBase.
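A minimal happybase sketch; the host, port, table name and row key are placeholders:

    import happybase

    # happybase talks to HBase through the Thrift gateway (default port 9090)
    connection = happybase.Connection('hbase-thrift-host', port=9090)
    table = connection.table('mytable')
    row = table.row(b'row-key-1')  # dict of {b'family:qualifier': value}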
0
0
0
0
2015-03-25T23:53:00.000
1
0.197375
false
29,268,732
0
0
1
1
I am designing an API service for a fairly big data set. The data is currently stored in HDFS, and we (BAs) usually query it from Hive. In the end, we have several tables that we want to expose to customers in the form of an API, and the API might also be used in the future to back a frontend app. I am a Python programmer and have used Flask before. However, what is the correct technology combo to build an API service that can scale well? I heard some people mention that "HBase + SolrCloud" would be the solution. Any suggestion will be super helpful, and I will delete this post if you think this is not programming related. (I am also open to PaaS/IaaS like AWS or Google Cloud if they actually have a decent package already.)
Django application on server with no root access
29,275,740
1
0
615
0
python,django,web-hosting
You shouldn't be installing as root anyway. Use a virtualenv in your allocated directory, and install everything in there. You should be using a virtualenv in development as well, so that you can create a requirements file as part of your codebase which can then be used to rebuild everything in production. If the host doesn't include mod_wsgi, there are plenty of alternatives these days. Mostly I'm using gunicorn for my projects; it's easy to set up, and you can run it yourself, with Apache or nginx simply working as a reverse proxy to send requests there. However, that does depend on the ability to run long-running processes on your host; if they don't allow that (some don't), it's a problem. An alternative might be FCGI, although at this point I'd probably be looking for an alternative host.
0
0
0
0
2015-03-26T09:57:00.000
1
0.197375
false
29,275,491
0
0
1
1
I am about to deploy my Django application on the production server. Currently, I am facing a bad thing: the production server is a shared web hosting platform (on OVH). So, the first problem I have is that I cannot install django myself. Also, I am not sure about mod_wsgi being installed (or installable) on Apache. My question is: is there a workaround to installing django? If so, where do I find some documentation to do it? Thanks.
Is django thread blocked while sending email?
29,284,405
1
3
822
0
python,django,multithreading,email,celery
I confirm the thread handling the request will be blocked until the email is sent. In a typical Django setup one thread is created per request.
0
0
0
1
2015-03-26T16:39:00.000
2
0.099668
false
29,284,106
0
0
1
2
I have a django-rest-framework API and am trying to understand how sending email from it works. Suppose I'm using django.core.mail.backends.smtp.EmailBackend as the email backend to send emails. Sending an email is quite slow, and I'm wondering if the Django main thread will be blocked during that time, so that the other API endpoints would be unusable. Is that true? Would it be a good call to send the email in a background process created by Celery, for example?
Is django thread blocked while sending email?
29,284,457
3
3
822
0
python,django,multithreading,email,celery
Yes, the Django thread is blocked for that particular request. You might want to use Celery along with RabbitMQ for sending mail in the background.
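A minimal sketch of pushing the send into a Celery task; the task name and arguments are illustrative:

    from celery import shared_task
    from django.core.mail import send_mail

    @shared_task
    def send_email_async(subject, message, from_email, recipient_list):
        send_mail(subject, message, from_email, recipient_list)

    # In the view: send_email_async.delay(...) returns immediately,
    # and a Celery worker performs the slow SMTP conversation.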
0
0
0
1
2015-03-26T16:39:00.000
2
1.2
true
29,284,106
0
0
1
2
I have a django-rest-framework API and am trying to understand how sending email from it works. Suppose I'm using django.core.mail.backends.smtp.EmailBackend as the email backend to send emails. Sending an email is quite slow, and I'm wondering if the Django main thread will be blocked during that time, so that the other API endpoints would be unusable. Is that true? Would it be a good call to send the email in a background process created by Celery, for example?
App level settings in Django
29,288,008
4
0
837
0
python,django,django-settings,django-apps
Make a global settings.py, and specify which settings are needed in the apps documentation. You should not make app-specific settings.py files, unless they have some unrelated, internal use. You might want to use something like getattr(settings, 'SOME_SETTING_NAME', 'default value') to fetch the option with a default. If you want to have separate settings for prod/staging/dev, then you'll want to make a settings/ folder with an __init__.py file that imports based on an ENV variable.
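For example, a small app-level conf module along these lines gives you overridable defaults (the setting name and URL are illustrative):

    # myapp/conf.py -- app defaults, overridable from the project settings
    from django.conf import settings

    # Falls back to the default when the project settings don't define it
    API_URL = getattr(settings, 'MYAPP_API_URL', 'https://staging.example.com/api/')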
0
0
0
0
2015-03-26T20:14:00.000
1
0.664037
false
29,287,966
0
0
1
1
What is the convention about including app-level settings in Django? For e.g. I am writing a Django app that has a view that makes an API call to another webservice, gets its response, modifies it and returns it back. Now, I would like to have a settings variable that will hold the value of this API URL (and depending upon whether the Django web server is running in stage/dev/prod, the API URL will differ). Should I create a settings.py in the app directory? Or modify the project's settings.py? If I do the latter, then the app no longer remains pluggable/portable. Is that okay? Please provide a rationale for your response (i.e. please explain why it may be a good idea to include the setting in the global settings.py even though it reduces the portability of the app). Thanks.
how can I uninstall scrapy
29,298,450
2
2
9,158
0
python-2.7,scrapy
Look in /usr/local/lib/python2.6/dist-packages or /usr/local/lib/python2.6/site-packages and remove any scrapy directories (and files) you find there. Next time, use "pip install" instead of easy_install, so you can use "pip uninstall" to uninstall.
0
0
0
0
2015-03-27T08:22:00.000
1
0.379949
false
29,296,153
1
0
1
1
I have installed scrapy 0.14 through easy_install scrapy, but now I find that scrapy 0.24.5 is more useful. I'd like to uninstall the old scrapy and install the new one with pip; how can I uninstall the old one?
How to use custom authentication with the login: required attribute in app.yaml ( Google app engine, python )
29,311,398
2
0
648
0
python,google-app-engine,authentication,yaml
Essentially, you have the following alternatives: either give up on static file / dir serving directly from App Engine infrastructure (transparently to your application), or give up on using your custom user class for authentication. I suspect you'll pick the first alternative, serving all files from your app (at least, all files that must be kept secret from all but authorized users) -- that "just" costs more resources (and possibly slightly increases latency for users), but lets you implement whatever functionality you require. The advantage of serving static files/dirs directly with the static_files: &c directives in app.yaml is that your app does not actually get involved -- App Engine's infrastructure does it all for you, which saves you resources and possibly makes things faster for users (better caching/CDN-like delivery). But if your app does not actually get involved, then how could any code you wrote for custom auth possibly be running?! That would be a logical contradiction... If you're reluctant to serve static files from your app specifically because they're very large, then you can get the speed fully back (and then some), and some resource savings back too, by serving the URL from your app, but then, after authentication, going right on to Google Cloud Storage for it to actually do the serving. More generally, a mix of files you don't actually need to keep secret (place those in static_dir &c app.yaml directives), ones that are large enough to warrant serving from Cloud Storage, and ones your app can best serve directly, can let you optimize along all fronts -- while keeping full control of your custom auth wherever it matters!
0
1
0
0
2015-03-27T15:29:00.000
1
1.2
true
29,304,395
0
0
1
1
On Google app engine I use a custom user class with methods. ( Not the class and functions provided by webapp2 ) However, I still need to block users from accessing certain static directory url's with html pages behind them. The current solution I have is that the user authentication happens after the user visits the page, but they still see the entire page loaded for a moment. This looks bad and is not very secure. How can I use a custom authentication option with the login : required attribute in the YAML file? So that users are immediately redirected ( before landing on the page ) when they are not logged in.
Openshift custom env vars not available in Python
29,305,330
1
0
37
0
python,openshift,django-1.7
You probably just need to stop & start (not restart) your application via the rhc command line so that your python environment can pick them up.
0
1
0
0
2015-03-27T16:14:00.000
1
1.2
true
29,305,308
0
0
1
1
I'm trying to get a Python 2.7, Django 1.7 web gear up and running. I have hot_deploy activated. However, after setting my required env vars (via rhc), and I see them set in the gear ('env | grep MY_VAR' is OK), when running the WSGI script the vars are NOT SET. os.environ['MY_VAR'] yields KeyError. Is this somehow related to hot_deploy?
Architecture of a Django project
29,322,694
1
1
889
0
python,django,django-apps
Unsurprisingly, the general recommendation would be to put your view code in views.py, your model code in models.py and your form code in forms.py. You have the ability to put code more or less wherever you want it, but you are better off sticking with these recommendations as a beginner. Since you want to be sure that an added user isn't already in the database that would best be handled in the view code, but there's nothing in principle wrong with using a model method to check new save()s for duplication. It's a matter of whether the functionality is required anywhere else. Matters of application architecture can be difficult for newcomers. The recommendations in the book "Two Scoops of Django" embody many best practices.
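To make the forms.py option concrete, a sketch of a clean method for the registration case might look like this (the field length and message text are assumptions):

    # forms.py
    from django import forms
    from django.contrib.auth.models import User

    class RegistrationForm(forms.Form):
        username = forms.CharField(max_length=30)

        def clean_username(self):
            username = self.cleaned_data['username']
            # Reject usernames that already exist in the database
            if User.objects.filter(username=username).exists():
                raise forms.ValidationError('That username is already taken.')
            return username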
0
0
0
0
2015-03-28T20:48:00.000
1
1.2
true
29,322,442
0
0
1
1
I have just started learning Django and I am having confusion regarding the architecture of a django project. Basically what I want to know is the recommended way to design a django application ie: what type of code do I put in the models file, the views file and where do I write the validators etc. As an example, suppose that while creating a registration form to add a new user I want to make sure that the user does not register with a username that is already present in the database. As per my observation there are three ways to do it. I could define a method in the models.py file and call it after getting data from the form. I could define a method in the views.py file and call the method. I could write a custom validator or a clean method in the forms.py file. As a beginner I am confused as per what approach would be best. So a basic set of rules to follow that can help me decide what type of code is written where will greatly help me. Thanks
Virtualenv, Django and PyCharm. File structure
29,325,014
1
0
473
0
python,django,virtualenv,pycharm
virtualenv is not just a list of dependencies! It actually has all the modules under its umbrella. Think of a virtualenv as a space which isolates all the packages used by your project from the rest of the packages that were installed previously or at a later time. Yes, there is an option to have the virtualenv make use of packages that are "outside" of the environment, but that's just an option. The main purpose of having a virtualenv is to enable the user to use package versions of his choice and keep them isolated from the rest of the space. Usually, the list of packages belonging to a specific virtualenv is captured in a file, requirements.txt. If you want to run the project on a different machine or share it with someone, having requirements.txt will make it easy to recreate the environment via pip install -r requirements.txt from within the virtualenv.
0
0
0
0
2015-03-29T02:20:00.000
2
0.099668
false
29,324,947
1
0
1
1
I am a newbie using VirtualEnv and recently tried to create one using PyCharm. During the process, PyCharm asked me to specify the project location, application name, and VirtualEnv name and location. My doubt is: after I specify the name and location of the VirtualEnv, must the Django project files be located inside the VirtualEnv? Or is it possible to have the VirtualEnv files in a different location than the Django project files? Maybe I am not understanding the purpose of the VirtualEnv. Perhaps VirtualEnv is just a list of the dependencies of my project - Python version, Django version, Pip version, Jinja2 version and all other required files - but not necessarily the Django application files (the website that is being developed). Thanks in advance.
How to use external Auth system in Pyramid
32,814,226
0
1
201
0
python,authentication,pyramid
There are two broad ways to integrate custom auth with Pyramid: - write your own authentication policy for Pyramid (I haven't done this) - write your own middleware to deal with your auth issues, and use the RemoteUserAuthenticationPolicy in Pyramid (I have done this) For the second, you write some standard wsgi middleware, sort out your custom authentication business in there, and then write to the wsgi env. Pyramid authorization will then work fine, with the Pyramid auth system getting the user value from the wsgi env's 'REMOTE_USER' setting. I personally like this approach because it's easy to wrap disparate apps in your middleware, and dead simple to turn it off or swap it out. While not really the answer to exactly what you asked, that might be a better approach than what you're trying.
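A rough sketch of that middleware shape (the actual auth check here is a placeholder you'd replace with your service call):

    # WSGI middleware that sets REMOTE_USER so Pyramid's
    # RemoteUserAuthenticationPolicy can pick it up
    class AuthMiddleware(object):
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            username = self._check_auth(environ)
            if username:
                environ['REMOTE_USER'] = username
            return self.app(environ, start_response)

        def _check_auth(self, environ):
            # Placeholder: inspect cookies/headers, call your auth
            # service once per session, and return a username or None.
            return environ.get('HTTP_X_DEMO_USER')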
0
0
0
1
2015-03-30T08:38:00.000
1
1.2
true
29,341,688
0
0
1
1
Context: My app relies on an external service for authentication; the Python API has a function authentiate_request which takes a request instance as a param and returns a result dict. If auth was successful, the dict contains 3 keys: successful: true; username: alice; cookies: [list of set-cookie headers required to remember the user]. If unsuccessful: successful: false; redirect: url where to redirect the user for web-based auth. Now, a call to this function is relatively expensive (it does an HTTP POST underneath). Question: I'm new to the Pyramid security model, and I'm struggling with how to use an existing AuthenticationPolicy (or properly write my own) for my app, so that it uses my auth service and does not call its API more than once per session (in the auth-success scenario)?
Connection with boto to AWS hangs when running with crond
29,407,280
1
1
774
0
python,amazon-web-services,amazon-ec2,cron,boto
The entire issue turned out to be the HTTP_PROXY environment variable. The variable was set in /etc/bashrc and all users got it this way, but when the cron jobs ran (as root), /etc/bashrc wasn't read and the variable wasn't set. Adding the variable to the configuration of crond (via crontab -e) solved the issue.
0
0
1
0
2015-04-01T16:23:00.000
2
0.099668
false
29,395,946
0
0
1
1
I have a very basic python script which uses boto to query the state of my EC2 instances. When I run it from the console, it works fine and I'm happy. The problem is when I want to add some automation and run the script via crond. I noticed that the script hangs and waits indefinitely for the connection. I saw that boto has this problem and that some people suggested adding a timeout value to the boto config file. I couldn't understand how and where, so I manually added an /etc/boto.cfg file with the suggested timeout value (5), but it didn't help. With strace you can see that this configuration file is never being accessed. Any suggestions on how to resolve this issue?
ImportError: No module named 'django' when in virtualenv
29,404,146
1
0
1,424
0
python,django
You have to install Django inside the virtualenv. The sudo command uses the global site-packages, so I guess Django is already installed globally. Activate the virtualenv, then pip install django will resolve your issue.
0
0
0
0
2015-04-02T01:20:00.000
2
0.099668
false
29,403,497
0
0
1
2
new to python and django and getting the ImportError when I run python manage.py runserver. I figured the problem was that django was not installed in the site_packages of the python version running in the virtualenv. I ran the command under sudo "sudo python manage.py runserver" and it works. So all is good. Can someone explain to a noob what I did wrong in installing django or setting up the virtualenv.
ImportError: No module named 'django' when in virtualenv
29,404,154
1
0
1,424
0
python,django
Did you remember to activate the virtual environment? Virtual environments never use the sudo command because nothing is being installed in the machine's local library. To activate the virtual environment, open up a terminal and type source /virtualenv/bin/activate.
0
0
0
0
2015-04-02T01:20:00.000
2
1.2
true
29,403,497
0
0
1
2
new to python and django and getting the ImportError when I run python manage.py runserver. I figured the problem was that django was not installed in the site_packages of the python version running in the virtualenv. I ran the command under sudo "sudo python manage.py runserver" and it works. So all is good. Can someone explain to a noob what I did wrong in installing django or setting up the virtualenv.
python-twitter - best location for Oauth keys in Django?
29,502,136
0
1
136
0
python,django,security,twitter-oauth
Many other libraries ask you to put your API keys in settings.py; this is also useful if you want to use them in different applications within your project.
0
0
0
1
2015-04-03T15:33:00.000
1
0
false
29,435,173
0
0
1
1
I've just started using python-twitter with django hosted on OpenShift and need to use Oauth. At the moment it's just on the dev server. Before I put it live, I was wondering if there's a "best" place to store my token / secret info? Right now I just have them in my views.py file but would it be safer to store them in settings.py and access them from there?
How many recipients can be added to BCC in python django
70,995,528
0
0
188
0
python-2.7,email,smtp,gmail,django-1.6
The BCC limit for Gmail is 500 recipients in any 24-hour period. If you want to send emails in bulk, you will need to send them in batches of 500 per request.
0
0
0
1
2015-04-06T05:28:00.000
1
0
false
29,465,822
0
0
1
1
How many maximum number of recipients can be added at a time to BCC field while sending a bulk e-mail? I'm using python Django framework and gmail, smtp for sending mail.
Can I cause a put to fail from the _pre_put_hook?
29,473,764
5
1
247
0
python,google-app-engine,google-cloud-datastore,app-engine-ndb
_pre_put_hook is called immediately before NDB does the actual put... so if an exception is raised inside of _pre_put_hook, then the entire put will fail
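In other words, a sketch like this is enough, with no wrapper needed (fetch_from_api is a hypothetical helper standing in for your API call):

    from google.appengine.ext import ndb

    class MyModel(ndb.Model):
        enriched = ndb.StringProperty()

        def _pre_put_hook(self):
            # fetch_from_api() is your own helper; raising here
            # aborts the put before anything is written.
            data = fetch_from_api()
            if data is None:
                raise RuntimeError('API unavailable, refusing to save')
            self.enriched = data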
0
0
1
0
2015-04-06T11:55:00.000
1
1.2
true
29,470,767
0
0
1
1
I'm using a pre put hook to fetch some data from an api before each put. If that api does not respond, or is offline, I want the request to fail. Do I have to write a wrapper around a put() call, or is there some way so that we can still type My_model.put() and just make it fail?
Selenium moving to absolute positions
29,564,991
0
3
1,318
0
python,html,selenium
The problem in my case was that I was not waiting for the element to load. At least I assume that was the problem, because if I let selenium wait for the element instead and then click on it, it works.
0
0
1
0
2015-04-07T10:05:00.000
2
1.2
true
29,488,957
0
0
1
2
I'm using the python package to move the mouse in some specified pattern or just random motions. The first thing I tried is to get the size of the //html element and use that to make the boundaries for mouse movement. However, when I do this the MoveTargetOutOfBoundsException rears its head and displays some "given" coordinates (which were not anywhere near the input. The code I used: origin = driver.find_element_by_xpath('//html') bounds = origin.size print bounds ActionChains(driver).move_to_element(origin).move_by_offset(bounds['width'] - 10, bounds['height'] - 10).perform() So I subtract 10 from each boundary to test it and move to that position (apparently the move_to_element_by_offset method is dodgy). MoveTargetOutOfBoundsException: Message: Given coordinates (1919, 2766) are outside the document. Error: MoveTargetOutOfBoundsError: The target scroll location (17, 1798) is not on the page. Stacktrace: at FirefoxDriver.prototype.mouseMoveTo (file://... The actual given coordinates were (1903-10=1893, 969-10=989). Any ideas?
Selenium moving to absolute positions
29,489,027
0
3
1,318
0
python,html,selenium
Two possible problems: 1) There could be scroll on the page, so before clicking you should scroll the element into view. 2) The size is given without accounting for browser chrome, so in the real world you should subtract about 20 or 30 pixels to get the actual size (you could test those values).
0
0
1
0
2015-04-07T10:05:00.000
2
0
false
29,488,957
0
0
1
2
I'm using the python package to move the mouse in some specified pattern or just random motions. The first thing I tried is to get the size of the //html element and use that to make the boundaries for mouse movement. However, when I do this the MoveTargetOutOfBoundsException rears its head and displays some "given" coordinates (which were not anywhere near the input. The code I used: origin = driver.find_element_by_xpath('//html') bounds = origin.size print bounds ActionChains(driver).move_to_element(origin).move_by_offset(bounds['width'] - 10, bounds['height'] - 10).perform() So I subtract 10 from each boundary to test it and move to that position (apparently the move_to_element_by_offset method is dodgy). MoveTargetOutOfBoundsException: Message: Given coordinates (1919, 2766) are outside the document. Error: MoveTargetOutOfBoundsError: The target scroll location (17, 1798) is not on the page. Stacktrace: at FirefoxDriver.prototype.mouseMoveTo (file://... The actual given coordinates were (1903-10=1893, 969-10=989). Any ideas?
how to embed standalone bokeh graphs into django templates
32,680,856
2
31
15,838
0
django,python-2.7,django-templates,bokeh
You must put {{the_script|safe}} inside the head tag.
0
0
0
0
2015-04-08T07:53:00.000
5
0.07983
false
29,508,958
0
0
1
1
I want to display graphs offered by the bokeh library in my web application via the django framework, but I don't want to use the bokeh-server executable because it's not the right way. So is that possible? If yes, how do I do it?
Is there a way to rerun an Upgrade Step in Plone?
29,515,070
9
3
161
0
python,python-2.7,plone,plone-4.x
Go to portal_setup (from the ZMI), then: go to the "Upgrades" tab; select your profile (the one where you defined the metadata.xml). From here you can normally run upgrade steps that have not yet been run. In your case, click on the "Show" button of "Show old upgrades".
0
0
0
0
2015-04-08T11:21:00.000
1
1.2
true
29,513,201
0
0
1
1
I've got a Plone 4.2.4 application and from time to time I need to create an Upgrade Step. So, I register it in the configure.zcml, create the function to invoke and increase the profile version number in the metadata.xml file. However, it might happen that something goes not really as expected during the upgrade process and one would like to rerun the Upgrade with the corrected Upgrade Step. Is there a way to rerun the Upgrade Step or do I always need to increase the version and create new Upgrade Step to fix the previous one?
Whats the best way to present a flask interface to ongoing backround task?
29,534,134
7
11
1,176
0
python,multithreading,flask,multiprocessing,multitasking
These kinds of long-polling jobs are best achieved using sockets; they don't really fit the Flask/WSGI model, as it is not geared to asynchronous operations. You may want to look at twisted or tornado. That said, the back-end process that reads/writes to telnet could be running in a separate thread that may or may not be initiated from an HTTP request. Once you kick off a thread from the flask app, it won't block the response. You can then read from the data store it writes to by occasionally polling the Flask app for new data. This could also be achieved client-side in a browser using javascript and timeouts, but it's a bit hacky.
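A minimal, Python 2-flavoured sketch of that background-reader-plus-polling pattern (the host, port and buffer size are made up):

    import collections
    import telnetlib
    import threading

    from flask import Flask, jsonify

    app = Flask(__name__)
    buf = collections.deque(maxlen=1000)   # append/popleft are thread-safe

    def reader():
        tn = telnetlib.Telnet('127.0.0.1', 2323)   # made-up host/port
        while True:
            line = tn.read_until('\n')   # blocks only this background thread
            buf.append(line)

    t = threading.Thread(target=reader)
    t.daemon = True
    t.start()

    @app.route('/data')
    def data():
        # hand back whatever arrived since the last poll
        lines = [buf.popleft() for _ in range(len(buf))]
        return jsonify(lines=lines)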
0
0
0
0
2015-04-09T08:08:00.000
3
1
false
29,533,144
0
0
1
1
I have a long-running process that continuously reads from a telnet port and may occasionally write to it. Sometimes I want to send an HTTP request to it to fetch the info it has read since the last time I asked. Sometimes I may send an HTTP request to write certain data to another telnet port. Should I do this with 2 threads, and if so, should I use a mutex or an instruction queue? How do you do threading with flask anyway? Should I use multiprocessing? Something else? The reason I ask is that I ran into a similar problem before (but with serial ports instead of a telnet port, and directly in the app instead of a local/remote HTTP service) and ended up with the non-data-reading thread somehow almost never running, even when I inserted tons of sleep calls. I ended up re-writing it from mutexes to queues and then to multiprocessing with queues. Edit: The telnet ports are connections to an application which communicates (mainly reads debug data) with hardware (a printer). The flask HTTP service I want to write would be accessed by tests running against the printer (either on the same machine or a different machine than the HTTP service); none of this involves a web browser!
Django: Just started learning django should i use django or jinja2 templates
29,535,357
4
1
203
0
python,django,templates,jinja2
My suggestion is to use the built-in one. This way you'll save some time at the beginning having a possibility to learn Django internals first.
0
0
0
0
2015-04-09T09:49:00.000
2
1.2
true
29,535,168
0
0
1
1
I have just started learning django (with some non-web python experience). I see there are at least two template engines: the default django one and jinja2. I see they are quite similar in syntax. Which one is better for a beginner? Which one has better prospects? Many thanks, Tomasz
Scraping / Data extraction of shipping price not on product page (only available on trolley)
29,540,245
1
0
363
0
python,web-scraping,data-extraction
You shouldn't try to fetch information about the delivery price from a cart or any other page, because as you've seen it depends on the cart amount or other conditions on the e-commerce site. That means the only right way here is to emulate these rules/conditions when you calculate the total price of an order on your side. Do it like this and you'll avoid many problems with correctly calculating delivery prices.
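A toy illustration of reimplementing such rules on your side (the thresholds and rates are invented; the real ones come from the site's stated policy):

    def delivery_cost(items):
        """items: list of (price, item_shipping) pairs."""
        if not items:
            return 0.0
        subtotal = sum(price for price, _ in items)
        if subtotal >= 100:       # e.g. free delivery over 100
            return 0.0
        if len(items) > 1:        # e.g. flat rate for multi-item orders
            return 4.99
        return items[0][1]        # single item: its own shipping price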
0
0
1
0
2015-04-09T13:14:00.000
1
0.197375
false
29,539,555
0
0
1
1
I have a python script that extracts product data from an ecommerce website. However, one essential piece of information is missing from the page - delivery cost. This is not provided on any of the product pages, and is only available when you add the product to the shopping basket in order to test how much the product costs to deliver. Complexity is also added due to different delivery rules - e.g free delivery on orders over £100, different delivery prices for different items, or a flat rate of shipping for multiple products. Is there a way that I can easily obtain this delivery cost data? Are there any services that anyone knows of through which I can obtain this data more easily, or suggestions on a script that I could use? Thanks in advance.
django-admin bad interpreter: Permission Denied
29,539,856
0
0
496
0
python,linux,django,suse
It sounds like the python interpreter is what you don't have permission for. Do you have permission to run python?
0
0
0
0
2015-04-09T13:25:00.000
2
0
false
29,539,795
0
0
1
2
I'm trying to use django on a Suse server, to use it in production with apache and mod_python, but I'm running into some problems. I have installed python 2.7.9 (the default version was 2.6.4) and django 1.7. I had some problems with the installation but they are now solved. My current problem is that when I try to execute django-admin I get this error: -bash: /usr/local/bin/django-admin: .: bad interpreter: Permission denied I have searched the web but have not found a solution. I have tried to make the file executable: sudo chmod +x django-admin, but the problem remains the same. Any ideas? Thanking you in advance.
django-admin bad interpreter: Permission Denied
29,540,690
0
0
496
0
python,linux,django,suse
Have you tried adding your user to a group with permission to execute python? You can look at the file /etc/passwd; in that file each user's entry is described.
0
0
0
0
2015-04-09T13:25:00.000
2
0
false
29,539,795
0
0
1
2
I'm trying to use django on a Suse server, to use it in production with apache and mod_python, but I'm running into some problems. I have installed python 2.7.9 (the default version was 2.6.4) and django 1.7. I had some problems with the installation but they are now solved. My current problem is that when I try to execute django-admin I get this error: -bash: /usr/local/bin/django-admin: .: bad interpreter: Permission denied I have searched the web but have not found a solution. I have tried to make the file executable: sudo chmod +x django-admin, but the problem remains the same. Any ideas? Thanking you in advance.
Using Django's collectstatic with boto S3 throws "Error 32: Broken Pipe" after a while
39,135,308
0
12
1,924
0
python,django,amazon-s3,boto,collectstatic
Old question, but to fix this easily I just added the environment variable "AWS_DEFAULT_REGION" with the region I was using (e.g. "ap-southeast-2"). This works locally (Windows) and in AWS EB.
0
0
0
0
2015-04-10T02:16:00.000
4
0
false
29,552,242
0
0
1
2
I'm using boto with S3 to store my Django site's static files. When using the collectstatic command, it uploads a good chunk of the files perfectly before stopping at a file and throwing "Error 32: Broken Pipe." When I try to run the command again, it skips over the files it has already uploaded and starts at the file where it left off, before throwing the same error without having uploaded anything new.
Using Django's collectstatic with boto S3 throws "Error 32: Broken Pipe" after a while
43,571,560
0
12
1,924
0
python,django,amazon-s3,boto,collectstatic
I also had the problem, but only with jquery.js, probably because it is too big, as @Kyle Falconer mentions. It had nothing to do with the region in my case. I "solved" it by copying the file locally to the S3 bucket where it needed to be.
0
0
0
0
2015-04-10T02:16:00.000
4
0
false
29,552,242
0
0
1
2
I'm using boto with S3 to store my Django site's static files. When using the collectstatic command, it uploads a good chunk of the files perfectly before stopping at a file and throwing "Error 32: Broken Pipe." When I try to run the command again, it skips over the files it has already uploaded and starts at the file where it left off, before throwing the same error without having uploaded anything new.
Understanding how to integrate Backbone JS with a Python crud app
29,569,455
0
0
313
0
javascript,python,django,backbone.js,frontend
Backbone is a really nice frontend framework that allows you to work with websockets and the like, and it does not need "direct" access to the db. It will work with django models using either django rest framework or websockets (redis, tornado, etc.). Backbone models can be just representations of data from the backend; that's how backbone deals with the serialized data passed from the server. It's MCRVT (Model, Collection, Routes, Views, Template). TL;DR: Backbone models are representations of RESTful resources.
0
0
0
0
2015-04-10T19:06:00.000
1
0
false
29,568,952
0
0
1
1
I'm building a crud app using Python and Django. I want to use Backbone to access the api, grab the user data and render it accordingly. I'm new to Backbone and had some questions on a high level: 1) What advantage does Backbone provide that I can't get with regular Javascript? 2) I understand that Models and Collections make up the M in the MVC layer in Backbone, but let's say I have a User model written in Python - would I still need Models and Collections in Backbone, or just Collections? Are the Models of Backbone the same as the Models in Python? I need help understanding the difference and uses in a simple way.
Is it possible to run java command line app from python in AWS EC2?
29,572,952
0
1
211
0
java,python,amazon-web-services,amazon-ec2
Since you're just making a command line call to the Java app, the path of least resistance would just be to make that call from another server using ssh. You can easily adapt the command you've been using with subprocess.call to use ssh -- more or less, subprocess.call(['ssh', '{user}@{server}', command]) (although have fun figuring out the quotation marks). As an aside on those lines, I usually find using '-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' stabilizes scripted SSH calls in my environment. The more involved thing will be setting up the environments to properly run the components you need. You'll need to set up ssh configs so that your django app can ssh over and set up -- probably with private key verification. Then, you'll need to make sure that your EC2 security groups are set up to allow the ssh access to your java server on port 22, where sshd listens by default. None of this is that hairy, but all the same, it might be stabler to just wrap your Java service in a HTTP server that your Django app can hit. Anyway, hope this is helpful.
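Spelled out a little more, that call might look like this (the host, key path and remote command are placeholders):

    import subprocess

    # Hypothetical command line for the Java app on the remote box
    remote_cmd = 'java -jar /opt/app/model.jar --input /tmp/in.dat'

    subprocess.check_call([
        'ssh',
        '-i', '/home/web/.ssh/id_rsa',        # key the django box uses
        '-o', 'StrictHostKeyChecking=no',
        '-o', 'UserKnownHostsFile=/dev/null',
        'ec2-user@10.0.0.5',                  # the java EC2 instance
        remote_cmd,
    ])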
0
1
0
0
2015-04-11T00:22:00.000
1
0
false
29,572,608
0
0
1
1
I am working on some machine learning for chemical modelling in python. I need to run a java app (from command line through python subprocess.call) and a python webserver. Is this possible on AWS EC2? I currently have this setup running on my mac but I am curious on how to set it up on aws. Thanks in advance!
Python: how to host a websocket and interact with a serial port without blocking?
29,577,603
1
1
1,800
0
python,websocket,event-handling,serial-port,blocking
Simply start a subprocess that listens to the serial port and raises an event when it has a message. Have a separate sub-process for each web port that does the same.
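One way to sketch that in Python 2 with pyserial, using a worker thread that pushes lines onto a queue as its "events" (the device path and baud rate are assumptions):

    import threading
    import Queue   # Python 2; use queue on Python 3

    import serial  # pyserial

    events = Queue.Queue()

    def listen():
        port = serial.Serial('/dev/ttyAMA0', 9600, timeout=1)
        while True:
            line = port.readline()   # blocks only this worker
            if line:
                events.put(line)     # "raise an event" for the main loop

    t = threading.Thread(target=listen)
    t.daemon = True
    t.start()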
0
0
1
0
2015-04-11T11:21:00.000
2
0.099668
false
29,577,287
0
0
1
1
I am busy developing a Python system that uses web-sockets to send/receive data from a serial port. For this to work I need to react to data from the serial port as it is received. The problem is that to detect incoming data, the serial port needs to be queried continuously, most likely in a continuous loop. From previous experience (slow disk access + heavy traffic) using Flask, this sounds like it could cause the web-sockets to be blocked. Will this be the case, or is there a workaround? I have looked at how NodeJS interacts with serial ports and it seems much nicer: it raises an event when there is incoming data instead of querying it all the time. Is this an option in Python? Extra details: For now it will only be run on Linux (Raspbian). Flask was my first selection but I am open to other Python frameworks. pyserial for the serial connection (it is the only option I know of).
How to get precise division?
29,591,000
0
0
395
0
python,int,division,python-2.x,floating-point-precision
104101/2.0
104101/float(2)
Or use Python 3.
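On Python 2 you can also make / behave like Python 3 for the whole module:

    from __future__ import division  # must come before other statements

    print(104101 / 2)    # 52050.5, even on Python 2
    print(104101 // 2)   # 52050, when you do want floor division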
0
0
0
0
2015-04-12T15:14:00.000
1
1.2
true
29,590,978
0
0
1
1
I'm trying to get precise division with Python without success. 104101/2 gives 52050 whereas I need 52050.5 I also tried "%0.2f" % (104101/2) which is giving me '52050.00'. Javascript equivalent works. Any idea what's wrong with me?
Webapp2 redirect 404 error
29,591,301
1
2
237
0
python,google-app-engine,redirect,webapp2
Redirect takes a URL. You probably want self.redirect("/"), but without knowing your URL mappings, that's just a guess.
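For illustration, a sketch of what that might look like (the routes and handler names are made up; the point is to redirect to a mapped route, not to a .html file that no handler serves):

    import webapp2

    class MainHandler(webapp2.RequestHandler):
        def get(self):
            self.response.write('home')

    class SubscribeHandler(webapp2.RequestHandler):
        def post(self):
            # ... handle the form ...
            self.redirect('/')   # a route the app actually maps

    app = webapp2.WSGIApplication([
        ('/', MainHandler),
        ('/subscribe', SubscribeHandler),
    ])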
0
1
0
0
2015-04-12T15:34:00.000
1
0.197375
false
29,591,189
0
0
1
1
I'm having trouble with the redirect function. When I call it with self.redirect("/index.html"), the server goes to http://localhost:10080/index.html and shows a 404 page not found error. Log: HTTP/1.1" 304 - INFO 2015-04-12 12:32:39,029 module.py:737] default: "POST /subscribe HTTP/1.1" 302 - INFO 2015-04-12 12:32:39,046 module.py:737] default: "GET /index.html HTTP/1.1" 404 154 INFO 2015-04-12 12:32:39,223 module.py:737] default: "GET /favicon.ico HTTP/1.1" 304 - INFO 2015-04-12 12:32:39,296 module.py:737] default: "GET /favicon.ico HTTP/1.1" 304 -
Django circleci and the pillow library
29,603,469
1
0
287
0
python,django,unit-testing,pillow,circleci
Have you specified the Python version in your circle.yml? If the Python version is not specified, the virtualenv might not get created for you.
0
0
0
1
2015-04-12T21:31:00.000
2
1.2
true
29,594,889
1
0
1
1
Can anyone point me to why this error keeps showing up during circleci testing? Neither Pillow nor PIL could be imported: No module named Image python manage.py test returned exit code 1 For the record, I followed every resource I had in terms of installation instructions for pillow. Can anyone PLEASE help me? I'm getting desperate.
Custom user model in Django?
29,596,796
1
0
138
0
python,django,django-models,django-orm
Also all 3rd-party packages rely on get_user_model(), so looks like if I don't use custom user model, all your relations should go to User, right? But I still can't add methods to User, so if User has friends relation, and I want to add recent_friends method, I should add this method to UserProfile. I have gone down the "one-to-one" route in the past and I ended up not liking the design of my app at all; it seems to me that it forces you away from SOLID. So if I were you I would rather subclass AbstractBaseUser or AbstractUser. With AbstractBaseUser you are provided with just the core implementation of User, and you can then extend the model according to your requirements. Depending on what sort of 3rd-party packages you are using, you might need more than just the core implementation: if that's the case, just extend AbstractUser, which lets you extend the complete implementation of User.
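A minimal version of the AbstractUser route, including the friends relation from the question (the extra method is just an example):

    # users/models.py
    from django.contrib.auth.models import AbstractUser
    from django.db import models

    class User(AbstractUser):
        friends = models.ManyToManyField('self', blank=True)

        def recent_friends(self):
            # example method: latest five friends by join date
            return self.friends.order_by('-date_joined')[:5]

    # settings.py would then need:
    # AUTH_USER_MODEL = 'users.User'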
0
0
0
0
2015-04-12T22:27:00.000
2
1.2
true
29,595,382
0
0
1
2
I know how to make custom user models, my question is about style and best practices. What are the consequences of custom user model in Django? Is it really better to use auxiliary one-to-one model? And for example if I have a UserProfile models which is one-to-one to User, should I create friends relationship (which would be only specific to my app) between UserProfile or between User? Also all 3rd-party packages rely on get_user_model(), so looks like if I don't use custom user model, all your relations should go to User, right? But I still can't add methods to User, so if User has friends relation, and I want to add recent_friends method, I should add this method to UserProfile. This looks a bit inconsistent for me. I'd be glad if someone experienced in Django could give a clear insight.
Custom user model in Django?
29,595,803
1
0
138
0
python,django,django-models,django-orm
I would definitely recommend using a custom user model - even if you use a one-to-one with a profile. It is incredibly hard to migrate to a custom user model if you've committed to the default user model, and there's almost always a point where you want to add at least some custom logic to the user model. Whether you use a profile or further extend the user model should then be based on all considerations that usually apply to your database structure. The right™ decision depends on the exact details of your profile, which only you know.
0
0
0
0
2015-04-12T22:27:00.000
2
0.099668
false
29,595,382
0
0
1
2
I know how to make custom user models, my question is about style and best practices. What are the consequences of custom user model in Django? Is it really better to use auxiliary one-to-one model? And for example if I have a UserProfile models which is one-to-one to User, should I create friends relationship (which would be only specific to my app) between UserProfile or between User? Also all 3rd-party packages rely on get_user_model(), so looks like if I don't use custom user model, all your relations should go to User, right? But I still can't add methods to User, so if User has friends relation, and I want to add recent_friends method, I should add this method to UserProfile. This looks a bit inconsistent for me. I'd be glad if someone experienced in Django could give a clear insight.
How to restrict a webpage to only one user(Browser Tab)
29,596,782
1
0
124
0
python,session,websocket,serial-port
Specify a variable like serial_usage with an initial value of False. When a new client connects to your WebSocket server, check the serial_usage variable. If the serial port is not being used at that moment (serial_usage == False), let the connection happen and set serial_usage to True. When the client disconnects, set serial_usage back to False. If the serial port is being used by another client (serial_usage == True), you can show an error page and refuse the new connection.
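A sketch of that idea, assuming Flask-SocketIO is the WebSocket layer (returning False from the connect handler rejects the client):

    from flask import Flask
    from flask_socketio import SocketIO

    app = Flask(__name__)
    socketio = SocketIO(app)
    serial_usage = {'busy': False}  # mutable so the handlers can flip it

    @socketio.on('connect')
    def on_connect():
        if serial_usage['busy']:
            return False  # reject: the serial port is already claimed
        serial_usage['busy'] = True

    @socketio.on('disconnect')
    def on_disconnect():
        serial_usage['busy'] = False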
0
0
1
0
2015-04-13T01:24:00.000
1
0.197375
false
29,596,604
0
0
1
1
I am building a Python Flask webpage that uses websockets to connect to a single serial port (pySerial). The webpage will collect a list of commands to be executed (user input) and send them to the serial port via websockets. The problem I am facing is that once the webpage has been opened multiple times, commands can be sent at any time and might get run out of order.
Error when try to convert order in invoice
29,665,261
1
0
727
0
python,odoo,odoo-8
It's because you have specified that the mode_reglement field is mandatory in account.invoice, so you have to provide a value for it or set a default value for that field in the account.invoice model. You should also make mandatory fields available on the UI, because users can create invoices from other places as well.
0
0
0
0
2015-04-15T15:02:00.000
1
1.2
true
29,653,803
0
0
1
1
I have a problem when i try to convert order in invoice just after choice that i want to invoice (all/percent/ect ...) in a custom field "mode_reglement" I have no problem in my development environment but in production : The trace : > Odoo Server Error Traceback (most recent call last): File > "/var/www/odoo/openerp/http.py", line 530, in handle_exception return > super(JsonRequest, self)._handle_exception(exception) File > "/var/www/odoo/openerp/http.py", line 567, in dispatch result = > self._call_function(**self.params) File > "/var/www/odoo/openerp/http.py", line 303, in _call_function return > checked_call(self.db, *args, **kwargs) File > "/var/www/odoo/openerp/service/model.py", line 113, in wrapper return > f(dbname, *args, **kwargs) File "/var/www/odoo/openerp/http.py", line > 300, in checked_call return self.endpoint(*a, **kw) File > "/var/www/odoo/openerp/http.py", line 796, in call return > self.method(*args, **kw) File "/var/www/odoo/openerp/http.py", line > 396, in response_wrap response = f(*args, **kw) File > "/var/www/odoo/openerp/addons/web/controllers/main.py", line 953, in > call_button action = self._call_kw(model, method, args, {}) File > "/var/www/odoo/openerp/addons/web/controllers/main.py", line 941, in > _call_kw return getattr(request.registry.get(model), method)(request.cr, request.uid, *args, **kwargs) File > "/var/www/odoo/openerp/api.py", line 241, in wrapper return > old_api(self, *args, **kwargs) File > "/var/www/odoo/openerp/addons/sale/wizard/sale_make_invoice_advance.py", > line 175, in create_invoices res = sale_obj.manual_invoice(cr, uid, > sale_ids, context) File "/var/www/odoo/openerp/api.py", line 241, in > wrapper return old_api(self, *args, **kwargs) File > "/var/www/odoo/openerp/addons/sale/sale.py", line 455, in > manual_invoice self.signal_workflow(cr, uid, ids, 'manual_invoice') > File "/var/www/odoo/openerp/api.py", line 241, in wrapper return > old_api(self, *args, **kwargs) File "/var/www/odoo/openerp/models.py", > line 3527, in signal_workflow result[res_id] = > workflow.trg_validate(uid, self._name, res_id, signal, cr) File > "/var/www/odoo/openerp/workflow/_init__.py", line 85, in trg_validate > return WorkflowService.new(cr, uid, res_type, res_id).validate(signal) > File "/var/www/odoo/openerp/workflow/service.py", line 91, in validate > res2 = wi.validate(signal) File > "/var/www/odoo/openerp/workflow/instance.py", line 75, in validate > wi.process(signal=signal, force_running=force_running, stack=stack) > File "/var/www/odoo/openerp/workflow/workitem.py", line 120, in > process ok = self._split_test(activity['split_mode'], signal, stack) > File "/var/www/odoo/openerp/workflow/workitem.py", line 248, in > _split_test self._join_test(t0, t1, stack) File "/var/www/odoo/openerp/workflow/workitem.py", line 257, in _join_test > WorkflowItem.create(self.session, self.record, activity, inst_id, > stack=stack) File "/var/www/odoo/openerp/workflow/workitem.py", line > 95, in create workflow_item.process(stack=stack) File > "/var/www/odoo/openerp/workflow/workitem.py", line 116, in process if > not self._execute(activity, stack): File > "/var/www/odoo/openerp/workflow/workitem.py", line 187, in _execute > id_new = self.wkf_expr_execute(activity) File > "/var/www/odoo/openerp/workflow/workitem.py", line 313, in > wkf_expr_execute return self.wkf_expr_eval_expr(activity['action']) > File "/var/www/odoo/openerp/workflow/workitem.py", line 291, in > wkf_expr_eval_expr result = eval(line, env, nocopy=True) File > 
"/var/www/odoo/openerp/tools/safe_eval.py", line 314, in safe_eval > return eval(c, globals_dict, locals_dict) File "", line 1, in > File "/var/www/odoo/openerp/api.py", line 239, in wrapper return > new_api(self, *args, **kwargs) File "/var/www/odoo/openerp/api.py", > line 546, in new_api result = method(self._model, cr, uid, self.ids, > *args, **kwargs) File "/var/www/odoo/openerp/addons/sale_stock/sale_stock.py", line 143, in > action_invoice_create res = > super(sale_order,self).action_invoice_create(cr, uid, ids, > grouped=grouped, states=states, date_invoice = date_invoice, > context=context) File "/var/www/odoo/openerp/api.py", line 241, in > wrapper return old_api(self, *args, **kwargs) File > "/var/www/odoo/openerp/addons/sale/sale.py", line 559, in > action_invoice_create res = self._make_invoice(cr, uid, order, il, > context=context) File "/var/www/odoo/openerp/api.py", line 241, in > wrapper return old_api(self, *args, **kwargs) File > "/var/www/odoo/openerp/addons/sale/sale.py", line 432, in > _make_invoice inv_id = inv_obj.create(cr, uid, inv, context=context) File "/var/www/odoo/openerp/api.py", line 241, in wrapper return > old_api(self, *args, **kwargs) File > "/var/www/odoo/openerp/addons/mail/mail_thread.py", line 377, in > create thread_id = super(mail_thread, self).create(cr, uid, values, > context=context) File "/var/www/odoo/openerp/api.py", line 241, in > wrapper return old_api(self, *args, **kwargs) File > "/var/www/odoo/openerp/api.py", line 336, in old_api result = > method(recs, *args, **kwargs) File "/var/www/odoo/openerp/models.py", > line 4043, in create record = self.browse(self._create(old_vals)) File > "/var/www/odoo/openerp/api.py", line 239, in wrapper return > new_api(self, *args, **kwargs) File "/var/www/odoo/openerp/api.py", > line 462, in new_api result = method(self._model, cr, uid, *args, > **kwargs) File "/var/www/odoo/openerp/models.py", line 4181, in _create tuple([u2 for u in updates if len(u) > 2]) File "/var/www/odoo/openerp/sql_db.py", line 158, in wrapper return f(self, > *args, **kwargs) File "/var/www/odoo/openerp/sql_db.py", line 234, in execute res = self._obj.execute(query, params) ValueError: "ERREUR: > une valeur NULL viole la contrainte NOT NULL de la colonne \xab > mode_reglement \xbb " while evaluating u'action_invoice_create() ' A similar problem appear when i want to do a credit side in a manual invoice (while mode_reglement field are not empty and not null): The operation cannot be completed, probably due to the following: - deletion: you may be trying to delete a record while other records still reference it - creation/update: a mandatory field is not correctly set [object with reference: mode_reglement - mode.reglement] My code : class mode_reglement(osv.osv): _name = 'cap.mode_reglement' _columns = { 'name' : fields.char('Nom',required=True), } class account_invoice(osv.osv): _inherit = 'account.invoice' _columns = { 'mode_reglement':fields.many2one('cap.mode_reglement','Mode de reglement',auto_join=True,required=True), }
Make a plot visible in IE
33,106,115
2
1
1,486
0
python,internet-explorer,bokeh
I ran into a similar issue with a Bokeh figure at work not showing in Internet Explorer, but the figure worked fine in other browsers. For me the problem seemed to be that intranet sites were shown in Compatibility View (I must admit I don't know what Compatibility View means...). The fix was to choose the options icon in the upper right corner, then Compatibility View Settings, and then remove the checkmark at Display intranet sites in Compatibility View. After closing and reloading, the figure appeared.
0
0
0
0
2015-04-16T08:44:00.000
3
0.132549
false
29,669,631
0
0
1
1
I've been making plots in Bokeh; they work fine in Chrome, but I just get blank pages in IE. I thought this was because my company uses IE8 by default, but we've now been upgraded to IE11 and I see the same problem. The IE debug console reports that the page targets document mode 7, so it may be an issue with the metadata in the page header. Is there a way to make Bokeh output plots with the correct metadata for IE?
Shopify Python API GET Metafields for a Product
43,496,363
0
3
1,518
0
python,shopify
product = shopify.Product.find(pr_id)
metafields = product.metafields()
0
0
1
0
2015-04-16T18:25:00.000
4
0
false
29,683,036
0
0
1
1
Is there a way to GET the metafields for a particular product if I have the product ID? Couldn't find it in the docs.
My Python program generates an HTML page; how do I display a .jpg that's in the same directory?
29,713,745
0
0
61
0
python,html
No, your python process does not care about the JPG at all. It just generates HTML asking the browser to fetch the JPG. It is then the browser that fetches the JPG, by making another request to the webserver. Therefore it is very likely that your python script needs to live in a different directory than the JPG. Have a look at your web server log: you should see two requests, one for the HTML and one for the JPG.
0
0
0
0
2015-04-18T03:10:00.000
2
0
false
29,713,533
0
0
1
1
The generated HTML page works fine for text. But does not find the file and displays the alt text instead. I know the HTML works because "view source" in the browser can be copied into a file, which then works locally when the .jpg is in the same directory. On the remote site, the .jpg file is in the same directory as the Python program that generated the HTML, and this is the directory where the Python process is running. Clearly this process is looking for the file (it shows the alt); how do I find where it is looking, so I can put the file there? I would rather have a local reference than an absolute one elsewhere on the Web, to improve performance.
using materialized views or alternatives in django
55,397,481
14
13
6,549
1
python,sql-server,django,database,postgresql
You can use a materialized view with postgres. It's very simple. You create a view with a query like:

    CREATE MATERIALIZED VIEW my_view AS SELECT * FROM my_table;

Then create a model with two options, managed = False and db_table = 'my_view', in the model's Meta, like this:

    class MyModel(models.Model):
        class Meta:
            managed = False
            db_table = 'my_view'

Then simply use the powers of the ORM and treat MyModel as a regular model, e.g. MyModel.objects.count().
0
0
0
0
2015-04-18T11:54:00.000
2
1
false
29,716,972
0
0
1
1
I need to use some aggregate data in my django application that changes frequently and if I do the calculations on the fly some performance issues may happen. Because of that I need to save the aggregate results in a table and, when data changes, update them. Because I use django some options may be exist and some maybe not. For example I can use django signals and a table that, when post_save signal is emitted, updates the results. Another option is materialized views in postgresql or indexed views in MSSQL Server, that I do not know how to use in django or if django supports them or not. What is the best way to do this in django for improving performance and accuracy of results.
Django/Python - Updating the database every second
29,722,220
2
3
782
0
python,django
One possible solution would be to use a separate daemonized lightweight python script to perform all the in-game business logic and let django be just the frontend to your game. To bind them together you might pick any high-performance asynchronous messaging library, for instance ZeroMQ (e.g. to pass players' actions to that script). This stack would also have the benefit of a frontend that is separated from, and completely agnostic of, the backend implementation.
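For instance, the Django view could push player actions to the daemon over a ZeroMQ socket (the socket address and message shape are invented):

    # game_daemon.py: a standalone process that owns the game loop
    import zmq

    ctx = zmq.Context()
    sock = ctx.socket(zmq.PULL)
    sock.bind('tcp://127.0.0.1:5555')

    while True:
        action = sock.recv_json()  # e.g. {'user': 42, 'verb': 'buy'}
        # apply game logic / per-second currency ticks here

    # On the Django side, the view would be the PUSH end:
    # sock = zmq.Context().socket(zmq.PUSH)
    # sock.connect('tcp://127.0.0.1:5555')
    # sock.send_json({'user': request.user.pk, 'verb': 'buy'})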
0
0
0
0
2015-04-18T19:35:00.000
2
0.197375
false
29,721,897
0
0
1
1
I'm working on creating a browser-based game in Django and Python, and I'm trying to come up with a solution to one of the problems I'm having. Essentially, every second, multiple user variables need to be updated. For example, there's a currency variable that should increase by some amount every second, progressively getting larger as you level-up and all of that jazz. I feel like it's a bad idea to do this with cronjobs (and from my research, other people think that too), so right now I'm thinking I should just create a thread that loops through all of the users in the database that performs these updates. Am I on the right track here, or is there a better solution? In Django, how can I start a thread the second the server starts? I appreciate the insight.
How to set proxy in scrapy shell, not in settings.py
29,725,359
3
2
2,036
0
python,scrapy
Go to that directory where your project is located and from there execute scrapy shell command. That would do the trick.
0
0
0
0
2015-04-19T02:29:00.000
1
1.2
true
29,725,130
0
0
1
1
I have set a proxy in settings, but I want to test some things in the scrapy shell; what should I do?
Magento 1.9 Custom Menu Extension Use in django
29,727,058
0
0
65
0
php,python-2.7,magento-1.9
Ok, great: I found the issue. I had not noticed another javascript line at the end of my html file's tag, so I am home and dry. Cheers.
0
0
0
0
2015-04-19T07:02:00.000
1
0
false
29,726,821
0
0
1
1
I am trying to use the Magento Custom Menu extension in a django project. I have modified the menucontent.phtml, but yet my menu items are not reflecting the appropriate captions. Does anyone know how does the extension work to generate the menu?
Is there a way to get current HTML from browser in Python?
29,791,337
0
0
307
0
javascript,python,html,web-scraping
There's wget on the robot; you could use it... (though I'm not sure I understand where the problem really is...)
0
0
1
0
2015-04-19T17:58:00.000
2
0
false
29,733,653
0
0
1
1
I am currently working on an HTML presentation that works well, but I need the presentation to be followed simultaneously by a NAO robot who reads a special html tag. I somehow need to let him know which slide I am on, so that he can choose the correct tag. I use Beautiful Soup for scraping the HTML, but it does so from a file and not from a browser. The problem is that there is javascript running behind, assigning various classes to specific slides that tell the current state of the presentation. I need to be able to access those, but in the default state of the presentation they are not present; they are added asynchronously throughout the presentation. Hopefully my request is clear. Thank you for your time.
Create separate Django Admin sites and separate permissions
29,738,237
1
1
114
0
python,django
Under Site Administration: Add a new "Group"; then select which permissions you want to give any user assigned to that group. After creating the new group, then assign the user to that group.
0
0
0
0
2015-04-20T01:41:00.000
1
0.197375
false
29,738,029
0
0
1
1
In Django, I have extended the AdminSite class into a UserAdminSite class. But there is a problem: any admin who has access to the UserAdminSite would also have is_staff = True status and could therefore access the default admin site for the entire website. How do I separate the access permissions for these two admin sites?
How to Set Scrapy Auto_Throttle Settings
29,783,461
1
1
5,769
0
python,web-scraping,scrapy
Set DOWNLOAD_DELAY = some_number, where some_number is the delay (in seconds) you want for every request, and RANDOMIZE_DOWNLOAD_DELAY = False so the delay stays static.
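In settings.py that would be the following (note that AUTOTHROTTLE_ENABLED needs to be off for a fixed delay to hold, since auto-throttle adjusts delays dynamically; the delay value is illustrative):

    # settings.py
    AUTOTHROTTLE_ENABLED = False       # drop auto-throttle for this project
    DOWNLOAD_DELAY = 2                 # fixed two-second gap between requests
    RANDOMIZE_DOWNLOAD_DELAY = False   # keep the delay static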
0
0
0
0
2015-04-20T17:08:00.000
4
0.049958
false
29,754,112
0
0
1
2
My use case is this: I have 10 spiders and the AUTO_THROTTLE_ENABLED setting is set to True, globally. The problem is that for one of the spiders the runtime WITHOUT auto-throttling is 4 days, but the runtime WITH auto-throttling is 40 days... I would like to find a balance and make the spider run in 15 days (3x the original amount). I've been reading through the scrapy documentation this morning but the whole thing has confused me quite a bit. Can anyone tell me how to keep auto-throttle enabled globally, and just turn down the amount to which it throttles?
How to Set Scrapy Auto_Throttle Settings
33,189,665
1
1
5,769
0
python,web-scraping,scrapy
Auto_throttle is specifically designed so that you don't manually adjust DOWNLOAD_DELAY. Setting DOWNLOAD_DELAY to some number will set a lower bound, meaning AUTO_THROTTLE will not go faster than the number set in DOWNLOAD_DELAY. Since this is not what you want, your best bet would be to keep AUTO_THROTTLE enabled for all spiders except the one you want to go faster, and manually set DOWNLOAD_DELAY for just that one spider, without AUTO_THROTTLE, to achieve whatever efficiency you desire.
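Scrapy 1.0+ also supports per-spider overrides via custom_settings; if that's available to you, it keeps the global config untouched (the values here are illustrative):

    import scrapy

    class FastSpider(scrapy.Spider):
        name = 'fast'
        start_urls = ['http://example.com/']
        custom_settings = {
            'AUTOTHROTTLE_ENABLED': False,  # opt this one spider out
            'DOWNLOAD_DELAY': 1.0,          # then tune its delay by hand
        }

        def parse(self, response):
            pass  # normal parsing logic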
0
0
0
0
2015-04-20T17:08:00.000
4
0.049958
false
29,754,112
0
0
1
2
My use case is this: I have 10 spiders and the AUTO_THROTTLE_ENABLED setting is set to True, globally. The problem is that for one of the spiders the runtime WITHOUT auto-throttling is 4 days, but the runtime WITH auto-throttling is 40 days... I would like to find a balance and make the spider run in 15 days (3x the original amount). I've been reading through the scrapy documentation this morning but the whole thing has confused me quite a bit. Can anyone tell me how to keep auto-throttle enabled globally, and just turn down the amount to which it throttles?
Can an arbitrary Python program have its dependencies inlined?
31,281,691
0
1
86
0
python
Since your goal is to be cross-architecture, any Python program which relies on native C modules will not be possible with this approach. In general, using virtualenv to create a target environment means that even users who don't have permission to install new system-level software can install dependencies under their own home directory; thus, what you ask about is not often needed in practice. However, if you want to do things that are considered evil / bad practices, pure-Python modules can in fact be bundled into a script; thus, a tool of this sort would be possible for programs with only pure-Python dependencies! If I were writing such a tool, I might start the following way: Use pickle to serialize the content of modules on the "sending" side. In the loader code, use imp.new_module() to create new module objects, and assign unpickled objects to them.
0
0
0
0
2015-04-20T20:07:00.000
1
0
false
29,757,386
1
0
1
1
In the JavaScript ecosystem, "compilers" exist which will take a program with a significant dependency chain (of other JavaScript libraries) and emit a standalone JavaScript program (often, with optimizations applied). Does any equivalent tool exist for Python, able to generate a script with all non-standard-library dependencies inlined? Are there other tools/practices available for bundling in dependencies in Python?
Django Back-End Design Advice
29,759,759
0
4
239
0
python,django,backend
So I'm thinking this through, and one possibility I have come up with is to build databases (in mysql in my case), supported by Django, that represent the data I am interested in from these data sources. I could then override the model methods that query from / save changes to the mysql model so they call an external python class I write to interact directly with the data source and its respective mysql database. So, for example, on a query call I could override the django method and prepend an operation that checks whether the mysql records are "stale" before calling super; if so, request an update to them before continuing. On an update operation, I could append (after updating the mysql table) an operation that requests the external class update the external source. This is a kind of roundabout way of doing it, but it does allow me to keep the app itself within the django framework, and if, in the future, modules are implemented that provide a direct back-end interface to these sources, I can swap out the workaround for the direct interface easily enough. Thoughts? Criticisms?
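A very rough sketch of that "check staleness, then call super" idea (external_source and its freshness methods are entirely hypothetical stand-ins for the wrapper class described above):

    from django.db import models

    # 'external_source' is a hypothetical client object for one of the
    # third-party data sources; is_stale/refresh_into_db/push_update are
    # invented names for the sync operations described above.
    class SyncedQuerySet(models.QuerySet):
        def fresh(self):
            if external_source.is_stale():         # staleness check
                external_source.refresh_into_db()  # re-sync the mysql mirror
            return self

    class Account(models.Model):
        name = models.CharField(max_length=100)
        objects = SyncedQuerySet.as_manager()

        def save(self, *args, **kwargs):
            super(Account, self).save(*args, **kwargs)
            external_source.push_update(self)      # propagate change back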
0
0
0
0
2015-04-20T20:50:00.000
2
0
false
29,758,142
0
0
1
1
I have a tool I use at work, written in python, that I want to port into the Django framework to make writing a web-based management interface more seamless. I've been through the django tutorials and have a pretty solid understanding of how to write a basic django app with your own database (or databases). The dilemma I've run into with this particular project is that I am referencing multiple data sources that: May or may not actually be SQL databases, and some do not have any implementation as a django back-end (LDAP and Google Admin SDK for example). Are third party data sources for which the overall "model" may change without notice, I have no control over this... Though the portions of their 'model' that I will be accessing will likely never change. So my question is: Should I even be thinking about these external data sources as a django 'model'? Or am I better off just writing some separate interface classes for dealing with those data sources? I can see the possibility of writing in a new 'db engine' to handle communications with these data sources so from the actual app implementation I can call all the usual methods like I am querying any database. Ideally, the core of the app I am writing needs to not care about the implementation details of each datasource that it connects to - I want to make it as pluggable as possible so implementation of new datasource types in the future doesn't involve much if any modification to the core code. I want to know if that is the 'accepted' way of doing it though - or if, for custom situations like this, you would work around using the django back-end and just implement your own custom solution for querying information out of those data sources. I hope this question is clear enough... If not, ask me for whatever specifics you need. Thanks!
Scrapy crawl blocked with 403/503
30,086,257
1
1
3,256
0
python,web-scraping,scrapy
It appears that the primary problem was not having cookies enabled. Having enabled cookies, I'm having more success now. Thanks.
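For reference, the fix described is a one-line change in settings.py (cookies are on by default in Scrapy, so this only matters if they had been disabled):

```python
# settings.py
COOKIES_ENABLED = True
```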
0
0
1
0
2015-04-20T21:15:00.000
4
1.2
true
29,758,554
0
0
1
2
I'm running Scrapy 0.24.4, and have encountered quite a few sites that shut down the crawl very quickly, typically within 5 requests. The sites return 403 or 503 for every request, and Scrapy gives up. I'm running through a pool of 100 proxies, with the RotateUserAgentMiddleware enabled. Does anybody know how a site could identify Scrapy that quickly, even with the proxies and user agents changing? Scrapy doesn't add anything to the request headers that gives it away, does it?
Scrapy crawl blocked with 403/503
72,128,238
0
1
3,256
0
python,web-scraping,scrapy
I simply set AUTOTHROTTLE_ENABLED to True and my script was able to run.
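The change described, as a settings.py fragment:

```python
# settings.py
AUTOTHROTTLE_ENABLED = True
```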
0
0
1
0
2015-04-20T21:15:00.000
4
0
false
29,758,554
0
0
1
2
I'm running Scrapy 0.24.4, and have encountered quite a few sites that shut down the crawl very quickly, typically within 5 requests. The sites return 403 or 503 for every request, and Scrapy gives up. I'm running through a pool of 100 proxies, with the RotateUserAgentMiddleware enabled. Does anybody know how a site could identify Scrapy that quickly, even with the proxies and user agents changing? Scrapy doesn't add anything to the request headers that gives it away, does it?
Install Python Flask without using pip
63,067,551
-1
4
16,641
0
python,flask
You can try with wheel packages.
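A rough sketch of how that can work without pip, assuming pure-Python wheels for Flask and its dependencies (Werkzeug, Jinja2, itsdangerous, click, MarkupSafe) have been downloaded by hand from PyPI; the file names below are illustrative. Wheels are zip archives, so Python's zipimport machinery can import from them directly:

```python
import sys

# Put the downloaded wheels on sys.path; zipimport handles the rest.
sys.path.insert(0, "vendor/Werkzeug-0.15.4-py2.py3-none-any.whl")
sys.path.insert(0, "vendor/Jinja2-2.10.1-py2.py3-none-any.whl")
sys.path.insert(0, "vendor/Flask-1.0.4-py2.py3-none-any.whl")

import flask  # resolved from the wheel, no pip involved
```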
0
0
0
0
2015-04-21T06:36:00.000
3
-0.066568
false
29,764,655
1
0
1
1
How do I install Python Flask without using pip? I do not have pip, virtualenv, or easy_install. The context of this question is that I am on a tightly controlled AIX computer. I cannot install any compiled code without going through several layers of management. However, I can install Python modules. Python 2.7 is installed. I have some existing Python code that generates a report. I want to make that report available via a web service using Flask. I am using bottle at the moment, but I am going to want to use HTTPS, and support for HTTPS under Flask seems much more straightforward. I would like to put the Flask library (and its dependencies) into my project much like bottle is placed into the project. What I tried: I downloaded the Flask tarball and looked at it. It had quite a bit of stuff that I did not know what to do with. For instance, there was a makefile.
How do I make a loading page while I am processing something in the backend with Django?
29,777,880
2
3
917
0
javascript,python,django,web-applications
You should indeed make two views, one to only return the page showing the loading UI and one to perform the long task. The second view will be called using an AJAX request made from the "loading" page. The response from the AJAX request will notify your "loading" page that it is time to move on. You need to make sure the AJAX request's duration won't exceed the timeout of your server (with ~10 seconds, you should be fine).
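A minimal sketch of the two-view pattern described above; view names, template name, and the expensive check are illustrative:

```python
from django.http import JsonResponse
from django.shortcuts import render

def loading_page(request):
    # Returns immediately; JavaScript on loading.html fires an AJAX
    # request to long_task and moves on once it completes.
    return render(request, "loading.html")

def long_task(request):
    # Hypothetical stand-in for the slow back-end work (~2-4 seconds).
    result = run_expensive_checks(request.user)
    return JsonResponse({"done": True, "result": result})
```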
0
0
0
0
2015-04-21T15:00:00.000
2
1.2
true
29,775,956
0
0
1
1
I have a main view function for my application. After logging in successfully, this main view method is called and is expected to render the template. But I have to perform some calculations in this view method (I am checking certain conditions about the user by making Facebook Graph API requests), so it takes 2~4 seconds to load. How do I show a loading screen, given that the template is rendered by the return statement and is therefore only sent once the processing is complete? Should I make two views, one for showing the loading screen and the other for the calculation, and keep making AJAX requests to the second view to check whether the process is complete?
How to add a gallery in photologue?
31,057,437
0
0
244
0
python,django,photologue
In the admin panel, you also need to: Create a gallery. Choose which photos are a part of which galleries.
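To then display the galleries on the main page (what the question below asks), a rough sketch of a view and query; the template name and context key are illustrative:

```python
from django.shortcuts import render
from photologue.models import Gallery

def main_page(request):
    # Pull the public galleries created in the admin for rendering.
    galleries = Gallery.objects.filter(is_public=True)
    return render(request, "main.html", {"galleries": galleries})
```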
0
0
0
0
2015-04-22T06:23:00.000
2
0
false
29,789,325
0
0
1
1
I installed photologue correctly in my project (a blog) and I can add images in the admin panel, but how do I display them on my main page?