Dataset columns (name, dtype, and observed range; string columns show min/max length):

Title | stringlengths 11 to 150
A_Id | int64 518 to 72.5M
Users Score | int64 -42 to 283
Q_Score | int64 0 to 1.39k
ViewCount | int64 17 to 1.71M
Database and SQL | int64 0 to 1
Tags | stringlengths 6 to 105
Answer | stringlengths 14 to 4.78k
GUI and Desktop Applications | int64 0 to 1
System Administration and DevOps | int64 0 to 1
Networking and APIs | int64 0 to 1
Other | int64 0 to 1
CreationDate | stringlengths 23 to 23
AnswerCount | int64 1 to 55
Score | float64 -1 to 1.2
is_accepted | bool 2 classes
Q_Id | int64 469 to 42.4M
Python Basics and Environment | int64 0 to 1
Data Science and Machine Learning | int64 0 to 1
Web Development | int64 1 to 1
Available Count | int64 1 to 15
Question | stringlengths 17 to 21k
Where should virtualenvs go in production?
| 23,259,806 | 2 | 9 | 922 | 0 |
python,django,python-3.x,virtualenv,production
|
Here are my thoughts:
Arguments for grouping in a common folder
Cleaner management of multiple venvs on a given machine. Good tools to support checking which are available, adding new ones, purging old ones, etc.
More sensible (and more space-efficient) when sharing one or more venvs across more than one project
Allows the use of some nice features like autocompletion of venv names
Arguments for keeping with the project
Clear relationship between the venv and the project. This eliminates ambiguity and is less error-prone, since there's little chance of running the wrong venv for a project (which is not always immediately evident).
Makes more sense when there is a one-to-one relationship between venvs and projects
May be the preferred approach when working in teams from separate accounts.
More straightforward when deploying across identical hosts (just rsync the whole project). Nothing stops you from doing this with a venv in a common folder, but it feels more natural to deploy a single tree.
Easier to sandbox the whole application.
I tend to prefer the former for more experimental / early-stage work, and the latter for projects that are deployed.
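Either layout can be reproduced with the standard library's venv module; a minimal sketch (the paths here are illustrative stand-ins, not a recommendation):

```python
import os
import tempfile
import venv

# Per-project layout: the environment lives inside the project tree,
# mirroring /srv/www/example.com/venv from the question.
project_dir = tempfile.mkdtemp(prefix="myproject-")
env_dir = os.path.join(project_dir, "venv")
venv.create(env_dir, with_pip=False)  # with_pip=False just keeps creation fast

# pyvenv.cfg is the marker file that lets tools recognise the environment.
print(os.path.exists(os.path.join(env_dir, "pyvenv.cfg")))
```

The common-folder layout is the same call with `env_dir` pointing under something like `~/.virtualenvs` instead.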
| 0 | 0 | 0 | 0 |
2014-04-23T23:36:00.000
| 1 | 1.2 | true | 23,257,123 | 0 | 0 | 1 | 1 |
When using virtualenv (or virtualenvwrapper), the recommended practice is to group all your virtual environments together ... for example in ~/.virtualenvs
BUT, I've noticed in reading a number of articles on deploying Django applications, that the recommendation seems to be to put your virtual environments somewhere under the root of the individual web application ... for example in /srv/www/example.com/venv.
My questions are:
Why?
Would it matter if I went one way or the other?
And is one way recommended over another?
|
On Google App Engine, how are StructuredProperties updated?
| 23,277,351 | 1 | 0 | 153 | 0 |
google-app-engine,python-2.7,google-cloud-datastore,datamodel
|
StructuredProperty values belong to the entity that contains them, so your assumption that updating a single StructuredProperty will invalidate the memcache entry for the whole entity is correct.
LocalStructuredProperty has the same behavior. The difference is that a LocalStructuredProperty is serialized into opaque binary storage: the datastore has no idea about its internal structure. (There is probably a deserialization cost attributed to these properties, but that depends a lot on the amount of data they contain, I imagine.)
By contrast, StructuredProperty actually makes its child properties available for query indexing in most cases, allowing you to perform complicated lookups.
Keep in mind that you should be calling put() on the containing entity, not on each StructuredProperty or LocalStructuredProperty, so you should see a single RPC call for updating that parent entity, regardless of how many repeated properties exist.
I would advise using a StructuredProperty that contains ndb.IntegerProperty(repeated=True) rather than making 'parallel lists' of integers and floats: parallel lists add more complexity to your Python model, and are exactly the pattern that ndb.StructuredProperty strives to replace.
| 0 | 1 | 0 | 0 |
2014-04-24T09:38:00.000
| 1 | 1.2 | true | 23,265,183 | 0 | 0 | 1 | 1 |
I am considering ways of organizing data for my application.
One data model I am considering would entail having entities where each entity could contain up to roughly 100 repeated StructuredProperties. The StructuredProperties would be mostly read and updated only very infrequently. My question is - if I update any of those StructuredProperties, will the entire entity get deleted from Memcache and will the entire entity be reread from the ndb? Or is it just the single StructuredProperty that will get reread? Is this any different with LocalStructuredProperty?
More generally, how are StructuredProperties organized internally? In situations where I could use multiple Float or Int properties - and I am using a StructuredProperty instead just to make my model more readable - is this a bad idea? If I am reading an entity with 100 StructuredProperties will I have to make 100 rpc calls or are the properties retrieved in bulk as part of the original entity?
|
What is the best way to update the UI when a celery task completes in Django?
| 23,290,106 | 1 | 3 | 761 | 0 |
python,django,asynchronous,celery,django-celery
|
Note that polling means you'll be keeping the request and connection open. On web applications with a large number of hits, this will waste a significant amount of resources. On smaller websites, however, the open connections may not be such a big deal. Pick the strategy that's easiest to implement now, in a way that will let you change it later when you actually have performance issues.
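A minimal sketch of the polling approach, using an in-memory dict as a hypothetical stand-in for a real result backend such as Redis or the Celery result store:

```python
import uuid

# Hypothetical task registry a polled status view would consult; in a real
# deployment the Celery result backend plays this role.
TASKS = {}

def start_report():
    """What the 'generate report' view would do: enqueue and hand back an id."""
    task_id = str(uuid.uuid4())
    TASKS[task_id] = "PENDING"
    return task_id

def poll_status(task_id):
    """What an AJAX status endpoint would return on each poll."""
    return {"task_id": task_id, "state": TASKS.get(task_id, "UNKNOWN")}

tid = start_report()
print(poll_status(tid)["state"])   # still pending
TASKS[tid] = "SUCCESS"             # the worker flips the state when done
print(poll_status(tid)["state"])
```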
| 0 | 0 | 0 | 0 |
2014-04-24T22:59:00.000
| 2 | 0.099668 | false | 23,281,357 | 0 | 0 | 1 | 1 |
I want the user to be able to click a button to generate a report, show him a generating report animation and then once the report finishes generating, display the word success on the page.
I am thinking of creating a celery task when the generate report button is clicked. What is the best way for me to update the UI once the task is over? Should I constantly be checking via AJAX calls if the task has been completed? Is there a better way or third party notification kind of app in Django that helps with this process?
Thanks!
Edit: I did more research and the only thing I could find is three way data bindings with django-angular and django-websocket-redis. Seems like a little bit of an overkill just for this small feature. I guess without web sockets, the only possible way is going to be constantly polling the backend every x seconds to check if the task has completed. Any more ideas?
|
Opening a pdf and reading in tables with python pandas
| 23,285,666 | 6 | 34 | 82,529 | 0 |
python,pdf,pandas
|
This is not possible directly. PDF is a data format for printing, so the table structure is lost. With some luck you can extract the text with pypdf and guess the former table columns.
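A sketch of the "extract the text and guess the columns" fallback; the extracted string below is made up, and real pypdf output is usually messier:

```python
# Hypothetical text as a PDF extractor might return it: the table structure
# is gone, but columns are often separated by whitespace we can split on.
extracted = """name qty price
widget 3 1.50
gadget 12 9.99"""

rows = [line.split() for line in extracted.splitlines()]
header, data = rows[0], rows[1:]
records = [dict(zip(header, row)) for row in data]
print(records[0]["name"])
```

This guessing breaks down as soon as cells themselves contain spaces, which is why dedicated table-extraction tools exist.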
| 0 | 0 | 0 | 0 |
2014-04-25T05:24:00.000
| 7 | 1 | false | 23,284,759 | 0 | 1 | 1 | 2 |
Is it possible to open PDFs and read it in using python pandas or do I have to use the pandas clipboard for this function?
|
Opening a pdf and reading in tables with python pandas
| 41,133,523 | 3 | 34 | 82,529 | 0 |
python,pdf,pandas
|
Copy the table data from the PDF and paste it into an Excel file (it usually gets pasted as a single column rather than multiple columns). Then use Flash Fill (available in Excel 2016; not sure about earlier Excel versions) to separate the data into the columns originally viewed in the PDF. The process is fast and easy. Then use pandas to wrangle the Excel data.
| 0 | 0 | 0 | 0 |
2014-04-25T05:24:00.000
| 7 | 0.085505 | false | 23,284,759 | 0 | 1 | 1 | 2 |
Is it possible to open PDFs and read it in using python pandas or do I have to use the pandas clipboard for this function?
|
Pass Data From Python To Html Tag
| 23,297,917 | 1 | 0 | 117 | 0 |
python,html,django
|
The part of your page that contains the paragraph tags should include a piece of JavaScript that contains a timer.
Every once in a while it does an Ajax request to get the data on "what's going on now in the system".
If you use the Ajax facilities of jQuery, which is probably the easiest, you can pass a JavaScript callback function that will be called when the request is answered. This callback function receives the data served by Django as the response to the asynchronous request. In the body of this callback you put the code to fill your paragraph.
Django doesn't have to "know" about Ajax; it just serves the required info from a different URL than your original page containing the paragraph tags. That URL is part of the Ajax request made by the client.
So it's the client that takes the initiative. Ain't no such thing as server push (fortunately).
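A minimal sketch of the server side of this, with a plain function standing in for the Django view at a hypothetical status URL:

```python
import json

# Hypothetical status store the status view would read; the background
# functions write here instead of rendering a full page.
current_status = {"stage": "idle"}

def set_status(stage):
    """Called by each background function as it progresses."""
    current_status["stage"] = stage

def status_view():
    """Body a view at e.g. /status/ would return for the Ajax request;
    the jQuery callback drops it into the <p> tag."""
    return json.dumps(current_status)

set_status("importing records")
print(status_view())
```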
| 0 | 0 | 0 | 0 |
2014-04-25T09:14:00.000
| 1 | 1.2 | true | 23,288,911 | 0 | 0 | 1 | 1 |
I am developing a project in Python using Django. The project does a lot of work in the background, so I want to notify users what's going on now in the system. For this I have declared a p tag in the HTML and I want to send data to it.
I know I can do this with templates, but I am a little confused because 5 functions need to pass their status to the p tag, and if I use render_to_response() it refreshes the page every time a status is passed from a function.
Could anyone tell me how to do this the correct way?
|
Deploying Pyramid application on AWS EC2
| 24,533,996 | 1 | 2 | 1,592 | 0 |
python,amazon-web-services,amazon-ec2,pyramid
|
I would suggest running two instances and using an Elastic Load Balancer.
Never run anything important on a single EC2 instance; EC2 instances are not durable, and one can suddenly vanish, taking whatever data you had stored on it.
Everything else should work as in the Pyramid Cookbook description.
| 0 | 0 | 0 | 1 |
2014-04-25T16:35:00.000
| 3 | 0.066568 | false | 23,298,546 | 0 | 0 | 1 | 2 |
I have been given a task to complete: Deploy my pre-existing Pyramid application onto our EC2 Linux server. I would like to do this with a minimal amount of stress and error, especially considering am I totally new to AWS.
What I have done so far:
Setup the EC2 instance which I can SSH into.
Locally develop my Pyramid application
And, we version control the application with GitHub.
We are using: Pyramid (latest), along with Python 2.7.5 and Postgresql (via SQLAlchemy and Alembic.)
What is a basic, high-level list of steps to ensure that my application is deployed appropriately?
Where, if at all, does something like Elastic Beanstalk come into play?
And, considering my project is currently in a Git repo, what steps or considerations must be taken to accommodate this?
I'm not looking for opinions on how to tweak my setup or anything like that. I am looking for a non-debatable, comprehensible set of steps or considerations to deploy my application in the most basic form. This server is for development purposes only, so I am not looking for a full-blown solution.
I have researched this topic for Django projects, and frankly, I am a bit overwhelmed with the amount of different possible options. I am trying to boil this situation down to its minimal components.
I appreciate the time and help.
|
Deploying Pyramid application on AWS EC2
| 23,324,088 | 2 | 2 | 1,592 | 0 |
python,amazon-web-services,amazon-ec2,pyramid
|
Deploying to an EC2 server is just like deploying to any other Linux server.
If you want to put it behind a load balancer, you can do so; that is fully documented.
You can also deploy to Elastic Beanstalk. Whereas EC2 is a normal Linux server, Beanstalk is more like deploying to an environment: you just push all your git changes into an S3 repo, and your app then gets built and deployed onto Beanstalk.
That means no server setup and no configuration (other than the very basics), and all new changes you push to S3 get built and rolled out to each instance of your app that may have been launched on Beanstalk.
You don't want to host your database server on EC2; use Amazon's RDS database service, which is dead simple and takes about two minutes to set up and configure.
As far as file storage goes, move everything to S3.
EC2 and Beanstalk should not be used for any form of storage.
| 0 | 0 | 0 | 1 |
2014-04-25T16:35:00.000
| 3 | 1.2 | true | 23,298,546 | 0 | 0 | 1 | 2 |
I have been given a task to complete: Deploy my pre-existing Pyramid application onto our EC2 Linux server. I would like to do this with a minimal amount of stress and error, especially considering am I totally new to AWS.
What I have done so far:
Setup the EC2 instance which I can SSH into.
Locally develop my Pyramid application
And, we version control the application with GitHub.
We are using: Pyramid (latest), along with Python 2.7.5 and Postgresql (via SQLAlchemy and Alembic.)
What is a basic, high-level list of steps to ensure that my application is deployed appropriately?
Where, if at all, does something like Elastic Beanstalk come into play?
And, considering my project is currently in a Git repo, what steps or considerations must be taken to accommodate this?
I'm not looking for opinions on how to tweak my setup or anything like that. I am looking for a non-debatable, comprehensible set of steps or considerations to deploy my application in the most basic form. This server is for development purposes only, so I am not looking for a full-blown solution.
I have researched this topic for Django projects, and frankly, I am a bit overwhelmed with the amount of different possible options. I am trying to boil this situation down to its minimal components.
I appreciate the time and help.
|
Zero Footprint Python-Social-Auth authentication
| 23,304,504 | 0 | 1 | 195 | 0 |
python,django,oauth,oauth-2.0,python-social-auth
|
I would try to approach this problem using django.contrib.auth.models.Group and django.contrib.auth.models.Permission. Create one general group with custom permissions for your apps' functionality and add all your normal users to it.
Save accounts created by python-social-auth in the default django.contrib.auth.models.User, but put them in a separate Group without any permissions.
If necessary, create a scheduled task (either with a cronjob or Celery) that goes through the users and deactivates/deletes those that have expired.
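The scheduled cleanup could look roughly like this; FakeUser is a stand-in for django.contrib.auth.models.User, and the seven-day expiry is an arbitrary choice:

```python
from datetime import datetime, timedelta

class FakeUser:
    """Stand-in for a User row in the social-only Group."""
    def __init__(self, name, last_login):
        self.name = name
        self.last_login = last_login
        self.is_active = True

def deactivate_expired(users, max_age, now):
    """Deactivate users whose last login is older than max_age; with Django
    this would be a queryset filter on last_login inside the task."""
    for u in users:
        if now - u.last_login > max_age:
            u.is_active = False
    return [u for u in users if not u.is_active]

now = datetime(2014, 4, 26)
users = [FakeUser("fresh", now - timedelta(hours=1)),
         FakeUser("stale", now - timedelta(days=30))]
expired = deactivate_expired(users, timedelta(days=7), now)
print([u.name for u in expired])
```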
| 0 | 0 | 0 | 0 |
2014-04-25T19:34:00.000
| 2 | 1.2 | true | 23,301,532 | 0 | 0 | 1 | 1 |
My site has regular users that use the django default User model, but for one particular functionality, I want people to be able to login using their social accounts (twitter, fb..etc) using python-social-auth without having these logins saved in the database with the user model (no accounts created, no ability to do certain normal user tasks) and with a session timeout.
I looked around for ways to do that but my little research bore no fruit. Any ideas?
Summary:
Separation between normal users and social (so I can limit what social auth'd users can do)
Session timeout for social auth'd users
No addition in the User table for social auth'd users (no footprint).
Optional: Obtain their social username and id for logging purposes.
Thanks
|
Taking the bits of a file and displaying them
| 23,303,846 | 0 | 0 | 41 | 0 |
java,python,file,bits
|
Sure: you read the file as a byte stream (which is how you would typically read a file), and then display the bytes in binary.
| 0 | 0 | 0 | 0 |
2014-04-25T22:01:00.000
| 2 | 0 | false | 23,303,787 | 0 | 0 | 1 | 1 |
I was wondering if it is possible to read a file's bits, meaning the 0s and 1s, and then display them, in either Java or Python. I don't know if it's possible.
|
Receiving serial port data: real-time web display + logging (with downsampling)
| 25,128,746 | 0 | 2 | 1,126 | 0 |
php,python,mysql,logging,serial-port
|
I don't know if I understand your problem correctly, but it appears you want to show a non-stop “stream” of data with your PHP script. If that's the case, I'm afraid this won't be so easy.
The basic idea of the HTTP protocol is request/response based. Your browser sends a request and receives a (static) response.
You could build some sort of “streaming” server, but streaming (such as done by youtube.com) is also not much more than periodically sending static chunks of a video file, and the player re-assembles them into a video or audio “stream”.
You could, however, look into concepts like "web sockets" and "long polling". For example, you could create a long-running PHP script that reads a certain file once every two seconds and outputs the value. (Remember to use flush(), or output will be buffered.)
A smart solution could even output a JavaScript snippet every two seconds, which again would update some sort of <div> container displaying charts and what not.
There are for example implementations of progress meters implemented with this type of approach.
| 0 | 0 | 0 | 1 |
2014-04-26T14:20:00.000
| 1 | 0 | false | 23,312,182 | 0 | 0 | 1 | 1 |
I am working on a small project which involves displaying and recording (for later processing) data received through a serial port connection from some sort of measurement device. I am using a Raspberry Pi to read and store the received information: this is done with a small program written in Python which opens the serial device, reads a frame and stores the data in a MySQL database (there is no need to poll or interact with the device, data is sent automatically).
The serial data is formatted into frames about 2.5kbits long, which are sent repeatedly at 1200baud, which means that a new frame is received about every 2 seconds.
Now, even though the useful data is just a portion of the frame, that is way too much information to store for what I need, so what I'm currently doing is "downsampling" the data by reading a frame only once per minute. Currently this is done via a cron task which calls my logging script every minute.
The problem with my setup is that the PHP webpage used to display (and process) the received data (pulled from the MySQL database) cannot show new data more than once per minute.
Thus here come my question:
How would you make the webpage show the live data (which doesn't need to be saved), while keeping the logging to the MySQL database at once per minute?
I guess the solution would involve some sort of daemon, which stores the data at the specified frequency (once per minute), while keeping the latest received data available for the php webpage (how?). What do you think? Do you have any examples of similar code/applications which I could use as a starting point?
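The daemon idea can be sketched like this, with an injected clock instead of real time and a list standing in for the MySQL inserts:

```python
class SerialLogger:
    """Hypothetical daemon core: keeps the latest frame in memory for the
    live web page and persists one frame per minute for the database."""

    def __init__(self, log_interval=60.0):
        self.log_interval = log_interval
        self.latest = None      # served to PHP, e.g. via a small JSON file
        self.logged = []        # stand-in for MySQL inserts
        self._last_log = None

    def on_frame(self, frame, now):
        self.latest = frame     # always refresh the live value
        if self._last_log is None or now - self._last_log >= self.log_interval:
            self.logged.append(frame)   # downsampled write, once per minute
            self._last_log = now

logger = SerialLogger(log_interval=60.0)
for t in range(0, 121, 2):      # a frame every 2 seconds for 2 minutes
    logger.on_frame({"t": t, "value": t * 1.5}, now=float(t))

print(len(logger.logged))       # writes at t=0, t=60, t=120
print(logger.latest["t"])
```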
Thanks!
|
How to mimic the url aliasing functionality of mod_rewrite with Pyramid (Python Framework)?
| 23,315,196 | 0 | 3 | 801 | 0 |
python,url-rewriting,pyramid
|
mod_rewrite is a webserver module that is independent of the framework your application uses. If it is configured on the server, it should operate the same regardless of whether you are using Drupal or Pyramid. Since the module is the same for each framework, the overhead is precisely the same in both cases.
| 0 | 0 | 0 | 1 |
2014-04-26T18:10:00.000
| 3 | 0 | false | 23,314,745 | 0 | 0 | 1 | 1 |
I'm working on converting an existing Drupal site to Pyramid. The Drupal site has urls that are SEO friendly example: "testsite.com/this-is-a-page-about-programming". In Drupal they have a system which maps that alias to a path like "testsite.com/node/33" without redirecting the user to that path. So the user sees "testsite.com/this-is-a-page-about-programming" but Drupal loads node/33 internally. Also if the user lands on "testsite.com/node/33" they would be redirected to "testsite.com/this-is-a-page-about-programming".
How can this be achieved in Pyramid without a major performance hit?
|
Should I load the whole database at initialization in Flask web application?
| 23,318,252 | 1 | 2 | 486 | 0 |
python,flask
|
Of course it will be faster to get data from cache that is stored in memory. But you've got to be sure that the amount of data won't get too large, and that you're updating your cache every time you update the database. Depending on your exact goal you may choose python dict, cache (like memcached) or something else, such as tries.
There's also a "middle" way to do this. You can store in memory not the whole records from the database, but just the correspondence between the search params in a request and the ids of the matching records. That way the user makes a request, you quickly look up the ids of the records needed, and query your database by id, which is pretty fast.
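The "middle" way from the second paragraph, sketched with a dict standing in for the database:

```python
# Hypothetical database and cache: we cache only the mapping from search
# parameters to record ids, then fetch full rows by primary key.
DB = {1: {"id": 1, "name": "alpha"}, 2: {"id": 2, "name": "beta"}}
SEARCH_CACHE = {}   # frozenset of params -> tuple of matching ids

def search(params):
    key = frozenset(params.items())
    if key not in SEARCH_CACHE:
        # The expensive scan runs only on a cache miss.
        SEARCH_CACHE[key] = tuple(
            rid for rid, row in DB.items()
            if all(row.get(k) == v for k, v in params.items()))
    # Primary-key lookups are fast even without full rows cached.
    return [DB[rid] for rid in SEARCH_CACHE[key]]

print(search({"name": "beta"})[0]["id"])
```

Updating a row in `DB` stays visible immediately; only the param-to-ids mapping needs invalidating when rows are added or removed.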
| 0 | 0 | 0 | 0 |
2014-04-26T22:43:00.000
| 1 | 1.2 | true | 23,317,286 | 0 | 0 | 1 | 1 |
I'm develop a web application using Flask. I have 2 approaches to return pages for user's request.
Load requesting data from database then return.
Load the whole database into python dictionary variable at initialization and return the related page when requested. (the whole database is not too big)
I'm curious which approach will have better performance?
|
Mutating ndb repeated property
| 23,324,452 | 1 | 2 | 440 | 0 |
python,google-app-engine,app-engine-ndb
|
There is no automatic way of doing this.
You need to perform queries for all types that could hold the key and then delete them in code.
If there could be a lot and/or it could take a long time you might want to consider using a task.
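In plain Python the cleanup described above amounts to something like this (dicts and strings stand in for ndb entities and keys):

```python
# Stand-in for the Department entities, each holding a repeated list of
# employee keys, mirroring ndb.KeyProperty(repeated=True).
departments = {
    "sales": ["emp1", "emp2"],
    "ops": ["emp2", "emp3"],
}

def delete_employee(key):
    """Drop the dangling key from every department that references it.
    With ndb this would be a query over Department plus a put() per
    modified entity, ideally inside a task."""
    for employees in departments.values():
        if key in employees:
            employees.remove(key)

delete_employee("emp2")
print(departments)
```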
| 0 | 1 | 0 | 0 |
2014-04-27T09:47:00.000
| 2 | 0.099668 | false | 23,321,825 | 0 | 0 | 1 | 1 |
I have two classes, Department and Employee. Department has a property declared as
employees = ndb.KeyProperty(kind=ClassB, repeated=True)
The problem is,when i delete the entity whose key is held in the employees list, the entity is deleted in the Employee datastore, but the list in Department datastore remains the same (with the key of the deleted employee still in it).
How do i make sure that when the Employee is deleted, all references to it in the Department datastore is deleted as well?
|
Refresh same page with ajax with different data
| 23,328,225 | 1 | 0 | 540 | 0 |
javascript,jquery,python,ajax
|
You can use jQuery, which gives you a very simple way to do that:
$.post( "yourpage.html", $('form').serialize() + "&ajax=true", function(response) {
$('#results').html(response);
});
Server side, detect whether ajax is true and then return only the query results instead of the whole page. They will be inserted into the element with id="results". Replacing the whole page is generally not a good idea.
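The server-side branch on the ajax flag could look roughly like this in Python (the web-framework plumbing is omitted; both functions are hypothetical):

```python
def render_results(query):
    """Stand-in for the expensive back-end query returning an HTML fragment."""
    return "<ul><li>result for %s</li></ul>" % query

def handle_request(params):
    """Return only the fragment for Ajax requests, the whole page otherwise."""
    fragment = render_results(params.get("q", ""))
    if params.get("ajax") == "true":
        return fragment                      # dropped into #results by jQuery
    return "<html><body>%s</body></html>" % fragment

print(handle_request({"q": "python", "ajax": "true"}))
```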
| 0 | 0 | 0 | 0 |
2014-04-27T18:58:00.000
| 1 | 1.2 | true | 23,327,609 | 0 | 0 | 1 | 1 |
I have a web page with a form; each time the form is submitted the same page loads, but with different data relevant to the query. On the back-end I am using Python to find the data relevant to the query.
I want to handle all this with Ajax, because the back-end process needs more time, so I need to show status to the user, i.e. what's going on now in the system.
Also, the data returned is the same HTML file but with some other data, so how can I display it on the current page? It should not be appended to the current HTML file; it is standalone.
Could anyone suggest a solution to this problem?
|
Heroku site looks different at launch than Django local server site
| 23,351,079 | 0 | 1 | 68 | 0 |
python,django,git,heroku
|
Here is a list of suggestions on how I would approach this issue with Heroku.
You should try heroku restart. This restarts your application and can help pick up new changes.
I would clear my browser cache as often I do not see changes on my web page if the browser has cached them.
I would check that the git repository on Heroku matches my local one in that it has all the newest changes made on my local server.
| 0 | 0 | 0 | 0 |
2014-04-28T20:49:00.000
| 1 | 0 | false | 23,350,910 | 0 | 0 | 1 | 1 |
My issue is that when I view my site using python manage.py runserver or foreman start, I can see my site perfectly.
However, when I git push heroku master on the surface everything appears fine as no errors are given. But when I view my site with the Heroku given site link, I do not see my updated site as I see when I view my site using python manage.py runserver or foreman start.
I am building my site using 'pinax-theme-bootstrap' and my virtualenv is in my desktop directory.
Does anyone have a solution as to why this may be the case?
|
AWS's Elastic Beanstalk not using my virtualenv: "No module named boto"
| 23,691,652 | 1 | 1 | 6,301 | 0 |
python,django,amazon-web-services,virtualenv,amazon-elastic-beanstalk
|
OK, this is a hack, and an ugly one, but it worked.
Now, the error is happening on the local machine, nothing to do with remote.
I have boto installed locally and I am NOT using virtualenv (for reasons of my own, to test a more barebones approach).
1. Note where the error is happening: in .git/AWSDevTools/aws/dev_tools.py
2. Run a non-virtualenv python and:
import boto
print boto.__file__
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/__init__.pyc
3. Open up that dev_tools.py and add this on top:
import sys
sys.path.append("/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages")
Since you are appending to sys.path, you will only import modules from that addition if git aws.push hasn't found them in its own stuff.
That fixes the problem for now, except that it will re-occur in the next directory where you do an "eb init".
4. Go to where you have unzipped the CLI. In my case:
$ cd ~/bin/AWS-ElasticBeanstalk-CLI-2.6.1
5. Look for the original of dev_tools.py used by eb init:
$ find ~/bin -name dev_tools.py
~/bin/AWS-ElasticBeanstalk-CLI-2.6.1/AWSDevTools/Linux/scripts/aws/dev_tools.py
Edit this file as in step 3.
If you do another eb init elsewhere, you will see that your ugly hack is there as well.
Not great, but it works.
p.s. sorry for the formatting, newbie here, it's late and I wanna go skating.
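The essence of the hack, sketched in a runnable form with the stdlib json package standing in for boto:

```python
import os
import sys

# Find where the working interpreter has the module installed, then append
# that directory to sys.path, as the dev_tools.py edit above does for the
# site-packages holding boto. json is just a runnable stand-in here.
import json
site_dir = os.path.dirname(os.path.dirname(json.__file__))

if site_dir not in sys.path:
    sys.path.append(site_dir)   # appended, so it never shadows existing modules

print(site_dir in sys.path)
```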
| 0 | 0 | 0 | 0 |
2014-04-29T02:25:00.000
| 2 | 1.2 | true | 23,354,411 | 1 | 0 | 1 | 1 |
I'm trying to use AWS's Elastic Beanstalk, but when I run eb start, I get "ImportError: No module named boto Cannot run aws.push for local repository HEAD."
I am in the virtual environment of my Django project.
I ran pip install boto and it was successful.
I did pip freeze > requirements.txt, git add requirements.txt, and git commit -m 'Added boto to requirements.txt', all successful.
Then I got into the python shell and imported boto without any resulting errors.
Finally, I ran eb start on the normal command line again. Same "no module named boto" error.
It seems like the eb start command is not using my virtualenv. What should I do?
|
Django South migration is throwing an error 'module' object has no attribute 'SET_NULL'
| 23,358,017 | 0 | 0 | 338 | 0 |
python,django,django-models,django-south
|
OK, this is not a valid question. I am embarrassed to admit I made a small tweak to the migration script that caused the problem. Please ignore this question - it seems I don't have a way to delete a question I asked!
| 0 | 0 | 0 | 0 |
2014-04-29T05:30:00.000
| 2 | 1.2 | true | 23,356,211 | 0 | 0 | 1 | 1 |
I just generated the migration scripts through ./manage.py schemamigration --auto and ran it. I get the following error. I am stumped as to what it could mean. I have been using SET_NULL for a while now. So this is something new that didn't occur earlier. Any idea what could be wrong?
Traceback (most recent call last):
File "./manage.py", line 16, in <module>
execute_from_command_line(sys.argv)
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/django/core/management/base.py", line 242, in run_from_argv
self.execute(*args, **options.__dict__)
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/django/core/management/base.py", line 285, in execute
output = self.handle(*args, **options)
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/south/management/commands/schemamigration.py", line 111, in handle
old_orm = last_migration.orm(),
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/south/utils/__init__.py", line 62, in method
value = function(self)
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/south/migration/base.py", line 432, in orm
return FakeORM(self.migration_class(), self.app_label())
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/south/orm.py", line 48, in FakeORM
_orm_cache[args] = _FakeORM(*args)
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/south/orm.py", line 134, in __init__
self.retry_failed_fields()
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/south/orm.py", line 377, in retry_failed_fields
fname, modelname, e
ValueError: Cannot successfully create field 'winner' for model 'match': 'module' object has no attribute 'SET_NULL'.
|
memcache.get returns wrong object (Celery, Django)
| 24,082,360 | 6 | 9 | 2,113 | 0 |
python,django,caching,memcached,celery
|
Solved it finally:
Celery has a dynamic scaling feature: it's capable of adding/killing workers according to load, and it does this by forking an existing worker.
Open sockets and files are copied to the forked process, so both processes share them, which leads to a race condition when one process reads the response meant for another. Simply put, it's possible for one process to read the response intended for a second one, and vice versa.
from django.core.cache import cache: this object stores a pre-connected memcached socket. Don't use it when your process could be dynamically forked, and don't use stored connections, pools and the like.
OR store them keyed by the current PID, and check the PID each time you access the cache.
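The PID-checking workaround from the last bullet can be sketched like this (connect is any function that opens a fresh memcached socket):

```python
import os

class PidAwareCache:
    """Remember which process created the connection and rebuild it after a
    fork, so a child never reuses the parent's socket."""

    def __init__(self, connect):
        self._connect = connect
        self._pid = None
        self._conn = None

    def connection(self):
        pid = os.getpid()
        if self._conn is None or self._pid != pid:
            self._conn = self._connect()   # fresh socket for this process
            self._pid = pid
        return self._conn

made = []
cache = PidAwareCache(connect=lambda: made.append(1) or object())
c1 = cache.connection()
c2 = cache.connection()
print(c1 is c2, len(made))   # the same process reuses one connection
```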
| 0 | 1 | 0 | 0 |
2014-04-29T07:54:00.000
| 2 | 1 | false | 23,358,787 | 0 | 0 | 1 | 1 |
Here is what we have currently:
we're trying to get cached django model instance, cache key includes name of model and instance id. Django's standard memcached backend is used. This procedure is a part of common procedure used very widely, not only in celery.
sometimes(randomly and/or very rarely) cache.get(key) returns wrong object: either int or different model instance, even same-model-different-id case appeared. We catch this by checking correspondence of model name & id and cache key.
bug appears only in context of three of our celery tasks, never reproduces in python shell or other celery tasks. UPD: appears under long-running CPU-RAM intensive tasks only
cache stores correct value (we checked that manually at the moment the bug just appeared)
calling the same task again with the same arguments might not reproduce the issue, although the probability is much higher, so bug appearances tend to "group" in the same period of time
restarting celery solves the issue for the random period of time (minutes - weeks)
*NEW* this isn't connected with memory overflow. We always have at least 2Gb free RAM when this happens.
*NEW* we have cache_instance = cache.get_cache("cache_entry") in static code. During investigation, I found that at the moment the bug happens cache_instance.get(key) returns wrong value, although get_cache("cache_entry").get(key) on the next line returns correct one. This means either bug disappears too quickly or for some reason cache_instance object got corrupted.
Isn't cache instance object returned by django's cache thread safe?
*NEW* we logged very strange case: as another wrong object from cache, we got model instance w/o id set. This means, the instance was never saved to DB therefore couldn't be cached. (I hope)
*NEW* At least one MemoryError was logged these days
I know, all of this sounds like some sort of magic.. And really, any ideas how that's possible or how to debug this would be very appreciated.
PS: My current assumption is that this is connected with multiprocessing: as soon as cache instance is created in static code and before Worker process fork this would lead to all workers sharing same socket (Does it sound plausibly?)
|
python-social-auth and github, I have this error "The redirect_uri MUST match the registered callback URL for this application"
| 63,099,520 | 0 | 1 | 4,663 | 0 |
python,django,github,python-social-auth
|
I did solve the login redirect URI mismatch by just using http://127.0.0.1:8000/
| 0 | 0 | 0 | 0 |
2014-04-29T09:02:00.000
| 3 | 0 | false | 23,360,160 | 0 | 0 | 1 | 2 |
I'm using python-social-auth on a project to authenticate the user with Github.
I need to redirect the user depending on the link they use. To do that I'm using the next attribute on the url, and I didn't declare any redirect url in my github app nor in my django settings.
This is the href attribute I'm using for my link : {% url 'social:begin' 'github' %}?next={% url 'apply' j.slug %}
And the first time I click on it, I'm getting redirected to my homepage with this error in the url field : http://127.0.0.1:8000/?error=redirect_uri_mismatch&error_description=The+redirect_uri+MUST+match+the+registered+callback+URL+for+this+application.&error_uri=https%3A%2F%2Fdeveloper.github.com%2Fv3%2Foauth%2F%23redirect-uri-mismatch&state=Ui1EOKTHDhOkNJESI5RTjOCDEIdfFunt
But after the first time, the link works.
I don't know where the problem is. I hope someone can help me. Thanks
|
python-social-auth and github, I have this error "The redirect_uri MUST match the registered callback URL for this application"
| 57,011,829 | 0 | 1 | 4,663 | 0 |
python,django,github,python-social-auth
|
that worked
Setting your domain to 127.0.0.1 in your hosts file should work, something like this
127.0.0.1 example.com
| 0 | 0 | 0 | 0 |
2014-04-29T09:02:00.000
| 3 | 0 | false | 23,360,160 | 0 | 0 | 1 | 2 |
I'm using python-social-auth on a project to authenticate the user with Github.
I need to redirect the user depending on the link they use. To do that I'm using the next attribute on the url, and I didn't declare any redirect url in my github app nor in my django settings.
This is the href attribute I'm using for my link : {% url 'social:begin' 'github' %}?next={% url 'apply' j.slug %}
And the first time I click on it, I'm getting redirected to my homepage with this error in the url field : http://127.0.0.1:8000/?error=redirect_uri_mismatch&error_description=The+redirect_uri+MUST+match+the+registered+callback+URL+for+this+application.&error_uri=https%3A%2F%2Fdeveloper.github.com%2Fv3%2Foauth%2F%23redirect-uri-mismatch&state=Ui1EOKTHDhOkNJESI5RTjOCDEIdfFunt
But after the first time, the link works.
I don't know where the problem is. I hope someone can help me. Thanks
|
django - comparing old and new field value before saving
| 23,363,551 | 49 | 81 | 49,211 | 0 |
python,django,django-signals
|
It is better to do this at ModelForm level.
There you get all the Data that you need for comparison in save method:
self.data : Actual Data passed to the Form.
self.cleaned_data : Data cleaned after validations, Contains Data eligible to be saved in the Model
self.changed_data : List of Fields which have changed. This will be empty if nothing has changed
If you want to do this at Model level then you can follow the method specified in Odif's answer.
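Whichever level you do the comparison at, the core of it is just a dict diff. A minimal stdlib sketch (the helper and field names are mine; in Django you would build `old` from `Model.objects.get(pk=...)` or `form.initial`):

```python
def changed_fields(old, new):
    """Return {field: (old_value, new_value)} for every field whose
    value differs between the old and new snapshots."""
    return {k: (old[k], new[k])
            for k in old
            if k in new and old[k] != new[k]}
```

In a ModelForm, self.changed_data already gives you the names of changed fields; a helper like this additionally gives you both values when you have the old instance at hand.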
| 0 | 0 | 0 | 0 |
2014-04-29T09:43:00.000
| 11 | 1 | false | 23,361,057 | 0 | 0 | 1 | 1 |
I have a django model, and I need to compare old and new values of field BEFORE saving.
I've tried the save() inheritance, and pre_save signal. It was triggered correctly, but I can't find the list of actually changed fields and can't compare old and new values. Is there a way? I need it for optimization of pre-save actions.
Thank you!
|
Recording 24-bit audio with pyaudio
| 35,827,692 | -1 | 5 | 2,262 | 0 |
python,audio,pyaudio
|
Check whether the low-order 8 bits of each sample are always 0. If they are not, you have a true 24-bit recording: 16-bit audio widened to 24 bits is typically left-shifted, which leaves the least significant byte zero.
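A sketch of that check for pyaudio's packed paInt24 frames (assumptions: little-endian 3-byte samples, and 16-bit audio widened to 24 bits by a left shift, which pads the least significant byte with zero):

```python
def is_true_24bit(frames: bytes) -> bool:
    """frames: raw paInt24 data, 3 bytes per sample, little-endian.
    If every sample's low byte is zero, the extra 8 bits carry no
    information and the recording is effectively 16-bit."""
    low_bytes = frames[0::3]  # least significant byte of each sample
    return any(b != 0 for b in low_bytes)
```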
| 0 | 0 | 0 | 0 |
2014-04-29T16:43:00.000
| 2 | -0.099668 | false | 23,370,556 | 1 | 0 | 1 | 1 |
I need to record 24-bit audio (because it's the archival standard for audio digitization). However, the wave library seems to only go up to 16-bit.
It looks like pyaudio can work with 24-bit audio but every example I've found shows pyaudio using the wave library, meaning it has to save 16-bit.
Is it possible to record and playback 24-bit audio with pyaudio?
|
Python Django: urls.py questions
| 23,379,239 | 0 | 1 | 173 | 0 |
python,django,django-urls
|
For 1), if you don't want to do a separate route for every single route on your website, you'll need middleware that implements process_exception and outputs an HttpResponseRedirect.
For 2 and 3, those are rules that are presumably limited to specific routes, so you can do them without middleware.
2 might be doable in urls.py with a RedirectView, but since the relevant bit is a query string argument, I would probably make that an actual view function that looks at the query string. Putting a ? character in a url regex seems strange because it will interfere with any other use of query strings on that endpoint, among other reasons.
For 3, that's a straightforward RedirectView and you can do it entirely in urls.py.
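For case 3, a hedged urls.py sketch (the route shape is hypothetical; RedirectView interpolates captured keyword arguments into its url pattern):

```python
# urls.py sketch (assumes Django 1.8+ list-style urlpatterns;
# on 1.6-era projects wrap the list in patterns('')).
from django.conf.urls import url
from django.views.generic import RedirectView

urlpatterns = [
    # 3) example.com/foo/bar/John -> example.com/foo/bar/name=John
    url(r'^foo/bar/(?P<name>\w+)$',
        RedirectView.as_view(url='/foo/bar/name=%(name)s')),
]
```

Case 2 would instead be a small view function that reads request.GET and returns an HttpResponseRedirect, since the query string never reaches the url regex.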
| 0 | 0 | 0 | 0 |
2014-04-30T02:11:00.000
| 2 | 1.2 | true | 23,378,498 | 0 | 0 | 1 | 1 |
Need a little help with my urls.py and other stuff.
How can I replicate this in Django?
1) When user requests a non-existent page it will redirect to one up the directory level. Ex: example.com/somegoodpage/somebadpage should be redirected to example.com/somegoodpage.
2) When user requests page example.com/foo/bar/?name=John it will make url to example.com/foo/bar/name=John
3) When user requests page example.com/foo/bar/John it will change url to example.com/foo/bar/name=John.
Any help is greatly appreciated. Thank You.
|
How to set default url in settings.py for "www.example.com/path"
| 23,384,956 | 1 | 0 | 617 | 0 |
django,python-2.7,django-settings
|
In your settings.py, add FORCE_SCRIPT_NAME = 'path', where 'path' is the directory where the project resides.
For example, if your site exists at http://john.example.com/mywebsite, settings.py would contain FORCE_SCRIPT_NAME = '/mywebsite'.
| 0 | 0 | 0 | 0 |
2014-04-30T08:45:00.000
| 1 | 0.197375 | false | 23,383,569 | 0 | 0 | 1 | 1 |
I am trying to deploy my Django project on a server, but this server serves the site under an extra path segment, like this: http://john.example.com/mywebsite instead of http://john.example.com.
So whenever I redirect from the homepage to other pages, I get error messages because the redirected pages are missing the extra /mywebsite segment.
For example:
I want to go to http://john.example.com/mywebsite/apple, but when I click a link in the template it directs me to http://john.example.com/apple.
So I wonder if there is a way to fix this by setting the default base URL to http://john.example.com/mywebsite instead of http://john.example.com/, so I don't have to fix all my code.
Thanks you
|
setting custom viewerProcess fails
| 23,422,211 | 0 | 2 | 902 | 0 |
python,menu,process,nuke
|
Turns out that I have to write it in the menu.py file instead of the init.py file.
And for some reason the name 'Show Primary Grade' works, even though I was unable to find its pass name, despite being able to track down its gizmo file...
| 0 | 0 | 0 | 0 |
2014-04-30T10:39:00.000
| 2 | 0 | false | 23,385,917 | 0 | 0 | 1 | 1 |
I am trying to set my viewerProcess option to be 'Show Primary Grade' instead of 'Film' whenever my nuke is booted
However, due to the limited information I was able to find on the net, I tried inserting the code nuke.knobDefault("viewerProcess", "Show Primary Grade") in init.py, but I am unable to get it working, and I don't even know whether the code I have written is right or wrong.
As the Show Primary Grade is a custom plugin that my workplace is using (it is shown in this naming in the list of selection), is there any ways to check and make sure that I am writing it right?
Oh and by the way, am I able to set its script Editor to be like Maya, where whenever the user clicks on something, it will display the results in the output field?
|
django import ipdb; ipdb.set_trace(); still want to run debugger even if commented. WHY?
| 23,388,104 | 1 | 0 | 1,057 | 0 |
python,django,debugging,ipdb
|
I'd ensure I've killed the runserver/gunicorn and restarted it cleanly, to ensure there are no threads that are still running ipdb. (if you're using django-devserver, for instance, that's multi-threaded)
| 0 | 0 | 0 | 0 |
2014-04-30T11:11:00.000
| 1 | 1.2 | true | 23,386,550 | 0 | 0 | 1 | 1 |
I have a problem with ipdb. I comment it out when I am not using it, but after that, a single refresh of the web page still fires the debugger anyway. I have to refresh at least two times or so to stop Django from dropping into debugging.
Additionally, I am experiencing the error [Errno 32] Broken pipe extremely often.
(If it matters, I am running it in a vagrant-based VM.)
|
How to detect POST after form submit in Selenium python?
| 45,631,061 | 0 | 3 | 2,251 | 0 |
python,django,forms,selenium
|
I guess it depends on how deep down the rabbit hole you want to go.
If you're building a test for the functional side from the user's perspective, and your GET action results in changes on the webpage, then trigger the submit with selenium, then wait for the changes to propagate to the webpage (like waiting for one/more elements to change value or waiting for an element to appear)
If you want to build an unit test, then all you should be testing is the ability to Submit the data, not also the ability of the javascript code to do a POST request, then a GET then display the data.
If you want to build an integration test, then you will need to check that each individual action in the sequence you described is performed correctly in whatever scenario you deem appropriate and then check that the total result of those actions is as expected. The tricky part will be chaining all those checks together.
If you want to build an end to end test, then you need to check for all of the above, plus changes to any permanent storage locations that the code you test changes (like databases or in-memory structures) plus whatever stress/security/usability/performance checks your software needs to pass in your specific context.
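The "wait for the changes to propagate" step in the first option is essentially a polling loop, which Selenium packages as WebDriverWait. A generic stdlib sketch of that pattern (the names are mine, not Selenium's):

```python
import time

def wait_until(predicate, timeout=10.0, interval=0.5):
    """Poll predicate() until it returns a truthy value or the
    timeout expires; True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
```

With Selenium you would pass a predicate that re-reads the page, e.g. lambda: driver.find_element_by_id('status').text == 'Saved' (element id assumed for illustration).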
| 0 | 0 | 1 | 0 |
2014-04-30T20:04:00.000
| 2 | 0 | false | 23,397,090 | 0 | 0 | 1 | 1 |
I'm using Selenium (python bindings, with Django) for automated web app testing. My page has a form with a button, and the button has a .click() handler that captures the click (i.e. the form is not immediately submitted). The handler runs function2(), and when function2() is done, it submits the form.
In my test with Selenium, I fill in the form, then click the Submit button. I want to verify that the form is eventually submitted (resulting in a POST request). The form POSTs and then redirects with GET to the same url, so I can't check for a new url. I think I need to check that the POST request occurs. How do I check for successful form submission?
Alternatively I could check at the database level to see that some new object is created, but that would make for a bad "unit" test.
|
How to create reports of form view in Openerp-7?
| 23,443,504 | 0 | 0 | 480 | 0 |
python,openerp-7
|
You can use webkit reports,
Install report_webkit module in OpenERP
Install wkhtmltopdf
in terminal type, sudo apt-get install wkhtmltopdf
in terminal type which wkhtmltopdf and copy the path
in OpenERP, Go to Settings->Technical->Parameters->System Parameters
And create a new record with key 'webkit_path' and paste the path in value field
Now you can create reports using mako templates
For testing wkhtmltopdf is working correctly,
in terminal type, wkhtmltopdf www.google.com google.pdf
| 0 | 0 | 0 | 0 |
2014-05-02T06:17:00.000
| 1 | 0 | false | 23,421,959 | 0 | 0 | 1 | 1 |
I wanted to know what the process is for creating reports in my OpenERP 7 module. Do I have to install some specific module for reporting, or is there a default configuration? I want to create reports of my forms. For instance, if I want to create a report from a form view, what am I supposed to do?
Hopes for suggestion
Thanks and regards
|
Comments scraping without using Api
| 37,941,838 | 0 | 0 | 514 | 0 |
python,web-crawler,scrapy
|
HtmlAgilityPack helped me in parsing and reading Xpath for the reviews. It worked :)
| 0 | 0 | 1 | 0 |
2014-05-02T08:11:00.000
| 2 | 1.2 | true | 23,423,582 | 0 | 0 | 1 | 1 |
I am using scrapy to scrape reviews about books from a site. So far I have made a crawler and scraped the comments for a single book by giving its url as the start url myself, and I even had to find the tags for that book's comments from the page's source code myself. And it worked. But the problem is that I now want the work I've done manually to happen automatically. I.e., I want the crawler to be able to find a book's page on the website and scrape its comments. I am extracting comments from goodreads, which doesn't provide uniform urls; even the tags are different for different books. Plus I don't want to use the API. I want to do all the work myself. Any help would be appreciated.
|
cannot marshal objects
| 23,648,694 | 0 | 2 | 4,577 | 0 |
python,django,web-services,openerp,xmlrpclib
|
Alternatively, you can promote datetime.date() to datetime.datetime() before sending the reply.
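A sketch of that promotion (the helper name is mine; xmlrpclib can marshal datetime.datetime but not datetime.date, and OpenERP's XML-RPC interface also commonly accepts plain '%Y-%m-%d' strings for date fields):

```python
import datetime

def promote_date(value):
    """Turn a datetime.date into a datetime.datetime at midnight so the
    XML-RPC marshaller accepts it; leave everything else untouched."""
    if isinstance(value, datetime.date) and not isinstance(value, datetime.datetime):
        return datetime.datetime(value.year, value.month, value.day)
    return value

# Alternative: send the date as a string, value.strftime('%Y-%m-%d')
```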
| 0 | 0 | 0 | 0 |
2014-05-02T10:59:00.000
| 2 | 0 | false | 23,426,515 | 0 | 0 | 1 | 1 |
I am trying to save a record from Django (front-end) to OpenERP (back-end), using OpenERP's web service via xmlrpclib. It works well with normal string and number data, but when I try to pass a date field, it throws the error: cannot marshal <type 'datetime.date'> objects
Please help me.
|
Products are not shown if the user is not logged in
| 23,427,146 | 1 | 0 | 56 | 0 |
python,django,mezzanine,cartridge
|
The products most likely aren't published, but can be previewed by an authenticated administrator.
Check the "status" and "published from" fields for each product.
| 0 | 0 | 0 | 0 |
2014-05-02T11:20:00.000
| 1 | 1.2 | true | 23,426,916 | 0 | 0 | 1 | 1 |
I am trying to develop a small project to learn how mezzanine and cartridge work.
I have the problem that items in the shop are listed only if I am logged in, while I'd like to be able to show them to unauthorized users.
Is there a setting that has to be toggled?
|
Parsing multiple News articles
| 23,464,291 | 0 | 0 | 917 | 0 |
python,parsing,html-parsing,beautifulsoup
|
Your solution is really going to be specific to each website page you want to scrape, so, without knowing the websites of interest, the only thing I could really suggest would be to inspect the page source of each page you want to scrape and look if the article is contained in some html element with a specific attribute (either a unique class, id, or even summary attribute) and then use beautiful soup to get the inner html text from that element
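As an illustration of that idea using only the standard library (the "article-body" class name is a per-site assumption you find by inspecting the page source; with Beautiful Soup the equivalent is roughly soup.find('div', class_='article-body')):

```python
from html.parser import HTMLParser

class ArticleExtractor(HTMLParser):
    """Collect the text inside <div class="article-body">, ignoring
    everything outside it. Nested <div>s inside the article are counted
    so we know when the article element really ends."""
    def __init__(self):
        super().__init__()
        self.depth = 0        # > 0 while inside the article div
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            if tag == "div":
                self.depth += 1
        elif tag == "div" and ("class", "article-body") in attrs:
            self.depth = 1

    def handle_endtag(self, tag):
        if self.depth and tag == "div":
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.chunks.append(data)
```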
| 0 | 0 | 1 | 0 |
2014-05-04T09:13:00.000
| 2 | 0 | false | 23,454,496 | 0 | 0 | 1 | 1 |
I have built a summarization program that uses a parser to parse multiple websites at a time. I extract only the <p> elements from each article.
This pulls in a lot of random content that is unrelated to the article. I've seen several people who can parse any article perfectly. How can I do it? I am using Beautiful Soup.
|
How to start a privileged process through web on the server?
| 23,454,864 | 0 | 0 | 166 | 0 |
python,linux,web,flask,raspberry-pi
|
Best practice is to never do this kind of thing. If you are giving sudo access to your Pi from the internet and then executing user input, you are giving everyone on the internet the possibility of executing arbitrary commands on your system. I understand that this is probably your pet project, but still, imagine someone getting access to your computer and turning on the camera when you don't really expect it.
| 0 | 0 | 0 | 1 |
2014-05-04T09:16:00.000
| 2 | 0 | false | 23,454,521 | 0 | 0 | 1 | 1 |
I have created a web-app using Python Flask framework on Raspberry Pi running Raspbian. I want to control the hardware and trigger some sudo tasks on the Pi through web.
The Flask based server runs in non-sudo mode listening to port 8080. When a web client sends request through HTTP, I want to start a subprocess with sudo privileges. (for ex. trigger changes on gpio pins, turn on camera etc.). What is the best practice for implementing this kind of behavior?
The webserver can ask for sudo password to the client, which can be used to raise the privileges. I want some pointers on how to achieve this.
|
In Python, can I hide a base class's members?
| 23,457,554 | 2 | 5 | 1,650 | 0 |
python,pycharm
|
I'd suggest you use composition instead of inheritance. Then you design the class's interface yourself and decide which methods are available.
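A minimal sketch of the composition approach (all names are illustrative): the wrapper defines the interface, so only forwarded methods appear in autocomplete suggestions:

```python
class FullFeaturedBase:
    """Stands in for the big Django-derived base class."""
    def render(self):
        return "rendered"

    def internal_plumbing(self):
        return "noise students should not see"

class StudentView:
    """Composition: hold the base object privately and forward only the
    methods students actually need."""
    def __init__(self):
        self._impl = FullFeaturedBase()

    def render(self):
        return self._impl.render()
```

Autocomplete on a StudentView instance now suggests only render (plus the private _impl), instead of the full inherited surface.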
| 0 | 0 | 0 | 1 |
2014-05-04T14:43:00.000
| 3 | 0.132549 | false | 23,457,532 | 1 | 0 | 1 | 1 |
I am making a programming framework (based on Django) that is intended for students with limited programming experience. Students are supposed to inherit from my base classes (which themselves are inherited from Django models, forms, and views).
I am testing this out now with some students, and the problem is that when they write code in their IDE (most of them are using PyCharm), autocomplete gives them a ton of suggestions, since there are so many inherited methods and attributes, 90% of which are not relevant to them.
Is there some way to hide these inherited members? At the moment I am primarily thinking of how to hide them in auto-complete (in PyCharm and other IDEs). They can (and probably should) still work if called, but just not show up in places like auto-complete.
I tried setting __dict__, but that did not affect what showed up in autocomplete. Another idea I have is to use composition instead of inheritance, though I would have to think this through in more detail.
Edit: This framework is not being used in CS classes; rather, students will be using it to build apps for non-CS domain. So my priority is to keep it simple as possible, perhaps even if it's not a "pure" approach. (Nevertheless, I am considering those arguments as they do have merit.)
|
Django: Atomic operations on a directory in media storage
| 23,479,318 | 0 | 0 | 37 | 0 |
python,django,mercurial
|
This is something you should fix at the web application level, not at the Mercurial level. If you're fine with having people wait, set up a distributed locking scheme where the web worker thread tries to acquire a repository-specific lock from shared memory/storage before taking any actions. If it can't acquire the lock, respond with either a status-code 503 with a Retry-After header, or have the web-worker thread retry until it gets the lock or times out.
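A sketch of that acquire-with-retry shape (the LockStore is an in-memory stand-in for the shared store, e.g. one Redis SETNX key per repository; a None result is the signal for the caller to answer 503 with Retry-After):

```python
import time

class LockStore:
    """Stand-in for shared storage; holds one lock per repository."""
    def __init__(self):
        self._held = set()

    def acquire(self, key):
        if key in self._held:
            return False
        self._held.add(key)
        return True

    def release(self, key):
        self._held.discard(key)

def with_repo_lock(store, repo, action, retries=3, wait=0.05):
    """Retry acquiring the repo lock; run action() while holding it.
    Returns action()'s result, or None if the lock never became free."""
    for _ in range(retries):
        if store.acquire(repo):
            try:
                return action()
            finally:
                store.release(repo)
        time.sleep(wait)
    return None  # caller responds 503 / Retry-After
```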
| 0 | 0 | 0 | 0 |
2014-05-05T08:33:00.000
| 1 | 0 | false | 23,468,132 | 0 | 0 | 1 | 1 |
Our Django project provides interfaces to users to create repository
create new repo
add new changes to existing repo
Any user can access any repo to make changes directly via an HTTP POST containing changes.
It's totally fine if the traffic is low. But if the traffic increases to the point that multiple users want to add changes to the same repo at exactly the same time, how do we handle it?
We currently use Hg (Mercurial) for repos
|
Long-running Openshift Cron
| 23,485,693 | 4 | 3 | 629 | 0 |
python,cron,flask,openshift,nohup
|
I'm lazy. Cut and paste :)
I have been told 5 minutes is the limit for the free accounts. That includes all background processes. I asked a similar question here on SO.
| 0 | 1 | 0 | 1 |
2014-05-05T16:40:00.000
| 1 | 1.2 | true | 23,477,570 | 0 | 0 | 1 | 1 |
I have a long-running daily cron on OpenShift. It takes a couple of hours to run. I've added nohup and I'm running it in the background. It still seems to time out at the default 5 minutes (it works correctly up to that point). I'm receiving no errors and it works perfectly fine locally.
nohup python ${OPENSHIFT_REPO_DIR}wsgi/manage.py do_something >> \
${OPENSHIFT_DATA_DIR}do_something_data.log 2> \
${OPENSHIFT_DATA_DIR}do_something_error.log &
Any suggestion is appreciated.
|
How does apache runs a application when a request comes in?
| 23,501,387 | 1 | 0 | 54 | 0 |
python,apache
|
It looks like your application is using zmq to bind to some port.
As you have suspected already, each request can be run as an independent process, so the processes compete for the port to bind to.
There can be so-called workers, each running one process that handles http/wsgi requests, and each trying to bind.
You should redesign your app not to use bind but connect; this will probably require another process that uses zeromq to serve whatever you do with it (but this last part depends on what you do in your app).
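The underlying failure is an OS-level one: only one process may bind a given address/port, and each wsgi worker is a separate process. A small stdlib demonstration of the same error with plain sockets (not zmq):

```python
import socket

def second_bind_fails():
    """Bind an ephemeral port, then try to bind it again from a second
    socket, the way a second wsgi worker would; the OS refuses."""
    a = socket.socket()
    b = socket.socket()
    try:
        a.bind(("127.0.0.1", 0))          # "worker 1" grabs a port
        port = a.getsockname()[1]
        try:
            b.bind(("127.0.0.1", port))   # "worker 2" -> EADDRINUSE
            return False
        except OSError:
            return True
    finally:
        a.close()
        b.close()
```

Hence the advice: let one long-lived process bind, and have the per-request code connect to it.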
| 0 | 0 | 0 | 0 |
2014-05-06T17:16:00.000
| 1 | 0.197375 | false | 23,500,942 | 0 | 0 | 1 | 1 |
I have a python web application running on apache2, deployed with mod_wsgi. The application has a thread continuously running. This thread is a ZeroMQ thread listening to a port in a loop. The application does not maintain sessions. Now, if I open the browser and send a request to the apache server, the data is accepted the first time. When I send the request a second time, it shows an Internal Server Error. When I checked the error log for the traceback, it shows a ZMQError: Address already in use.
Does apache reload the application on each request sent from the browser, so that the ZeroMQ thread is created every time and tries to bind the port again, and since the port has already been bound it shows the error?
|
Use Flask with Amazon DynamoDB without SQLite
| 23,512,512 | 2 | 0 | 637 | 1 |
python,flask,amazon-dynamodb
|
No. SQLite is just one option for backend storage. SQLite is mentioned in the tutorial only for its simplicity in getting something working fast on a typical local developer's environment. (No db or service to install/configure, etc.)
| 0 | 0 | 0 | 0 |
2014-05-07T06:21:00.000
| 1 | 0.379949 | false | 23,510,212 | 0 | 0 | 1 | 1 |
I am writing a small web application using Flask and I have to use DynamoDB as backend for some hard requirements.
I went through the tutorial on Flask website without establishing sqlite connection. All data were pulled directly from DynamoDB and it seemed to work.
Since I am new to web development in general and Flask framework, do you see any problems with this approach?
|
scrapyd pool_intervel to scheduler a spider
| 24,659,705 | 1 | 0 | 215 | 0 |
python,scrapy,scrapyd
|
Maybe you should do a cron job that executes every three hours and performs a curl call to Scrapyd to schedule the job.
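A sketch of that setup (project and spider names are placeholders; schedule.json is Scrapyd's standard scheduling endpoint, and the crontab line fires every three hours):

```shell
# crontab entry: run at minute 0 of every 3rd hour
# 0 */3 * * * /usr/local/bin/schedule_spider.sh

# schedule_spider.sh: ask the local Scrapyd to queue the spider
curl http://localhost:6800/schedule.json \
     -d project=myproject \
     -d spider=myspider
```

Note that poll_interval only controls how often Scrapyd polls its internal queue for pending jobs; it does not re-run finished jobs, which is why changing it had no effect.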
| 0 | 0 | 1 | 0 |
2014-05-07T19:25:00.000
| 1 | 0.197375 | false | 23,526,579 | 0 | 0 | 1 | 1 |
I want to make my spider start every three hours.
I have a scrapyd configuration file located in the c:/scrapyd folder.
I changed the poll_interval to 100.
The spider works, but it didn't repeat every 100 seconds.
How do I do that, please?
|
Run python from virtualenv in org file & HTML5 export in org v.7.9.3
| 23,557,258 | 0 | 1 | 510 | 0 |
python,emacs,virtualenv,org-mode
|
Reads like a bug; please consider reporting it at [email protected]
As a workaround, try setting the virtualenv on the Python side, i.e. pass PYTHONPATH as an argument.
Alternatively, mark the source block as a region and execute it the common way, bypassing org.
| 0 | 0 | 0 | 1 |
2014-05-08T02:09:00.000
| 1 | 0 | false | 23,531,555 | 0 | 0 | 1 | 1 |
I'm running into a few issues on my Emacs + Org mode + Python setup. I thought I'd put this out there to see if the community had any suggestions.
Virtualenv:
I'm trying to execute a python script within a SRC block using a virtual environment instead of my system's python implementation. I have a number of libraries in this virtual environment that I don't have on my system's python (e.g. Matplotlib). Now, I set python-shell-virtualenv-path to my virtualenv's root directory. When I run M-x run-python the shell runs from my virtual environment. That is, I can import Matplotlib with no problems. But when I import Matplotlib within a SRC block I get an import error.
How can I have it so the SRC block uses the python in my virtual
environment and not my system's python?
Is there any way I can set
the path to a given virtual environment automatically when I load an
org file?
HTML5 Export:
I'm trying to export my org-files in 'html5', as opposed to the default 'xhtml-strict'. The manual says to set org-html-html5-fancy to t. I tried searching for org-html-html5-fancy in M-x org-customize but I couldn't find it. I tried adding (setq org-html-html5-fancy t) to my init.el, but nothing happened. I'm not at all proficient in emacs-lisp so my syntax may be wrong. The manual also says I can set html5-fancy in an options line. I'm not really sure how to do this. I tried #+OPTIONS html5-fancy: t but it didn't do anything.
How can I export to 'html5' instead of 'xhtml-strict' in org version
7.9.3f and Emacs version 24.3.1?
Is there any way I can view and customize the back-end that parses
the org file to produce the html?
I appreciate any help you can offer.
|
Why not to extend User in Django REST
| 23,548,216 | 2 | 2 | 74 | 0 |
python,django,django-rest-framework
|
This is not a question about Django REST, but about Django itself.
The problem with extending the User object directly is that it is already a concrete model, so extending it will use multi-table inheritance. That's not usually a good idea - especially if you're further extending it.
AbstractUser is an abstract model, but (unlike AbstractBaseUser) contains all the fields that User defines. You should use that.
| 0 | 0 | 0 | 0 |
2014-05-08T16:44:00.000
| 1 | 1.2 | true | 23,547,783 | 0 | 0 | 1 | 1 |
I am using Django REST to create users for my app.
Everywhere I look, users extend AbstractBaseUser.
I tried extending the User model, and it seems to work just fine.
I have a PersonalAbstractUser that extends the Django User. Then, Worker and Client extend PersonalAbstractUser.
Login and custom permissions seem to work just fine up until now, but I am getting concerned when I see that no one else is extending User...
Why is that? Did I miss something?
|
Allow customers to see Python code, but secure modification
| 23,548,585 | 2 | 2 | 65 | 0 |
python,django,security
|
This is not really suited for StackOverflow; but the suggestion I would make is to take the parts of your code that are subject to audit and write them as a Python C module which is then imported. You can ship the compiled module along with your normal, unmodified Django application.
This would only work if certain parts of your code are subject to this audit/restriction and not the entire application.
Your only other recourse is to host it yourself and provide your own audit/controls on the source.
| 0 | 0 | 0 | 0 |
2014-05-08T17:21:00.000
| 1 | 1.2 | true | 23,548,519 | 0 | 0 | 1 | 1 |
Here's the deal: I have a Python application for business written in Django. It's not in Cloud, customers should install them at their own servers.
However, Brazil's IT laws for tax payment calculation software force me to homologate every piece of code (in this case, every .py file). They generate an MD5 hash, and if a customer of mine is running a modified version, I have to pay a fine and could even be sued by the Government.
I really don't care if my source code is available to everyone. Really. I just want to guarantee no changes at the source code.
Does anyone have an idea how to protect the code? Customers have root access to their servers, so a simple "statement of compliance" would not guarantee anything...
|
django-celery infrastructure over multiple servers, broker is redis
| 23,846,005 | 3 | 1 | 2,804 | 0 |
python,django,architecture,celery
|
Celery actually makes this pretty simple, since you're already putting the tasks on a queue. All that changes with more workers is that each worker takes whatever's next on the queue - so multiple workers can process at once, each on their own machine.
There's three parts to this, and you've already got one of them.
Shared storage, so that all machines can access the same files
A broker that can hand out tasks to multiple workers - redis is fine for that
Workers on multiple machines
Here's how you set it up:
User uploads file to front-end server, which stores in your shared storage (e.g. S3, Samba, NFS, whatever), and stores the reference in the database
Front-end server kicks off a celery task to process the file e.g.
def my_view(request):
# ... deal with storing the file
file_in_db = store_file(request)
my_process_file_task.delay(file_in_db.id) # Use PK of DB record
# do rest of view logic...
On each processing machine, run celery-worker:
python manage.py celery worker --loglevel=INFO -Q default -E
Then as you add more machines, you'll have more workers and the work will be split between them.
Key things to ensure:
You must have shared storage, or this gets much more complicated
Every worker machine must have the right Django/Celery settings to be able to find the redis broker and the shared storage (e.g. S3 bucket, keys etc)
| 0 | 1 | 0 | 0 |
2014-05-08T20:24:00.000
| 2 | 0.291313 | false | 23,551,808 | 0 | 0 | 1 | 2 |
Currently we have everything setup on single cloud server, that includes:
Database server
Apache
Celery
redis to serve as a broker for celery and for some other tasks
etc
Now we are thinking of breaking the main components apart onto separate servers, e.g. a separate database server, separate storage for media files, and web servers behind load balancers. The reason is to not pay for one heavy server, and to use load balancers to create servers on demand, reducing cost and improving overall speed.
I am really confused about celery only: has anyone ever used celery on multiple production servers behind load balancers? Any guidance would be appreciated.
Consider one small use case, which is currently how it is done on a single server (the confusion is how this can be done when we use multiple servers):
User uploads a abc.pptx file->reference is stored in database->stored on server disk
A task (convert document to pdf) is created and goes in redis (broker) queue
celery which is running on same server picks the task from queue
Read the file, convert it to pdf using software called docsplit
create a folder on server disk (which will be used as static content later on) puts pdf file and its thumbnail and plain text and the original file
Considering the above use case, how can you set up multiple web servers that can perform the same functionality?
|
django-celery infrastructure over multiple servers, broker is redis
| 23,552,055 | 4 | 1 | 2,804 | 0 |
python,django,architecture,celery
|
What will strongly simplify your processing is some shared storage accessible from all cooperating servers. With such a design, you may distribute the work among more servers without worrying about which server will perform the next processing step.
Using AWS S3 (or similar) cloud storage
If you can use some cloud storage, like AWS S3, use that.
In case you have your servers running at AWS too, you do not pay for traffic within the same region, and transfers are quite fast.
The main advantage is that your data are available from all the servers under the same bucket/key name, so you do not have to bother about who is processing which file, as all servers share the storage on S3.
Note: if you need to get rid of old files, you may even set up a lifecycle policy on the given bucket, e.g. to delete files older than 1 day or 1 week.
Using other types of shared storage
There are more options
Samba
central file server
FTP
Google storage (very similar to AWS S3)
Swift (from OpenStack)
etc.
For small files you could even use Redis, but such solutions are for good reasons rather rare.
| 0 | 1 | 0 | 0 |
2014-05-08T20:24:00.000
| 2 | 0.379949 | false | 23,551,808 | 0 | 0 | 1 | 2 |
Currently we have everything setup on single cloud server, that includes:
Database server
Apache
Celery
redis to serve as a broker for celery and for some other tasks
etc
Now we are thinking of breaking the main components apart onto separate servers, e.g. a separate database server, separate storage for media files, and web servers behind load balancers. The reason is to not pay for one heavy server, and to use load balancers to create servers on demand, reducing cost and improving overall speed.
I am really confused about celery only: has anyone ever used celery on multiple production servers behind load balancers? Any guidance would be appreciated.
Consider one small use case, which is currently how it is done on a single server (the confusion is how this can be done when we use multiple servers):
User uploads a abc.pptx file->reference is stored in database->stored on server disk
A task (convert document to pdf) is created and goes in redis (broker) queue
celery which is running on same server picks the task from queue
Read the file, convert it to pdf using software called docsplit
create a folder on server disk (which will be used as static content later on) puts pdf file and its thumbnail and plain text and the original file
Considering the above use case, how can you setup up multiple web servers which can perform the same functionality?
|
Uploading and serving media files from google drive on django
| 23,559,388 | 1 | 1 | 909 | 0 |
python,django,google-app-engine,google-drive-api
|
You can insert/upload files using the Drive API and set the "restricted" label to prevent downloading of the file. You would then set the appropriate permissions to this file to allow anyone or a specified set of users to access the file.
Download restrictions may or may not apply for files that are converted to one of the Google Apps formats because the option to prevent downloading seems unavailable for these files through the Google Drive UI. You would have to test this yourself.
| 0 | 0 | 0 | 0 |
2014-05-09T04:25:00.000
| 1 | 0.197375 | false | 23,556,597 | 0 | 0 | 1 | 1 |
I'm working on an app in django that allows users to upload documents to google drive and share them with friends. The problem is I want to restrict the shared documents to view only (no download option). How can I go about doing this?
|
Django: Search URL for keyword RegExp
| 23,557,031 | 2 | 0 | 157 | 0 |
python,regex,django
|
The part after the ? is not used in the URL dispatching process, because it contains the GET parameters. If it were, you would not be able to use a GET request with parameters on patterns like r'^foo/$' (the dollar sign at the end means the end of the string, and without it, it would be harder to use patterns like r'^foo/' and r'^foo/bar/' at the same time).
So for your URL the pattern should look like r'^databaseadd/q=DQ-TDOXRTA$', and then in the add_recipe view you need to check for the recipe GET parameter.
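You can see the split Django performs using the stdlib's URL tools: only the path takes part in pattern matching, while the query string becomes request.GET.

```python
from urllib.parse import urlsplit, parse_qs

url = "http://127.0.0.1:8000/databaseadd/q=DQ-TDOXRTA?recipe=&recipe=&recipe="
parts = urlsplit(url)

# Only this part is matched against the patterns in urls.py:
parts.path  # '/databaseadd/q=DQ-TDOXRTA'

# This part arrives in the view as request.GET:
parse_qs(parts.query, keep_blank_values=True)  # {'recipe': ['', '', '']}
```

So inside the view you would check for the parameter with something like `'recipe' in request.GET` rather than trying to match it in the URL pattern.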
| 0 | 0 | 0 | 0 |
2014-05-09T04:41:00.000
| 2 | 1.2 | true | 23,556,752 | 0 | 0 | 1 | 1 |
I have the following URL:
http://127.0.0.1:8000/databaseadd/q=DQ-TDOXRTA?recipe=&recipe=&recipe=&recipe=&recipe=
I'd like to match if the keyword "recipe=" is found in the url.
I have the following line in django url.py but it's not working:
url(r'recipe=', 'plots.views.add_recipe'),
Are the "&" and the "?" throwing it off?
Thanks!
Alex
|
How integrate a websocket between tornado and uwsgi?
| 23,577,862 | 1 | 0 | 728 | 0 |
python,websocket,tornado,uwsgi
|
The uWSGI tornado loop engine is no more than a proof of concept. You could try to use it, but native uWSGI websockets support, or having nginx route requests to both uWSGI and tornado, are better (and more solid) choices for sure.
| 0 | 1 | 0 | 0 |
2014-05-09T14:38:00.000
| 1 | 1.2 | true | 23,567,368 | 0 | 0 | 1 | 1 |
I'm working on a little message chat project and I use the tornado websocket for communication between the web browser and the server. Everything works fine here, but I was using tornado's integrated web framework and I want to configure my app to run behind nginx with uwsgi. I read that to integrate tornado and uwsgi I have to run the tornado application in WSGI mode, but that way asynchronous methods are not supported. So I ask: what is the best way to integrate a tornado websocket with uwsgi? Or should I run the tornado websocket separately from the rest of my app and configure it in nginx?
|
Reportlab error after deploying
| 23,571,695 | 1 | 0 | 127 | 0 |
python,google-app-engine,reportlab
|
If you want to access files from your application code that are covered by a static-file route in your app config, you need to set application_readable to true. Or, you can move/copy the file somewhere else in your project.
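A sketch of the adjusted handler, assuming the images live under static/img as in the question (note the application_readable flag):

```yaml
handlers:
- url: /static/img
  static_dir: static/img
  # Without this flag, files covered by a static handler are uploaded only
  # to the static file servers and are not readable by application code,
  # which is why reportlab's Image() fails in production but not locally.
  application_readable: true
```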
| 0 | 0 | 0 | 0 |
2014-05-09T18:16:00.000
| 1 | 1.2 | true | 23,571,451 | 0 | 0 | 1 | 1 |
I have an app running on GAE, using reportlab to email generated PDF's.
When I run my reportlab app on localhost everything works perfectly. But when I run it after deploying, it throws out an error.
Error
IOError: Cannot open resource
"/base/data/home/apps/myapp/1.375717494064852868/static/img/__.jpg"
Line
img = [[Image(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'static/img/__.jpg'))]]
app.yaml
- url: /static/img
static_dir: static/img
|
Can web2py be made to allow exceptions to be caught in my PyCharm debugger?
| 23,634,516 | 0 | 0 | 95 | 0 |
python,debugging,exception,web2py,pycharm
|
I figured this out by looking through the web2py source code. Apparently, web2py is set up to do what I want to do for a particular debugger, seemingly called Wing Db. There's a constant env var ref in the code named WINGDB_ACTIVE that, if set, redirects exceptions to an external debugger.
All I had to do was define this env var, WINGDB_ACTIVE, as 1 in my PyCharm execution configuration, and voila, exceptions are now passed through to my debugger!
| 0 | 0 | 0 | 0 |
2014-05-10T15:40:00.000
| 1 | 1.2 | true | 23,583,017 | 1 | 0 | 1 | 1 |
I'm just starting to build a web app with web2py for the first time. It's great how well PyCharm integrates with web2py.
One thing I'd like to do, however, is avoid the web2py ticketing system and just allow exceptions to be caught in the normal way in PyCharm. Currently, any attempt to catch exceptions, even via an "All Exceptions" breakpoint, never results in anything getting caught by Pycharm.
Can someone tell me if this is possible, and if so, how to do it?
|
Single page application losing custom headers after a refresh
| 23,587,708 | 1 | 0 | 597 | 0 |
python,angularjs,single-page-application,custom-headers
|
Store the token in sessionStorage or localStorage. In your application startup (config or run) look for this information and set your header.
Perhaps if your user selects "remember me" when they log-in; save the token in local storage otherwise keep it in session storage.
| 0 | 0 | 0 | 0 |
2014-05-10T20:41:00.000
| 1 | 1.2 | true | 23,586,090 | 0 | 0 | 1 | 1 |
I am building a single page web application (angularjs + python) where a user first needs to login with a username and password. After the user gets authenticated, a new custom header with a token is created and sent everytime this application makes calls to the python api.
One thing I noticed though, is that if I refresh the page (with F5 or Ctrl+F5) then the browser loses this custom header, so it is not sent anymore to the api.
Is there a way to keep the custom headers even after a refresh of the page?
|
mezzanine shop is not shown if behind a virtualhost
| 23,596,414 | 1 | 0 | 66 | 0 |
python,django,apache,virtualhost,mezzanine
|
It sounds like you've misconfigured the "sites" section.
Under "sites" in the admin interface, you'll find each configured site. Each of the pages in a site is related to one of these, and matched to the host name you use in the browser.
You'll probably find there's only one site record configured, and its domain doesn't match your production host that you're accessing the site via. If you update it, it should resolve everything.
| 0 | 1 | 0 | 0 |
2014-05-11T08:56:00.000
| 1 | 1.2 | true | 23,590,741 | 0 | 0 | 1 | 1 |
I developed my first shop using mezzanine.
If I run it with python manage.py runserver 0.0.0.0:8000 it works well, but if I try to put an apache virtualhost in front of it, the result I get is awful, because I only see the home page, but not the other ones.
I checked the generated HTML, and it looks very different.
I think it's a problem of mezzanine configuration, maybe on the configured sites, but I am not able to understand what I have to change.
Can you please give me a hint?
|
datamigration from datefield to datetimefield
| 23,591,861 | 1 | 2 | 56 | 0 |
python,django,django-models
|
The GMT offset for Pacific/Auckland is UTC+12 (hours).
datetime.datetime(2014, 5, 4, 12, 0, tzinfo=UTC) represents noon UTC on the 4th. But due to the offset, this is midnight in Auckland. However, midnight in Auckland is in fact "hour 0" on the 5th.
So, yes, the issue does have to do with timezones, but it's in fact not an issue. The date didn't change; it's just expressed in a different timezone.
Naturally, you could roll back the migration and account for timezones differently in step 1, where you transform the date into a datetime.
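You can verify this with a few lines of stdlib Python. For self-containedness the sketch uses a fixed UTC+12 offset (NZST, which applies in May, when Auckland is outside daylight saving); real code should use zoneinfo.ZoneInfo("Pacific/Auckland") instead of a hard-coded offset.

```python
from datetime import datetime, timedelta, timezone

utc = timezone.utc
# Fixed-offset stand-in for Pacific/Auckland in May (NZST = UTC+12).
auckland = timezone(timedelta(hours=12), name="NZST")

stored = datetime(2014, 5, 4, 12, 0, tzinfo=utc)  # what ended up in the DB
local = stored.astimezone(auckland)               # 2014-05-05 00:00 NZST

local.date()  # datetime.date(2014, 5, 5) -- the original date, unchanged
```

The stored value and the local value are the same instant; only the representation differs.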
| 0 | 0 | 0 | 0 |
2014-05-11T10:54:00.000
| 1 | 0.197375 | false | 23,591,711 | 1 | 0 | 1 | 1 |
I'm just wondering where/what did I do wrong. So here's the scenario.
I have a field date = DateField() then I want to change it to date = DateTimeField() without losing the data stored from it.
What I did:
Add temporary field temp = DatetimeField() then transfer the value from date = DateField() to temp by datamigration
remove date = DateField()
Add date = DateTimeField()
Transfer stored value from temp to date
Everything went great, no errors. But one thing changed, the value.
For example:
Old data: datetime.date(2014, 5, 5)
New data: datetime.datetime(2014, 5, 4, 12, 0, tzinfo=UTC)
So my question is, why did it changed and deduct 1 day from the original value? Any thoughts? Is it because of the timezone? Timezone was set to Pacific/Auckland
Any help would be much appreciated. Thanks!
|
Django using mod_wsgi returns my 500 page
| 23,625,565 | 0 | 0 | 23 | 0 |
django,python-2.7,apache2,wsgi
|
Solved: changed the Django settings. Apparently DEBUG was False when it should have been True (for development).
| 0 | 0 | 0 | 0 |
2014-05-12T07:35:00.000
| 1 | 0 | false | 23,603,414 | 0 | 0 | 1 | 1 |
When I access 127.0.0.1 on the web (company LAN) I get a 500 page, but it's the custom one I wrote for my specific django application.
I'm using apache 2.2 with mod_wsgi and django 1.5.1 (Python 2.7). I checked all of my settings so it should work fine, but I can't understand why it's still returning a 500.
Appreciate any help,
Alon
|
GAE: how to quantify Frontend Instance Hours usage?
| 23,612,408 | 4 | 0 | 1,362 | 0 |
python,google-app-engine
|
There's no 100% sure way to assess the number of frontend instance hours. An instance can serve more than one request at a time. In addition, the algorithm of the scheduler (the system that starts the instances) is not documented by Google.
Depending on how demanding your code is, I think you can expect a standard F1 instance to hold up to 5 requests in parallel, that's a maximum. 2 is a safer bet.
My recommendation, if possible, would be to simulate standard interaction on your website with limited number of users, and see how the number of instances grow, then extrapolate.
For example, let's say you simulate 100 requests per minute during 2 hours, and you see that GAE spawns 5 instances for that, then you can extrapolate that a continuous load of 3000 requests per minute would require 150 instances during the same 2 hours. Then I would double this number for safety, and end up with an estimate of 300 instances.
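The extrapolation in the last paragraph can be written down directly (the safety factor of 2 is the heuristic suggested above, not an official figure):

```python
def estimate_instances(measured_rpm, measured_instances, target_rpm, safety=2.0):
    # Linear extrapolation from a load test, padded with a safety factor,
    # since the GAE scheduler's behavior under real load is undocumented.
    return int(measured_instances * (target_rpm / measured_rpm) * safety)

estimate_instances(100, 5, 3000)  # 300 instances
```

Instance hours then follow from multiplying the instance count by the duration of the load pattern.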
| 0 | 1 | 0 | 0 |
2014-05-12T13:44:00.000
| 1 | 1.2 | true | 23,610,748 | 0 | 0 | 1 | 1 |
We are developing a Python server on Google App Engine that should be capable of handling incoming HTTP POST requests (around 1,000 to 3,000 per minute in total). Each of the requests will trigger some datastore writing operations. In addition we will write a web-client as a human-usable interface for displaying and analyse stored data.
First we are trying to estimate usage for GAE to have at least an approximation about the costs we would have to cover in future based on the number of requests. As for datastore write operations and data storage size it is fairly easy to come up with an approximate number, though it is not so obvious for the frontend and backend instance hours.
As far as I understood, each time a request comes in, an instance is started, which then runs for 15 minutes. If another request comes in within these 15 minutes, the same instance is used. And now it is getting a bit tricky I think: if two requests come in at the very same time (which is not so odd with 3,000 requests per minute), is Google firing up another instance, hence counting an additional (at least) 0.25 instance hours? Also I am not quite sure how a web-client that is constantly performing read operations on the datastore in order to display and analyse data would increase the instance hours.
Does anyone know a reliable way of counting instance hours and creating meaningful estimations? We would use that information to know how expensive it would be to run an application on GAE in comparison to just ordering a web server.
|
django change default runserver port
| 38,094,213 | -2 | 193 | 268,838 | 0 |
python,django,django-manage.py,manage.py
|
I was struggling with the same problem and found one solution. I guess it can help you.
When you run python manage.py runserver, it takes 127.0.0.1 as the default IP address and 8000 as the default port, both of which can be configured in your Python environment.
In your Python environment, go to <your python env>\Lib\site-packages\django\core\management\commands\runserver.py and:
1. set default_port = '<your_port>'
2. find this under def handle and set
   if not options.get('addrport'):
       self.addr = '0.0.0.0'
       self.port = self.default_port
Now if you run "python manage.py runserver" it will run by default on "0.0.0.0:<your_port>".
Enjoy coding .....
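A less invasive alternative to editing site-packages, and one that matches the config.ini goal in the question, is to patch the arguments in manage.py before handing them to Django. The [runserver] section name is an assumption; use whatever layout your config.ini has.

```python
# Sketch for manage.py: read the port from config.ini and inject it into
# the runserver arguments when none was given on the command line.
import configparser


def default_addrport(argv, config_path="config.ini"):
    """Return argv with host:port appended when 'runserver' got no address."""
    cfg = configparser.ConfigParser()
    cfg.read(config_path)  # silently ignored if the file is missing
    port = cfg.get("runserver", "port", fallback="8000")
    if argv[1:] == ["runserver"]:  # user gave no explicit address
        return argv + ["0.0.0.0:%s" % port]
    return argv


# In manage.py you would then call:
#   execute_from_command_line(default_addrport(sys.argv))
```

This survives Django upgrades, since nothing inside the framework is modified.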
| 0 | 0 | 0 | 0 |
2014-05-13T18:39:00.000
| 15 | -0.02666 | false | 23,639,085 | 0 | 0 | 1 | 3 |
I would like to make the default port that manage.py runserver listens on specifiable in an extraneous config.ini. Is there an easier fix than parsing sys.argv inside manage.py and inserting the configured port?
The goal is to run ./manage.py runserver without having to specify address and port every time but having it take the arguments from the config.ini.
|
django change default runserver port
| 26,317,016 | -2 | 193 | 268,838 | 0 |
python,django,django-manage.py,manage.py
|
This is an old post but for those who are interested:
If you want to change the default port number so when you run the "runserver" command you start with your preferred port do this:
Find your python installation. (you can have multiple pythons installed and you can have your virtual environment version as well so make sure you find the right one)
Inside the python folder locate the site-packages folder. Inside that you will find your django installation
Open the django folder-> core -> management -> commands
Inside the commands folder open up the runserver.py script with a text editor
Find the DEFAULT_PORT field. it is equal to 8000 by default. Change it to whatever you like
DEFAULT_PORT = "8080"
Restart your server: python manage.py runserver and see that it uses your set port number
It works with python 2.7 but it should work with newer versions of python as well. Good luck
| 0 | 0 | 0 | 0 |
2014-05-13T18:39:00.000
| 15 | -0.02666 | false | 23,639,085 | 0 | 0 | 1 | 3 |
I would like to make the default port that manage.py runserver listens on specifiable in an extraneous config.ini. Is there an easier fix than parsing sys.argv inside manage.py and inserting the configured port?
The goal is to run ./manage.py runserver without having to specify address and port every time but having it take the arguments from the config.ini.
|
django change default runserver port
| 36,612,064 | 2 | 193 | 268,838 | 0 |
python,django,django-manage.py,manage.py
|
I'm very late to the party here, but if you use an IDE like PyCharm, there's an option in 'Edit Configurations' under the 'Run' menu (Run > Edit Configurations) where you can specify a default port. This of course is relevant only if you are debugging/testing through PyCharm.
| 0 | 0 | 0 | 0 |
2014-05-13T18:39:00.000
| 15 | 0.02666 | false | 23,639,085 | 0 | 0 | 1 | 3 |
I would like to make the default port that manage.py runserver listens on specifiable in an extraneous config.ini. Is there an easier fix than parsing sys.argv inside manage.py and inserting the configured port?
The goal is to run ./manage.py runserver without having to specify address and port every time but having it take the arguments from the config.ini.
|
Django was push in base wrong date
| 23,640,050 | 1 | 0 | 59 | 0 |
python,django
|
This is not strictly an answer to your question, but in general the best practice is to store time in UTC and convert it to whatever timezone you want at display time. This way there is less ambiguity about the time.
| 0 | 0 | 0 | 0 |
2014-05-13T18:57:00.000
| 2 | 0.099668 | false | 23,639,412 | 0 | 0 | 1 | 1 |
Please help!!!
In my views I have the datetime in my local timezone, but in the database Django stores the date as UTC...
What must I do to store dates in my local timezone? (My local zone is Europe/Kiev.)
|
App Engine: Difference between NDB and Datastore
| 23,646,875 | 5 | 2 | 568 | 1 |
python,django,google-app-engine
|
In simple words, these are two versions of the datastore API: db is the older one and ndb the newer one. The difference is in the models; in the datastore itself the data is the same. NDB provides advantages like handling caching (memcache) itself, and ndb is faster than db, so you should definitely go with ndb. To use the ndb datastore, just subclass ndb.Model when defining your models.
| 0 | 1 | 0 | 0 |
2014-05-14T04:19:00.000
| 1 | 1.2 | true | 23,645,572 | 0 | 0 | 1 | 1 |
I have been going through the Google App Engine documentation (Python) now and found two different types of storage.
NDB Datastore
DB Datastore
Both quota limits (free) seem to be same, and their database design too. However NDB automatically cache data in Memcache!
I am actually wondering when to use which storage? What are the general practices regarding this?
Can I completely rely on NDB and ignore DB? How should it be done?
I have been using Django for a while and read that in Django-nonrel the JOIN operations can be somehow done in NDB! and rest of the storage is used in DB! Why is that? Both storages are schemaless and pretty well use same design.. How is that someone can tweak JOIN in NDB and not in DB?
|
Pyramid Mako pserver --reload not reloading in Mac
| 23,654,584 | 1 | 0 | 216 | 0 |
python,pyramid,mako,waitress
|
Oh my,
I found the thing... I had <%block cached="True" cache_key="${self.filename}+body"> and the file inclusion was inside of that block.
Cheerious:)
| 1 | 0 | 0 | 1 |
2014-05-14T05:42:00.000
| 2 | 0.099668 | false | 23,646,485 | 0 | 0 | 1 | 1 |
I've a strange issue. pserve --reload has stopped reloading the templates. It is reloading if some .py-file is changing, but won't notice .mak-file changes anymore.
I tried to fix it by:
Checking the filepermissions
Creating the new virtualenv, which didn't help.
Installing different version of mako without any effect.
Checking that the python is used from virtualenv
playing with the development.ini. It has the flag: pyramid.reload_templates = true
Any idea how to start debugging the system?
Versions:
Python 2.7
pyramid 1.5
pyramid_mako 1.02
mako 0.9.1
Yours
Heikki
|
Variable access in gunicorn with multiple workers
| 25,699,363 | 0 | 3 | 1,514 | 0 |
python,gunicorn
|
Assuming that by global variable you mean state kept in memory or on disk by another process: yes, I think so. I haven't checked the source code of Gunicorn, but I once hit a related problem with some old code: several users retrieved the same key from a legacy MyISAM table, incremented it, and used it to create a new entry, assuming it was unique. The result was that occasionally (under very heavy traffic) only one record survived (the newest overwriting the older ones, all using the same incremented key). The problem was never observed after a hardware upgrade, when I reduced the site's Gunicorn workers to one, which is what led me to explore this probable cause in the first place.
Now usually, reducing the workers will degrade performance, and it is better to deal with these issues with transactions (if you are using an ACID RDBMS, unlike MyISAM). The same issue should be present with Redis and similar stores.
Also, this shouldn't be a problem with files and sockets, since to my knowledge the operating system will block other processes (even children) from accessing an open file.
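For completeness: a plain module-level global cannot work here, because each Gunicorn worker is a separate process with its own copy of the module. What can work is an explicitly shared object guarded by a lock, as in this stdlib sketch (independent of Gunicorn itself):

```python
import multiprocessing as mp


def bump(shared):
    # mp.Value lives in shared memory; its built-in lock serializes the
    # read-modify-write, so concurrent workers cannot lose updates.
    with shared.get_lock():
        shared.value += 1


if __name__ == "__main__":
    counter = mp.Value("i", 0)  # one int, visible to every process
    procs = [mp.Process(target=bump, args=(counter,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)  # 4 -- no lost updates
```

In practice an external store (Redis, a database with transactions) is usually preferred over process-shared memory, because it also survives worker restarts.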
| 0 | 0 | 0 | 0 |
2014-05-14T11:18:00.000
| 1 | 0 | false | 23,653,170 | 1 | 0 | 1 | 1 |
Is it possible to have multiple workers running with Gunicorn and have them accessing some global variable in an ordered manner i.e. without running into problems with race conditions?
|
Consume REST API from Python: do I need an async library?
| 23,657,898 | 1 | 1 | 3,037 | 0 |
python,rest
|
Start simple and use whichever approach seems easiest to you. Consider optimizing later on, and only if needed.
Async libraries would come into play as helpful if you had thousands of requests a second. Much sooner, you are likely to hit performance problems related to the database (if you use one), which no async magic will resolve.
| 0 | 0 | 1 | 0 |
2014-05-14T14:35:00.000
| 3 | 0.066568 | false | 23,657,718 | 1 | 0 | 1 | 1 |
I have a REST API and now I want to create a web site that will use this API as only and primary datasource. The system is distributed: REST API is on one group of machines and the site will be on the other(s).
I'm expecting to have quite a lot of load, so I'd like to make requests as efficient, as possible.
Do I need some async HTTP requests library or any HTTP client library will work?
API is done using Flask, web site will be also built using Flask and Jinja as template engine.
|
Change base path of the generated migration files
| 23,666,529 | 2 | 3 | 798 | 0 |
python,django-1.7,django-migrations
|
It's done via MIGRATION_MODULES setting.
In my case:
MIGRATION_MODULES = dict([(app, 'migrations.' + app) for app in INSTALLED_APPS])
| 0 | 0 | 0 | 0 |
2014-05-14T22:36:00.000
| 1 | 1.2 | true | 23,666,321 | 0 | 0 | 1 | 1 |
In django 1.7, using the provided makemigrations command(not from South), is there a way to change the location of where the generated migration files are stored?
I'm keeping these files under version control and for apps imported from Django's contrib, they get generated right inside the app directory, which resides outside my project's root path.
For example, the auth app gets the files generated in this location in my case:
/home/dev/.envs/myproj/lib/python2.7/site-packages/django/contrib/auth/migrations/0002_group.py
Thanks
|
Clicking link using beautifulsoup in python
| 65,200,203 | 0 | 9 | 52,594 | 0 |
python,web-scraping,beautifulsoup
|
print(soup.find('h1', class_='pdp_product_title')) does not give any result, because in the markup below pdp_product_title is the h1's id, not one of its classes:
<div class="pr2-sm css-1ou6bb2"><h2 class="headline-5-small pb1-sm d-sm-ib css-1ppcdci" data-test="product-sub-title">Women's Shoe</h2><h1 id="pdp_product_title" class="headline-2 css-zis9ta" data-test="product-title">Nike Air Force 1 Shadow</h1></div>
Use soup.find('h1', id='pdp_product_title') instead.
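To make the "no clicking" point concrete: a parser only hands you the link's href, and "clicking" is then issuing a new HTTP request for that URL yourself (with urllib, requests, or mechanize). A stdlib-only sketch of the extraction step, which is the same job BeautifulSoup's a.get('href') does:

```python
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collect (text, href) pairs; 'clicking' is then just fetching href."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:  # only collect text inside an <a>
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None


p = LinkExtractor()
p.feed('<a href="/page2">Next</a>')
p.links  # [('Next', '/page2')]
```

With BeautifulSoup the equivalent is `soup.find('a', string='Next')['href']`, followed by a fresh request to that URL.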
| 0 | 0 | 1 | 0 |
2014-05-15T13:19:00.000
| 2 | 0 | false | 23,679,480 | 0 | 0 | 1 | 1 |
In mechanize we click links either by using follow_link or click_link. Is there a similar kind of thing in beautiful soup to click a link on a web page?
|
Do Django excecute scripts at server or in browser?
| 23,682,246 | 2 | 1 | 44 | 0 |
python,django
|
Django is a server-side framework: your Python code (including the heavy calculations) runs on the server, and only the rendered response is sent to the browser.
| 0 | 0 | 0 | 0 |
2014-05-15T15:11:00.000
| 1 | 1.2 | true | 23,682,222 | 0 | 0 | 1 | 1 |
Hello :) My task is to write a web front end for a Python program with some massive calculations, which should be executed on a proper server, with the result sent to the browser once computed. My question is: is Django the right framework for that purpose? I tried to find out where Django executes scripts, but I haven't found any satisfying answer, so I hope that I would find one here.
Thank you for any attention.
|
Cocos2D-JS -- Compile and Run a 'Clean' build?
| 33,130,769 | 0 | 4 | 4,146 | 0 |
javascript,python,cocos2d-x,cocos2d-js
|
You should know, Chrome is tenacious about caching.
You can turn it off every way they offer, and it will still retain js files you don't want it to.
My advice is to restart your entire browser--not just the tab you're debugging--at least once an hour.
| 0 | 0 | 0 | 0 |
2014-05-16T18:31:00.000
| 4 | 0 | false | 23,702,324 | 1 | 0 | 1 | 3 |
Context:
I'm creating a Cocos2d-JS Game. I have already created the Project and am in development phase.
I can run cocos run -p web or cocos run -p web --source-map from the project directory in the Console. These commands work and allow me to run and test my project.
Problem:
Simply: Code changes I make are not being picked up by the cocos2d-JSB compiler. Old code that I've recently changed still exists in newly compiled cocos2d projects. I cannot detect changes I've made to class files that have already been compiled.
Technical:
The problem technically: Modified .js files are not being copied correctly by the cocos2d-js compiler (from the Terminal/Console). The previous version of the .js file are retained somehow in the localhost-web-server. The localhost is maintained by the Python script that runs the cocos2d app.
(I am writing most of my code using Typescript .ts and compiling down into Javascript .js with a .js.map. I am definitely compiling down the Typescript to Javascript before running the cocos compiler)
More:
I can see my .ts files are visible from the localhost when using Javascript Console in my Chrome Browser. I can also see my .js files this way, and can confirm that the code has not been updated.
Question:
How can I 'Force' the cocos compile or cocos run commands to overwrite old any .js files, instead of 'intelligently' retaining old files?
Is it possible that --source-map makes the run command force a fresh build?
I want to make a 'Clean Build' like in Apple's Xcode, but for cocos2d-js. How can I do this?
If none of that is possible, where can I locate the build/run directory used by the localhost so I can manually update the .js files myself?
|
Cocos2D-JS -- Compile and Run a 'Clean' build?
| 23,703,216 | 7 | 4 | 4,146 | 0 |
javascript,python,cocos2d-x,cocos2d-js
|
Fix it: .js files were being Cached by my Browser.
Issue:
Chrome Browser was Caching the .js files. I solved this problem by turning off Caching. I did not realize that the localhost was indeed pointing to the project directory.
Solution: Disable Caching in Chrome:
Menu (top right icon) -> Tools -> Developer Tools -> Settings (Gear Icon) -> Checked the box for Disabling Caching (when DevTools is open)
| 0 | 0 | 0 | 0 |
2014-05-16T18:31:00.000
| 4 | 1.2 | true | 23,702,324 | 1 | 0 | 1 | 3 |
Context:
I'm creating a Cocos2d-JS Game. I have already created the Project and am in development phase.
I can run cocos run -p web or cocos run -p web --source-map from the project directory in the Console. These commands work and allow me to run and test my project.
Problem:
Simply: Code changes I make are not being picked up by the cocos2d-JSB compiler. Old code that I've recently changed still exists in newly compiled cocos2d projects. I cannot detect changes I've made to class files that have already been compiled.
Technical:
The problem technically: Modified .js files are not being copied correctly by the cocos2d-js compiler (from the Terminal/Console). The previous version of the .js file are retained somehow in the localhost-web-server. The localhost is maintained by the Python script that runs the cocos2d app.
(I am writing most of my code using Typescript .ts and compiling down into Javascript .js with a .js.map. I am definitely compiling down the Typescript to Javascript before running the cocos compiler)
More:
I can see my .ts files are visible from the localhost when using Javascript Console in my Chrome Browser. I can also see my .js files this way, and can confirm that the code has not been updated.
Question:
How can I 'Force' the cocos compile or cocos run commands to overwrite old any .js files, instead of 'intelligently' retaining old files?
Is it possible that --source-map makes the run command force a fresh build?
I want to make a 'Clean Build' like in Apple's Xcode, but for cocos2d-js. How can I do this?
If none of that is possible, where can I locate the build/run directory used by the localhost so I can manually update the .js files myself?
|
Cocos2D-JS -- Compile and Run a 'Clean' build?
| 34,370,929 | 0 | 4 | 4,146 | 0 |
javascript,python,cocos2d-x,cocos2d-js
|
Yes it was as simple as this, just open the dev tools with F12, then go to settings, do the cache thing, and when you run your game , activate the dev tools again (F12) and refresh the page
| 0 | 0 | 0 | 0 |
2014-05-16T18:31:00.000
| 4 | 0 | false | 23,702,324 | 1 | 0 | 1 | 3 |
Context:
I'm creating a Cocos2d-JS Game. I have already created the Project and am in development phase.
I can run cocos run -p web or cocos run -p web --source-map from the project directory in the Console. These commands work and allow me to run and test my project.
Problem:
Simply: Code changes I make are not being picked up by the cocos2d-JSB compiler. Old code that I've recently changed still exists in newly compiled cocos2d projects. I cannot detect changes I've made to class files that have already been compiled.
Technical:
The problem technically: Modified .js files are not being copied correctly by the cocos2d-js compiler (from the Terminal/Console). The previous version of the .js file are retained somehow in the localhost-web-server. The localhost is maintained by the Python script that runs the cocos2d app.
(I am writing most of my code using Typescript .ts and compiling down into Javascript .js with a .js.map. I am definitely compiling down the Typescript to Javascript before running the cocos compiler)
More:
I can see my .ts files are visible from the localhost when using Javascript Console in my Chrome Browser. I can also see my .js files this way, and can confirm that the code has not been updated.
Question:
How can I 'Force' the cocos compile or cocos run commands to overwrite old any .js files, instead of 'intelligently' retaining old files?
Is it possible that --source-map makes the run command force a fresh build?
I want to make a 'Clean Build' like in Apple's Xcode, but for cocos2d-js. How can I do this?
If none of that is possible, where can I locate the build/run directory used by the localhost so I can manually update the .js files myself?
|
Is it possible to make writing to files/reading from files safe for a questionnaire type website?
| 23,703,319 | 0 | 0 | 31 | 1 |
python,flask
|
I see a few solutions:
read /dev/urandom a few times, calculate sha-256 of the number and use it as a file name; collision is extremely improbable
use Redis and command like LPUSH, using it from Python is very easy; then RPOP from right end of the linked list, there's your queue
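The first option does not even require reading /dev/urandom by hand: the stdlib secrets module already pulls from the OS entropy pool. A sketch:

```python
import secrets


def unique_filename(suffix=".txt"):
    # token_hex(16) yields 128 random bits from the OS entropy source
    # (the same pool behind /dev/urandom); a collision is practically
    # impossible, so concurrent requests can never clobber each other.
    return secrets.token_hex(16) + suffix


unique_filename()  # e.g. '3f9a...c21e.txt' (random each call)
```

Each request then writes its own file under that name, and the visualization process averages over all files in the directory.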
| 0 | 0 | 0 | 0 |
2014-05-16T19:24:00.000
| 1 | 0 | false | 23,703,135 | 0 | 0 | 1 | 1 |
My web app asks users 3 questions and simply writes the answers to a file (a1, a2, a3). I also have a real-time visualization of the average of the data (read from the file in real time).
Must I use a database to ensure that no/minimal information is lost? Is it possible to produce a queue of reads/writes? (Since the files are small I am not too worried about the execution time of each call.) Does Python/Flask already take care of this?
I am quite experienced in Python itself, but not in this area (with Flask).
|
Python neo4j-rest-client takes too long within a flask view
| 23,739,995 | 0 | 0 | 107 | 0 |
python,neo4j,flask
|
Well, if this is taking too much time, you might want to implement your own REST client that uses a faster parser, or speed up neo4j-rest-client and submit a patch.
| 0 | 0 | 0 | 0 |
2014-05-16T19:53:00.000
| 1 | 0 | false | 23,703,538 | 0 | 0 | 1 | 1 |
I am developing a web application using flask and neo4j. I use neo4j-rest-client for the python side. When I query neo4j using the python shell, it takes 78ms. But when I make the request within a flask view it takes 0.8 seconds. I have profiled and I see that neo4j-rest-client/request.py is responsible, because it takes 0.5 seconds. What do you think?
|
How to upgrade django?
| 69,937,877 | 0 | 64 | 118,680 | 0 |
python,django
|
pip3 install django -U
This will uninstall the old Django version and then install the latest one.
Use pip3 if you are on Python 3.
-U is short for --upgrade
| 0 | 0 | 0 | 0 |
2014-05-17T07:45:00.000
| 18 | 0 | false | 23,708,895 | 1 | 0 | 1 | 9 |
My project was running on Django 1.5.4 and I wanted to upgrade it. I did pip install -U -I django and now pip freeze shows Django 1.6.5 (clearly django has upgraded, I'm in virtualenv) but my project is still using Django 1.5.4. How can I use the upgraded version?
UPDATE: Thanks for your comments. I tried everything but unfortunately nothing worked and I had to re-deploy the app.
Hope someone explains why this happened.
|
How to upgrade django?
| 64,373,619 | 0 | 64 | 118,680 | 0 |
python,django
|
From the Django Docs: if you are using a Virtual Environment and it is a major upgrade, you might want to set up a new environment with the dependencies first.
Or, if you have installed Django using the PIP, then the below is for you:
python3.8 -m pip install -U Django
| 0 | 0 | 0 | 0 |
2014-05-17T07:45:00.000
| 18 | 0 | false | 23,708,895 | 1 | 0 | 1 | 9 |
My project was running on Django 1.5.4 and I wanted to upgrade it. I did pip install -U -I django and now pip freeze shows Django 1.6.5 (clearly django has upgraded, I'm in virtualenv) but my project is still using Django 1.5.4. How can I use the upgraded version?
UPDATE: Thanks for your comments. I tried everything but unfortunately nothing worked and I had to re-deploy the app.
Hope someone explains why this happened.
|
How to upgrade django?
| 60,089,696 | 3 | 64 | 118,680 | 0 |
python,django
|
How to upgrade the Django version:
python -m pip install -U Django
Use this command in CMD.
| 0 | 0 | 0 | 0 |
2014-05-17T07:45:00.000
| 18 | 0.033321 | false | 23,708,895 | 1 | 0 | 1 | 9 |
My project was running on Django 1.5.4 and I wanted to upgrade it. I did pip install -U -I django and now pip freeze shows Django 1.6.5 (clearly django has upgraded, I'm in virtualenv) but my project is still using Django 1.5.4. How can I use the upgraded version?
UPDATE: Thanks for your comments. I tried everything but unfortunately nothing worked and I had to re-deploy the app.
Hope someone explains why this happened.
|
How to upgrade django?
| 59,508,823 | 1 | 64 | 118,680 | 0 |
python,django
|
I think after updating your project, you have to restart the server.
| 0 | 0 | 0 | 0 |
2014-05-17T07:45:00.000
| 18 | 0.011111 | false | 23,708,895 | 1 | 0 | 1 | 9 |
My project was running on Django 1.5.4 and I wanted to upgrade it. I did pip install -U -I django and now pip freeze shows Django 1.6.5 (clearly django has upgraded, I'm in virtualenv) but my project is still using Django 1.5.4. How can I use the upgraded version?
UPDATE: Thanks for your comments. I tried everything but unfortunately nothing worked and I had to re-deploy the app.
Hope someone explains why this happened.
|
How to upgrade django?
| 23,711,285 | 1 | 64 | 118,680 | 0 |
python,django
|
You can use the upgraded version after upgrading.
You should check that all your tests pass before deploying :-)
| 0 | 0 | 0 | 0 |
2014-05-17T07:45:00.000
| 18 | 0.011111 | false | 23,708,895 | 1 | 0 | 1 | 9 |
My project was running on Django 1.5.4 and I wanted to upgrade it. I did pip install -U -I django and now pip freeze shows Django 1.6.5 (clearly django has upgraded, I'm in virtualenv) but my project is still using Django 1.5.4. How can I use the upgraded version?
UPDATE: Thanks for your comments. I tried everything but unfortunately nothing worked and I had to re-deploy the app.
Hope someone explains why this happened.
|
How to upgrade django?
| 62,312,728 | 0 | 64 | 118,680 | 0 |
python,django
|
You must do the following:
1- Update pip:
python -m pip install --upgrade pip
2- If Django is already installed, upgrade it using the following command:
pip install --upgrade Django
or uninstall it using the following command:
pip uninstall Django
3- If it isn't installed yet, use the following command:
python -m pip install Django
4- Write your code
Enjoy
| 0 | 0 | 0 | 0 |
2014-05-17T07:45:00.000
| 18 | 0 | false | 23,708,895 | 1 | 0 | 1 | 9 |
My project was running on Django 1.5.4 and I wanted to upgrade it. I did pip install -U -I django and now pip freeze shows Django 1.6.5 (clearly django has upgraded, I'm in virtualenv) but my project is still using Django 1.5.4. How can I use the upgraded version?
UPDATE: Thanks for your comments. I tried everything but unfortunately nothing worked and I had to re-deploy the app.
Hope someone explains why this happened.
|
How to upgrade django?
| 71,192,619 | -2 | 64 | 118,680 | 0 |
python,django
|
To install a specific newer Django version:
pip install Django==4.0.2
| 0 | 0 | 0 | 0 |
2014-05-17T07:45:00.000
| 18 | -0.022219 | false | 23,708,895 | 1 | 0 | 1 | 9 |
My project was running on Django 1.5.4 and I wanted to upgrade it. I did pip install -U -I django and now pip freeze shows Django 1.6.5 (clearly django has upgraded, I'm in virtualenv) but my project is still using Django 1.5.4. How can I use the upgraded version?
UPDATE: Thanks for your comments. I tried everything but unfortunately nothing worked and I had to re-deploy the app.
Hope someone explains why this happened.
|
How to upgrade django?
| 30,253,720 | 13 | 64 | 118,680 | 0 |
python,django
|
Use this command to get all available Django versions: yolk -V django
Type pip install -U Django for latest version, or if you want to specify version then use pip install --upgrade django==1.6.5
NOTE: Make sure you test locally with the updated version of Django before updating production.
| 0 | 0 | 0 | 0 |
2014-05-17T07:45:00.000
| 18 | 1 | false | 23,708,895 | 1 | 0 | 1 | 9 |
My project was running on Django 1.5.4 and I wanted to upgrade it. I did pip install -U -I django and now pip freeze shows Django 1.6.5 (clearly django has upgraded, I'm in virtualenv) but my project is still using Django 1.5.4. How can I use the upgraded version?
UPDATE: Thanks for your comments. I tried everything but unfortunately nothing worked and I had to re-deploy the app.
Hope someone explains why this happened.
|
How to upgrade django?
| 41,422,829 | 2 | 64 | 118,680 | 0 |
python,django
|
sudo pip install --upgrade django
also upgrade the DjangoRestFramework:
sudo pip install --upgrade djangorestframework
| 0 | 0 | 0 | 0 |
2014-05-17T07:45:00.000
| 18 | 0.022219 | false | 23,708,895 | 1 | 0 | 1 | 9 |
My project was running on Django 1.5.4 and I wanted to upgrade it. I did pip install -U -I django and now pip freeze shows Django 1.6.5 (clearly django has upgraded, I'm in virtualenv) but my project is still using Django 1.5.4. How can I use the upgraded version?
UPDATE: Thanks for your comments. I tried everything but unfortunately nothing worked and I had to re-deploy the app.
Hope someone explains why this happened.
|
JAVA_HOME "bug"
| 31,828,819 | 0 | 1 | 90 | 0 |
java,android,python,c++
|
You have to point JAVA_HOME to the JDK root directory, not its bin subfolder:
C:\android\Java\jdk1.8.0_05
| 1 | 0 | 0 | 1 |
2014-05-17T21:04:00.000
| 1 | 0 | false | 23,716,064 | 0 | 0 | 1 | 1 |
So I'm trying to build a project for cocos2d-x. I'm currently in cmd and when I type python android-build.py -p 19 cpp-tests it starts making the project, but then I get an error that the build failed. The problem is that it can't find the javac compiler.
"Perhaps JAVA_HOME does not point to the JDK. It is currently set to
"c:/Program Files/Java/jre7"
The problem is that in the system variables I made a new variable called JAVA_HOME and pointed it to C:\android\Java\jdk1.8.0_05\bin, but I still get that error. What to do, guys?
|
Can't access my laptop's localhost through an Android app
| 47,569,447 | 1 | 3 | 2,908 | 0 |
android,python,django,localhost,django-rest-framework
|
I tried the above, but it failed to work in my case. Then, along with running
python manage.py runserver 0.0.0.0:8000, I also had to add my IP to ALLOWED_HOSTS in settings.py, which solved the issue.
eg.
Add your ip to allowed hosts in settings.py
ALLOWED_HOSTS = ['192.168.XXX.XXX']
Then run the server
python manage.py runserver 0.0.0.0:8000
| 0 | 0 | 1 | 0 |
2014-05-17T22:31:00.000
| 3 | 0.066568 | false | 23,716,724 | 0 | 0 | 1 | 1 |
So I did a research before posting this and the solutions I found didn't work, more precisely:
-Connecting to my laptop's IPv4 192.168.XXX.XXX - didn't work
-Conecting to 10.0.2.2 (plus the port) - didn't work
I need to test an API I built using Django Rest Framework so I can get the json it returns, but I can't access it through an Android app I'm building (I'm testing with a real device, not an emulator). Internet permissions are set in the Manifest and I can access remote websites normally. I just can't reach my laptop's localhost (they are on the same network).
I'm pretty new to Android and Python and Django as well (used to built Django's Rest Framework API).
EDIT: I use localhost:8000/snippets.json or smth like this to connect on my laptop.
PS: I read something about XAMP server... do I need it in this case?
Thanks in advance
|
Prevent double submits
| 23,742,459 | 1 | 0 | 71 | 0 |
python,google-app-engine,webapp2
|
Preventing it on the server side is not trivial - a second call may hit a different instance. So you need to deal with sessions. The code will get complex quickly.
I would recommend disabling the button before a call and reenabling it upon a response.
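If you do want a server-side guard as well, the usual pattern is a one-time form token. A minimal in-memory sketch (the helper names are made up; as noted above, a multi-instance App Engine deployment would need a shared session/token store such as memcache or the datastore):

```python
import secrets

# In-memory one-time tokens. NOTE: on App Engine a second request may
# hit a different instance, so a real app would keep these in a shared
# store; this sketch only illustrates the idea.
_issued = set()

def issue_token():
    # Embed the returned token in the form when rendering the page.
    token = secrets.token_hex(16)
    _issued.add(token)
    return token

def consume(token):
    # True only for the first submit carrying this token; duplicates
    # (double clicks, resubmits) return False and can be ignored.
    if token in _issued:
        _issued.discard(token)
        return True
    return False
```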
| 0 | 0 | 0 | 0 |
2014-05-19T13:52:00.000
| 2 | 0.099668 | false | 23,739,630 | 0 | 0 | 1 | 1 |
I am using GAE for an app that has various submit href buttons, and use javascript to submit.
I am having a real tough time trying to figure out how to prevent multiple submits or doubl-clicking. I have tried various methods to disable or remove the href with javascript.
But I am thinking if there is maybe a method to prevent this in the backend.
What methods would you recommend I use?
|
Running django test on the gitlab ci
| 25,917,374 | 0 | 3 | 3,987 | 0 |
python,django-testing,django-1.4,gitlab-ci
|
Do you have Django installed on the testrunner?
If not, try to configure a virtualenv for your testsuite. Best might be (if you have changing requirements) to make the setup and installation of this virtualenv part of your testsuite.
| 0 | 0 | 0 | 1 |
2014-05-19T15:21:00.000
| 2 | 0 | false | 23,741,509 | 0 | 0 | 1 | 1 |
I have project in django 1.4 and I need to run django test in contious integration system (GitLab 6.8.1 with Gitlab CI 4.3).
Gitlab Runner have installed on server with project.
When I run:
cd project/app/ && ./runtest.sh test some_app
I get:
Traceback (most recent call last):
File "manage.py", line 2, in <module>
from django.core.management import execute_manager
ImportError: No module named django.core.management
How I may run tests?
|
Can JSP and Python used together for same database?
| 23,743,682 | 0 | 1 | 109 | 0 |
java,android,python,database,jsp
|
We are planning an android app in this summer and I'm considering
developing it with Python
Native Android apps are developed using Java.
However, the service provided by the app is supposed to be added to
the website made with JSP later. I'm afraid the difference of the
language would cause any obstacle.
You will need to create an API that communicates between Android and your database.
| 0 | 0 | 0 | 0 |
2014-05-19T17:15:00.000
| 2 | 0 | false | 23,743,546 | 0 | 0 | 1 | 1 |
I'm a student who works part-time at a start-up, which runs a website made with JSP.
We are planning an android app in this summer and I'm considering developing it with Python, which I'm interested in.
However, the service provided by the app is supposed to be added to the website made with JSP later. I'm afraid the difference of the language would cause any obstacle.
Since they will use a common database, I think using different languages to access it won't have any problem. I want to make sure that my guess is correct.
Pardon my poor English. I'd appreciate your answers.
|
How do I tell Python not to interpret backslashes in strings?
| 23,763,477 | 2 | 2 | 2,703 | 0 |
python,django
|
This has nothing to do with the Django template, but how you define the variable in the first place.
Backslashes are only "interpreted" when you specify them as literals in your Python code. So given your Python code above, you can either use the double backslash, or use a raw string.
If you were loading the string "fred\xbf" from your database and outputting it in a template, it would not be "escaped".
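A quick interpreter check of that point:

```python
# Backslash escapes are interpreted only in Python source literals.
s1 = "fred\\xbf"   # escaped backslash in a normal literal
s2 = r"fred\xbf"   # raw-string literal: backslash kept as-is
assert s1 == s2    # both are the 8-character string fred\xbf

# A string coming from a database or a form already holds a literal
# backslash, so Django's template engine will render it unchanged.
print(s2)
```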
| 0 | 0 | 0 | 0 |
2014-05-20T14:59:00.000
| 2 | 0.197375 | false | 23,763,365 | 0 | 0 | 1 | 1 |
I'm using Python 2.7 and Django 1.4
If I have a string variable result = "fred\xbf", how do I tell the Django template to display "fred\xbf" rather than process the backslash and display some strange character?
I know I can escape the backslash: "fred\\xbf" , but can I get the Django template to understand I want the backslash not to be processed?
|
Why django has collectstatic?
| 23,770,780 | 3 | 1 | 238 | 0 |
python,django,django-staticfiles,static-files,collectstatic
|
Because of the pluggable app philosophy of django made apparent by their whole encapsulated app structure (urls, views, models, templates, etc., are app specific).
You can see this philosophy pressed further in the latest django project structure where project names are not to be included in the imports / apps are imported globally: from myapp import models and not from project.myapp import models
If you install an app from a third party, you don't need to painstakingly figure out where the application lives, django can simply move it to your environment specific static file serving location.
Simply add to INSTALLED_APPS and you can gain almost all of the functionality of a third party app whose files live who knows where, from templates to models to static files.
PS: I personally don't use the app-directory static file system unless I am making an app pluggable. It's harder to find and maintain IMO when files live absolutely everywhere.
| 0 | 0 | 0 | 0 |
2014-05-20T21:06:00.000
| 1 | 1.2 | true | 23,769,964 | 0 | 0 | 1 | 1 |
Thinking about a project that has an app called 'website', with a 'static' folder inside that contains all the project's static files: why do I have to collect all static files and put them in another folder, instead of just mapping the static folder (website/static) on my webserver? What's the real need for Django to collect static files? Just because there can be a lot of apps, and you could put your static files in different folders? Or is there more than that involved?
|
Specify Django CMS placeholder type (text,picture,link) in template
| 23,785,693 | 0 | 2 | 363 | 0 |
python,django,django-cms
|
No, this is not supported at the moment.
| 0 | 0 | 0 | 0 |
2014-05-21T13:56:00.000
| 2 | 0 | false | 23,785,374 | 0 | 0 | 1 | 1 |
Is it possible to restrict a placeholder type without defining it in settings.py?
Something like: {% placeholder "home_banner_title" image %}
|
Change website text with python
| 23,790,111 | 0 | 1 | 2,023 | 0 |
python,html
|
What you are talking about seems to be much more of the job of a browser extension. Javascript will be much more appropriate, as @brbcoding said. Beautiful Soup is for scraping web pages, not for modifying them on the client side in a browser. To be honest, I don't think you can use Python for that.
| 0 | 0 | 1 | 0 |
2014-05-21T17:25:00.000
| 1 | 1.2 | true | 23,790,052 | 0 | 0 | 1 | 1 |
This is my first StackOverflow post so please bear with me.
What I'm trying to accomplish is a simple program written in python which will change all of a certain html tag's content (ex. all <h1> or all <p> tags) to something else. This should be done on an existing web page which is currently open in a web browser.
In other words, I want to be able to automate the inspect element function in a browser which will then let me change elements however I wish. I know these changes will just be on my side, but that will serve my larger purpose.
I looked at Beautiful Soup and couldn't find anything in the documentation which will let me change the website as seen in a browser. If someone could point me in the right direction, I would be greatly appreciative!
|
python program to read a ajax website
| 23,831,288 | 1 | 0 | 62 | 0 |
jquery,python,ajax
|
You can use the "Network" panel of Chrome's DevTools to figure out which URL the ajax requests fetch.
Then use a Python script to fetch the content from that URL.
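Once you have found the endpoint in the Network panel, the infinite scroll can usually be replaced by a paging loop. A sketch with a stubbed fetch_page (the real one would call urllib against the URL you discovered; the endpoint and parameter names here are hypothetical):

```python
import json

def fetch_page(page):
    # Stub standing in for the real HTTP call, e.g. something like
    # urllib.request.urlopen("http://example.com/people?page=%d" % page)
    # with the URL taken from the browser's Network panel.
    pages = [
        [{"name": "a"}, {"name": "b"}],
        [{"name": "c"}],
        [],  # an empty page signals the end of the list
    ]
    return json.dumps(pages[min(page, 2)])

def fetch_all():
    # Keep requesting pages until the server returns an empty batch.
    people, page = [], 0
    while True:
        batch = json.loads(fetch_page(page))
        if not batch:
            return people
        people.extend(batch)
        page += 1

print(fetch_all())
```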
| 0 | 0 | 1 | 0 |
2014-05-23T13:50:00.000
| 1 | 0.197375 | false | 23,831,048 | 1 | 0 | 1 | 1 |
I want to parse a website which contains a list of people and their information. The problem is that the website, using ajax, loads more and more information as I scroll down the page.
I need information of ALL the people.
urllib.open(..).read() does not take care of the scroll down. Can you please suggest me a way to parse all the data.
|
Does cron.yaml support conditions?
| 23,835,748 | 2 | 0 | 134 | 0 |
python,google-app-engine,python-2.7,cron
|
As far as I know, that isn't possible.
The cron.yaml file is only made for defining the jobs, not for code.
I'd recommend putting your logic inside of the job that you're calling, as you mentioned.
Hope this helps.
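For illustration, cron.yaml is purely declarative; it can only name the job, its URL and its schedule (the values below are hypothetical), so any check on the application id has to happen inside the handler that the URL maps to:

```yaml
cron:
- description: nightly cleanup job
  url: /tasks/cleanup
  schedule: every 24 hours
```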
| 0 | 1 | 0 | 0 |
2014-05-23T15:57:00.000
| 1 | 1.2 | true | 23,833,693 | 0 | 0 | 1 | 1 |
Is it possible to have conditions (if ... else ...) in GAE cron.yaml?
For ex., to have something like
if app_identity.get_application_id() == 'my-appid' then run the job.
Understand, that probably the same result I can have by implementing it in the job handler. Just interesting if it could be done within cron.yaml.
|
How to Automate repeated tasks in SAP Logon
| 24,307,979 | 0 | 1 | 4,691 | 0 |
java,php,python,automation,sap
|
You can implement scheduled jobs using Java, if I am understanding you correctly.
| 0 | 0 | 0 | 1 |
2014-05-24T14:14:00.000
| 4 | 0 | false | 23,846,012 | 0 | 0 | 1 | 3 |
I am given a task to automate a few boring tasks that people in the office do every day using SAP Logon 640.
There are like 30-40 transactions that are required to be automated.
I searched a lot on SAP automation and found SAP GUI Scripting, but failed to find any starting point for Python, PHP or Java.
How should I start to automate SAP transactions using Python, PHP or Java? I am not even sure what I need from my IT department to get started.
|
How to Automate repeated tasks in SAP Logon
| 42,849,870 | 0 | 1 | 4,691 | 0 |
java,php,python,automation,sap
|
SAP GUI has a built-in record-and-playback tool which gives you out-of-the-box .vbs files that you can use for automation; if the values do not change, you can reuse the same scripts every time.
You can find it in the main menu of the SAP GUI window: Customise Local Layout (Alt+F12) -> Script Recording and Playback.
| 0 | 0 | 0 | 1 |
2014-05-24T14:14:00.000
| 4 | 0 | false | 23,846,012 | 0 | 0 | 1 | 3 |
I am given a task to automate a few boring tasks that people in the office do every day using SAP Logon 640.
There are like 30-40 transactions that are required to be automated.
I searched a lot on SAP automation and found SAP GUI Scripting, but failed to find any starting point for Python, PHP or Java.
How should I start to automate SAP transactions using Python, PHP or Java? I am not even sure what I need from my IT department to get started.
|
How to Automate repeated tasks in SAP Logon
| 23,878,952 | 1 | 1 | 4,691 | 0 |
java,php,python,automation,sap
|
We use either VBScript or C# to automate tasks. Using VBScript is the easiest. Have the SAP GUI record a task and it will produce a VBScript that can serve as a starting point for your coding. Once you have this VBScript file, you can translate it into other languages.
| 0 | 0 | 0 | 1 |
2014-05-24T14:14:00.000
| 4 | 0.049958 | false | 23,846,012 | 0 | 0 | 1 | 3 |
I am given a task to automate a few boring tasks that people in the office do every day using SAP Logon 640.
There are like 30-40 transactions that are required to be automated.
I searched a lot on SAP automation and found SAP GUI Scripting, but failed to find any starting point for Python, PHP or Java.
How should I start to automate SAP transactions using Python, PHP or Java? I am not even sure what I need from my IT department to get started.
|
How to properly unit test a web app?
| 23,849,290 | 5 | 3 | 1,256 | 0 |
python,unit-testing,flask,integration-testing
|
Most of this is personal opinion and will vary from developer to developer.
There are a ton of python libraries for unit testing - that's a decision best left to you as the developer of the project to find one that fits best with your tool set / build process.
This isn't exactly 'unit testing' per se, I'd consider it more like integration testing. That's not to say this isn't valuable, it's just a different task and will often use different tools. For something like this, testing will pay off in the long run because you'll have peace of mind that your bug fixes and feature additions aren't impacting your end to end code. If you're already doing it, I would continue. These sorts of tests are highly valuable when refactoring down the road to ensure consistent functionality.
I would not waste time testing 3rd party APIs. It's their job to make sure their product behaves reliably. You'll be there all day if you start testing 3rd party features. A big reason to use 3rd party APIs is so you don't have to test them. If you ever discover that your app is breaking because of a 3rd party API it's probably time to pick a different API. If your project scales to a size where you're losing thousands of dollars every time that API fails you have a whole new ball of issues to deal with (and hopefully the resources to address them) at that time.
In general, I don't test static content or html. There are tools out there (web scraping tools) that will let you troll your own website for consistent functionality. I would personally leave this as a last priority for the final stages of refinement if you have time. The look and feel of most websites change so often that writing tests isn't worth it. Look and feel is also really easy to test manually because it's so visual.
| 0 | 0 | 0 | 1 |
2014-05-24T20:07:00.000
| 1 | 1.2 | true | 23,849,163 | 0 | 0 | 1 | 1 |
I'm teaching myself backend and frontend web development (I'm using Flaks if it matters) and I need few pointers for when it comes to unit test my app.
I am mostly concerned with these different cases:
The internal consistency of the data: that's the easy one - I'm aiming for 100% coverage when it comes to issues like the login procedure and, most generally, checking that everything that happens between the python code and the database after every request remain consistent.
The JSON responses: What I'm doing atm is performing a test-request for every get/post call on my app and then asserting that the json response must be this-and-that, but honestly I don't quite appreciate the value in doing this - maybe because my app is still at an early stage?
Should I keep testing every json response for every request?
If yes, what are the long-term benefits?
External APIs: I read conflicting opinions here. Say I'm using an external API to translate some text:
Should I test only the very high level API, i.e. see if I get the access token and that's it?
Should I test that the returned json is what I expect?
Should I test nothing to speed up my test suite and don't make it dependent from a third-party API?
The outputted HTML: I'm lost on this one as well. Say I'm testing the function add_post():
Should I test that on the page that follows the request the desired post is actually there?
I started checking for the presence of strings/html tags in the row response.data, but then I kind of gave up because 1) it takes a lot of time and 2) I would have to constantly rewrite the tests since I'm changing the app so often.
What is the recommended approach in this case?
Thank you and sorry for the verbosity. I hope I made myself clear!
|
How to terminate previous build's process when running the project?
| 24,248,091 | 1 | 0 | 38 | 0 |
python,eclipse,pydev
|
In the PyDev editor you can use Ctrl+Shift+F9 to terminate/relaunch by default.
But as you're dealing with flask, you should be able to use it to reload automatically on code-changes without doing anything by setting use_reloader=True.
I.e.: I haven't actually tested it, but its documentation says that you can enable the reloader via run(use_reloader=True).
| 0 | 0 | 0 | 0 |
2014-05-25T19:18:00.000
| 2 | 0.099668 | false | 23,859,042 | 0 | 0 | 1 | 1 |
I've just started using PyDev to work on a Flask app. The thing is, every time I make a change, I have to click on the "stop process" button in the console window, then click "Run" again.
This is necessary, because Flask runs a web server on a specific port, and running more than one instance of the application results in errors connecting to the port.
Is there a way I can automatize this process? (configuration, some sort of event handler, or any other way)
|
Using Django cms for editable webpage?
| 23,859,178 | 1 | 0 | 206 | 0 |
python,django,django-cms
|
I don't know about Django CMS, but if all you want to do is let them edit a plain text chunk on a web page, plain old Django can do that without breaking a sweat. Django admin can be used to handle editing at the very least, and you just need a model with a TextField to store the text, and a template to render it into an HTML page. You could probably figure out how to do it after working through the Django tutorial.
| 0 | 0 | 0 | 0 |
2014-05-25T19:23:00.000
| 1 | 1.2 | true | 23,859,110 | 0 | 0 | 1 | 1 |
I am currently working on a website using Django. Here's my issue: I want some pages of my website to be partially editable (just text editing) by registered users. And I want it to be user friendly enough.
I first thought of using regular html forms to make the content editable. And then I discovered Django CMS. As far as I understand I can pretty much do what I want with Django CMS. But I am wondering if it's not too heavy in this situation, and I want to have a lot of control on what I make editable or not by the users.
Therefore my questions are :
Should I use Django CMS or not ?
If yes, would it be possible to restrict the standard usage of Django CMS depending on the logged user ? (For example, I mean by that, just allowing the user to edit a paragraph, and not to modifying the whole layout of the page)
Thanks !
|
How to set up a Django project in PyCharm
| 32,492,931 | 7 | 51 | 89,573 | 0 |
python,django,pycharm
|
I ran into the same problem today. In the end, I solved it by:
Creating the project on the command line
Creating the app on the command line
Just opening the existing files and coding in PyCharm
This approach has the benefits:
no need to buy PyCharm Professional
you can still code your Django project in PyCharm
| 0 | 0 | 0 | 0 |
2014-05-26T12:38:00.000
| 6 | 1 | false | 23,870,365 | 1 | 0 | 1 | 1 |
I'm new in this area so I have a question. Recently, I started working with Python and Django. I installed PyCharm Community edition as my IDE, but I'm unable to create a Django project.
I looked for some tutorials, and there is an option to select "project type", but in the latest version this option is missing. Can someone tell me how to do this?
|
Wsgi custom field in request header
| 23,883,386 | 0 | 0 | 1,218 | 0 |
python,apache,ubuntu,request,wsgi
|
I access this variable with web.ctx.env:
web.ctx.env.get('HTTP_X_SOURCE')
This code works well on another server with Apache 2 and WSGI.
On my new server (Ubuntu 13):
tested with pure web.py (no Apache, no WSGI), the header passes
tested with Apache 2 + WSGI + web.py, the header doesn't pass
On my old server (Ubuntu 12):
tested with pure web.py (no Apache, no WSGI), the header passes
tested with Apache 2 + WSGI + web.py, the header passes too
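For reference, this is how a custom header surfaces in a plain WSGI app; a minimal sketch driven with a hand-built environ, no server involved, which can help narrow down whether a front-end module is stripping the header before it reaches WSGI:

```python
def app(environ, start_response):
    # A request header X-Source arrives in the WSGI environ as
    # HTTP_X_SOURCE (uppercased, dashes turned into underscores).
    source = environ.get("HTTP_X_SOURCE", "")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [source.encode("utf-8")]

# Exercise the app with a hand-built environ and a stand-in
# start_response, instead of a real server:
status = {}
def start_response(st, headers):
    status["value"] = st

body = app({"HTTP_X_SOURCE": "mobile"}, start_response)
print(body)
```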
| 0 | 0 | 0 | 0 |
2014-05-26T15:57:00.000
| 3 | 0 | false | 23,873,888 | 0 | 0 | 1 | 1 |
I have a problem with Apache 2 and WSGI.
I send my server a request with a custom field in the headers (HTTP_X_SOURCE) and Apache 2 (or WSGI) blocks this field.
request => apache2 => web.py
Does anyone know why Apache 2 or WSGI blocks this field?
|
How to interface blocking and non-blocking code with asyncio
| 47,181,879 | 1 | 7 | 1,630 | 0 |
python-asyncio
|
About case #2: Blocking code should be at least wrapped with .run_in_executor.
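A minimal sketch of that wrapping, using the modern (Python 3.7+) asyncio API; the blocking call (for example a synchronous Django ORM query) is simulated here with time.sleep:

```python
import asyncio
import time

def blocking_io():
    # Stands in for blocking code such as a synchronous ORM call.
    time.sleep(0.05)
    return "done"

async def main():
    loop = asyncio.get_running_loop()
    # None -> default thread-pool executor; the event loop keeps
    # running while the blocking call executes in a worker thread.
    return await loop.run_in_executor(None, blocking_io)

result = asyncio.run(main())
print(result)
```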
| 0 | 0 | 0 | 0 |
2014-05-27T20:30:00.000
| 2 | 0.099668 | false | 23,898,363 | 1 | 0 | 1 | 1 |
I'm trying to use a coroutine function outside of the event loop. (In this case, I want to call a function in Django that could also be used inside the event loop too)
There doesn't seem to be a way to do this without making the calling function a coroutine.
I realize that Django is built to be blocking and is therefore incompatible with asyncio. Though I think that this question might help people who are making the transition or using legacy code.
For that matter, it might help to understand async programming and why it doesn't work with blocking code.
|