Title | stringlengths 11 to 150
A_Id | int64 518 to 72.5M
Users Score | int64 -42 to 283
Q_Score | int64 0 to 1.39k
ViewCount | int64 17 to 1.71M
Database and SQL | int64 0 to 1
Tags | stringlengths 6 to 105
Answer | stringlengths 14 to 4.78k
GUI and Desktop Applications | int64 0 to 1
System Administration and DevOps | int64 0 to 1
Networking and APIs | int64 0 to 1
Other | int64 0 to 1
CreationDate | stringlengths 23 to 23
AnswerCount | int64 1 to 55
Score | float64 -1 to 1.2
is_accepted | bool 2 classes
Q_Id | int64 469 to 42.4M
Python Basics and Environment | int64 0 to 1
Data Science and Machine Learning | int64 0 to 1
Web Development | int64 1 to 1
Available Count | int64 1 to 15
Question | stringlengths 17 to 21k
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Eclipse error: "javaw.exe not found"
| 12,868,194 | 0 | 0 | 556 | 0 |
eclipse,python-2.7
|
Have you installed the Java runtime? If not, you need to install it first. If you already have it, you might need to adjust your PATH so that Eclipse can find javaw.exe.
| 0 | 0 | 0 | 0 |
2012-10-12T22:48:00.000
| 2 | 1.2 | true | 12,868,135 | 0 | 0 | 1 | 1 |
I installed Python 2.7 recently and with it Django--as part of the installation I had to add a PATH environment variable with the path to my Python installation.
After I did that, when I launch Eclipse, I get an error saying "javaw.exe" was not found. I need both Python and Eclipse on my machine, is there something I can do to fix this issue?
|
Place Python Output in HTML Widget
| 12,869,309 | 1 | 0 | 221 | 0 |
python,html,rss,widget,feedparser
|
One of several options; I'm thinking this might be the quickest and easiest:
Upload the Python script to the server. Have it create/update a publicly readable file under the webroot. Write some JavaScript to load that file into your HTML page.
If you want to get fancy, schedule the script to run periodically via cron.
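A minimal sketch of that approach; the feed URL and output path are assumptions you would replace with your own:

```python
# Sketch: parse an RSS feed and write a small HTML fragment that a widget can load.
# FEED_URL and OUTPUT_PATH are placeholders - adjust them for your site.
import feedparser

FEED_URL = "http://example.com/blog/rss"          # hypothetical feed
OUTPUT_PATH = "/var/www/html/widgets/feed.html"   # publicly readable file under the webroot

def write_widget():
    feed = feedparser.parse(FEED_URL)
    items = []
    for entry in feed.entries[:5]:
        items.append('<li><a href="%s">%s</a></li>' % (entry.link, entry.title))
    with open(OUTPUT_PATH, "w") as f:
        f.write("<ul>%s</ul>" % "".join(items))

if __name__ == "__main__":
    write_widget()   # run this from cron to refresh the fragment periodically
```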
| 0 | 0 | 0 | 0 |
2012-10-13T01:10:00.000
| 2 | 1.2 | true | 12,869,051 | 0 | 0 | 1 | 2 |
This seems like a very simple question, so I will remove if it's a repeat. I just can't seem to find the answer.
I'm using the feedparser module to parse an RSS feed. I want to post the output to a widget on a site. I don't want the python script to write the whole page. I just want to be able to get the output to the web page. Can anyone just point me in the right direction?
Details:
I have a blog that is run separately from the page that I would like to post the RSS Feed on. I have the script to parse the blog and generate the info I want. I want to get the generated output into an iframe or table or the like on a static HTML page. The static page is entirely separate from the blog.
|
Place Python Output in HTML Widget
| 12,869,418 | 1 | 0 | 221 | 0 |
python,html,rss,widget,feedparser
|
I'll paste my comment into the answer here as well.
There are a lot of ways to go about it, but I'll focus on the design decision first.
To keep things simple, create two pages. The first page displays everything except the generated info: just the styling and all the other icing on the cake. In it, include an iframe which points to the second page, which holds the generated content.
John's answer has a good way of avoiding manual runs of the script: use cron.
| 0 | 0 | 0 | 0 |
2012-10-13T01:10:00.000
| 2 | 0.099668 | false | 12,869,051 | 0 | 0 | 1 | 2 |
This seems like a very simple question, so I will remove if it's a repeat. I just can't seem to find the answer.
I'm using the feedparser module to parse an RSS feed. I want to post the output to a widget on a site. I don't want the python script to write the whole page. I just want to be able to get the output to the web page. Can anyone just point me in the right direction?
Details:
I have a blog that is run separately from the page that I would like to post the RSS Feed on. I have the script to parse the blog and generate the info I want. I want to get the generated output into an iframe or table or the like on a static HTML page. The static page is entirely separate from the blog.
|
Mod_Python: Publisher Handler vs PSP Handler
| 12,876,548 | 3 | 0 | 274 | 0 |
python,mod-python
|
I am not familiar with mod_python (the project was abandoned long ago), but nowadays Python web applications use WSGI (mod_wsgi or uWSGI). If you are using Apache, mod_wsgi is easy to configure; for nginx, use uWSGI.
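For reference, a WSGI application is just a callable; a minimal generic sketch (not tied to any framework) that mod_wsgi or uWSGI could serve:

```python
# Minimal WSGI application; mod_wsgi (Apache) or uWSGI (nginx) can serve this callable.
def application(environ, start_response):
    body = b"Hello from WSGI"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```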
| 0 | 0 | 0 | 1 |
2012-10-13T19:19:00.000
| 1 | 1.2 | true | 12,876,159 | 0 | 0 | 1 | 1 |
I've been testing out Mod_python and it seems that there are two ways of producing python code using:-
Publisher Handler
PSP Handler
I've gotten both to work at the same time however, should I use one over the other? PSP resembles PHP a lot but Publisher seems to resemble python more. Is there an advantage over using one (speed, ease of use, etc.)?
|
How can a python web app open a program on the client machine?
| 12,877,893 | 1 | 1 | 351 | 0 |
python,linux,web
|
Note this is not a standard thing to do. Imagine if websites out there had the ability to open Notepad or Minesweeper at will when you visit or click something.
The way it is done is to have a service running on the client machine which exposes certain APIs and trusts requests coming from your web app. This service needs to be running on the client machines all the time, and from your web app you send it a request to launch the application you want.
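A very rough sketch of such a client-side launcher service; the port, the whitelisted programs and the complete lack of authentication are all placeholder choices, and a real deployment must verify who is asking:

```python
# Rough sketch of a client-side "launcher" service the web app can call.
# Port and allowed programs are placeholders; add authentication before real use.
import subprocess
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
from urlparse import urlparse, parse_qs

ALLOWED = {"editor": ["gedit"], "player": ["vlc"]}  # whitelist of launchable apps

class LaunchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        app = params.get("app", [""])[0]
        if app in ALLOWED:
            subprocess.Popen(ALLOWED[app])  # fire and forget
            self.send_response(200)
        else:
            self.send_response(403)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8765), LaunchHandler).serve_forever()
```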
| 0 | 1 | 0 | 0 |
2012-10-13T23:18:00.000
| 3 | 0.066568 | false | 12,877,870 | 0 | 0 | 1 | 1 |
I'm going to be using python to build a web-based asset management system to manage the production of short cg film. The app will be intranet-based running on a centos machine on the local network. I'm hoping you'll be able to browse through all the assets and shots and then open any of them in the appropriate program on the client machine (also running centos). I'm guessing that there will have to be some sort of set up on the client-side to allow the app to run commands, which is fine because I have access to all of the clients that will be using it (although I don't have root access). Is this sort of thing possible?
|
Check size of HTTP POST without saving to disk
| 12,879,591 | 2 | 1 | 134 | 0 |
python,pyramid
|
You should be able to check the request.content_length. WSGI does not support streaming the request body so content length must be specified. If you ever access request.body, request.params or request.POST it will read the content and save it to disk.
The best way to handle this, however, is as close to the client as possible. Meaning if you are running behind a proxy of any sort, have that proxy reject requests that are too large. Once it gets to Python, something else may have already stored the request to disk.
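A hedged sketch of that check in a Pyramid view; the 10 MB limit is an arbitrary example value:

```python
# Sketch: reject oversized POSTs before touching request.body/POST,
# so nothing is spooled to disk. The limit is an example value.
from pyramid.httpexceptions import HTTPRequestEntityTooLarge

MAX_BYTES = 10 * 1024 * 1024

def upload_view(request):
    if request.content_length is None or request.content_length > MAX_BYTES:
        raise HTTPRequestEntityTooLarge()
    # safe to read request.POST / request.body now
    return {"status": "ok"}
```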
| 0 | 0 | 0 | 1 |
2012-10-14T02:30:00.000
| 1 | 1.2 | true | 12,878,819 | 0 | 0 | 1 | 1 |
Is there a way to check the size of the incoming POST in Pyramid, without saving the file to disk and using the os module?
|
Youtube GData 2 thousand most viewed
| 12,892,498 | 1 | 0 | 164 | 0 |
python,youtube,gdata
|
YouTube will not provide this to you. They intentionally rate limit their feeds to prevent abuse.
| 0 | 0 | 1 | 0 |
2012-10-14T06:11:00.000
| 1 | 0.197375 | false | 12,879,754 | 0 | 0 | 1 | 1 |
I want to catch the 1 thousand most viewed youtube videos through gdata youtube api.
However, only the first 84 are being returned.
If I use the following query, only 34 records are returned (plus the first 50).
Anyone knows what is wrong?
"http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?start-index=1"
returns 25 records
"http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?start-index=26"
returns 25 records
"http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?start-index=51"
returns 25 records
"http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?start-index=76"
returns 9 records
|
Accessing Google Groups Services on Google Apps Script in Google App Engine?
| 12,881,937 | 2 | 1 | 221 | 0 |
python,google-app-engine,google-apps-script
|
No, you can't. Apps Script and GAE are totally different things and won't work together.
What you can do instead, in outline:
write an Apps Script that does what you need
redirect from GAE to the Apps Script URL to make sure that the user logs in and grants permissions (a minimal redirect sketch follows this list)
perform what you need to do with Apps Script
send the user back to GAE with a bunch of parameters if you need to
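A minimal sketch of the redirect step in a webapp2 handler; the Apps Script URL and the return route are placeholders for your own deployment:

```python
# Sketch: redirect the user from GAE to a published Apps Script URL,
# passing a return address so the script can send the user back.
# The script URL and routes are placeholders.
import urllib
import webapp2

APPS_SCRIPT_URL = "https://script.google.com/macros/s/YOUR_SCRIPT_ID/exec"

class GroupsRedirect(webapp2.RequestHandler):
    def get(self):
        params = urllib.urlencode({"return_to": self.request.host_url + "/groups/done"})
        self.redirect("%s?%s" % (APPS_SCRIPT_URL, params))

app = webapp2.WSGIApplication([("/groups", GroupsRedirect)])
```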
| 0 | 1 | 0 | 0 |
2012-10-14T11:35:00.000
| 2 | 1.2 | true | 12,881,834 | 0 | 0 | 1 | 1 |
I'm trying to work out if it is possible to use Google Apps Scripts inside Google App Engine?
And if there is a tutorial in doing this out there ?
Through reading Google's App Script site I get the feeling that you can only use app scripts inside Google Apps like Drive, Docs etc?
What I would like to do is to be able to use the Groups Service that is in GAS inside GAE to Create, delete and only show the groups that a person is in, all inside my GAE App.
Thanks
|
More Efficient Way to Prevent Duplicate Voting
| 12,889,623 | 0 | 0 | 193 | 0 |
python,google-app-engine,google-cloud-datastore,security,voting
|
Have you tried the set() type, which avoids duplicates?
| 0 | 0 | 0 | 0 |
2012-10-15T04:53:00.000
| 3 | 0 | false | 12,889,477 | 0 | 0 | 1 | 2 |
There are a few questions similar to this already but I am hoping for a somewhat different answer.
I have a website on Google App Engine in Python and I let registered users vote on posts, either up or down, and I only want each user to be able to vote once.
The primary method that I have thought of and read about is to simply keep a list in each post of the users who have voted on it and only show the voting button to users who have not voted yet. To me this seems like an inelegant solution. To keep track of this when hundreds of people are voting on hundreds of posts is a ton of information to keep track of for such a little bit of functionality.
Though I could be wrong about this. Is this as much of a data/datastore write hog as I am imagining it to be? If I have a few hundred votes going on each day is that not such a big deal?
|
More Efficient Way to Prevent Duplicate Voting
| 12,895,187 | 0 | 0 | 193 | 0 |
python,google-app-engine,google-cloud-datastore,security,voting
|
If you only have hundreds of votes for hundreds of posts, this is not a big data or read/write hog. I am assuming you are storing the list of users as a list property on the post entity in the datastore. Depending on whether you store a long that points to the user or a string for their email, you are probably using at most about 10 bytes per user per post. Even with a thousand votes per post and 1,000 posts, that would only be about 10 MB (the arithmetic is spelled out below), and it wouldn't add much to the cost of reads/writes. You could reduce that cost further by not indexing the property and just searching through it in code after you fetch the entity from the datastore.
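The back-of-the-envelope math, spelled out; the 10-bytes-per-vote figure is the rough assumption from the answer above:

```python
# Rough storage estimate for per-post voter lists,
# using the answer's assumption of ~10 bytes per user per post.
bytes_per_vote = 10
votes_per_post = 1000
posts = 1000

total_bytes = bytes_per_vote * votes_per_post * posts
print(total_bytes / (1024.0 * 1024.0))  # roughly 9.5 MB, i.e. about 10 MB
```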
| 0 | 0 | 0 | 0 |
2012-10-15T04:53:00.000
| 3 | 0 | false | 12,889,477 | 0 | 0 | 1 | 2 |
There are a few questions similar to this already but I am hoping for a somewhat different answer.
I have a website on Google App Engine in Python and I let registered users vote on posts, either up or down, and I only want each user to be able to vote once.
The primary method that I have thought of and read about is to simply keep a list in each post of the users who have voted on it and only show the voting button to users who have not voted yet. To me this seems like an inelegant solution. To keep track of this when hundreds of people are voting on hundreds of posts is a ton of information to keep track of for such a little bit of functionality.
Though I could be wrong about this. Is this as much of a data/datastore write hog as I am imagining it to be? If I have a few hundred votes going on each day is that not such a big deal?
|
Using Java RMI to invoke Python method
| 12,890,526 | 1 | 0 | 2,053 | 0 |
java,python,rmi,rpc,web2py
|
I'd be astonished if you could do it at all. Java RMI requires Java peers.
| 0 | 0 | 1 | 1 |
2012-10-15T06:06:00.000
| 4 | 0.049958 | false | 12,890,137 | 0 | 0 | 1 | 1 |
I have a remote method created via Python web2py. How do I test and invoke the method from Java?
I was able to test if the method implements @service.xmlrpc but how do i test if the method implements @service.run?
|
Going from Ruby to Python : Crawlers
| 12,891,191 | 3 | 2 | 5,706 | 0 |
python,ruby,web-crawler
|
Between lxml and Beautiful Soup, lxml is more equivalent to Nokogiri
because it is based on libxml2 and has XPath/CSS support.
The equivalent of Net::HTTP is urllib2.
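A small sketch of that combination (Python 2 era, matching the modules named above); the URL and selectors are placeholders:

```python
# Sketch: fetch a page with urllib2 and extract content with lxml,
# roughly what Net::HTTP + Nokogiri would do in Ruby.
# URLs and selectors are placeholders.
import json
import urllib2
from lxml import html

page = urllib2.urlopen("http://example.com/articles").read()
doc = html.fromstring(page)

titles = doc.xpath("//h2[@class='title']/a/text()")            # XPath, like Nokogiri's xpath()
links = [a.get("href") for a in doc.cssselect("h2.title a")]    # CSS (needs the cssselect package)

# JSON from a URL, the Net::HTTP::Get + JSON.parse equivalent:
data = json.loads(urllib2.urlopen("http://example.com/api/items.json").read())
```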
| 0 | 0 | 1 | 0 |
2012-10-15T07:18:00.000
| 4 | 0.148885 | false | 12,890,897 | 0 | 0 | 1 | 1 |
I've started to learn python the past couple of days. I want to know the equivalent way of writing crawlers in python.
so In ruby I use:
nokogiri for crawling html and getting content through css tags
Net::HTTP and Net::HTTP::Get.new(uri.request_uri).body for getting JSON data from a url
what are equivalents of these in python?
|
Redirecting PyUnit output to file in Eclipse
| 12,945,643 | 2 | 2 | 823 | 0 |
eclipse,pydev,python-unittest
|
Output can easily be redirected to a file in Run Configurations > Common tab > Standard Input and Output section. Hiding in plain sight...
| 0 | 0 | 0 | 1 |
2012-10-15T14:35:00.000
| 1 | 1.2 | true | 12,897,908 | 0 | 0 | 1 | 1 |
Is there a built-in way in Eclipse to redirect PyUnit's output to a file (~ save the report)?
|
How can I run a celery periodic task from the shell manually?
| 12,900,160 | 8 | 89 | 60,194 | 0 |
python,django,celery,django-celery,celery-task
|
I think you'll need to open two shells: one for executing tasks from the Python/Django shell, and one for running the celery worker (python manage.py celery worker). As the previous answer said, you can then run tasks using apply() or apply_async().
I've edited the answer so you're not using a deprecated command.
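A hedged example of what that shell session might look like; myapp.tasks.my_periodic_task is a placeholder for your own task:

```python
# In one terminal: python manage.py celery worker
# In another: python manage.py shell, then something like the following.
# "myapp.tasks.my_periodic_task" is a placeholder for your own task.
from myapp.tasks import my_periodic_task

my_periodic_task.delay()                      # enqueue; the worker prints its output
my_periodic_task.apply_async(countdown=10)    # same, but delayed by 10 seconds
result = my_periodic_task.apply()             # run synchronously in the shell process
print(result.get())
```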
| 0 | 1 | 0 | 1 |
2012-10-15T16:34:00.000
| 3 | 1 | false | 12,900,023 | 0 | 0 | 1 | 1 |
I'm using celery and django-celery. I have defined a periodic task that I'd like to test. Is it possible to run the periodic task from the shell manually so that I view the console output?
|
Search files(key) in s3 bucket takes longer time
| 12,907,767 | 3 | 2 | 5,534 | 1 |
python,amazon-s3,boto
|
There are two ways to implement the search...
Case 1. As suggested by John, you can specify a prefix for the S3 key in your list method. That will return only the S3 keys that start with the given prefix.
Case 2. If you want to search for S3 keys that end with a specific suffix (an extension, say), you can specify the suffix as the delimiter. Remember this only gives correct results when the search term really is the end of the key string;
otherwise the delimiter is treated as a path separator.
I would suggest Case 1, but if you want a faster search on a specific suffix you can try Case 2.
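A sketch of Case 1 with boto's list interface; the bucket name and prefix are placeholders, and the last line shows a simple client-side alternative for suffix matching:

```python
# Sketch: list only the keys under a prefix instead of the whole bucket.
# Bucket name and prefix are placeholders.
import boto

conn = boto.connect_s3()                        # uses your configured AWS credentials
bucket = conn.get_bucket("my-bucket-name")

for key in bucket.list(prefix="reports/2012/"):
    print(key.name)

# A simple alternative for suffix matching: filter client side.
pdfs = [k.name for k in bucket.list(prefix="reports/") if k.name.endswith(".pdf")]
```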
| 0 | 0 | 1 | 0 |
2012-10-15T21:29:00.000
| 2 | 0.291313 | false | 12,904,326 | 0 | 0 | 1 | 1 |
I have 10000 files in a s3 bucket.When I list all the files it takes 10 minutes. I want to implement a search module using BOTO (Python interface to AWS) which searches files based on user input. Is there a way I can search specific files with less time?
|
Google App Engine Search API Performing Location Based Searches
| 13,315,721 | 1 | 2 | 498 | 0 |
python,google-app-engine,geolocation,geocoding,gae-search
|
I don't know of any examples that show geo specifically, but the process is very much the same. You can index documents containing GeoFields, each of which has a latitude and a longitude. Then, when you construct your query, you can:
limit the results by distance from a fixed point by using a query like distance(my_geo_field, geopoint(41, 65)) < 100
sort by distance from a point with a sort expression like distance(my_geo_field, geopoint(55, -20))
calculate expressions based on the distance between points by using a FieldExpression like distance(my_geo_field, geopoint(10, -30))
They work pretty much like any other field, except you can use the distance function in the query and expression languages. If you have any specific questions, feel free to ask here.
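A rough sketch of the glue the question asks about, indexing a document with a GeoField and then querying by distance; the index name, field names and coordinates are placeholders:

```python
# Sketch: index a document with a GeoField, then query it by distance.
# Index name, field names and coordinates are placeholders.
from google.appengine.api import search

index = search.Index(name="places")

doc = search.Document(
    doc_id="place-1",
    fields=[
        search.TextField(name="name", value="Head office"),
        search.GeoField(name="my_geo_field", value=search.GeoPoint(41.0, 65.0)),
    ])
index.put(doc)

# distance() is measured in meters, so this finds documents within 100 km of (41, 65).
results = index.search("distance(my_geo_field, geopoint(41, 65)) < 100000")
for scored_doc in results:
    print(scored_doc.doc_id)
```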
| 0 | 0 | 0 | 0 |
2012-10-15T21:36:00.000
| 1 | 0.197375 | false | 12,904,413 | 0 | 0 | 1 | 1 |
I have been going through trying to find the best option on Google App Engine to search a Model by GPS coordinates. There seem to be a few decent options out there such as GeoModel (which is out of date) but the new Search API seems to be the most current and gives a good bit of functionality. This of course comes at the possibility of it getting expensive after 1000 searches once this leaves the experimental zone.
I am having trouble going through the docs and creating a coherent full example to be able to use the Search API to search by location and I want to know if anyone has examples they are willing to share or create to help make this process a little more straightforward. I understand how to create the actual geosearch query but I am unclear as to how to glue that together with the construction and indexing of the document.
|
Safely storing encrypted credentials in django
| 12,904,783 | -2 | 12 | 6,457 | 0 |
python,django,security,encryption,credentials
|
Maybe you can rely on a multi-user scheme, by creating:
A user running Django (e.g. django) who does not have permission to access the credentials
A user who does have those permissions (e.g. sync)
Both of them can be in the django group, to allow them to access the app. After that, make a script (a Django command, such as manage.py sync-external, for instance) that syncs what you want; a skeletal version follows below.
That way, the django user will have access to the app and the sync script, but not to the credentials, because only the sync user does. If anyone tries to run that script without the credentials, it will of course result in an error.
Relying on the Linux permission model is in my opinion a Good Idea, but I'm not a security expert, so bear that in mind. If anyone has anything to say about the above, don't hesitate!
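A skeletal version of such a command; the command name, credentials path and sync logic are assumptions, and the command would be run as the sync user (e.g. via sudo -u sync or its own cron entry):

```python
# Hypothetical myapp/management/commands/sync_external.py ("manage.py sync_external").
# The credentials path and sync logic are placeholders; the file should only be
# readable by the "sync" user, not by the user running the web app.
import json
from django.core.management.base import BaseCommand, CommandError

CREDENTIALS_PATH = "/etc/myapp/sync-credentials.json"  # chmod 600, owned by "sync"

class Command(BaseCommand):
    help = "Sync data to external services using credentials only 'sync' can read."

    def handle(self, *args, **options):
        try:
            with open(CREDENTIALS_PATH) as f:
                creds = json.load(f)
        except IOError:
            raise CommandError("Cannot read credentials; are you running as 'sync'?")
        # ... use creds to talk to samba/scp/Google Apps here ...
        self.stdout.write("Sync finished.\n")
```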
| 0 | 0 | 0 | 0 |
2012-10-15T21:49:00.000
| 3 | -0.132549 | false | 12,904,560 | 0 | 0 | 1 | 1 |
I'm working on a python/django app which, among other things, syncs data to a variety of other services, including samba shares, ssh(scp) servers, Google apps, and others. As such, it needs to store the credentials to access these services. Storing them as unencrypted fields would be, I presume, a Bad Idea, as an SQL injection attack could retrieve the credentials. So I would need to encrypt the creds before storage - are there any reliable libraries to achieve this?
Once the creds are encrypted, they would need to be decrypted before being usable. There are two use cases for my app:
One is interactive - in this case the user would provide the password to unlock the credentials.
The other is an automated sync - this is started by a cron job or similar. Where would I keep the password in order to minimise risk of exploits here?
Or is there a different approach to this problem I should be taking?
|
GAE: Instance shutdown from source code
| 12,921,291 | 1 | 0 | 144 | 0 |
python,google-app-engine
|
You can disable an entire application (from the Application Settings page) for some time and then re-enable it (or you can delete it from that point onwards).
There is no way you can "shut down" a particular instance from code. You can have different versions of your application, but at any moment in time only one version can be the active version. You can split traffic between different versions, but that does not change the active version.
In terms of performance, you can change the Max Idle Instances value to one so that only one instance is preloaded or active.
| 0 | 1 | 0 | 0 |
2012-10-16T18:28:00.000
| 2 | 1.2 | true | 12,921,120 | 0 | 0 | 1 | 2 |
In the Google App Engine user interface, under "Instances", I can shut down selected instances by pressing the "Shutdown" button.
Can I do this shutdown programmatically, from source code?
|
GAE: Instance shutdown from source code
| 12,944,920 | 0 | 0 | 144 | 0 |
python,google-app-engine
|
Actually you can force an instance to shut down from within your code, but it's not pretty.
Just allocate more memory than your instance has; it will then be shut down for you.
I have used this technique in some Python 2.5 M/S apps where a DeadlineExceeded during startup could cause problems with incomplete imports. If the next handled request gave me an ImportError somewhere, I knew the instance was toast, so I would redirect the user to the site and then create a really big string to exhaust memory, and that instance would be shut down.
You could in theory do something similar.
| 0 | 1 | 0 | 0 |
2012-10-16T18:28:00.000
| 2 | 0 | false | 12,921,120 | 0 | 0 | 1 | 2 |
In the Google App Engine user interface, under "Instances", I can shut down selected instances by pressing the "Shutdown" button.
Can I do this shutdown programmatically, from source code?
|
Timezone for server
| 12,923,424 | 1 | 0 | 115 | 0 |
python,django,timezone
|
Generally, the server is set up to run in UTC, which has no daylight saving time, and each user's settings contain their preferred timezone. Then you do the time-difference calculation. Your DB might have that feature.
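A hedged sketch of the usual conversion with pytz; the timezone string stands in for whatever is stored in the user's settings:

```python
# Sketch: store/compare in UTC, convert to the user's preferred timezone for display.
# "America/New_York" is a placeholder for the value kept in the user's settings.
from datetime import datetime
import pytz

user_tz = pytz.timezone("America/New_York")

now_utc = datetime.utcnow().replace(tzinfo=pytz.utc)   # what you keep in the database
local = now_utc.astimezone(user_tz)                    # what you show to the user

# Going the other way: interpret a user-supplied naive datetime in their timezone.
naive = datetime(2012, 10, 16, 21, 4)
as_utc = user_tz.localize(naive).astimezone(pytz.utc)
```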
| 0 | 0 | 0 | 0 |
2012-10-16T21:04:00.000
| 2 | 1.2 | true | 12,923,402 | 0 | 0 | 1 | 1 |
I am building a application where the clients are supposed to pass a datetime field to query the database. But the client will be in different timezones, so how to solve that problem.
Should I make it a point that the client's timezone is used as deafult for each user or the server for every user. And if so than how can I do it?
|
Sphinx code validation for Django projects
| 14,034,222 | 1 | 1 | 148 | 0 |
django,testing,python-sphinx
|
Turned out that I had some python paths wrong. Everything works as expected - as noted by bmu in his comment. (I'm writing this answer so I can close the question in a normal way)
| 0 | 0 | 0 | 0 |
2012-10-17T09:35:00.000
| 1 | 1.2 | true | 12,931,338 | 0 | 0 | 1 | 1 |
I have a Django project with a couple of apps - all of them with 100% coverage unit tests. And now I started documenting the whole thing in a new directory using ReST and Sphinx. I create the html files using the normal approach: make html.
Since there are a couple of code snippets in these ReST files, I want to make sure that these snippets stay valid over time. So what is a good way to test these ReST files, so I get an error if some API changes made such a snippet invalid? I guess there have to be some changes in conf.py?
|
Git, Django and pluggable applications
| 12,944,156 | 4 | 0 | 154 | 0 |
python,django,git
|
The way I work:
Each website has its own git repo and each app has its own repo. Each website also has its own virtualenv and requirements.txt. Even though two websites may share the most recent version of MyApp right now, they may not in the future (maybe you haven't gotten one of the websites up to date with some API changes).
If you really must have just one version of MyApp, you could install it at the system level and then symlink it into the virtualenv for each project.
For development on a local machine (not production) I do it a little differently: I symlink the app's project folder into a "src" folder in the virtualenv of the website and then do a python setup.py develop into the virtualenv so that the newest changes are always used on the website "in real time".
| 0 | 0 | 0 | 0 |
2012-10-17T21:40:00.000
| 1 | 1.2 | true | 12,943,846 | 0 | 0 | 1 | 1 |
I'm currently developing several websites on Django, which requiere several Django Apps. Lets say I have two Django projects: web1 and web2 (the websites, each has a git repo). Both web1 and web2 have a different list of installed apps, but happen to both use one (or more) application(s) developed by me, say "MyApp" (also has a git repo there). My questions are:
What is the best way to decouple MyApp from any particular website? What I want is to develop MyApp independently, but have each website use the latest version of the app (if it has it installed, of course). I have two "proposed" solutions: use symlinks on each website to a "master" MyApp folder, or use git to push from MyApp to each website's repo.
How to deploy with this setup? Right now, I can push the git repo of web1 and web2 to a remote repo in my shared hosting account, and it works like a charm. Will this scale adequately?
I think I have the general idea working in my head, but I'm not sure about the specifics. Won't this create a nested git repo issue? How does git deal with simlinks, specifically if the symlink destination has a .git folder in it?
|
Close TCP port 80 and 443 after forking in Django
| 12,947,246 | 2 | 5 | 256 | 0 |
python,django,linux,apache2
|
If you use the subprocess module to execute the script, the close_fds argument to the Popen constructor will probably do what you want:
If close_fds is true, all file descriptors except 0, 1 and 2 will be closed before the child process is executed.
Assuming they weren't simply closed, the first three file descriptors are traditionally stdin, stdout and stderr, so the listening sockets in the Django application will be among those closed in the child process.
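A minimal illustration of that flag; the script path is a placeholder:

```python
# Sketch: launch the daemonized helper without inheriting Apache's listening sockets.
# "/path/to/daemon_script.py" is a placeholder.
import sys
import subprocess

subprocess.Popen(
    [sys.executable, "/path/to/daemon_script.py"],
    close_fds=True,   # child keeps only stdin/stdout/stderr, not the port 80/443 sockets
)
```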
| 0 | 1 | 0 | 0 |
2012-10-18T03:38:00.000
| 1 | 1.2 | true | 12,946,708 | 0 | 0 | 1 | 1 |
I am trying to fork() and exec() a new python script process from within a Django app that is running in apache2/WSGI Python. The new python process is daemonized so that it doesn't hold any association to apache2, but I know the HTTP ports are still open. The new process kills apache2, but as a result the new python process now holds port 80 and 443 open, and I don't want this.
How do I close port 80 and 443 from within the new python process? Is there a way to gain access to the socket handle descriptors so they can be closed?
|
Regex routing in Python using Tornado framework
| 12,950,281 | 0 | 1 | 1,761 | 0 |
python,regex,routing,tornado
|
What about r'/admin/?' or r'/admin/{0,1}'? Note that I'm only talking about the regex here; I haven't verified how Tornado's router treats it.
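A quick sketch of that pattern in a Tornado routing table; the handler body is a stub standing in for AdminController.Index:

```python
# Sketch: one regex route matching both /admin and /admin/ in Tornado.
import tornado.ioloop
import tornado.web

class AdminIndex(tornado.web.RequestHandler):
    def get(self):
        self.write("admin index")

application = tornado.web.Application([
    (r"/admin/?", AdminIndex),   # matches /admin and /admin/
])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
```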
| 0 | 0 | 0 | 0 |
2012-10-18T07:48:00.000
| 2 | 1.2 | true | 12,949,634 | 1 | 0 | 1 | 1 |
i currently have this in my routing:
(r"/admin", AdminController.Index ),
(r"/admin/", AdminController.Index ),
how do i merge them with just one line and have admin and admin/ go to AdminController.Index?
i know this could be achieved via regex, but it doesnt seem to work
|
Why does SimpleHTTPServer redirect to ?querystring/ when I request ?querystring?
| 25,786,569 | 5 | 13 | 6,427 | 0 |
python,simplehttpserver,webdev.webserver
|
The right way to do this, to ensure that the query parameters remain as they should, is to make sure you do a request to the filename directly instead of letting SimpleHTTPServer redirect to your index.html
For example, http://localhost:8000/?param1=1 does a redirect (301) and changes the URL to http://localhost:8000/?param1=1/, which messes up the query parameter.
However http://localhost:8000/index.html?param1=1 (making the index file explicit) loads correctly.
So just not letting SimpleHTTPServer do a url redirection solves the problem.
| 0 | 0 | 1 | 0 |
2012-10-18T11:26:00.000
| 3 | 0.321513 | false | 12,953,542 | 0 | 0 | 1 | 1 |
I like to use Python's SimpleHTTPServer for local development of all kinds of web applications which require loading resources via Ajax calls etc.
When I use query strings in my URLs, the server always redirects to the same URL with a slash appended.
For example /folder/?id=1 redirects to /folder/?id=1/ using a HTTP 301 response.
I simply start the server using python -m SimpleHTTPServer.
Any idea how I could get rid of the redirecting behaviour? This is Python 2.7.2.
|
Merge (append) GAE datastores from different apps
| 12,958,199 | 0 | 1 | 52 | 0 |
python,google-app-engine,google-cloud-datastore
|
You can use URLFetch in App2 to request all the data you need from App1 and process it to create your merged result. It is quite easy in App1 to serve and serialize the entities (with a cursor) as JSON for the data exchange.
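A rough sketch of the App2 side, assuming a hypothetical /export endpoint on App1 that returns JSON items plus a cursor; the endpoint, fields and model are placeholders:

```python
# Sketch for App2: pull entities from a hypothetical JSON export endpoint on App1
# and append them to App2's datastore. Endpoint, fields and model are placeholders.
import json
from google.appengine.api import urlfetch
from google.appengine.ext import ndb

class SomeModel(ndb.Model):
    name = ndb.StringProperty()

def import_from_app1(cursor=None):
    url = "https://app1-id.appspot.com/export?kind=SomeModel"
    if cursor:
        url += "&cursor=" + cursor
    result = urlfetch.fetch(url, deadline=60)
    payload = json.loads(result.content)
    ndb.put_multi([SomeModel(name=item["name"]) for item in payload["items"]])
    if payload.get("next_cursor"):
        import_from_app1(payload["next_cursor"])
```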
| 0 | 1 | 0 | 0 |
2012-10-18T14:06:00.000
| 2 | 0 | false | 12,956,582 | 0 | 0 | 1 | 1 |
I have 2 different apps that are basically identical but I had to setup a 2nd app because of GAE billing issues.
I want to merge the datastore data from the 1st app with the 2nd apps data store. By merge, I simply want to append the 2 data stores. To help w/visualise
App1:
SomeModel
AnotherModel
App2:
SomeModel
Another Model
I want app 2's datastore to be the sum of app2 and app1. The only way I see to transfer data from one app to another on the app engine administration page will overwrite the target destination data... I don't want to overwrite. thx for any help
|
How do I make Django admin URLs accessible to localhost only?
| 12,960,567 | 2 | 3 | 2,696 | 0 |
python,django,apache,wsgi,django-wsgi
|
I'd go for the Apache configuration plus a proxy in front plus a restriction in WSGI:
I dislike Apache for communicating with web clients when dynamic content generation is involved. Because of its execution model, a slow or disconnected client can tie up the Apache process. If you have a proxy in front (I prefer nginx, but even a vanilla Apache will do), the proxy will worry about the clients and Apache can focus on new dynamic content requests.
Depending on your Apache configuration, a process can also slurp a lot of memory and hold onto it until it hits MaxRequests. If you have memory-intensive code in /admin (many people do), you can end up with Apache processes that grab a lot more memory than they need. If you split your Apache config into /admin and /!admin, you can tweak your Apache settings to have a larger number of /!admin servers which require a smaller potential footprint.
I'm paranoid about server setups.
I want to ensure the proxy only sends /admin to a certain Apache port.
I want to ensure that Apache only receives /admin on a certain Apache port, and that it came from the proxy (with a secret header) or from localhost.
I want to ensure that the WSGI is only running the /admin stuff based on certain server/client conditions.
| 0 | 0 | 0 | 0 |
2012-10-18T17:19:00.000
| 2 | 0.197375 | false | 12,960,212 | 0 | 0 | 1 | 1 |
What is the simplest way to make Django /admin/ urls accessible to localhost only?
Options I have thought of:
Seperate the admin site out of the project (somehow) and run as a different virtual host (in Apache2)
Use a proxy in front of the hosting (Apache2) web server
Restrict the URL in Apache within WSGI somehow.
Is there a standard approach?
Thanks!
|
Session variables on Ruby when called through a Python web service
| 12,966,697 | 2 | 1 | 95 | 0 |
python,ruby,session-state,web.py
|
Usually a session cookie is used to keep track of the session between the client (in your case the Python web.py part) and the server (in this case the Ruby service). Make sure you save the session cookie in your Python code when making the first request to the Ruby service, and send that cookie along when making subsequent requests; Ruby will then treat them as the same session.
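A small sketch of keeping the cookie jar between calls on the Python side; the Ruby service URLs are placeholders:

```python
# Sketch: reuse one cookie-aware opener for every call to the Ruby service,
# so its session cookie is sent back on subsequent requests.
# The service URLs are placeholders.
import cookielib
import urllib2

cookie_jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie_jar))

first = opener.open("http://ruby-service.local/api/login")   # Ruby sets the session cookie
second = opener.open("http://ruby-service.local/api/data")   # same session, cookie sent back
```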
| 0 | 0 | 0 | 0 |
2012-10-19T02:34:00.000
| 1 | 0.379949 | false | 12,966,573 | 0 | 0 | 1 | 1 |
I have an app running on python web.py. For a couple of data items, I contact a ruby service. I was wondering if I could, in some way, keep track of the session in ruby as well.
Every time the python calls the ruby service, it treats it as a different session.
Any help?
Thanks!
PS - This might not be trivial and I might have to pass the session variables as parameters to the ruby but I am curious. Having this will save me some time when I start refactoring.
|
Efficient way to do large IN query in Google App Engine?
| 12,980,347 | 0 | 3 | 176 | 1 |
python,google-app-engine
|
I misunderstood part of your problem; I thought you were issuing a query that was giving you 250 entities.
I see what the problem is now: you're issuing an IN query with a list of 250 phone numbers, and behind the scenes the datastore is actually doing 250 individual queries, which is why you're getting 250 read ops.
I can't think of a way to avoid this. I'd recommend avoiding searches on long lists of phone numbers. This seems like something you'd need to do only once, the first time the user logs in from that phone. Try to find some way to store the results and avoid repeating the query.
| 0 | 1 | 0 | 0 |
2012-10-19T14:43:00.000
| 3 | 0 | false | 12,976,652 | 0 | 0 | 1 | 1 |
A user accesses his contacts on his mobile device. I want to send back to the server all the phone numbers (say 250), and then query for any User entities that have matching phone numbers.
A user has a phone field which is indexed. So I do User.query(User.phone.IN(phone_list)), but I just looked at AppStats, and is this damn expensive. It cost me 250 reads for this one operation, and this is something I expect a user to do often.
What are some alternatives? I suppose I can set the User entity's id value to be his phone number (i.e when creating a user I'd do user = User(id = phone_number)), and then get directly by keys via ndb.get_multi(phones), but I also want to perform this same query with emails too.
Any ideas?
|
Modify a Google App Engine entity id?
| 13,069,255 | 3 | 2 | 1,404 | 0 |
python,google-app-engine
|
The entity ID forms part of the primary key for the entity, so there's no way to change it. Changing it is identical to creating a new entity with the new key and deleting the old one - which is one thing you can do, if you want.
A better solution would be to create a PhoneNumber kind that provides a reference to the associated User, allowing you to do lookups with get operations, but not requiring every user to have exactly one phone number.
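A hedged sketch of that PhoneNumber kind with NDB; the model and field names are placeholders:

```python
# Sketch: a lookup kind keyed by phone number that points at the User entity,
# so phone lookups become cheap gets instead of queries. Names are placeholders.
from google.appengine.ext import ndb

class User(ndb.Model):
    name = ndb.StringProperty()

class PhoneNumber(ndb.Model):
    user = ndb.KeyProperty(kind=User)

def link_phone(user_key, phone):
    PhoneNumber(id=phone, user=user_key).put()

def users_for_phones(phones):
    lookups = ndb.get_multi([ndb.Key(PhoneNumber, p) for p in phones])
    return ndb.get_multi([l.user for l in lookups if l is not None])
```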
| 0 | 1 | 0 | 0 |
2012-10-19T15:05:00.000
| 1 | 1.2 | true | 12,977,110 | 0 | 0 | 1 | 1 |
I'm using Google App Engine NDB. Sometimes I will want to get all users with a phone number in a specified list. Using queries is extremely expensive for this, so I thought I'll just make the id value of the User entity the phone number of the user so I can fetch directly by ids.
The problem is that the phone number field is optional, so initially a User entity is created without a phone number, and thus no value for id. So it would be created user = User() as opposed to user = User(id = phone_number).
So when a user at a later point decides to add a phone number to his account, is there anyway to modify that User entity's id value to the new phone number?
|
Extending ArcGIS
| 32,124,775 | 0 | 0 | 727 | 0 |
java,python,visual-studio-2010,arcgis,arcobjects
|
Creating a Python AddIn is probably the quickest and easiest approach if you just want to do some geoprocessing and deploy the tool to lots of users.
But as soon as you need a user interface (that does more than simply select GIS data sources) you should create a .Net AddIn (using either C# or VB.net).
I've created many AddIns over the years and they are a dramatic improvement to the old ArcGIS "plugins" that involved lots of complicated COM registration. AddIns are easy to build and deploy. Easy for users to install and uninstall.
.Net has excellent, powerful features for creating rich user interfaces with the kind of drag and drop that you require. And there are great books, forums, samples to leverage.
| 0 | 0 | 0 | 1 |
2012-10-20T17:52:00.000
| 3 | 0 | false | 12,991,111 | 0 | 0 | 1 | 1 |
I've been tasked with a thesis project where i have to extend the features of ArcGis. I've been asked to create a model written in Python that can run out of ArcGIS 10. This model will have a simple user interface where the user can drag/drop a variety of shapefiles and enter the values for particular variables in order for the model to run effectively. Once the model has finished running, a new shapefile is created that lays out the most cost effective Collector Cable route for a wind turbine from point A to point B.
I'd like to know if such a functionality/ extension already exists in ArcGIS so i don't have to re-invent the wheel. If not then what is the best programming language to learn to extend ArcGIS for this (Python vs Visual basic vs Java). My background is Java, PHP, Jquery and Javascript. Also any pointers in the right direction i.e documentation, resources etc would be hugely appreciated
|
Boto's DynamoDB API returns wrong value when using item_count
| 13,003,481 | 5 | 3 | 592 | 0 |
python,amazon-web-services,boto,amazon-dynamodb
|
The item_count value is only updated every six hours or so. So, I think boto is returning you the value as it is returned by the service but that value is probably not up to date.
| 0 | 0 | 1 | 0 |
2012-10-21T15:33:00.000
| 1 | 1.2 | true | 12,999,262 | 0 | 0 | 1 | 1 |
I'm using the python framework Boto to interact with AWS DynamoDB. But when I use "item_count" it returns the wrong value. What's the best way to retrieve the number of items in a table? I know it would be possible using the Scan operation, but this is very expensive on resources and can take a long time if the table is quiet large.
|
What is a good cms that is postgres compatible, open source and either php or python based?
| 13,003,890 | 1 | 5 | 5,703 | 1 |
postgresql,content-management-system,python-2.7
|
Have you tried Drupal? It supports PostgreSQL, is written in PHP, and is open source.
| 0 | 0 | 0 | 0 |
2012-10-21T16:56:00.000
| 2 | 0.099668 | false | 13,000,007 | 0 | 0 | 1 | 1 |
Php or python
Use and connect to our existing postgres databases
open source / or very low license fees
Common features of cms, with admin tools to help manage / moderate community
have a large member base on very basic site where members provide us contact info and info about their professional characteristics. About to expand to build new community site (to migrate our member base to) where the users will be able to msg each other, post to forums, blog, share private group discussions, and members will be sent inivitations to earn compensation for their expertise. Profile pages, job postings, and video chat would be plus.
Already have a team of admins savvy with web apps to help manage it but our developer resources are limited (3-4 programmers) and looking to save time in development as opposed to building our new site from scratch.
|
Python: sending and receiving large files over POST using cherrypy
| 26,299,500 | 0 | 3 | 4,776 | 0 |
python,http,post,upload,cherrypy
|
Huge file uploads are always problematic. What would you do if the connection closed in the middle of the upload? Use a chunked file upload method instead.
| 0 | 0 | 1 | 0 |
2012-10-21T22:07:00.000
| 2 | 0 | false | 13,002,676 | 0 | 0 | 1 | 1 |
I have a cherrypy web server that needs to be able to receive large files over http post. I have something working at the moment, but it fails once the files being sent gets too big (around 200mb). I'm using curl to send test post requests, and when I try to send a file that's too big, curl spits out "The entity sent with the request exceeds the maximum allowed bytes." Searching around, this seems to be an error from cherrypy.
So I'm guessing that the file being sent needs to be sent in chunks? I tried something with mmap, but I couldn't get it too work. Does the method that handles the file upload need to be able to accept the data in chunks too?
|
How can I defer the execution of Celery tasks?
| 36,787,909 | 2 | 18 | 25,567 | 0 |
python,django,celery,django-celery
|
I think you are trying to avoid a race condition between your own script and the workers, not asking for a method to delay a task run.
In that case you can create a wrapper task, and in that task call each of your tasks with .apply(), not .apply_async() or .delay(), so that they run sequentially.
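A minimal sketch of that wrapper, using the old-style task decorator from that era; task names and bodies are placeholders:

```python
# Sketch: a wrapper task that runs other tasks sequentially with .apply(),
# instead of fanning them out with .delay()/.apply_async().
# Task names and bodies are placeholders.
from celery import task

@task
def step_one(item_id):
    pass  # real work here

@task
def step_two(item_id):
    pass  # real work here

@task
def run_pipeline(item_id):
    step_one.apply(args=(item_id,))   # runs inline, in order
    step_two.apply(args=(item_id,))
```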
| 0 | 1 | 0 | 0 |
2012-10-22T06:42:00.000
| 3 | 0.132549 | false | 13,006,151 | 0 | 0 | 1 | 1 |
I have a small script that enqueues tasks for processing. This script makes a whole lot of database queries to get the items that should be enqueued. The issue I'm facing is that the celery workers begin picking up the tasks as soon as it is enqueued by the script. This is correct and it is the way celery is supposed to work but this often leads to deadlocks between my script and the celery workers.
Is there a way I could enqueue all my tasks from the script but delay execution until the script has completed or until a fixed time delay?
I couldn't find this in the documentation of celery or django-celery. Is this possible?
Currently as a quick-fix I've thought of adding all the items to be processed into a list and when my script is done executing all the queries, I can simply iterate over the list and enqueue the tasks. Maybe this would resolve the issue but when you have thousands of items to enqueue, this might be a bad idea.
|
Is sleep() blocking the handling of requests in Django?
| 13,017,620 | 2 | 5 | 1,668 | 0 |
python,django
|
I would imagine that calling sleep() blocks the execution of all Django code in most cases. However, it might depend on the deployment architecture (e.g. gevent, gunicorn, etc.). For instance, if you are using a server which fires a Django thread for each request, then no, it will not block all the code.
In most cases, however, using something like Celery would be a much better solution because (1) you don't reinvent the wheel and (2) it has been tested.
| 0 | 0 | 0 | 0 |
2012-10-22T18:24:00.000
| 1 | 0.379949 | false | 13,017,421 | 0 | 0 | 1 | 1 |
In Django, if the view uses a sleep() function while answering a request, does this block the handling of the whole queue of requests?
If so, how can I delay an HTTP answer without this blocking behavior? Can we do that out of the box and avoid using a job queue like Celery?
|
AuthAlreadyAssociated Exception in Django Social Auth
| 13,032,929 | 16 | 18 | 9,224 | 0 |
python,django,django-socialauth
|
DSA doesn't log out accounts (or flush sessions) at the moment. AuthAlreadyAssociated highlights the scenario where the current user is not associated with the social account currently being used. There are a couple of solutions that might suit your project:
Define a subclass of social_auth.middleware.SocialAuthExceptionMiddleware and override the default behavior (process_exception()) to redirect or set up the warning you like in the way you prefer.
Add a pipeline method (replacing social_auth.backend.pipeline.social.social_auth_user) that logs out the current user instead of raising an exception.
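A hedged sketch of the first option; the exception import path is an assumption and both paths may differ between django-social-auth versions:

```python
# Sketch of option 1: subclass the exception middleware named above and
# redirect instead of letting AuthAlreadyAssociated bubble up.
# Import paths follow the answer and may vary by django-social-auth version.
from django.contrib import messages
from django.shortcuts import redirect
from social_auth.middleware import SocialAuthExceptionMiddleware
from social_auth.exceptions import AuthAlreadyAssociated

class RedirectOnAlreadyAssociated(SocialAuthExceptionMiddleware):
    def process_exception(self, request, exception):
        if isinstance(exception, AuthAlreadyAssociated):
            messages.warning(request, "That social account is already linked to another user.")
            return redirect("/accounts/login/")
        return super(RedirectOnAlreadyAssociated, self).process_exception(request, exception)
```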
| 0 | 0 | 0 | 0 |
2012-10-22T19:12:00.000
| 5 | 1.2 | true | 13,018,147 | 0 | 0 | 1 | 1 |
After I create a user using say Facebook(let's say fbuser) or Google(googleuser). If I create another user through the normal django admin(normaluser), and try logging again using Facebook or Google while third user(normaluser) is logged in, it throws an error exception AuthAlreadyAssociated.
Ideally it should throw an error called you are already logged in as
user normaluser.
Or it should log out normal user, and try associating with the
account which is already associated with FB or Google, as the case
may be.
How do I implement one of these two above features? All advice welcome.
Also when I try customizing SOCIAL_AUTH_PIPELINE, it is not possible to login with FB or Google, and it forces the login URL /accounts/login/
|
Using IronPython at a hosting company
| 13,024,373 | 0 | 0 | 136 | 0 |
ironpython,web-hosting,shared-hosting
|
IronPython should work in shared hosting environments. I'm assuming they have some sort of partial-trust setup and not a full-trust environment; if it's full trust, there are no issues. If not, it should still work, but it hasn't been as heavily tested. You have to deploy it with your project (in the bin directory), but aside from that, it should just work.
You can use NuGet to add it to your project ("IronPython"), or find the necessary files in the Platforms/Net40 directory of an installation or the zip file.
| 0 | 0 | 0 | 0 |
2012-10-22T23:20:00.000
| 1 | 0 | false | 13,021,375 | 0 | 0 | 1 | 1 |
Does anyone have experience running IronPython in a shared hosting environment? Am using one hosting company but they don't support it. It's a project mixing ASP.NET MVC 4 with IronPython.
I would do a VM somewhere if all else fails, but figured I give this a shot to save a few bucks. #lazystackoverflow
Thanks,
-rob
|
Can i use apache mahout with django application
| 13,696,473 | 1 | 1 | 326 | 0 |
python,django,apache,mahout
|
I think you could build an independent application with Mahout, and your Python application would just be a client.
| 0 | 0 | 0 | 0 |
2012-10-23T03:21:00.000
| 1 | 0.197375 | false | 13,023,103 | 0 | 0 | 1 | 1 |
I am building the web application in python/django.
I need to apply some machine learning algorithms on some data. I know there are libraries available for python. But someone in my company was saying that Mahout is very good toll for that.
i want to know that can i use it with python/django. or i should do that with python libraries only
|
Django-Nonrel(mongo-backend):Model instance modification tracking
| 13,031,452 | 0 | 0 | 132 | 1 |
python,django,mongodb,django-models,django-nonrel
|
After some deep digging into the Django models I was able to solve the problem. The save() method in turn calls the save_base() method, which saves the returned result (the id, in Mongo's case) into self.id. This _id field can then be picked up by overriding the save() method on the model.
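A sketch of that override, assuming a hypothetical log model to write the id into for the other consumer to pick up:

```python
# Sketch: after the normal save populates self.pk (the Mongo _id),
# record it somewhere a consumer can pick it up. ModificationLog is hypothetical.
from django.db import models

class ModificationLog(models.Model):
    collection = models.CharField(max_length=100)
    object_id = models.CharField(max_length=64)

class TrackedModel(models.Model):
    name = models.CharField(max_length=100)

    def save(self, *args, **kwargs):
        super(TrackedModel, self).save(*args, **kwargs)
        # after save_base() has run, self.pk holds the Mongo-assigned _id
        ModificationLog.objects.create(collection="TrackedModel", object_id=str(self.pk))
```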
| 0 | 0 | 0 | 0 |
2012-10-23T06:01:00.000
| 1 | 1.2 | true | 13,024,361 | 0 | 0 | 1 | 1 |
I am using Django non-rel version with mongodb backends. I am interested in tracking the changes that occur on model instances e.g if someone creates/edits or deletes a model instance. Backend db is mongo hence models have an associated "_id" fields with them in the respective collections/dbs.
Now i want to extract this "_id" field on which this modif operation took place. The idea is to write this "_id" field to another db so someone can pick it up from there and know what object was updated.
I thought about overriding the save() method from Django "models.Model" since all my models are derived from that. However the mongo "_id" field is obviously not present there since the mongo-insert has not taken place yet.
Is there any possibility of a pseudo post-save() method that can be called after the save operation has taken place into mongo? Can django/django-toolbox/pymongo provide such a combination?
|
Overriding a Java class from Python using JCC. Is that possible?
| 13,026,184 | 0 | 0 | 148 | 0 |
java,python,pylucene,jcc
|
You could create a proxy class in Python that calls the Java class. Then, on the proxy class, you can override whatever you need.
| 0 | 0 | 0 | 0 |
2012-10-23T07:56:00.000
| 1 | 0 | false | 13,025,856 | 1 | 0 | 1 | 1 |
I'm using JCC to create a Python wrapper for a Java library and I need to override a method from a Java class inside the Python script. Is it possible? How can you do that if it is possible?
|
Django running under python 2.7 on AWS Elastic Beanstalk
| 14,729,865 | 1 | 6 | 1,731 | 0 |
django,python-2.7,mod-wsgi,amazon-elastic-beanstalk
|
To get around the mod_wsgi limitation, you can deploy your application under your own WSGI container, like uWSGI, and add configuration to Apache so that it serves as a reverse proxy for your WSGI container.
You can use container_commands to place your Apache configuration files under /etc/httpd/...
| 0 | 0 | 0 | 0 |
2012-10-23T09:53:00.000
| 4 | 0.049958 | false | 13,027,848 | 0 | 0 | 1 | 1 |
According to the docs, AWS Elastic Beanstalk supports Python 2.6. I wonder if anyone has set up a custom AMI using the EBS backed 64 bit Linux AMI to run django under Python 2.7 on the beanstalk? While most aspects of a set up under 2.7 will probably be straightforward using virtualenv or changing the symlinks, I'm worried about the amazon build of mod_wsgi. I understand that depending on how mod_wsgi has been compiled there may be issues with running it in combination with Python 2.7. I also wonder if there will be any postgreSQL issues...
|
django add relationships to user model
| 13,034,305 | 0 | 3 | 7,527 | 0 |
python,django,django-models
|
Add a ManyToMany relationship from your article model to the User model. Every time a user likes an article, add him to it. The count of that field will be the number of likes on that article.
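A small sketch of both relationships mentioned in the question, in the older Django style of that era; field and related names are placeholder choices:

```python
# Sketch: "like" and "following" relationships on top of the built-in User model.
# Field names and related_names are placeholder choices.
from django.contrib.auth.models import User
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    likes = models.ManyToManyField(User, related_name="liked_articles", blank=True)

class Follow(models.Model):
    follower = models.ForeignKey(User, related_name="following")
    followed = models.ForeignKey(User, related_name="followers")

# usage: article.likes.add(user); article.likes.count() is the number of likes
```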
| 0 | 0 | 0 | 0 |
2012-10-23T15:28:00.000
| 4 | 0 | false | 13,033,979 | 0 | 0 | 1 | 1 |
I have a django project using the built in user model.
I need to add relationships to the user. For now a "like" relationship for articles the user likes and a "following" relationship for other users followed.
What's the best way to define these relationships? The django doc recommends creating a Profile model with a one on one relation to the user to add fields to the user. but given no extra fields will be added to the user profile in my case this is overkill.
Any suggestions?
|
Scraping news sites with Python
| 13,040,391 | 2 | 2 | 570 | 0 |
python,json,reddit
|
Would it make sense to keep the Python scraper application running on
it's own server, which then writes the scraped URL's to the database?
Yes, that is a good idea. I would set up a cron job to run the program every so often. Depending on the load you're expecting, it doesn't necessarily need to be on its own server. I would have it as its own application.
I heard it may make sense to split the application and one does the
reading while the other does the writing, whats this about?
I am assuming the person who said this meant that you should have an application to write to your database (your python script) and an application to read URLs from the database (your WordPress wrapper, or perhaps another Python script to write something WordPress can understand).
What would the flow of the Python code look like? I can fumble my way
around writing it but I just am not entirely sure on how it should
flow.
This is a somewhat religious matter among programmers. However I feel that your program should be simple enough. I would simply grab the JSON and have a query that inserts into the database if the entry doesn't exist yet.
What else am I not thinking of here, any tips?
I personally would use urllib2 and MySQLdb modules for the Python script. Good luck!
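A rough sketch of the script with the modules suggested above; the DB credentials, table name and columns are placeholders, and the INSERT IGNORE assumes a unique key on reddit_id:

```python
# Sketch: fetch reddit's JSON listing and insert new URLs into MySQL.
# DB credentials, table name and columns are placeholders.
import json
import urllib2
import MySQLdb

def fetch_top(subreddit="all"):
    req = urllib2.Request("http://www.reddit.com/r/%s/.json" % subreddit,
                          headers={"User-Agent": "my-scraper/0.1"})
    listing = json.loads(urllib2.urlopen(req).read())
    return [child["data"] for child in listing["data"]["children"]]

def store(posts):
    db = MySQLdb.connect(host="localhost", user="scraper", passwd="secret", db="links")
    cur = db.cursor()
    for post in posts:
        cur.execute("INSERT IGNORE INTO links (reddit_id, title, url) VALUES (%s, %s, %s)",
                    (post["id"], post["title"], post["url"]))
    db.commit()
    db.close()

if __name__ == "__main__":
    store(fetch_top())
```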
| 0 | 0 | 1 | 0 |
2012-10-23T22:04:00.000
| 1 | 1.2 | true | 13,040,048 | 0 | 0 | 1 | 1 |
I'm extremely new to Python, read about half a beginner book for Python3. I figure doing this will get me going and learning with something I actually want to do instead of going through some "boring" exercises.
I'm wanting to build an application that will scrape Reddit for the top URL's and then post these onto my own page. It would only check a couple times a day so no hammering at all here.
I want to parse the Reddit json (http://www.reddit.com/.json) and other subreddits json into URL's that I can organize into my own top list and have my own categories as well on my page so I don't have to keep visiting Reddit.
The website will be a Wordpress template with the DB hosted on it's own server (mysql). I will be hosting this on AWS using RDS, ELB, Auto-scaling, and EC2 instances for the webservers.
My questions are:
-Would it make sense to keep the Python scraper application running on it's own server, which then writes the scraped URL's to the database?
-I heard it may make sense to split the application and one does the reading while the other does the writing, whats this about?
-What would the flow of the Python code look like? I can fumble my way around writing it but I just am not entirely sure on how it should flow.
-What else am I not thinking of here, any tips?
|
Webapp2 + WTForms issue: How to pass values and errors back to user?
| 13,051,668 | 1 | 1 | 348 | 0 |
python,google-app-engine,webapp2,wtforms
|
I think this will work if the routes are part of the same app.
But why not use a single handler with get and post and a method _create, which can be called (self._create, instead of a redirect) by both get and post to render the template with the form? It is faster than a browser redirect and you can pass arguments in an easy way.
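A small sketch of that single-handler layout; the form class, route and rendering are placeholder stand-ins for your own templates:

```python
# Sketch: one handler where both get() and post() fall back to _create()
# to render the form, so validation errors and values survive without a redirect.
# The form, route and rendering are placeholders.
import webapp2
from wtforms import Form, TextField, validators

class ItemForm(Form):
    title = TextField("Title", [validators.Required()])

class ItemHandler(webapp2.RequestHandler):
    def get(self):
        self._create(ItemForm())

    def post(self):
        form = ItemForm(self.request.POST)
        if form.validate():
            # ... save the item here ...
            return self.redirect("/items")
        self._create(form)   # re-render with errors and entered values

    def _create(self, form):
        # stand-in for real template rendering
        self.response.write("<form method='post'>%s<button>Save</button></form>" % form.title())

app = webapp2.WSGIApplication([("/items/new", ItemHandler)])
```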
| 0 | 0 | 0 | 0 |
2012-10-24T12:45:00.000
| 1 | 1.2 | true | 13,049,515 | 0 | 0 | 1 | 1 |
I am having a problem with webapp2 and wtforms. More specifically I have defined two methods in two different handlers, called:
create, which is a GET method listening to a specific route
save, which is a POST method listening to another route
In the save method I validate my form and if fails, I want to redirect to the create method via the redirect_to method, where I can render the template with the form. Is this possible with any way? I found an example on how this can be done if the same handler with get and post methods, but is this possible in methods of different handlers?
Thanks in advance!
|
script to open web browser and enter data
| 13,073,177 | 1 | 1 | 1,062 | 0 |
python
|
There are a number of tools out there for this purpose. For example, Selenium, which even has a package on PyPI with Python bindings for it, will do the job.
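A brief sketch with the Selenium Python bindings; the URL and element names are placeholders for the real site:

```python
# Sketch: drive a real browser, fill a form and submit it with Selenium.
# The URL and element names are placeholders.
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com/signup")

driver.find_element_by_name("first_name").send_keys("Jane")
driver.find_element_by_name("email").send_keys("jane@example.com")
driver.find_element_by_name("submit").click()

# repeat find/send_keys/click for the next page of forms, then:
driver.quit()
```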
| 0 | 0 | 1 | 0 |
2012-10-25T16:31:00.000
| 2 | 1.2 | true | 13,073,147 | 0 | 0 | 1 | 1 |
I am not sure if this is possible, but I was wondering if it would be possible to write a script or program that would automatically open up my web browser, go to a certain site, fill out information, and click "send"? And if so, where would I even begin? Here's a more detailed overview of what I need:
Open browser
Go to website
Fill out a series of forms
Click OK
Fill out more forms
Click OK
Thank you all in advance.
|
South initial migrations are not forced to have a default value?
| 13,085,822 | 1 | 0 | 107 | 1 |
python,django,postgresql,django-south
|
If you add a column to a table, which already has some rows populated, then either:
the column is nullable, and the existing rows simply get a null value for the column
the column is not nullable but has a default value, and the existing rows are updated to have that default value for the column
To produce a non-nullable column without a default, you need to add the column in multiple steps. Either:
add the column as nullable, populate the defaults manually, and then mark the column as not-nullable
add the column with a default value, and then remove the default value
These are effectively the same, they both will go through updating each row.
I don't know South, but from what you're describing, it is aiming to produce a single DDL statement to add the column, and doesn't have the capability to add it in multiple steps like this. Maybe you can override that behaviour, or maybe you can use two migrations?
By contrast, when you are creating a table, there clearly is no existing data, so you can create non-nullable columns without defaults freely.
| 0 | 0 | 0 | 0 |
2012-10-26T11:00:00.000
| 2 | 1.2 | true | 13,085,658 | 0 | 0 | 1 | 2 |
I see that when you add a column and want to create a schemamigration, the field has to have either null=True or default=something.
What I don't get is that many of the fields that I've written in my models initially (say, before initial schemamigration --init or from a converted_to_south app, I did both) were not run against this check, since I didn't have the null/default error.
Is it normal?
Why is it so? And why is South checking this null/default thing anyway?
|
South initial migrations are not forced to have a default value?
| 13,085,826 | 0 | 0 | 107 | 1 |
python,django,postgresql,django-south
|
When you have existing records in your database and you add a column to one of your tables, you have to tell the database what to put in there; South can't read your mind :-)
So unless you mark the new field null=True or opt in to a default value, it will raise an error. If you had an empty database, there are no values to be set, but a model field would still require basic properties. If you look deeper at the field class you're using, you will see Django sets some default values, like max_length and null (depending on the field).
| 0 | 0 | 0 | 0 |
2012-10-26T11:00:00.000
| 2 | 0 | false | 13,085,658 | 0 | 0 | 1 | 2 |
I see that when you add a column and want to create a schemamigration, the field has to have either null=True or default=something.
What I don't get is that many of the fields that I've written in my models initially (say, before initial schemamigration --init or from a converted_to_south app, I did both) were not run against this check, since I didn't have the null/default error.
Is it normal?
Why is it so? And why is South checking this null/default thing anyway?
|
A Django library for Gmail
| 13,087,381 | 0 | 3 | 1,086 | 0 |
python,django,gmail,gmail-imap
|
I would suggest you to look at context.io, I've used it before and it works great.
| 0 | 0 | 0 | 1 |
2012-10-26T11:18:00.000
| 2 | 0 | false | 13,085,946 | 0 | 0 | 1 | 1 |
I'm looking for an API or library that gives me access to all features of Gmail from a Django web application.
I know I can receive and send email using IMAP or POP3. However, what I'm looking for are all the GMail features such as marking emails with star or important marker, adding or removing tags, etc.
I know there is a Settings API that allows me to create or delete labels and filters, but I haven't found anything that actually allows me to set labels to emails, or set emails as starred, and so on.
Can anyone give me a pointer?
|
zipped packages and in memory storage strategies
| 13,126,025 | 1 | 1 | 57 | 0 |
google-app-engine,memory-management,python-2.7,multi-tenant
|
I agree with Nick: there should be no Python code in the tenant-specific zip. To solve the memory issue I would cache most of the pages in the datastore; to serve them you don't need to have all tenants loaded in your instances. You might also want to look into pre-generating HTML views on save rather than on request.
| 0 | 0 | 0 | 0 |
2012-10-26T16:06:00.000
| 1 | 0.197375 | false | 13,090,476 | 1 | 0 | 1 | 1 |
I have a multitenant app with a zipped package for each tenant/client which contains the templates and handlers for the public site for each of them. Right now I have under 50 tenants and it's fine to keep the imported apps in memory after the first request to that specific client's domain.
This approach works well, but I have to redeploy the app with the new client's zipped package every time I make changes and/or a new client gets added.
Now I'm working to make it possible to upload those packages via web upload and store them in the blobstore.
My concerns now are:
Getting the packages from the blobstore is of course slower than importing a zipped package from the filesystem,
but this is not the biggest issue.
How do I load/import a module that is not in the filesystem and has no path?
If every client's package is around 1 MB it's not a problem as long as the client base is low, but what if it rises
to 1k or even more? Obviously then I don't have enough memory to store a few GB of data in memory.
What is the best way to deal with this?
If I use the instance memory to store the previously loaded tenant package in memory, how would I
invalidate the data in memory when a newly uploaded package arrives?
I would appreciate some thoughts about how to deal with this kind of situation.
|
ManyToMany in Django admin: select none
| 13,090,557 | 11 | 3 | 2,338 | 0 |
python,django,django-admin
|
This sounds like a browser issue rather than a Django issue.
To unselect an element in a multiple select, press the Ctrl key (linux / windows) or the Command key (mac) when you click on it.
| 0 | 0 | 0 | 0 |
2012-10-26T16:06:00.000
| 1 | 1.2 | true | 13,090,479 | 0 | 0 | 1 | 1 |
Having A = ManyToManyField(B, null=True, blank=True), when I go to A's admin page, it seems I can't unselect all entries in the ManyToMany box after having clicked on a B element.
And even if I don't click on any entry, there is a related B element selected after saving (the first B element I guess).
But I want to add A elements without having to relate them to any one of B...
Is there any way to say to Django admin to select no element? (other than creating a dummy B element for those situations)
|
Django get client's domain name/host name
| 38,819,664 | 2 | 2 | 10,741 | 0 |
python,django,gethostbyaddr
|
You can just print HttpRequest.META and find what you want; I think request.META['HTTP_ORIGIN'] is the thing you need, as it matches the value in the browser address bar.
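If what you are really after is the PHP gethostbyaddr() equivalent, a rough sketch would be a reverse DNS lookup on the client IP (the view name here is made up):
import socket
from django.http import HttpResponse

def whoami(request):
    origin = request.META.get('HTTP_ORIGIN')   # only present on some requests
    ip = request.META.get('REMOTE_ADDR')       # client IP as seen by the server
    try:
        hostname = socket.gethostbyaddr(ip)[0] # reverse DNS, like PHP's gethostbyaddr()
    except socket.herror:
        hostname = ip                          # no reverse DNS record for this IP
    return HttpResponse("origin=%s ip=%s host=%s" % (origin, ip, hostname))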
| 0 | 0 | 0 | 0 |
2012-10-26T20:25:00.000
| 3 | 0.132549 | false | 13,093,951 | 0 | 0 | 1 | 1 |
What is the easiest way to obtain the user's host/domain name, if available?
Or is there a function to lookup the IP address of the user to see if it is bound to a named address? (i.e. gethostbyaddr() in PHP)
HttpRequest.get_host() only returns the IP address of the user.
|
Do as many copies as the number of revisions exist for a file in plone?
| 13,676,795 | 0 | 4 | 122 | 0 |
python,plone,zope
|
If you change the content in any way (or just re-save it) a duplicate of the object is created (which allows you to undo later). If you change only the metadata (like the title) the object is usually not duplicated.
These duplicated "backup" copies are removed (and the undo option for them) whenever the database is packed.
These rules depend on the object being persistent, which covers almost all normal Zope (and Plone) objects. Some exceptions may exist, but they are rare.
| 0 | 0 | 0 | 0 |
2012-10-27T06:25:00.000
| 2 | 0 | false | 13,097,843 | 1 | 0 | 1 | 1 |
In plone, how many physical copies of a file (or any content) exist if it is revised say 4 times? I am using plone 4.1 wherein the files and images are stored on the file system.
|
Can I deploy (update) Single Python file to existing Google App Engine application?
| 13,102,795 | 5 | 4 | 1,137 | 0 |
python,google-app-engine,deployment
|
No, there isn't. If you change one file, you need to package and upload the whole application.
| 0 | 1 | 0 | 0 |
2012-10-27T06:52:00.000
| 1 | 1.2 | true | 13,097,975 | 0 | 0 | 1 | 1 |
Is it possible to update a single .py file in an existing GAE app, something like the way we update cron.yaml using appcfg.py update_cron?
Is there any way to update a .py file?
Regards.
|
Installing Django with pip, django-admin not found
| 13,098,637 | 3 | 4 | 7,509 | 0 |
python,django,macos,pip
|
django-admin is not on your PATH. You could search for it with find / -name django-admin.py and add its directory to your PATH in your .profile/.bashrc/whatever. Let me recommend using virtualenv for everything Python-related you do, though. Installing it in a local environment prevents this kind of problem.
Each environment comes with its own Python distribution, so you can keep different versions of Python in different environments. It also ignores globally installed packages with the --no-site-packages flag (which is the default), but this doesn't work properly with packages installed using e.g. Ubuntu's apt-get (they are in dist-packages iirc). Any packages installed using pip or easy_install inside the environment are also only local. This lets you simulate different deployments. But most importantly, it keeps the global environment clean.
| 0 | 0 | 0 | 0 |
2012-10-27T08:19:00.000
| 4 | 0.148885 | false | 13,098,457 | 0 | 0 | 1 | 1 |
I installed python using: brew install python and then eventually pip install Django. However when I try to run django-admin.py startproject test I just get a file not found. What did I forget?
|
Fastest way to write to a log in python
| 13,099,158 | 2 | 1 | 2,514 | 0 |
python,logging,uwsgi,gevent
|
If latency is a crucial factor for your app, unconditionally writing to disk could make things really bad.
If you want to survive a reboot of your server while redis is still down, I see no other solution than writing to disk; otherwise you may want to try a ramdisk.
Are you sure having a second server with a second instance of redis would not be a better choice?
Regarding logging, I would simply use low-level I/O functions as they have less overhead (even if we are talking about very few machine cycles).
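A minimal sketch of that low-level append-to-disk fallback (the log path is just an example):
import os

LOG_PATH = '/var/log/myapp/redis_fallback.log'   # example path
fd = os.open(LOG_PATH, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0644)

def fallback_write(line):
    # One os.write() per full line keeps the call cheap, and short appends
    # to a local file are effectively atomic.
    os.write(fd, line + '\n')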
| 0 | 1 | 0 | 0 |
2012-10-27T09:50:00.000
| 2 | 1.2 | true | 13,099,032 | 0 | 0 | 1 | 1 |
I am using the gevent loop in uWSGI and I write to a redis queue. I get about 3.5 qps. On occasion, there will be an issue with the redis connection, so... if it fails, I write to a file and have a separate process do cleanup later. Because my app is very latency-aware, what is the fastest way to dump to disk in Python? Will Python logging suffice?
|
How can one test appengine/drive/google api based applications?
| 13,125,588 | 1 | 2 | 259 | 0 |
python,google-app-engine,google-drive-api,google-api-python-client
|
There are the mock http and request classes that the apiclient package uses for its own testing. They are in apiclient/http.py and you can see how to use them throughout the test suite.
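A hedged sketch of how those mocks are typically used: replay canned JSON instead of hitting Google's servers. The fixture file names ('drive.json', a saved discovery document, and 'files_list.json') are assumptions; you would save those JSON files yourself.
from apiclient.discovery import build
from apiclient.http import HttpMock

discovery_http = HttpMock('drive.json', {'status': '200'})
service = build('drive', 'v2', http=discovery_http)

request = service.files().list()
response_http = HttpMock('files_list.json', {'status': '200'})
files = request.execute(http=response_http)   # parsed from the fixture, no network call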
| 0 | 0 | 1 | 1 |
2012-10-29T03:42:00.000
| 1 | 1.2 | true | 13,115,599 | 0 | 0 | 1 | 1 |
There are several components involved in auth and the discovery based service api.
How can one test request handlers wrapped with decorators used from oauth2client (eg oauth_required, etc), httplib2, services and uploads?
Are there any commonly available mocks or stubs?
|
Django Queryset for no-operation
| 13,116,348 | 4 | 0 | 126 | 0 |
python,django
|
Model.objects.none() always gives you an empty queryset
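For example ("Book" is a made-up model; none() never touches the database):
from myapp.models import Book   # hypothetical model

qs = Book.objects.none()
list(qs)                 # -> [] with no SQL executed
qs.filter(title='x')     # still a valid (and still empty) queryset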
| 0 | 0 | 0 | 0 |
2012-10-29T05:27:00.000
| 2 | 1.2 | true | 13,116,301 | 0 | 0 | 1 | 1 |
Is there any way to specify a Django queryset which will do nothing but still be a valid queryset? Ideally the empty queryset should not hit the DB at all, and its results would be empty.
|
UWSGI adding double slash to admin login form in Django
| 14,267,828 | 1 | 2 | 768 | 0 |
python,django,uwsgi,mezzanine
|
After a long time I've figured out what the problem is! I had followed some directions on how to set up uwsgi with nginx that said to include a line saying uwsgi_param SCRIPT_NAME /;. The purpose of SCRIPT_NAME is to provide the base path for the UWSGI application, so in this case it serves to double the slashes. I found the same problem occurring in pyramid. I suspect this will occur with any UWSGI application.
| 0 | 0 | 0 | 0 |
2012-10-29T15:43:00.000
| 1 | 1.2 | true | 13,124,913 | 0 | 0 | 1 | 1 |
Running Django behind UWSGI, I have set up an instance of Mezzanine that is almost working perfectly. The only problem is the admin login page does not work properly. If you just try to log in normally than the browser is redirected to http://admin/. The html form action attribute is set to //admin/ instead of /admin/ so the browser sees "admin" as being a domain name instead of a root directory of the current domain.
I've tried wading through the Django and Mezzanine package codes, but I can't see anything in there that should be causing an extraneous slash. I found one web page saying that changing settings.FORCE_SCRIPT_NAME to "/" could cause this, but I am not overriding the default value of None so this shouldn't be the cause.
In urls.py I have the following (which I think is the default):
urlpatterns = patterns("",
# Change the admin prefix here to use an alternate URL for the
# admin interface, which would be marginally more secure.
("^admin/", include(admin.site.urls)),
....
|
How to remove .upload file
| 13,127,985 | 0 | 0 | 594 | 0 |
python,django,ubuntu
|
If the file size is greater than 2.5MB Django will write the uploaded file to your /tmp directory (on Linux) before saving it. After the upload is complete you can remove the file manually or you can have a cron job (or something similar) to remove the temp files automatically.
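A small sketch of getting at that path from your upload handling code (the form field name 'video' is an assumption); for big uploads, the object in request.FILES is a TemporaryUploadedFile:
import os

def handle_upload(request):
    upload = request.FILES['video']                 # assumed field name
    if hasattr(upload, 'temporary_file_path'):      # only files spooled to disk have this
        tmp_path = upload.temporary_file_path()     # e.g. /tmp/tmpzfp6I6.upload
        # ... save/process the file as usual ...
        if os.path.exists(tmp_path):
            os.remove(tmp_path)                     # explicit cleanup if needed
You can also point FILE_UPLOAD_TEMP_DIR in settings.py at a directory you control, which makes the cron-style cleanup easier.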
| 0 | 0 | 0 | 0 |
2012-10-29T18:40:00.000
| 1 | 1.2 | true | 13,127,708 | 0 | 0 | 1 | 1 |
When I use a form to upload a large video to the server, a temp .upload file is created in the /tmp directory. Where is this .upload file created? Can I remove it after the upload is complete? I use Django and Python on Ubuntu.
I check Django documentation for file upload. It says that:
"If an uploaded file is too large, Django will write the uploaded file to a temporary file stored in your system's temporary directory. On a Unix-like platform this means you can expect Django to generate a file called something like /tmp/tmpzfp6I6.upload. If an upload is large enough, you can watch this file grow in size as Django streams the data onto disk."
How do I get Django to remove this file automatically after the upload is complete? And how can I get the path of this temporary .upload file?
Thanks
|
Setting default encoding Openerp/Python
| 13,135,341 | 4 | 0 | 803 | 0 |
python,encoding,openerp
|
The comment # -*- coding: utf-8 -*- tells the python parser the encoding of the source file. It affects how the bytecode compiler converts unicode literals in the source code. It has no effect on the runtime environment.
You should explicitly define the encoding when converting strings to unicode. If you are getting UnicodeDecodeError, post your problem scenario and I'll try to help.
| 0 | 0 | 0 | 0 |
2012-10-30T07:19:00.000
| 1 | 0.664037 | false | 13,134,353 | 0 | 0 | 1 | 1 |
Do you guys know how to change the default encoding of an OpenERP file?
I've tried adding # -*- coding: utf-8 -*- but it doesn't work (is there a setting that ignores this directive? just a wild guess). When I execute sys.getdefaultencoding() it is still ASCII.
Regards
|
How to run a command on an app and get the exit code on cloudfoundry
| 13,139,010 | 1 | 2 | 127 | 0 |
python,ssh,cloud-foundry
|
I assume you mean get the result from outside of CloudFoundry (i.e. not one app launching another app and getting result, stdout and stderr).
You can only access CloudFoundry apps over http(s), so you would have to find a way to wrap your invocation into something that exposes everything you need as http.
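A rough sketch of such a wrapper as a Django view (the URL name and command handling are assumptions, and you would need to lock this endpoint down before deploying anything like it):
import json
import subprocess
from django.http import HttpResponse

def run_command(request):
    cmd = request.POST.get('cmd', 'true')           # the command to run
    proc = subprocess.Popen(cmd, shell=True,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()                   # blocks until the command finishes
    payload = {'exit_code': proc.returncode, 'stdout': out, 'stderr': err}
    return HttpResponse(json.dumps(payload), content_type='application/json')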
| 0 | 1 | 0 | 0 |
2012-10-30T10:18:00.000
| 1 | 0.197375 | false | 13,136,885 | 0 | 0 | 1 | 1 |
We need to run arbitrary commands on cloudfoundry. (The deployed apps are Python/Django, but the language for this solution does not matter). Ideally over ssh, but the protocol does not matter.
We need a reliable way to get the exit code of the command that was run, as well as its stderr and stdout. If possible, the command running should be synchronous (as in, blocks the client until the command finished on the cloudfoundry app).
Is there a solution out there that allows us to do this, or what would be a good way to approach this issue?
|
Call Django view from a template
| 13,139,288 | 0 | 1 | 1,663 | 0 |
python,django,templates,view
|
Map a URL to that view, then redirect (or link) to that URL from the template.
| 0 | 0 | 0 | 0 |
2012-10-30T12:40:00.000
| 2 | 0 | false | 13,139,192 | 0 | 0 | 1 | 1 |
I want to call a Django view from a Django template. Is there a tag to do that ?
For example, I want to create a view which manage my login form. So basically, this view will be used just to send the Django form to the html template.
Thank you for your help.
|
Django app life cycle: standart hooks to handle start and reload?
| 13,145,394 | 2 | 0 | 925 | 0 |
python,django
|
Short answer: no.
The longer version is that it really depends on how your application is deployed. In Java for example, it's not Spring (the equivalent of Django in this analogy) that gives you an onStart hook, it's Tomcat or Jetty.
The usual interface for deploying Django, WSGI, doesn't define such hooks. A WSGI process will generally be launched from a standalone process supervisor or service script, or via an external server such as Apache. In that case you might be able to hook into some lifecycle, but that is highly dependent on the server that's wrapping your requests.
It sounds like you're trying to do something unorthodox. What exactly are you looking to accomplish?
| 0 | 0 | 0 | 0 |
2012-10-30T18:18:00.000
| 1 | 1.2 | true | 13,145,282 | 0 | 0 | 1 | 1 |
I am trying to understand if there are standard means to handle the specific django application start (and reload). Currently I would like to use it to start a parallel thread, but the question for me is more general: is this allowed or not allowed for some reason.
For example, such handlers are a part of the application interface in case of Java Servlets and .Net web applications. Are they a part of the interface of a django application?
UPD
In this case I am just trying to implement a small proxy which keeps an open connection. I do understand that the interface I want would initially be a part of WSGI, but it is not, and I though that django might provide its own solution, since in most cases (in all except plain CGI) the application serves more than a single request and obviously does have a life cycle.
|
Appengine SDK 1.7.3 not detecting updated files
| 13,638,216 | 3 | 3 | 294 | 0 |
python,google-app-engine
|
A similar issue happens with appcfg.py in SDK 1.7.3, where it sometimes skips uploading some files. It looks like this only happens if appcfg.py is run under Python 2.7.
The workaround is to simply run appcfg.py under Python 2.5. Then the upload works reliably.
The code uploaded can still be 2.7-specific; it is only necessary to revert to 2.5 for the step of running the uploader function in appcfg.py.
| 0 | 1 | 0 | 0 |
2012-10-30T22:29:00.000
| 2 | 0.291313 | false | 13,148,512 | 0 | 0 | 1 | 1 |
I just updated to SDK 1.7.3 running on Linux. At the same time I switched to the SQLite datastore stub, as suggested by the deprecation message.
After this, edits to source files are not always detected, and I have to stop and restart the SDK after updating, probably one time in ten. Is anyone else seeing this? Any ideas on how to prevent it?
UPDATE: Changes to python source files are not being detected. I haven't made any modifications to yaml files, and I believe that jinja2 template file modifications are being detected properly.
UPDATE: I added some logging to the dev appserver and found that the file I'm editing is not being monitored. Continuing to trace what is happening.
|
Efficient way to store comments in Google App Engine?
| 13,151,123 | 2 | 2 | 171 | 0 |
python,google-app-engine
|
If comments are threaded, storing them as separate entities might make sense.
If comments can be the target of voting, storing them as separate entities makes sense.
If comments can be edited, storing them as separate entities reduces contention, and avoids having to either do pessimistic locking on all comments, or risk situations where the last edit overwrites prior edits.
If you can page through comments, storing them as separate entities makes sense for multiple reasons, indexing being one.
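A sketch of the separate-entity approach with NDB (kind and property names are made up); keeping the post as the ancestor keeps comment queries cheap and strongly consistent, and lets you page instead of reading everything:
from google.appengine.ext import ndb

class BlogPost(ndb.Model):
    title = ndb.StringProperty()
    body = ndb.TextProperty()

class Comment(ndb.Model):
    author = ndb.StringProperty()
    text = ndb.TextProperty()
    created = ndb.DateTimeProperty(auto_now_add=True)

post_key = BlogPost(title='Hello', body='...').put()
Comment(parent=post_key, author='bob', text='Nice post').put()

# Page through 20 comments at a time instead of loading them all.
comments = Comment.query(ancestor=post_key).order(-Comment.created).fetch(20)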
| 0 | 1 | 0 | 0 |
2012-10-31T00:41:00.000
| 2 | 0.197375 | false | 13,149,663 | 0 | 0 | 1 | 1 |
With Google App Engine, an entity is limited to 1 MB in size. Say I have a blog system, and expect thousands of comments on each article, some paragraphs in lengths. Typically, without a limit, you'd just store all the comments in the same entity as the blog post. But here, there would be concerns about reaching the 1 MB limit.
The other possible way, though far less efficient, is to store each comment as a separate entity, but that would require several, several reads to get all comments instead of just 1 read to get the blog post and its comments (if they were in the same entity).
What's an efficient way to handle a case like this?
|
async db access between requests on GAE/Python
| 13,181,950 | 0 | 0 | 82 | 0 |
python,google-app-engine,asynchronous,google-cloud-datastore
|
You cannot start an async API call in one request and get its result in another. The HTTP serving infrastructure will wait for all API calls started in a request to complete before the HTTP response is sent back; the data structure representing the async API call will be useless in the second request (even if it hits the same instance).
You might try Appstats to figure out what API calls your request is making and see if you can avoid some, use memcache for some, or parallelize.
You might also use NDB which integrates memcache in the datastore API.
| 0 | 1 | 0 | 0 |
2012-10-31T13:25:00.000
| 1 | 0 | false | 13,159,051 | 0 | 0 | 1 | 1 |
I'm trying to optimize my GAE webapp for latency.
The app has two requests which usually come one after another.
Is it safe to start an async db/memcache request during the first request and then use its results inside the following request?
(I'm aware that the second request might hit another instance. It would be handled as a cache miss)
|
Link to python modules in emacs
| 13,160,450 | 2 | 5 | 314 | 0 |
python,django,emacs,ide
|
I also switched from Eclipse to Emacs and I must say that after adjusting to more text-focused ways of exploring code, I don't miss this feature at all.
In Emacs, you can just open a shell prompt (M-x shell). Then run IPython from within the Emacs shell and you're all set. I typically split my screen in half horizontally and make the bottom window thinner, so that it's like the Eclipse console used to be.
I added a feature in my .emacs that lets me "bring to focus" the bottom window and swap it into the top window. So when I am coding, if I come across something where I want to see the source code, I just type C-x c to swap the IPython shell into the top window, and then I type %psource < code thing > and it will display the source.
This covers 95%+ of the use cases I ever had for quickly getting the source in Eclipse. I also don't care about the need to type C-x b or C-x C-f to open the code files. In fact, after about 2 or 3 hours of programming, I find that almost every buffer I could possibly need will already be open, and I just type C-x b < start of file name > and then tab-complete it.
Since I have become more proficient at typing and not needing to move attention away to the mouse, I think this is now actually faster than the "quick" mouse-over plus F3 tactic in Eclipse. And to boot, having IPython open at the bottom is way better than the non-interactive Eclipse console. And you can use things like M-p and M-n to get the forward-backward behavior of IPython in terms of going back through commands.
The one thing I miss is tab completion in IPython. And for this, I think there are some add-ons that will do it but I haven't invested the time yet to install them.
Let me know if you want to see any of the elisp code for the options I mentioned above.
| 0 | 0 | 0 | 1 |
2012-10-31T14:28:00.000
| 3 | 0.132549 | false | 13,160,217 | 0 | 0 | 1 | 1 |
I'm looking into emacs as an alternative to Eclipse. One of my favorite features in Eclipse is being able to mouse over almost any python object and get a listing of its source, then clicking on it to go directly to its code in another file.
I know this must be possible in emacs, I'm just wondering if it's already implemented in a script somewhere and, if so, how to get it up and running on emacs.
Looks like my version is Version 24.2.
Also, since I'll be doing Django development, it would be great if there's a plugin that understands Django template syntax.
|
Why use Tornado and Flask together?
| 13,219,183 | 2 | 43 | 46,076 | 0 |
python,web,webserver,flask,tornado
|
Instead of using Apache as your server, you'll use Tornado (of course as a blocking server, due to the synchronous nature of WSGI).
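A minimal sketch of that setup, wrapping a Flask (WSGI) app in Tornado's HTTP server:
from flask import Flask
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.wsgi import WSGIContainer

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from Flask, served by Tornado'

if __name__ == '__main__':
    # Each Flask request blocks the IOLoop, so this buys you Tornado's server,
    # not its asynchronous behaviour.
    HTTPServer(WSGIContainer(app)).listen(5000)
    IOLoop.instance().start()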
| 0 | 1 | 0 | 0 |
2012-10-31T17:59:00.000
| 4 | 0.099668 | false | 13,163,990 | 0 | 0 | 1 | 1 |
As far as I can tell Tornado is a server and a framework in one. It seems to me that using Flask and Tornado together is like adding another abstraction layer (more overhead). Why do people use Flask and Tornado together, what are the advantages?
|
Can Django admin handle millions of records?
| 13,168,612 | 6 | 1 | 943 | 0 |
python,django,scalability
|
count is fast, and admin pagination for objects is limited to 100 records per page. Sorting on 2 million records could take some time, though, but with 2 million users you can afford some RAM to put some indexes on these fields ;)
To be serious: this is no problem at all as long as your database can handle it.
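For example, indexing the columns the admin changelist sorts and filters on is a one-line change per field (the model and field names here are made up):
from django.db import models

class Profile(models.Model):
    email = models.CharField(max_length=254, db_index=True)       # indexed for filtering
    date_joined = models.DateTimeField(db_index=True)             # indexed for sorting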
| 0 | 0 | 0 | 0 |
2012-10-31T22:45:00.000
| 1 | 1.2 | true | 13,168,523 | 0 | 0 | 1 | 1 |
I am wondering if Django admin can handle a huge amount of records.
Say I have a 2 million users, how the pagination will be handled for example? Will it be dead slow because every time all the records will be counted to be provided to the offset?
Note: I plan to use PostgreSQL.
Thanks.
|
How django handles simultaneous requests with concurrency over global variables?
| 13,172,015 | 2 | 7 | 4,217 | 0 |
python,django,apache,concurrency,mod-wsgi
|
This partly depends on your mod_wsgi configuration. If you configure it to use only one thread per process, then global variables are safe--although I wouldn't recommend using them, for a variety of reasons. In a multi-thread configuration, there is nothing guaranteeing that requests won't get mixed up if you use global variables.
You should be able to find some more local place to stash the data you need between pre_save and post_save. I'd recommend putting some more thought into your design.
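One thread-safe alternative is to stash the pre-save state on the model instance itself instead of a module-level global; a sketch (the Article model and field names are made up):
from django.db.models.signals import pre_save, post_save
from django.dispatch import receiver
from myapp.models import Article   # hypothetical model

@receiver(pre_save, sender=Article)
def remember_old(sender, instance, **kwargs):
    if instance.pk:
        # Stored on the instance, so concurrent requests cannot trample each other.
        instance._old = sender.objects.get(pk=instance.pk)

@receiver(post_save, sender=Article)
def compare(sender, instance, created, **kwargs):
    old = getattr(instance, '_old', None)
    if old is not None and old.title != instance.title:
        pass   # react to the change here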
| 0 | 0 | 0 | 0 |
2012-11-01T06:11:00.000
| 1 | 1.2 | true | 13,171,860 | 0 | 0 | 1 | 1 |
I have a django instance hosted via apache/mod_wsgi. I use pre_save and post_save signals to store the values before and after save for later comparisons. For that I use global variables to store the pre_save values which can be accessed in the post_save signal handler.
My question is, if two requests A and B come together simultaneously requesting a same web service, will it be concurrent? The B should not read the global variable which is written by A and vice versa.
PS: I don't use any threading Lock on variables.
|
Using raw sql in django python
| 13,172,382 | 3 | 0 | 408 | 1 |
python,django
|
You need to use the database's table and field names in the raw query--the string you provide will be passed to the database, not interpreted by the Django ORM.
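So, assuming the app is called "myapp" (giving the default table names myapp_person and myapp_address, with an address_id foreign-key column; these names are assumptions), the join would look something like:
# Person is the model from the question; table/column names below are assumed defaults.
people = Person.objects.raw(
    'SELECT A.id, A.first_name, A.last_name, A.birth_date, B.city '
    'FROM myapp_person A '
    'INNER JOIN myapp_address B ON A.address_id = B.id')
for person in people:
    print person.first_name, person.city   # extra selected columns become attributes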
| 0 | 0 | 0 | 0 |
2012-11-01T06:54:00.000
| 1 | 0.53705 | false | 13,172,331 | 0 | 0 | 1 | 1 |
I have a few things to ask about custom queries in Django:
Do I need to use the DB table name in the query, or just the model name?
If I need to join the various tables in raw SQL, do I need to use the DB field name or the model field name? For example:
Person.objects.raw('SELECT id, first_name, last_name, birth_date FROM Person A
inner join Address B on A.address = B.id
')
or B.id = A.address_id
|
Python- How to flush the log? (django)
| 37,881,645 | 19 | 41 | 53,163 | 0 |
python,django,google-app-engine,logging,django-nonrel
|
If the use case is that you have a python program that should flush its logs when exiting, use logging.shutdown().
From the python documentation:
logging.shutdown()
Informs the logging system to perform an orderly
shutdown by flushing and closing all handlers. This should be called
at application exit and no further use of the logging system should be
made after this call.
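In the question's terms, a small sketch of both options, flushing explicitly after an important message or shutting logging down on the way out (a crash handler is a good place for the latter):
import logging

x = 42   # stand-in for the variable being inspected
logging.debug('x is: %s' % x)

# Option 1: flush every handler right away so the message survives a crash.
for handler in logging.getLogger().handlers:
    handler.flush()

# Option 2: on exit (e.g. in a top-level except block), flush and close everything.
logging.shutdown()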
| 0 | 0 | 0 | 0 |
2012-11-01T11:34:00.000
| 5 | 1 | false | 13,176,173 | 0 | 0 | 1 | 1 |
I'm working with Django-nonrel on Google App Engine, which forces me to use logging.debug() instead of print().
The "logging" module is provided by Django, but I'm having a rough time using it instead of print().
For example, if I need to verify the content held in the variable x, I will put
logging.debug('x is: %s' % x). But if the program crashes soon after (without flushing the stream), then it never gets printed.
So for debugging, I need debug() to be flushed before the program exits on error, and this is not happening.
|
Getting Responsive Django View
| 13,177,553 | 0 | 1 | 457 | 0 |
python,django,django-views,subprocess
|
The easiest approach, I think, is to use Ajax to start the simulator; the response to the start request can then update the same page.
However, you will still have to think about how to pause, resume and stop a simulator started by an earlier request, i.e. how to manage and manipulate the simulator's state.
Maybe you want to keep that state in the DB.
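A rough sketch of such a "start" view that returns immediately, so the Ajax call gets a response while the simulator keeps running (the command, view name and storage choice are assumptions):
import json
import subprocess
from django.http import HttpResponse

def start_simulator(request):
    proc = subprocess.Popen(['python', 'simulator.py'])   # does not wait for completion
    # Persist the pid (in the DB, as suggested above, or the session) so the
    # stop/pause/resume views can find and signal the right process later.
    request.session['simulator_pid'] = proc.pid
    return HttpResponse(json.dumps({'status': 'started', 'pid': proc.pid}),
                        content_type='application/json')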
| 0 | 0 | 0 | 0 |
2012-11-01T12:28:00.000
| 2 | 0 | false | 13,177,087 | 0 | 0 | 1 | 1 |
I have an HTML page which has four buttons in it: Start, Stop, Pause and Resume. The functionality of the buttons is:
Start Button: Starts the backend Simulator. (The Simulator takes around 3 minutes for execution.)
Stop Button: Stops the Simulator.
Pause Button: Pauses the Simulator.
Resume Button: Resumes the Simulator from the paused stage.
Each button, when clicked, calls a separate view function. The problem I'm facing is that when I click the Start button, it starts the Simulator through a function call in the Python view. But as I mentioned, the Simulator takes around 3 minutes to complete its execution. So for those 3 minutes my UI is totally unresponsive: I cannot press the Stop, Pause or Resume buttons until the current Django view has rendered.
So what is the best way to approach this problem? Shall I spawn a non-blocking process for the Simulator, and if so, how can I find out, after the view has rendered, that the newly spawned process has completed its execution?
|
Deleting Blobstore orphans
| 13,187,373 | 1 | 2 | 1,014 | 1 |
google-app-engine,python-2.7,google-cloud-datastore,blobstore
|
You can create an entity that links blobs to users. When a user uploads a blob, you immediately create a new record with the blob id, user id (or post id), and time created. When a user submits a post, you add a flag to this entity, indicating that a blob is used.
Now your cron job needs to fetch all entities of this kind where the flag is not equal to "true" and the creation time is more than one hour ago. Moreover, you can fetch keys only, which is a more efficient operation than fetching full entities.
| 0 | 0 | 0 | 0 |
2012-11-01T22:29:00.000
| 4 | 0.049958 | false | 13,186,494 | 0 | 0 | 1 | 3 |
What is the most efficient way to delete orphan blobs from a Blobstore?
App functionality & scope:
A (logged-in) user wants to create a post containing some normal
datastore fields (e.g. name, surname, comments) and blobs (images).
In addition, the blobs are uploaded asynchronously before the rest
of the data is sent via a POST
This leaves a good chance of having orphans as, for example, a user may upload images but not complete the form for one reason or another. This issue would be minimized by not using an asynchronous upload of the blobs before sending the rest of the data, however, this issue would still be there on a smaller scale.
Possible, yet inefficient solutions:
Whenever a post is completed (i.e. the rest of the data is sent), you add the blob keys to a table of "used blobs". Then, you can run a cron every so often and compare all of the blobs with the table of "used blobs". Those that have been uploaded over an hour ago yet are still "not used" are deleted.
My understanding is that running through a list of potentially hundreds of thousands of blob keys and comparing it with another table of hundreds of thousands of "used blob keys" is very inefficient.
Is there any better way of doing this? I've searched for similar posts yet I couldn't find any mentioning efficient solutions.
Thanks in advance!
|
Deleting Blobstore orphans
| 13,247,039 | 3 | 2 | 1,014 | 1 |
google-app-engine,python-2.7,google-cloud-datastore,blobstore
|
Thanks for the comments. However, I understand those solutions well and find them too inefficient. Querying thousands of entries for those that are flagged as "unused" is not ideal.
I believe I have come up with a better way and would like to hear your thoughts on it:
When a blob is saved, immediately a deferred task is created to delete the same blob in an hour’s time. If the post is created and saved, the deferred task is deleted, thus the blob will not be deleted in an hour’s time.
I believe this saves you from having to query thousands of entries every single hour.
What are your thoughts on this solution?
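In sketch form, the idea looks something like this with the deferred library (function names are made up, and a real implementation would first check whether the blob ended up being used, or delete the scheduled task by name, before actually removing anything):
from google.appengine.ext import blobstore, deferred

def delete_if_orphaned(blob_key):
    # Runs an hour after upload; check your "used" marker here before deleting.
    blobstore.delete(blob_key)

def on_blob_uploaded(blob_key):
    # Schedule the cleanup task 3600 seconds in the future.
    deferred.defer(delete_if_orphaned, blob_key, _countdown=3600)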
| 0 | 0 | 0 | 0 |
2012-11-01T22:29:00.000
| 4 | 1.2 | true | 13,186,494 | 0 | 0 | 1 | 3 |
What is the most efficient way to delete orphan blobs from a Blobstore?
App functionality & scope:
A (logged-in) user wants to create a post containing some normal
datastore fields (e.g. name, surname, comments) and blobs (images).
In addition, the blobs are uploaded asynchronously before the rest
of the data is sent via a POST
This leaves a good chance of having orphans as, for example, a user may upload images but not complete the form for one reason or another. This issue would be minimized by not using an asynchronous upload of the blobs before sending the rest of the data, however, this issue would still be there on a smaller scale.
Possible, yet inefficient solutions:
Whenever a post is completed (i.e. the rest of the data is sent), you add the blob keys to a table of "used blobs". Then, you can run a cron every so often and compare all of the blobs with the table of "used blobs". Those that have been uploaded over an hour ago yet are still "not used" are deleted.
My understanding is that running through a list of potentially hundreds of thousands of blob keys and comparing it with another table of hundreds of thousands of "used blob keys" is very inefficient.
Is there any better way of doing this? I've searched for similar posts yet I couldn't find any mentioning efficient solutions.
Thanks in advance!
|
Deleting Blobstore orphans
| 16,378,785 | 0 | 2 | 1,014 | 1 |
google-app-engine,python-2.7,google-cloud-datastore,blobstore
|
Use drafts! Save a draft after each upload. Then don't do the cleaning: let the user choose for himself what to wipe out.
If you're planning Facebook-style posts, either use drafts or make them private. Why bother deleting users' data?
| 0 | 0 | 0 | 0 |
2012-11-01T22:29:00.000
| 4 | 0 | false | 13,186,494 | 0 | 0 | 1 | 3 |
What is the most efficient way to delete orphan blobs from a Blobstore?
App functionality & scope:
A (logged-in) user wants to create a post containing some normal
datastore fields (e.g. name, surname, comments) and blobs (images).
In addition, the blobs are uploaded asynchronously before the rest
of the data is sent via a POST
This leaves a good chance of having orphans as, for example, a user may upload images but not complete the form for one reason or another. This issue would be minimized by not using an asynchronous upload of the blobs before sending the rest of the data, however, this issue would still be there on a smaller scale.
Possible, yet inefficient solutions:
Whenever a post is completed (i.e. the rest of the data is sent), you add the blob keys to a table of "used blobs". Then, you can run a cron every so often and compare all of the blobs with the table of "used blobs". Those that have been uploaded over an hour ago yet are still "not used" are deleted.
My understanding is that running through a list of potentially hundreds of thousands of blob keys and comparing it with another table of hundreds of thousands of "used blob keys" is very inefficient.
Is there any better way of doing this? I've searched for similar posts yet I couldn't find any mentioning efficient solutions.
Thanks in advance!
|
Search Between Two Models in Django
| 13,198,418 | 1 | 0 | 88 | 0 |
python,django,templates,gis
|
Perhaps you could put it in the view which renders the search page.
Assuming you have a view function like search, you could:
get the user's radius: request.user.get_radius
search for places based on that radius: relevant_places = Places.get_all_places_in_radius
render those places to the user (see the sketch below)
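Put together, a loose sketch of that view; get_radius(), get_midpoint() and get_all_places_in_radius() are hypothetical helpers on your own Users/Places models, matching the functions you describe in the question:
from django.shortcuts import render
from myapp.models import Places   # hypothetical import

def search(request):
    radius = request.user.get_profile().get_radius()               # user's search radius
    center = Places.get_midpoint(request.GET.get('location', ''))  # your midpoint logic
    relevant_places = Places.get_all_places_in_radius(center, radius)
    return render(request, 'search_results.html', {'places': relevant_places})
This keeps the radius comparison in the view/model layer, so the template only loops over an already-filtered list.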
| 0 | 0 | 0 | 0 |
2012-11-02T15:22:00.000
| 3 | 0.066568 | false | 13,198,198 | 0 | 0 | 1 | 2 |
I apologize in advance if this question is too broad, but I need some help conceptualizing.
The end result is that I want to enable radius-based searching. I am using Django. To do this, I have two classes: Users and Places. Inside the Users class is a function that defines the radius in which people want to search. Inside the Places class I have a function that defines the midpoint if someone enters a city and state and not a zip (i.e., if someone enters New York, NY a lot of zipcodes are associated with that so I needed to find the midpoint).
I have those two parts down. So now I have a radius where people want to search and I know (the estimate) of the places. Now, I am having a tremendous amount of difficulty combining the two, or even thinking about HOW to do this.
I attempted doing the searching against each other in the view, but I ran into a lot of trouble when I was looping through one model in the template but trying to display results based on an if statement of the other model.
It seemed like a custom template tags would be the solution for that problem, but I wanted to make sure I was conceptualizing the problem correctly in the first place. I.e.,
Do I want to do the displays based on an if statement in the template?
Or should I be creating another class based on the other two in my models file?
Or should I create a new column for one of the classes in the models file?
I suppose my ultimate question is, based on what it is I want to do (enable radius based searching), where/how should most of the work be done? Again, I apologize if the question is overly broad.
|
Search Between Two Models in Django
| 13,202,154 | 0 | 0 | 88 | 0 |
python,django,templates,gis
|
I just decided to add the function to the view so that the information can be input directly into the model after a user enters it. Thanks for the help. I'll probably wind up looking into geodjango.
| 0 | 0 | 0 | 0 |
2012-11-02T15:22:00.000
| 3 | 0 | false | 13,198,198 | 0 | 0 | 1 | 2 |
I apologize in advance if this question is too broad, but I need some help conceptualizing.
The end result is that I want to enable radius-based searching. I am using Django. To do this, I have two classes: Users and Places. Inside the Users class is a function that defines the radius in which people want to search. Inside the Places class I have a function that defines the midpoint if someone enters a city and state and not a zip (i.e., if someone enters New York, NY a lot of zipcodes are associated with that so I needed to find the midpoint).
I have those two parts down. So now I have a radius where people want to search and I know (the estimate) of the places. Now, I am having a tremendous amount of difficulty combining the two, or even thinking about HOW to do this.
I attempted doing the searching against each other in the view, but I ran into a lot of trouble when I was looping through one model in the template but trying to display results based on an if statement of the other model.
It seemed like a custom template tags would be the solution for that problem, but I wanted to make sure I was conceptualizing the problem correctly in the first place. I.e.,
Do I want to do the displays based on an if statement in the template?
Or should I be creating another class based on the other two in my models file?
Or should I create a new column for one of the classes in the models file?
I suppose my ultimate question is, based on what it is I want to do (enable radius based searching), where/how should most of the work be done? Again, I apologize if the question is overly broad.
|
Silent failure of s3multiput (boto) upload to s3 from EC2 instance
| 13,203,938 | 1 | 3 | 515 | 0 |
python,crontab,boto
|
First, I would try updating boto; a commit to the development branch mentions logging when a multipart upload fails. Note that doing so will require using s3put instead, as s3multiput is being folded into s3put.
| 0 | 0 | 1 | 1 |
2012-11-02T22:13:00.000
| 1 | 0.197375 | false | 13,203,745 | 0 | 0 | 1 | 1 |
I'm trying to automate a process that collects data on one (or more) AWS instance(s), uploads the data to S3 hourly, to be retrieved by a decoupled process for parsing and further action. As a first step, I whipped up some crontab-initiated shell script (running in Ubuntu 12.04 LTS) that calls the boto utility s3multiput.
For the most part, this works fine, but very occasionally (maybe once a week) the file fails to appear in the s3 bucket, and I can't see any error or exception thrown to track down why.
I'm using the s3multiput utility included with boto 2.6.0. Python 2.7.3 is the default python on the instance. I have an IAM Role assigned to the instance to provide AWS credentials to boto.
I have a crontab calling a script that calls a wrapper that calls s3multiput. I included the -d 1 flag on the s3multiput call, and redirected all output on the crontab job with 2>&1 but the report for the hour that's missing data looks just like the report for the hour before and the hour after, each of which succeeded.
So, 99% of the time this works, but when it fails I don't know why and I'm having trouble figuring where to look. I only find out about the failure later when the parser job tries to pull the data from the bucket and it's not there. The data is safe and sound in the directory it should have uploaded from, so I can do it manually, but would rather not have to.
I'm happy to post the ~30-40 lines of related code if helpful, but wondered if anybody else had run into this and it sounded familiar.
Some grand day I'll come back to this part of the pipeline and rewrite it in python to obviate s3multiput, but we just don't have dev time for that yet.
How can I investigate what's going wrong here with the s3multiput upload?
|
Can you serve static HTML pages from Pyramid?
| 13,205,833 | 1 | 1 | 1,016 | 0 |
python,html,pyramid
|
You can serve static HTML from Pyramid by using views that return pre-fabricated responses. You'll have a better time, though, just having your web server serve the static HTML when it finds it and proxy the request to your Pyramid app otherwise.
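A sketch of the first option (directory, route name and URL pattern are assumptions); because the file is read on every request, your internal editor's changes show up without restarting the app:
import os
from pyramid.response import Response
from pyramid.view import view_config

PAGES_DIR = '/srv/mysite/pages'   # where the editable HTML files live

@view_config(route_name='flatpage')   # e.g. config.add_route('flatpage', '/pages/{name}')
def flatpage(request):
    name = request.matchdict['name']            # a real version should sanitise this
    path = os.path.join(PAGES_DIR, '%s.html' % name)
    with open(path) as f:
        return Response(f.read(), content_type='text/html')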
| 0 | 0 | 0 | 0 |
2012-11-03T00:42:00.000
| 1 | 1.2 | true | 13,204,907 | 0 | 0 | 1 | 1 |
I have a Pyramid app using Mako templates and am wondering if it is possible to serve static HTML pages within the app?
For the project I'm working on, we want to have relatively static pages for the public "front-facing" bits, and then the application will dynamically serve the meat of the site. We would like one of our internal users to be able to edit some of the HTML content for these pages to update them.
I have my static folder that I'm serving CSS and scripts from, but that doesn't seem to really fit what I'd like to do. I could create views for the pages and basically have static content in the mako templates themselves but I think the application would need to be restarted if someone were to update the template for the changes to appear? Maybe that's not the case?
Long term I would probably do something like store the content in a db and have it dynamically served but that's outside of the scope at this time.
Is there a reasonable way to accomplish this or should I not even bother and set up the public pages as just a regular static HTML site and just link to my app altogether?
Thanks!
|
python in azure website using webapp2
| 13,206,994 | -1 | 2 | 231 | 0 |
python,azure,webapp2
|
To my knowledge, Windows Azure Web Sites don't support Python.
| 0 | 0 | 0 | 0 |
2012-11-03T01:54:00.000
| 1 | -0.197375 | false | 13,205,245 | 0 | 0 | 1 | 1 |
I would like to try to develop website using python, upload to azure website. I saw there is a tutorial using django. Is there a tutorial using webapp2?
|
Google App Engine Update Issue
| 13,236,236 | 1 | 0 | 937 | 0 |
python,google-app-engine
|
After 3 days of endless searching, I have figured out the problem. If you are facing this issue, the first thing you have to check is your system time; mine was incorrect due to daylight-saving changes.
Thanks
| 0 | 1 | 0 | 0 |
2012-11-03T05:53:00.000
| 2 | 1.2 | true | 13,206,438 | 0 | 0 | 1 | 1 |
I am working on an application (Python based) which is deployed on GAE. It was working fine until yesterday, but I can't seem to update any code on App Engine since this morning; it is complaining about some sort of issue with the password. I have double-checked, and the email ID and password are correct.
here is the stack trace which I receive:
10:47 PM Cloning 706 static files.
2012-11-03 22:47:07,913 WARNING appengine_rpc.py:542 ssl module not found.
Without the ssl module, the identity of the remote host cannot be verified, and
connections may NOT be secure. To fix this, please install the ssl module from
http://pypi.python.org/pypi/ssl .
To learn more, see https://developers.google.com/appengine/kb/general#rpcssl
Password for [email protected]: 2012-11-03 22:47:07,913 ERROR appcfg.py:2266 An unexpected error occurred. Aborting.
Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2208, in DoUpload
missing_files = self.Begin()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 1934, in Begin
CloneFiles('/api/appversion/cloneblobs', blobs_to_clone, 'static')
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 1929, in CloneFiles
result = self.Send(url, payload=BuildClonePostBody(chunk))
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 1841, in Send
return self.rpcserver.Send(url, payload=payload, **self.params)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 403, in Send
self._Authenticate()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 543, in _Authenticate
super(HttpRpcServer, self)._Authenticate()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 293, in _Authenticate
credentials = self.auth_function()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2758, in GetUserCredentials
password = self.raw_input_fn(password_prompt)
EOFError: EOF when reading a line
10:47 PM Rolling back the update.
2012-11-03 22:47:08,818 WARNING appengine_rpc.py:542 ssl module not found.
Without the ssl module, the identity of the remote host cannot be verified, and
connections may NOT be secure. To fix this, please install the ssl module from
http://pypi.python.org/pypi/ssl .
To learn more, see https://developers.google.com/appengine/kb/general#rpcssl
Password for [email protected]: Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 171, in
run_file(__file__, globals())
File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 167, in run_file
execfile(script_path, globals_)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4322, in
main(sys.argv)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4313, in main
result = AppCfgApp(argv).Run()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2599, in Run
self.action(self)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4048, in __call__
return method()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3065, in Update
self.UpdateVersion(rpcserver, self.basepath, appyaml)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3047, in UpdateVersion
lambda path: self.opener(os.path.join(basepath, path), 'rb'))
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2267, in DoUpload
self.Rollback()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2150, in Rollback
self.Send('/api/appversion/rollback')
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 1841, in Send
return self.rpcserver.Send(url, payload=payload, **self.params)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 403, in Send
self._Authenticate()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 543, in _Authenticate
super(HttpRpcServer, self)._Authenticate()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 293, in _Authenticate
credentials = self.auth_function()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2758, in GetUserCredentials
password = self.raw_input_fn(password_prompt)
EOFError: EOF when reading a line
2012-11-03 22:47:09 (Process exited with code 1)
You can close this window now.
Any Help will be appreciated.
P.S, I have tried from command line as well as Google app engine launcher.
|
How can I "ignore" NoReverseMatch in Django template?
| 13,207,944 | 0 | 4 | 1,152 | 0 |
python,django,django-templates
|
The easiest/quickest option would be to update Django's url tag to fail silently.
You can change the function definition of def url(parser, token): in <your_django_path>/templatetags/future.py to wrap its body in try ... except and not raise the exception when there is one.
However, this is the quickest hack I could think of; I'm not sure there is any better solution.
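An alternative that avoids patching Django itself is a small custom template tag that swallows the exception during development (the tag and library names below are made up); you would use {% silent_url 'some_view' %} instead of {% url %} in the templates you are working on:
from django import template
from django.core.urlresolvers import reverse, NoReverseMatch

register = template.Library()

@register.simple_tag
def silent_url(view_name):
    try:
        return reverse(view_name)
    except NoReverseMatch:
        return ''   # fail silently instead of blowing up the whole template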
| 0 | 0 | 0 | 0 |
2012-11-03T09:11:00.000
| 2 | 1.2 | true | 13,207,710 | 0 | 0 | 1 | 1 |
Is there any way to disable throwing NoReverseMatch exceptions from url tags in Django templates (just make it silently fail, return an empty string or smth... temporarily, for development, of course)?
(I'm working on a Django project that is kind of a mess as far as things are organized (bunch of remote-workers, contractors plus the local team with lots of overlapping tasks assigned to different people and even front-end and back-end work tends to get mixed as part of the same task...) and I really need to just ignore/hide/disable the NoReverseMatch thrown by template url tags in order to efficiently do my part of the job and not end up doing other peoples' jobs in order to be able to do mine...)
|
What are the best reusable django apps for form processing?
| 13,221,181 | 2 | 0 | 154 | 0 |
python,django,windows,pinax
|
Django allows you to do this super easily without any apps. I'd highly recommend you read up on its basic form processing features.
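A bare-bones sketch of what that looks like with no extra app (all names are made up): a form, a view that stashes the cleaned data in the session, and a second view that shows it on a different page.
from django import forms
from django.http import HttpResponseRedirect
from django.shortcuts import render

class ContactForm(forms.Form):
    name = forms.CharField(max_length=100)
    message = forms.CharField(widget=forms.Textarea)

def contact(request):
    form = ContactForm(request.POST or None)
    if request.method == 'POST' and form.is_valid():
        request.session['contact_data'] = form.cleaned_data   # keep it for later
        return HttpResponseRedirect('/thanks/')
    return render(request, 'contact.html', {'form': form})

def thanks(request):
    return render(request, 'thanks.html',
                  {'data': request.session.get('contact_data')})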
| 0 | 0 | 0 | 0 |
2012-11-04T16:47:00.000
| 3 | 0.132549 | false | 13,220,619 | 0 | 0 | 1 | 1 |
In my application I want to process some form data and show it later on a different page.
My question is: Can you recommend any django apps to quickly set up such a system?
I appreciate your answer!!!
|
Remove all user's cookies/sessions when password is reset
| 13,551,567 | 1 | 0 | 282 | 0 |
python,turbogears,repoze.who
|
Storing a timestamp of the last time the password was changed inside request.identity['userdata'] should make it possible to check it whenever the user comes back, and to log him out if it differs from the time the password was actually last changed.
| 0 | 0 | 0 | 0 |
2012-11-04T22:57:00.000
| 1 | 1.2 | true | 13,223,839 | 0 | 0 | 1 | 1 |
I'm interested in improving security of my TurboGears 2.2 application so that when user changes his password, it logs him out from all sessions and he must login again. The goal is when user changes password on browser 1, he must relogin on browser 2, too. Experiments show that this is not the case, especially if browser 2 had "remember me" enabled.
It's standard quickstarted app using repoze.who. It seems maybe I need to change AuthTktCookiePlugin, but don't see a way to do it without much rewiring.
|
Is there a way to atomically run a function as a transaction within google app engine when the function modifies more than five entity groups?
| 13,228,117 | 0 | 1 | 66 | 0 |
python,google-app-engine
|
XG transactions are limited to a few entity groups for performance reasons. Running an XG transaction across hundreds of entity groups would be incredibly slow.
Can you break your function up into many sub-functions, one for each entity group? If so, you should have no trouble running them individually, or on the task queue.
| 0 | 1 | 0 | 0 |
2012-11-05T03:25:00.000
| 2 | 0 | false | 13,225,540 | 0 | 0 | 1 | 1 |
I'm developing on google app engine. The focus of this question is a python function that modifies hundreds of entity groups. The function takes one string argument. I want to execute this function as a transaction because there are instances right now when the same function with the same string argument are simultaneously run, resulting in unexpected results. I want the function to execute in parallel if the string arguments are different, but not if the string arguments are the same, they should be run serially.
Is there a way to run a transaction on a function that modifies so many entity groups? So far, the only solution I can think of is flipping a database flag for each unique string parameter, and checking for the flag (deferring execution if the flag is set as True). Is there a more elegant solution?
|
Django default=timezone.now() saves records using "old" time
| 13,226,368 | 66 | 28 | 23,029 | 1 |
python,django,django-timezone
|
Just ran into this last week for a field that had default=date.today(). If you remove the parentheses (in this case, try default=timezone.now) then you're passing a callable to the model and it will be called each time a new instance is saved. With the parentheses, it's only being called once when models.py loads.
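For example (the model name is made up):
from django.db import models
from django.utils import timezone

class Post(models.Model):
    # No parentheses: the callable is evaluated at save time, not at import time.
    created = models.DateTimeField(default=timezone.now)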
| 0 | 0 | 0 | 0 |
2012-11-05T04:23:00.000
| 2 | 1.2 | true | 13,225,890 | 0 | 0 | 1 | 1 |
This issue has been occurring on and off for a few weeks now, and it's unlike any that has come up with my project.
Two of the models that are used have a timestamp field, which is by default set to timezone.now().
This is the sequence that raises error flags:
Model one is created at time 7:30 PM
Model two is created at time 10:00 PM, but in the
MySQL database it's stored as 7:30 PM!
Every model that is created
has its time stamp saved under 7:30 PM, not the actual time, until a certain
duration passes. Then a new time is set and all the following models
have that new time... Bizarre
Some extra details which may help in discovering the issue:
I have a bunch of methods that I use to strip my timezones of their tzinfo's and replace them with UTC.
This is because I'm doing a timezone.now() - creationTime calculation to create a: "model was posted this long ago" feature
in the project. However, this really should not be the cause of the problem.
I don't think using datetime.datetime.now() will make any difference either.
Anyway, thanks for the help!
|
Resolve FB GraphAPI picture call to the final URL
| 13,228,977 | 1 | 0 | 872 | 0 |
python,django,facebook-graph-api
|
Make a Graph API call like this and you get the real URL:
https://graph.facebook.com/[fbid]?fields=picture
Btw, you don't need an access token for this.
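A hedged sketch of doing that server-side so no token or intermediate URL reaches the end user; the field handling covers both the older flat response and the newer nested "data" shape of the Graph API:
import json
import urllib2

def real_picture_url(fbid):
    data = json.load(urllib2.urlopen(
        'https://graph.facebook.com/%s?fields=picture' % fbid))
    picture = data.get('picture')
    if isinstance(picture, dict):          # newer API versions nest the URL
        return picture['data']['url']
    return picture                          # older versions return a plain string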
| 0 | 0 | 1 | 0 |
2012-11-05T09:02:00.000
| 3 | 0.066568 | false | 13,228,847 | 0 | 0 | 1 | 1 |
I'm developing an application that displays a users/friends photos. For the most part I can pull photos from the album, however for user/album cover photos, all that is given is the object ID for which the following URL provides the image:
https://graph.facebook.com/1015146931380136/picture?access_token=ABCDEFG&type=picture
Which when viewed redirects the user to the image file itself such as:
https://fbcdn-photos-a.akamaihd.net/hphotos-ak-ash4/420455_1015146931380136_78924167_s.jpg
My question is, is there a Pythonic or GraphAPI method to resolve the final image path an avoid sending the Access Token to the end user?
|
How to convert in OpenERP from one chart of account to another?
| 13,824,676 | 0 | 8 | 963 | 0 |
python,python-2.7,openerp,accounting
|
IMO it's very difficult; we are currently migrating some data and it's proving to be difficult.
I would advise you to pick a date in the future and tell everyone to just use another DB with the correct chart of accounts.
Your finance dept will be the one to suggest which date is best. How about when a new period starts?
| 0 | 0 | 0 | 0 |
2012-11-05T10:29:00.000
| 4 | 0 | false | 13,230,195 | 0 | 0 | 1 | 4 |
I have installed chart of accounts A for company1. This chart was used for a couple of months for accounting. How can I convert to chart of accounts B and keep the old data for the accounts (debit, credit, etc.)? In other words, is it possible to migrate data from one chart of accounts to another? The solution could be programmatic or through the web client interface (not important). Virtual charts of accounts can't be used. Chart of accounts B must become the main chart with the old data.
Every advice will help me a lot. Thanks
|
How to convert in OpenERP from one chart of account to another?
| 13,369,353 | 0 | 8 | 963 | 0 |
python,python-2.7,openerp,accounting
|
I don't know of any way to install another chart of accounts after you've run the initial configuration wizard on a new database. However, if all you want to do is change the account numbers, names, and parents to match a different chart of accounts, then you should be able to do that with a bunch of database updates. Either manually edit each account if there aren't too many accounts, or write a SQL or Python script to update all the accounts. To do that, you'll need to map each old account to a new account code, name, and parent, then use that map to generate a script.
| 0 | 0 | 0 | 0 |
2012-11-05T10:29:00.000
| 4 | 0 | false | 13,230,195 | 0 | 0 | 1 | 4 |
I have installed chart of accounts A for company1. This chart was used for a couple of months for accounting. How can I convert to chart of accounts B and keep the old data for the accounts (debit, credit, etc.)? In other words, is it possible to migrate data from one chart of accounts to another? The solution could be programmatic or through the web client interface (not important). Virtual charts of accounts can't be used. Chart of accounts B must become the main chart with the old data.
Every advice will help me a lot. Thanks
|
How to convert in OpenERP from one chart of account to another?
| 15,182,814 | 0 | 8 | 963 | 0 |
python,python-2.7,openerp,accounting
|
I needed to do something similar. It is possible to massage the chart from one form to another, but I found in the end that creating a new database, bringing in the modules, assigning the new chart and then importing all critical elements was the best and safest path.
If you have a lot of transactions, the import will be more difficult. If that is the case, then massage your chart from one form to another.
I am sure there will be some way to do an active migration sometime in the future. You definitely don't want to live with a bad chart or without your history if you can help it.
| 0 | 0 | 0 | 0 |
2012-11-05T10:29:00.000
| 4 | 0 | false | 13,230,195 | 0 | 0 | 1 | 4 |
I have installed chart of accounts A for company1. This chart was used for a couple of months of accounting. How can I convert it into chart of accounts B and keep the old data for the accounts (debit, credit, etc.)? In other words, is it possible to migrate data from one chart of accounts to another? The solution can be programmatic or through the web client interface (not important). Virtual charts of accounts can't be used. Chart of accounts B must become the main chart with the old data.
Every piece of advice will help me a lot. Thanks
|
How to convert in OpenERP from one chart of account to another?
| 41,658,981 | 0 | 8 | 963 | 0 |
python,python-2.7,openerp,accounting
|
The fastest way to do it is using an ETL tool like Talend or Pentaho (provided there is a logic as to which account maps to which other during the process). If not, you will have to do it by hand.
In case there is such a logic, you would export the data to a format you can transform and re-import. Uninstall your account chart and install the new one. Then import all the data that you formatted using those tools.
| 0 | 0 | 0 | 0 |
2012-11-05T10:29:00.000
| 4 | 0 | false | 13,230,195 | 0 | 0 | 1 | 4 |
I have installed chart of accounts A for company1. This chart was used for a couple of months of accounting. How can I convert it into chart of accounts B and keep the old data for the accounts (debit, credit, etc.)? In other words, is it possible to migrate data from one chart of accounts to another? The solution can be programmatic or through the web client interface (not important). Virtual charts of accounts can't be used. Chart of accounts B must become the main chart with the old data.
Every piece of advice will help me a lot. Thanks
|
using neo4J (server) from python with transaction
| 13,234,558 | 5 | 2 | 1,075 | 1 |
python,flask,neo4j,py2neo
|
None of the REST API clients will be able to explicitly support (proper) transactions since that functionality is not available through the Neo4j REST API interface. There are a few alternatives such as Cypher queries and batched execution which all operate within a single atomic transaction on the server side; however, my general approach for client applications is to try to build code which can gracefully handle partially complete data, removing the need for explicit transaction control.
Often, this approach will make heavy use of unique indexing and this is one reason that I have provided a large number of "get_or_create" type methods within py2neo. Cypher itself is incredibly powerful and also provides uniqueness capabilities, in particular through the CREATE UNIQUE clause. Using these, you can make your writes idempotent and you can err on the side of "doing it more than once" safe in the knowledge that you won't end up with duplicate data.
Agreed, this approach doesn't give you transactions per se but in most cases it can give you an equivalent end result. It's certainly worth challenging yourself as to where in your application transactions are truly necessary.
Hope this helps
Nigel
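As a small sketch of that idea (not Nigel's own code), a get_or_create-style call makes a retried write harmless; the method name below follows the py2neo 1.x API of that era and may differ in other releases, and the index and property names are made up.

    from py2neo import neo4j

    graph_db = neo4j.GraphDatabaseService("http://localhost:7474/db/data/")

    # Running this twice with the same email returns the same node instead of
    # creating a duplicate, so a retried or half-failed request cannot leave
    # duplicate data behind.
    person = graph_db.get_or_create_indexed_node(
        "people", "email", "alice@example.com", {"name": "Alice"}
    )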
| 0 | 0 | 0 | 0 |
2012-11-05T13:27:00.000
| 2 | 1.2 | true | 13,233,107 | 0 | 0 | 1 | 1 |
I'm currently building a web service using Python/Flask and would like to build my data layer on top of neo4j, since my core data structure is inherently a graph.
I'm a bit confused by the different technologies offered by neo4j for that case. Especially:
I originally planned on using the REST API through py2neo, but the lack of transactions is a bit of a problem.
The "embedded database" neo4j doesn't seem to suit my case very well. I guess it's useful when you're working with batch and one-time analytics, and don't need to store the database on a different server from the web server.
I've stumbled upon the neo4django project, but I'm not sure it offers transaction support (since there is no native client for neo4j in Python), or whether it would be a problem to use it outside django itself. In fact, after having looked at the project's documentation, I feel like it has exactly the same limitations, i.e. no transactions (but then, how can you build a real-world service when you can corrupt your model upon a single connection timeout?). I don't even understand what the use of that project is.
Could anyone recommend anything? I feel completely stuck.
Thanks
|
Why does calling get() on memcache increase item count in Google App Engine?
| 13,389,191 | 1 | 4 | 195 | 0 |
python,google-app-engine,memcached
|
memcache.get should not increase the item count in the memcache statistics, and I'm not able to reproduce that behavior in production.
The memcache statistics page is global, so if you happen to have other requests (live, or through the task queue) going to your application at the same time you're using the remote API, that could increase the count.
| 0 | 1 | 0 | 0 |
2012-11-05T14:26:00.000
| 1 | 0.197375 | false | 13,234,094 | 0 | 0 | 1 | 1 |
I'm looking at the Memcache Viewer in my admin console on my deployed Google App Engine NDB app. For testing, I'm using the remote API. I'm doing something very simple: memcache.get('somekey'). For some reason, every time I call this line and hit refresh on my statistics, the item count goes up by 2. This happens whether or not the key exists in memcache.
Any ideas why this could happen? Is this normal?
|
Which GAE Database Property Fits a Tag Property?
| 13,250,634 | 1 | 0 | 72 | 0 |
python,google-app-engine,google-cloud-datastore
|
A repeated string property is your best option.
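For illustration (not part of the original answer), a repeated StringProperty looks like this with NDB; the model and property names are assumptions.

    from google.appengine.ext import ndb

    class Post(ndb.Model):
        title = ndb.StringProperty()
        tags = ndb.StringProperty(repeated=True)

    # Repeated properties can be filtered on directly, so no comma-delimited
    # parsing is needed to find every post carrying a given tag.
    python_posts = Post.query(Post.tags == 'python').fetch()

    # Counting how many posts use that tag.
    python_count = Post.query(Post.tags == 'python').count()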
| 0 | 1 | 0 | 0 |
2012-11-05T22:34:00.000
| 2 | 0.099668 | false | 13,241,503 | 0 | 0 | 1 | 1 |
I want to have a property on a database model of mine in Google App Engine and I am not sure which category works best. I need it to be a tag cloud similar to the tags on SO. Would a text property be best, or should I use a string property and make it repeated=True?
The second seems best to me and then I can just divide the tags up with a comma as a delimiter. My goal is to be able to search through these tags and count the total number of each type of tag.
Does this seem like a reasonable solution?
|
what does the cursor() method of GAE Query class return if the last result was already retrieved?
| 13,252,901 | 3 | 2 | 119 | 0 |
python,google-app-engine
|
There's still a cursor, even if the last result is retrieved. The query class doesn't know that, in any case: it knows what you've had already, but it doesn't know what else is still to come. The cursor doesn't represent any actual result, it's simply a way of resuming the query later. In fact, it's possible to use a cursor even in the case where you reach the end of the data set on your initial query, but later updates mean that new items are now found on a subsequent request: for example, if you're ordering by last update time.
(Good username, btw: gotta love some PKD.)
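For illustration (not part of the original answer), here is how a cursor is taken and later resumed with the old db API the question refers to; the model name is an assumption.

    from google.appengine.ext import db

    class Article(db.Model):
        updated = db.DateTimeProperty(auto_now=True)

    query = Article.all().order('updated')
    first_batch = query.fetch(20)
    # A cursor string is returned even if the fetch reached the end of the
    # current result set.
    cursor = query.cursor()

    # Later, possibly in another request: resume where the previous fetch
    # stopped. If nothing new matched in the meantime, this returns [].
    resumed = Article.all().order('updated').with_cursor(cursor)
    next_batch = resumed.fetch(20)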
| 0 | 1 | 0 | 0 |
2012-11-06T14:03:00.000
| 2 | 0.291313 | false | 13,252,683 | 0 | 0 | 1 | 1 |
From the Google App Engine documentation:
"cursor() returns a base64-encoded cursor string denoting the position in the query's result set following the last result retrieved."
What does it return if the last result retrieved IS the last result in the query set? Wouldn't this mean that there is no position that can 'follow' the last result retrieved? Therefore, is 'None' returned?
|
AES Encryption (Python and Java)
| 13,262,092 | 3 | 1 | 631 | 0 |
java,python,security,encryption,aes
|
Typically, you'd generate the IV randomly, and send it along with the encrypted message. The IV doesn't need to be secret--it just needs to be different for every message you send.
There are a wide variety of concerns to worry about when implementing crypto. Your block cipher mode matters, for instance--if you're using an IV you probably aren't using ECB, but that leaves quite a few other options open. Padding attacks and other subtle things are also a concern.
Generally, you don't want to implement crypto yourself if you can possibly avoid it. It's much too easy to get wrong, and usually quite important to get right. You may want to ask for more help on the Security StackExchange.
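As a hedged sketch of that advice (not the poster's code), here is one way to do it with PyCrypto; CFB mode is used here only to avoid hand-rolled padding, and key management is left out entirely.

    from Crypto.Cipher import AES
    from Crypto import Random

    def encrypt(key, plaintext):
        # key must be 16, 24 or 32 bytes long; a fresh random IV is generated
        # for every message so identical plaintexts encrypt differently.
        iv = Random.new().read(AES.block_size)
        cipher = AES.new(key, AES.MODE_CFB, iv)
        # Prepend the IV to the ciphertext; it does not need to be secret.
        return iv + cipher.encrypt(plaintext)

    def decrypt(key, data):
        iv, ciphertext = data[:AES.block_size], data[AES.block_size:]
        return AES.new(key, AES.MODE_CFB, iv).decrypt(ciphertext)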
| 0 | 0 | 0 | 0 |
2012-11-07T01:34:00.000
| 1 | 1.2 | true | 13,262,047 | 1 | 0 | 1 | 1 |
I'm making a project in Java and Python that includes sending an encrypted string from one to the other. I can get the languages to understand each other and fully decrypt/encrypt strings. However, I was talking to somebody and was told that I am not being totally secure. I am using AES encryption for the project. Part of the problem is that I am distributing the software and need to come up with an effective and secure way of making sure both sides (client and server) know the IV and 'Secret Key'. Right now the same string will always encrypt to the same result. If I could change those two factors the results would be different, so two users with the same password won't have the same encrypted password. Please keep in mind that the server only needs to manage one account.
I appreciate your responses, and thank you very much ahead of time!
|
AWS glacier delete job
| 13,275,014 | 9 | 7 | 1,164 | 1 |
python,amazon-web-services,boto,amazon-glacier
|
The AWS Glacier service does not provide a way to delete a job. You can:
Initiate a job
Describe a job
Get the output of a job
List all of your jobs
The Glacier service manages the jobs associated with a vault.
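As a rough sketch (the boto call names below follow the low-level Glacier interface and may differ between boto releases, and the vault name is a placeholder), the practical option is to leave the job alone and let it expire on Amazon's side roughly 24 hours after it completes, while listing or describing jobs if you need to inspect them:

    import boto.glacier.layer1

    # Credentials come from the usual boto configuration.
    glacier = boto.glacier.layer1.Layer1()

    # Inspect the jobs attached to a vault instead of trying to remove them.
    jobs = glacier.list_jobs('my-vault')
    for job in jobs.get('JobList', []):
        print('%s %s %s' % (job['JobId'], job['Action'], job['StatusCode']))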
| 0 | 0 | 0 | 0 |
2012-11-07T16:42:00.000
| 1 | 1.2 | true | 13,274,197 | 0 | 0 | 1 | 1 |
I have started a retrieval job for an archive stored in one of my vaults on AWS Glacier.
It turns out that I do not need to resurrect and download that archive any more.
Is there a way to stop and/or delete my Glacier job?
I am using boto and I cannot seem to find a suitable function.
Thanks
|
Get page (nested) level scrapy on each page(url, request) in spider
| 13,641,429 | 1 | 0 | 294 | 0 |
python,scrapy,web-crawler
|
After some time we found the solution: response.meta['depth'].
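A small illustrative spider (modern Scrapy API, not the original poster's code) showing where the depth value lives; DepthMiddleware, which is enabled by default, is what populates it.

    import scrapy

    class DepthSpider(scrapy.Spider):
        name = 'depth_example'
        start_urls = ['http://example.com/']

        def parse(self, response):
            # 0 for the start_urls pages, incremented for every followed link.
            level = response.meta.get('depth', 0)
            self.logger.info('crawled %s at depth %d', response.url, level)
            for href in response.css('a::attr(href)').getall():
                yield response.follow(href, callback=self.parse)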
| 0 | 0 | 1 | 0 |
2012-11-08T09:22:00.000
| 2 | 1.2 | true | 13,286,049 | 0 | 0 | 1 | 1 |
I want to get the page (nesting) level in Scrapy for each page (url, request) in a spider. Is there any way to do that?
|