Columns (name, dtype, observed range): Title (string, lengths 11 to 150), A_Id (int64, 518 to 72.5M), Users Score (int64, -42 to 283), Q_Score (int64, 0 to 1.39k), ViewCount (int64, 17 to 1.71M), Database and SQL (int64, 0 to 1), Tags (string, lengths 6 to 105), Answer (string, lengths 14 to 4.78k), GUI and Desktop Applications (int64, 0 to 1), System Administration and DevOps (int64, 0 to 1), Networking and APIs (int64, 0 to 1), Other (int64, 0 to 1), CreationDate (string, length 23), AnswerCount (int64, 1 to 55), Score (float64, -1 to 1.2), is_accepted (bool, 2 classes), Q_Id (int64, 469 to 42.4M), Python Basics and Environment (int64, 0 to 1), Data Science and Machine Learning (int64, 0 to 1), Web Development (int64, 1 to 1), Available Count (int64, 1 to 15), Question (string, lengths 17 to 21k).
Title | A_Id | Users Score | Q_Score | ViewCount | Database and SQL | Tags | Answer | GUI and Desktop Applications | System Administration and DevOps | Networking and APIs | Other | CreationDate | AnswerCount | Score | is_accepted | Q_Id | Python Basics and Environment | Data Science and Machine Learning | Web Development | Available Count | Question
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Django Override password encryption
| 15,960,639 | 0 | 2 | 2,355 | 0 |
python,django,encryption,passwords
|
Reconsider your decision about keeping your old password hashes.
EXCEPT if you already used some very modern and strong scheme for them (like pbkdf2, bcrypt, or shaXXX_crypt) - and NOT just some (salted or not) sha1 hash.
I know it is tempting to just stay compatible and support the old crap, but these old (salted or unsalted, it doesn't matter much for brute-forcing) sha1 hashes can be broken nowadays at a rate of > 1*10^9 guesses per second.
Also, old password minimum length requirements might need reconsideration for the same reasons.
The default Django password hash scheme is a very secure one, by the way; you should really use it.
| 0 | 0 | 0 | 0 |
2013-03-31T15:25:00.000
| 2 | 0 | false | 15,730,976 | 0 | 0 | 1 | 1 |
I am currently developing a tool in Python using Django, in which I have to import an existing user database. Obviously, the passwords for these existing users do not use the same encryption as the default password encryption used by Django.
I want to override the password encryption method to keep my passwords unmodified. I can't find how to override an existing method in the documentation; I only found how to add information about a user (I also can't find how to remove information about a user, like first name or last name, so if someone knows, please tell me).
Thank you for your help.
|
Run python script for HTML web page
| 15,736,047 | 2 | 1 | 27,138 | 0 |
python,html,django,apache,ubuntu
|
You cannot "run python scripts in html web pages". Everyone tells you to use something like Django because if you want to make a dynamic web site that executes server-side code in response to user input you need something like Django or some other server-side web framework. So you have already been put into the right direction but ignored that.
| 0 | 0 | 0 | 0 |
2013-04-01T00:06:00.000
| 3 | 0.132549 | false | 15,736,017 | 0 | 0 | 1 | 2 |
I need to create a website.
The website needs to run a Python script once the user enters the data. I have searched the net for over a week now and haven't found any help.
Everybody just tells me to download the Django framework, but nobody shows how to run Python scripts in HTML web pages.
I don't have any experience in web design, as it is not my field, but I do know a bit of Python scripting and some HTML.
Any kind of help that puts me in the right direction would be greatly appreciated.
|
Run python script for HTML web page
| 15,736,057 | 4 | 1 | 27,138 | 0 |
python,html,django,apache,ubuntu
|
If you wish to run Python scripts within a web page in the same way that Javascript runs within a web page, this is not possible, because web browsers don't natively understand Python.
If you want to run Python code that generates an HTML page, you can use a framework like Django or Flask, which will require a server that supports this kind of framework (long running processes). You can also use a CGI Python script to do this, which will require your web server to have Python installed and be set up to run CGI scripts.
Embedding Python in a HTML in the same way that PHP is embedded in an HTML page is generally not done in Python - it is considered an anti-pattern that leads to security problems and lots of bad practices. Python folks will generally not help you shoot yourself in the foot, unlike other communities, so you won't find much help for what is considered the wrong thing. Some template engines like Mako support using Python within templates to generate the HTML markup, but you will need to use it in conjunction with some other web framework to handle the HTTP request.
| 0 | 0 | 0 | 0 |
2013-04-01T00:06:00.000
| 3 | 0.26052 | false | 15,736,017 | 0 | 0 | 1 | 2 |
I need to create a website.
The website needs to run a Python script once the user enters the data. I have searched the net for over a week now and haven't found any help.
Everybody just tells me to download the Django framework, but nobody shows how to run Python scripts in HTML web pages.
I don't have any experience in web design, as it is not my field, but I do know a bit of Python scripting and some HTML.
Any kind of help that puts me in the right direction would be greatly appreciated.
|
Making Pyramid application without session timeout
| 15,778,904 | 0 | 1 | 284 | 0 |
python,apache,session,mod-wsgi,pyramid
|
This entirely depends on the authentication policy that you use. The default AuthTktAuthenticationPolicy sets a cookie in the browser which (by default) does not expire. Again though, this depends on how you are tracking authenticated users.
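A minimal sketch of what that can look like with an auth_tkt policy; the secret and settings below are placeholders, and this assumes you are indeed tracking users with AuthTktAuthenticationPolicy rather than a session-based mechanism:

```python
# Hedged sketch: an auth_tkt ticket with no timeout, so the login only ends
# when the user explicitly logs out (or the browser cookie is removed).
from pyramid.authentication import AuthTktAuthenticationPolicy
from pyramid.authorization import ACLAuthorizationPolicy
from pyramid.config import Configurator

def main(global_config, **settings):
    authn_policy = AuthTktAuthenticationPolicy(
        'replace-with-a-real-secret',  # placeholder secret
        timeout=None,                  # ticket itself never expires
        max_age=None,                  # default: a browser-session cookie; set a large
                                       # max_age to survive browser restarts as well
    )
    config = Configurator(settings=settings,
                          authentication_policy=authn_policy,
                          authorization_policy=ACLAuthorizationPolicy())
    return config.make_wsgi_app()
```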
| 0 | 0 | 0 | 1 |
2013-04-01T05:17:00.000
| 1 | 0 | false | 15,737,993 | 0 | 0 | 1 | 1 |
I am making a Pyramid web app running on the Apache web server using mod_wsgi. Is there any way I could make the user session never time out? (The idea is that once a user logs in, the system will never kick them out unless they log out themselves.) I can't find any information regarding this in the Apache, mod_wsgi, or Pyramid documentation. Thanks!
|
Does PyCharm support Jinja2?
| 70,014,588 | 0 | 81 | 42,666 | 0 |
python,jinja2,pycharm
|
In the Community Edition, the Python template option is not available, so you can simply click on Python Packages next to the Terminal at the bottom. This will also add Jinja2.
| 0 | 0 | 0 | 0 |
2013-04-01T19:44:00.000
| 5 | 0 | false | 15,750,551 | 1 | 0 | 1 | 1 |
A Bottle project of mine uses Jinja2. PyCharm does not automatically recognize it and shows such lines as errors. Is there a way to make Jinja2 work?
|
django add another field via js with one model field
| 15,773,503 | 1 | 0 | 100 | 0 |
javascript,python,django
|
Your best bet is going to be to use Javascript. Have Javascript create the new field (or remove it) on button click. Let the user fill in the field as they need to. When they are ready to save, you'll need to catch the submit (again using Javascript), concatenate everything into the initial textarea field, and then let it submit to the server, where Django should handle it.
You'll also then have to have Javascript run on page load to check the textarea and split out the different sections of your textarea.
More or less, that is how you are going to have to go about it. If you're wanting to have someone write it for you, then that would be a whole other discussion.
(I know it's easy to come along and say "why do that, that's not the best way". I often run into constraints where the best way isn't going to work, so I try not to knock others without knowing their constraints.)
| 0 | 0 | 0 | 0 |
2013-04-02T17:52:00.000
| 1 | 0.197375 | false | 15,770,850 | 0 | 0 | 1 | 1 |
I would like to add an "add another field" and a "remove field" button in the Django admin which add or remove a text field respectively. All these fields should be concatenated (separated by some character) and assigned to one model TextField. How could I achieve this?
|
OpenERP Developer Mode modification
| 15,785,043 | 3 | 0 | 1,422 | 0 |
python,xml,openerp
|
No, those changes will not affect any XML file. Changes made directly from the web client last until you update the related module; those changes are not permanent.
For permanent changes you have to change the XML file and update that module.
| 0 | 0 | 0 | 0 |
2013-04-03T10:31:00.000
| 1 | 1.2 | true | 15,784,977 | 0 | 0 | 1 | 1 |
If we make some modifications to our form views in OpenERP 7 Developer mode (e.g., add an onchange function so that when it is called some fields become invisible),
are these modifications automatically added to the relevant view.xml files?
I did such a thing, but my account_view.xml file was not updated; on the database side, however, the record was saved in the ir_ui_view table. Please advise me on finding what I am missing.
EDITED:
Actually my account_view.xml does not change. When I make changes to other view.xml files, they show results when I restart the server and upgrade the modules,
but only when I change this account_view.xml file does it not give a result. When I make changes to the account view through "Developer mode", they do take effect.
Help me find what the issue is.
(It seems my account_view.xml is not being used.) I also tried some simple changes, like changing a menu item's text; they were not applied either, but when I do the same for another module's view.xml, the changes take effect.
|
mvc controller single responsibility principle and POST requests
| 15,791,713 | 1 | 3 | 303 | 0 |
python,django,model-view-controller,post,controller
|
In the general case you will want to have a separate controller for each 'type' of request. This helps keep the code simple, without having to deal with any 'special cases', and in general makes it easier to reuse.
| 0 | 0 | 0 | 0 |
2013-04-03T15:28:00.000
| 1 | 1.2 | true | 15,791,623 | 0 | 0 | 1 | 1 |
I am interested in the correct/standard/advised approach for an django web application I am currently developing.
I have practical experience in programming, but I am worried about the efficiency of my implementation, since I don't have too much theoretical knowledge of MVC and related principles.
I have several forms that must fetch various amounts of information from my database (via AJAX POST requests), all related to a single action the user would take (for example: to buy a house, the system would need information about the house, the client, the previous owner, the method of payment, etc.).
Because of this, the POST requests would be very frequent in one page.
My question is:
should I have one controller responsible for each different "type" of POST request (one controller for the "house" requests, one for the "client", etc.), or is it ok to have one "sale data fetcher" controller that handles all related POST requests, checks the type via one of the parameters of each request, and communicates with the model accordingly?
I apologize if I am not using the question system correctly; this is my first question here.
EDIT: total lapse in my question! Yes, the app is in Python; I used PHP for a previous project.
Thanks in advance! stack overflow has been a lifesaver so many times.
|
Can I add xmpp python library to my google app engine server
| 15,822,431 | 0 | 3 | 131 | 0 |
python,google-app-engine,xmpp,xmpppy
|
I don't know but your two biggest limitations will be the inability to use native libraries, and the fact that GAE only supports HTTP requests, with no access to the underlying sockets.
If your library uses native code, or relies on using sockets for communications, you won't be able to use it. If it's pure python and can work with an HTTP based transport, you should be ok.
| 0 | 1 | 0 | 0 |
2013-04-04T17:47:00.000
| 1 | 0 | false | 15,818,157 | 0 | 0 | 1 | 1 |
I'm trying to send messages between XMPP clients using Google App Engine as a server. For that reason I prefer to use the XMPP library for Python (xmpppy) instead of the XMPP library of the Google App Engine API. Can I add the xmpppy library to my server? I mean, can I use this library instead of the XMPP library of Google App Engine?
|
Django stuck on default page
| 15,826,682 | 2 | 1 | 656 | 0 |
python,django,webserver
|
I am assuming you have Django set up locally. If that is so, then you are using localhost. It is supposed to show the welcome page unless you configure urls.py to redirect it somewhere else. It would be useful if you showed us the urls.py. If the welcome page or the urls.py has not been altered, then I don't know what else it would show besides the default welcome page.
| 0 | 0 | 0 | 0 |
2013-04-05T05:30:00.000
| 1 | 0.379949 | false | 15,826,598 | 0 | 0 | 1 | 1 |
I have django set up on a server, but it will only return the default "Welcome to Django" page.
I also have django set up on a local machine, and I use git to push the files to the server.
Both the server and local machine are configured with apache/wsgi.
On the local machine it will display the webapp as intended, but on deployment to the server, it shows nothing but the default page.
Restarting apache on the server and even an error in the django project have not made a difference.
Any ideas?
|
Django: I want to recognize a hash url
| 15,828,863 | 7 | 5 | 1,875 | 0 |
python,django,web-services
|
Well, it should be clear to you that the regex does not match the URL: it's looking for URLs in the form /user/hash/, whereas you have /user/?hash=hash.
In any case, query parameters (those after the ?) do not get processed by urls.py; they are passed in request.GET. So your URLconf should just be r'^user/$'.
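A hedged sketch of that suggestion, using the Django 1.x-era string-view URLconf style from the question; the view body is illustrative only:

```python
# urls.py
from django.conf.urls import patterns, url

urlpatterns = patterns('',
    url(r'^user/$', 'Myapp.views.createnewpass'),  # no hash group in the regex
)

# Myapp/views.py
from django.http import HttpResponse

def createnewpass(request):
    # /user/?hash=... -- the query string is not matched by urls.py,
    # it shows up in request.GET instead
    hash_value = request.GET.get('hash', '')
    return HttpResponse("got hash: %s" % hash_value)
```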
| 0 | 0 | 0 | 0 |
2013-04-05T07:59:00.000
| 1 | 1.2 | true | 15,828,765 | 0 | 0 | 1 | 1 |
I have a URL like
http://localhost/user/?hash={hash value generated}
I need to configure urls.py so that any URL of this form is recognized and does not give an error.
I currently wrote urls.py as
url(r'^user/(?P<hash>\w+)/$', 'Myapp.views.createnewpass'),
and this is giving a 404 error for a valid hash.
How can I correct this error?
Thanks in advance!
|
Textbox warning for large queries with Django
| 15,837,323 | 1 | 0 | 85 | 0 |
python,django,webserver
|
Yes, the query object has a method for this. It is simply:
query.count()
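A hedged sketch of how that check could look in the view that renders the results page; the model, filter, and the 1000-row threshold are stand-ins for the question's setup:

```python
from django.shortcuts import render
from myapp.models import MyModel  # hypothetical model queried from the form data

def results_view(request):
    results = MyModel.objects.filter(name__icontains=request.GET.get("q", ""))
    context = {
        "results": results,
        "show_warning": results.count() > 1000,  # COUNT(*) on the database, no rows fetched
    }
    return render(request, "results.html", context)
```

The template can then show the warning box only when show_warning is true, before (or instead of) rendering the full table.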
| 0 | 0 | 0 | 0 |
2013-04-05T14:57:00.000
| 2 | 0.099668 | false | 15,837,147 | 0 | 0 | 1 | 1 |
I'm using Django to create a website for a project. The user fills out a form, then I run some queries with this data and display the results on another page. Currently it's a two page site.
I want to warn the user if their query result data is very large. Say if a user ends up getting 1000 rows in the results table, I want to warn the user that queries of this size might take a long time to load. I imagine that between the form page and the results page, I could make a popup textbox that displays the warning. I could have this box show if the query object size is greater than 1000.
Does Django have a method for implementing this? How can I get this textbox to appear before the result page template is shown?
|
Unwanted replacement of html entities by BeautifulSoup
| 15,882,510 | 1 | 1 | 418 | 0 |
html,utf-8,python-2.7,beautifulsoup,html-entities
|
I missed part of the BeautifulSoup documentation. The default output formatters do the described behaviour: they turn HTML entities into the Unicode characters. So, this behaviour can be changed by using a different output formatter. (D'oh)
"You can change this behavior by providing a value for the formatter argument to prettify(), encode(), or decode()...."
So if I pass in formatter="html", Beautiful Soup will convert Unicode characters to HTML entities whenever possible! Yay! Thank you Beautiful Soup!
(And they have such great documentation. Pity I didn't read the whole thing sooner. :$)
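A small sketch of that call with bs4 (the input markup here is a placeholder):

```python
from bs4 import BeautifulSoup

html = "<p>The angle &ang; and delta &Delta; should stay as entities.</p>"
soup = BeautifulSoup(html)

# The default formatter ("minimal") would emit the raw Unicode characters;
# formatter="html" converts them back to HTML/XHTML entities where possible.
print(soup.prettify(formatter="html"))
```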
| 0 | 0 | 0 | 0 |
2013-04-05T17:42:00.000
| 1 | 1.2 | true | 15,840,158 | 0 | 0 | 1 | 1 |
I have some HTML containing MathML that I am generating from Word documents using MathType. I have a Python script that uses BeautifulSoup to prettify it, but the problem is it takes an entity like &ang; and turns it into the actual byte sequence 0xE2 0x88 0xA0, which is the ∠ symbol. This is a problem because 0xE2 0x88 0xA0 won't display as ∠ in the browser. Instead the browser interprets it as a series of latin characters. This is happening with all the math entities as well, such as Δ ∠ − +... etc.
I looked through the BeautifulSoup documentation and I can see how to turn entities into the byte sequences, but I'm not using that command; all I'm using is prettify(). And I didn't see a way in the BeautifulSoup documentation to not turn entities into byte sequences.
Does anyone know if there's a setting in BeautifulSoup to tell it not to change entities to byte sequences? I hope so because it seems kind of dumb to have to undo the damage after prettify runs :)
Thanks in advance for your help!
|
django package organization
| 15,847,738 | 1 | 0 | 273 | 0 |
python,django,django-models,package
|
The models.py file is used to define the structure of the database, so you should leave it for defining your database entries. You can make an app named generals and put general.py in that app; from there you can use it by importing it in any app.
| 0 | 0 | 0 | 0 |
2013-04-06T06:06:00.000
| 3 | 0.066568 | false | 15,847,652 | 0 | 0 | 1 | 2 |
I plan to build my project in the Django framework. However, I noticed that all Django packages have a models.py file. Now, let's say I have a set of general-purpose functions that I share between several apps in the project, and I plan to put these function definitions in a separate package (or app, for that matter). So, should I create an app "general" and copy-paste these functions into its models.py file? Or can I just create a general.py file in the "general" app directory and leave models.py empty? What is the "Django" way to do that?
Thanks.
|
django package organization
| 15,847,782 | 1 | 0 | 273 | 0 |
python,django,django-models,package
|
I usually create a utils.py file under the main app that is created by django-admin.py when starting the project.
| 0 | 0 | 0 | 0 |
2013-04-06T06:06:00.000
| 3 | 0.066568 | false | 15,847,652 | 0 | 0 | 1 | 2 |
I plan to build my project in the Django framework. However, I noticed that all Django packages have a models.py file. Now, let's say I have a set of general-purpose functions that I share between several apps in the project, and I plan to put these function definitions in a separate package (or app, for that matter). So, should I create an app "general" and copy-paste these functions into its models.py file? Or can I just create a general.py file in the "general" app directory and leave models.py empty? What is the "Django" way to do that?
Thanks.
|
Django Oscar. How to add product?
| 22,601,474 | -1 | 5 | 6,316 | 0 |
python,django,e-commerce,django-oscar
|
You have to add at least one product class at /admin/catalogue/productclass/.
| 0 | 0 | 0 | 0 |
2013-04-06T20:06:00.000
| 4 | -0.049958 | false | 15,855,468 | 0 | 0 | 1 | 1 |
I'm a beginner in Python and Django.
I have installed django-oscar, then configured it and started the server; it works.
Now, I don't understand how to add a product.
At the dashboard there is a button, Create new product. But in order to add a new product it asks to select a product class, and I cannot find any product class in the given dropdown options.
Please provide a demo example of how to add a product in django-oscar.
|
How do I get a variable name from HTML back into my python
| 15,902,133 | 0 | 0 | 87 | 0 |
python,html,django
|
You can send variables to the server using the POST or GET methods.
Perhaps you can get the right answer if you elaborate more or show us some of your code.
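As a hedged illustration (the field name "file" mirrors the template variable in the question; the view name and template markup are made up): render the value into a form field, then read it back in the view.

```python
# template:  <form method="post">{% csrf_token %}
#              <input type="hidden" name="file" value="{{ file }}">
#              <input type="submit">
#            </form>
from django.http import HttpResponse

def handle_file(request):
    filename = request.POST.get("file")   # use request.GET.get("file") for a GET form
    return HttpResponse("received: %s" % filename)
```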
| 0 | 0 | 0 | 0 |
2013-04-07T09:04:00.000
| 1 | 0 | false | 15,860,676 | 1 | 0 | 1 | 1 |
Hey guys, how do I get a variable name from HTML back into my Python code? I have this variable {{file}} in my HTML which I need to pass back into the Python file in order for the function to work.
|
How to queue up scheduled actions
| 15,870,589 | 1 | 0 | 169 | 0 |
python,django,heroku,celery,django-celery
|
It depends on how much accuracy you need. Do you want users to select the time down to the minute? The second? Or will allowing them to select the hour they wish to be emailed be enough?
If on the hour is accurate enough, then use a task that polls for users to mail every hour.
If your users need the mail to go out accurate to the second, then set a task for each user timed to complete on that second.
Everything in between comes down to personal choice. What are you more comfortable doing, and even more importantly: what produces the simplest code with the fewest failure modes?
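For the "poll every hour" option, here is a hedged sketch using the periodic-task decorator from the Celery 3.x era (as typically used with django-celery); the EmailPreference model and its fields are assumptions made up for illustration:

```python
from celery.schedules import crontab
from celery.task import periodic_task
from django.core.mail import send_mail
from django.utils import timezone

@periodic_task(run_every=crontab(minute=0))        # fires at the top of every hour
def send_hourly_digests():
    from myapp.models import EmailPreference       # hypothetical model: user + send_hour
    current_hour = timezone.now().hour
    for pref in EmailPreference.objects.filter(send_hour=current_hour):
        send_mail("Your daily email", "daily digest body",
                  "noreply@example.com", [pref.user.email])
```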
| 0 | 1 | 0 | 0 |
2013-04-08T01:58:00.000
| 2 | 1.2 | true | 15,870,130 | 0 | 0 | 1 | 1 |
I am trying to set up some scheduled tasks for a Django app with Celery, hosted on Heroku. Aside from not knowing how everything should be configured, what is the best way to approach this?
Let's say users can opt to receive a daily email at a time of their choosing.
Should I have a scheduled job that runs every, say, 5 minutes, looks up every user who wants to be emailed at that time, and then fires off the emails?
OR
Schedule a task for each user when they set their preference? (Not sure how I would actually implement this yet.)
|
Access Model.objects methods from Django templates
| 15,879,987 | 1 | 2 | 1,519 | 0 |
python,django,templates
|
You save the output of Person.objects.count() in a variable and pass it on to your template from the corresponding view.
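A hedged sketch of that view/template pair; the app and template names are placeholders:

```python
# views.py
from django.shortcuts import render
from myapp.models import Person   # assumed location of the Person model

def person_stats(request):
    return render(request, "stats.html", {"person_count": Person.objects.count()})

# stats.html then simply uses:  There are {{ person_count }} persons.
```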
| 0 | 0 | 0 | 0 |
2013-04-08T13:08:00.000
| 1 | 1.2 | true | 15,879,911 | 0 | 0 | 1 | 1 |
Let's say I have a "Person" model.
How can I display the number of persons in my system in a template?
In standard code, I would do: Person.objects.count().
But how do I do this in a template?
|
Clientside Scripting Language
| 15,887,967 | 2 | 0 | 541 | 0 |
python,browser,scripting,client-side
|
Internet Explorer has support for client-side VBScript, but nobody really uses it. Javascript, created by Brendan Eich at Netscape, is an implementation of ECMAScript. It became the de facto standard.
However, most languages have libraries that can traverse an HTML document on the server side. In Python a common one is called Beautiful Soup.
| 0 | 0 | 1 | 0 |
2013-04-08T20:03:00.000
| 3 | 0.132549 | false | 15,887,916 | 0 | 0 | 1 | 1 |
Is Javascript the only language that can utilise the DOM API? Is there a DOM wrapper for Python?
|
Running Boto on Google App Engine (GAE)
| 15,891,258 | 6 | 6 | 2,087 | 0 |
python,google-app-engine,amazon-ec2,boto
|
It sounds like you haven't copied the boto code to the root of your app engine directory.
Boto works with GAE but Google doesn't supply you with the code. Once you copy it into the root of your GAE directory, the dev server should work, and after your next upload it will work on the prod server as well.
| 0 | 1 | 0 | 0 |
2013-04-08T21:34:00.000
| 2 | 1.2 | true | 15,889,424 | 0 | 0 | 1 | 1 |
I'm new to Python and was hoping for help on how to 'import boto.ec2' in a GAE Python application to control Amazon EC2 instances. I'm using PyDev/Eclipse and have installed boto on my Mac, but simply using 'import boto' does not work (I get: No module named boto.ec2). I've read that boto is supported on GAE but I haven't been able to find instructions anywhere. Thanks!
|
Handling redirections within a div instead of the page for 3D Secure operations
| 15,905,890 | 1 | 1 | 511 | 0 |
python,web-applications,3d-secure
|
It probably won't help, but IMO it's a bad idea to try to do that. You could maybe do it with an iframe, but even then, it's probably a bad idea.
And for a very simple reason: when typing their card security code (or whatever), many users will want to check that they are actually on their bank website -- URL, favicon, HTTPS certificate, etc. So don't hide all of this by embedding this in your page.
I usually hate popups, but for me they are totally acceptable in this situation.
| 0 | 0 | 0 | 0 |
2013-04-09T15:00:00.000
| 1 | 0.197375 | false | 15,905,460 | 0 | 0 | 1 | 1 |
I have a web-based client application which heavily uses JavaScript and jQuery. While the client is using the application, page content changes dynamically, and refreshing the page causes all of the changed content to be lost.
Now, I have to add the 3-D Secure payment method to my application. The problem is (as those who have used 3-D Secure systems might know) that after the credit card number is validated, I am redirected to the related bank's 3-D Secure page, where the bank wants me to validate by entering the credit card's security code and the PIN code sent via SMS to a predefined phone number. If the information is right, the bank redirects me to my success URL, or to the fail URL if the transaction failed.
All is nice, but as I mentioned, I can not handle this redirection in my application page. Is it possible to start the process within a div?
I am using python and django as framework, if it would help.
|
Apache2 Ruby and Python load default website when *.domain.net is set in vhost file
| 15,921,626 | 0 | 0 | 39 | 0 |
php,python,ruby,apache2,vhosts
|
I don't think it has anything to do with the usage of mod_wsgi and Phusion Passenger. I think that's just how ServerAlias works.
You can try this alternative:
Remove the ServerAlias.
Set up a vhost for '*.domain.net' (or, if that doesn't work, '.domain.net' or 'domain.net') which redirects to site1.domain.net.
This also has the advantage that your users cannot bookmark a non-canonical subdomain name.
By the way, did you know that Phusion Passenger also supports WSGI?
| 0 | 0 | 0 | 1 |
2013-04-09T15:02:00.000
| 1 | 0 | false | 15,905,487 | 0 | 0 | 1 | 1 |
5 sites setup using named vhosts.
site1.domain.net (PHP)
site2.domain.net (Python)
site3.domain.net (Ruby)
site4.domain.net (PHP)
site5.domain.net (PHP)
In the vhost for site1 I also have the ServerAlias set to *.domain.net as I want any undefined addresses to go to that address.
When I add the *.domain.net to that vhost, the python and the ruby sites redirect to site1 instead of their named vhost. All the php sites work fine.
My guess is the fact that the python and ruby sites are using wsgi and passenger respectively has something to do with why it is loading incorrectly.
I was reading something about UseCanonicalNames but I don't see how that impacts this.
I am not just interested in a solution but also a reason why (or how) these other two languages handle their vhost config and why such a change makes a difference.
Thank you for your time and help.
|
py2neo Batch Insert timing out for even 2k nodes
| 15,927,439 | 1 | 0 | 211 | 0 |
python,neo4j,py2neo
|
There is an existing bug with large batches due to Python's handling of the server streaming format. There will be a fix for this released in version 1.5 in a few weeks' time.
| 0 | 0 | 0 | 1 |
2013-04-10T08:16:00.000
| 3 | 1.2 | true | 15,920,449 | 0 | 0 | 1 | 1 |
I'm just trying to do a simple batch insert test for 2k nodes and this is timing out. I'm sure it's not a memory issue because I'm testing with an EC2 xLarge instance and I changed the Neo4j Java heap and datastore memory parameters. What could be going wrong?
|
OpenERP Payroll Loan Deduction
| 15,942,417 | 0 | 2 | 1,052 | 0 |
python,openerp
|
You can create a new loan deduction rule in Salary Rules and specify the amount type there.
This new rule would take effect in the salary slip structure of employees who have taken a loan from the company.
| 0 | 0 | 0 | 0 |
2013-04-10T15:55:00.000
| 3 | 0 | false | 15,930,722 | 0 | 0 | 1 | 2 |
Can anyone help me with a formula or process to deduct loans taken by employees from their salary/wages please? Thanks
|
OpenERP Payroll Loan Deduction
| 15,942,760 | 0 | 2 | 1,052 | 0 |
python,openerp
|
Create a Contract for the employee by configuring the wage and salary structure. In the salary structure, add a new rule for the home loan with the Deduction type and assign a fixed / percentage amount with a negative (-) sign as its deduction. Go to the Employee Payslip, select an employee, and click on Compute Sheet. You will see the salary structure in the Salary Computation tab.
| 0 | 0 | 0 | 0 |
2013-04-10T15:55:00.000
| 3 | 0 | false | 15,930,722 | 0 | 0 | 1 | 2 |
Can anyone help me with a formula or process to deduct loans taken by employees from their salary/wages please? Thanks
|
Django: i want to get a render a page according to specific userid obtained from another page
| 15,932,185 | 2 | 0 | 55 | 0 |
python,django,django-models
|
The simplest option would be to pass in the user ID as a query parameter to the next page, i.e. if the user starts at page http://myserver/userid.html, and enters a user ID of 1234, then they're redirected to the page http://myserver/sec.html?userid=1234.
The second page can access the query parameter via the HttpRequest.GET dictionary.
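A hedged sketch of the two views involved; the template names and redirect target mirror the question, everything else is illustrative:

```python
from django.http import HttpResponseRedirect
from django.shortcuts import render

def userid_view(request):
    if request.method == "POST":
        user_id = request.POST.get("userid", "")
        # carry the id to the next page as a query parameter
        return HttpResponseRedirect("/sec.html?userid=%s" % user_id)
    return render(request, "userid.html")

def sec_view(request):
    user_id = request.GET.get("userid")   # read it from HttpRequest.GET
    context = {"userid": user_id}         # look up this user's security question here
    return render(request, "sec.html", context)
```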
| 0 | 0 | 0 | 0 |
2013-04-10T16:42:00.000
| 2 | 1.2 | true | 15,931,684 | 0 | 0 | 1 | 1 |
I have a page called userid.html in which a user enters a user ID. If this user ID exists, they are taken to the next page, sec.html, where they are asked a security question which they have already set.
This security question is a context variable, and I need to render this page according to the user ID given on the previous page (userid.html), as the security question of each user will be different.
How can this be done in Django?
Thanks in advance
|
google app engine python configuration
| 15,934,118 | 3 | 0 | 181 | 0 |
python,google-app-engine
|
In Launcher go to Edit -> Preferences and set Python Path to match your Python 2.7 path.
| 0 | 1 | 0 | 0 |
2013-04-10T18:48:00.000
| 1 | 1.2 | true | 15,933,982 | 0 | 0 | 1 | 1 |
When I run my app using the Google App Engine Launcher, it gives me a warning sign.
In the log console I found it is using Python 3.3. How can I configure it to use Python 2.7?
|
How to see django website schema
| 15,947,251 | 1 | 1 | 60 | 0 |
python,django
|
The models for one given app usually live in the app's "models.py" module. Now, Rails and Django might not have the same definition of what an "app" is. In Django, you have a "project" which consists of one or more (usually more) "apps", and it's considered good practice to try to make apps as independent (hence potentially reusable) as possible, and there are indeed quite a few reusable apps available, so it's pretty uncommon to have all of the project's models in a single models.py module.
But anyway: if what you really want is "to see the entire database schema", then the best solution is to ask the database itself, whatever framework you use.
| 0 | 0 | 0 | 0 |
2013-04-11T10:59:00.000
| 1 | 1.2 | true | 15,947,024 | 0 | 0 | 1 | 1 |
I come from the Ruby on Rails world. In Rails, there is a file called schema.rb. It lists all the tables, columns and their types for the entire Rails app.
Is there any way in Django to see the entire database schema in one place?
|
django structure for multiple modules
| 15,951,290 | 2 | 3 | 1,660 | 0 |
python,django,module,tablename
|
There's really no correct answer to this. In general, the way in which you break down any programming task into 'modules' is very much a matter of personal taste.
My own view on the subject is to start with a single module, and only break it into smaller modules when it becomes 'necessary', e.g. when a single module becomes excessively large.
With respect to the apps, if all the apps share the same database tables, you'll probably find it easier to do everything in a single app. I think using multiple Django apps is only really necessary when you want to share the same app between multiple projects.
| 0 | 0 | 0 | 0 |
2013-04-11T14:11:00.000
| 4 | 0.099668 | false | 15,951,036 | 0 | 0 | 1 | 1 |
I'm very new to Django, and to Python as well. I want to try out a project written in Django.
Let's say the project has 3 modules:
User
CRUD
Forgot password
login
Booking
CRUD
Search
Default (basically is for web users to view)
Home page
about us
All these have different business logic for the same entity.
Should I create 3 apps for this? If I use 3 different apps, then the table names are all different, as Django automatically adds an app prefix to each table name.
Any suggestion?
|
Will I get charge for transfering files between S3 accounts using boto's bucket.copy_key() function?
| 15,957,021 | 3 | 1 | 322 | 1 |
python,amazon-web-services,amazon-s3,boto,data-transfer
|
If you are using the copy_key method in boto then you are doing server-side copying. There is a very small per-request charge for COPY operations just as there are for all S3 operations but if you are copying between two buckets in the same region, there is no network transfer charges. This is true whether you run the copy operations on your local machine or on an EC2 instance.
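A hedged sketch of such a server-side copy with boto 2; the bucket and key names are made up:

```python
import boto

conn = boto.connect_s3()                              # credentials from env/boto config
dst_bucket = conn.get_bucket("destination-bucket")

# copy_key issues an S3 COPY request: the object is copied inside S3,
# the bytes never pass through the machine running this script.
dst_bucket.copy_key("reports/copy.csv",               # new key name in the destination bucket
                    "source-bucket",                  # source bucket name
                    "reports/original.csv")           # source key name
```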
| 0 | 0 | 1 | 0 |
2013-04-11T18:24:00.000
| 1 | 1.2 | true | 15,956,099 | 0 | 0 | 1 | 1 |
I wrote a little script that copies files from bucket on one S3 account to the bucket in another S3 account.
In this script I use bucket.copy_key() function to copy key from one bucket in another bucket.
I tested it, it works fine, but the question is: do I get charged for copying files between S3 to S3 in same region?
What I'm worried about is that maybe I missed something in the boto source code; I hope it doesn't store the file on my machine and then send it to the other S3 bucket.
Also (sorry if it's too many questions in one topic), if I upload and run this script from an EC2 instance, will I get charged for bandwidth?
|
How can I run my python script from within a web browser and process the results?
| 15,960,018 | 1 | 5 | 20,845 | 0 |
python,django,web-applications,view
|
After following the django tutorial, as suggested in a comment above, you'll want to create a view that has a text field and a submit button. On submission of the form, your view can run the script that you wrote (either imported from another file or copy and pasted; importing is probably preferable if it's complicated, but yours sounds like it's just a few lines), then return the number that you calculated. If you want to get really fancy, you could do this with some javascript and an ajax request, but if you're just starting, you should do it with a simple form first.
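A hedged sketch of that first, non-AJAX version; the form, view, and template names are placeholders, and the word count stands in for your script's function:

```python
from django import forms
from django.shortcuts import render

class TextForm(forms.Form):
    text = forms.CharField(widget=forms.Textarea)

def count_words(text):
    # stand-in for the existing script's function
    return len(text.split())

def word_count_view(request):
    result = None
    form = TextForm(request.POST or None)
    if request.method == "POST" and form.is_valid():
        result = count_words(form.cleaned_data["text"])
    return render(request, "count.html", {"form": form, "result": result})
```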
| 0 | 0 | 0 | 0 |
2013-04-11T22:25:00.000
| 3 | 0.066568 | false | 15,959,936 | 0 | 0 | 1 | 2 |
I have written a short Python script which takes a text and does a few things with it. For example, it has a function which counts the words in the text and returns the number.
How can I run this script within Django?
I want to take that text from the view (text field or something) and return a result back to the view.
I want to use Django only to give the script a web interface. And it is only for me, maybe for a few people, not for a big audience. No deployment.
Edit: When I first thought the solution would be "Django", I asked for it explicitly. That was of course a mistake because of my ignorance of WSGI. Unfortunately nobody advised me of this mistake.
|
How can I run my python script from within a web browser and process the results?
| 21,333,600 | 2 | 5 | 20,845 | 0 |
python,django,web-applications,view
|
What nobody told me here, since I asked about Django:
What I really needed was a simple solution called WSGI. In order to make your Python script accessible from the web browser you need neither Django nor Flask. A much easier solution is something like Werkzeug or CherryPy.
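To make that concrete, a minimal WSGI sketch using only the standard library (Python 2 era, matching the question); it exposes a word-count function at http://localhost:8000/?text=...:

```python
from wsgiref.simple_server import make_server
from urlparse import parse_qs            # urllib.parse on Python 3

def count_words(text):
    return len(text.split())

def application(environ, start_response):
    query = parse_qs(environ.get("QUERY_STRING", ""))
    text = query.get("text", [""])[0]
    body = "word count: %d" % count_words(text)
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

if __name__ == "__main__":
    make_server("localhost", 8000, application).serve_forever()
```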
| 0 | 0 | 0 | 0 |
2013-04-11T22:25:00.000
| 3 | 0.132549 | false | 15,959,936 | 0 | 0 | 1 | 2 |
I have written a short Python script which takes a text and does a few things with it. For example, it has a function which counts the words in the text and returns the number.
How can I run this script within Django?
I want to take that text from the view (text field or something) and return a result back to the view.
I want to use Django only to give the script a web interface. And it is only for me, maybe for a few people, not for a big audience. No deployment.
Edit: When I first thought the solution would be "Django", I asked for it explicitly. That was of course a mistake because of my ignorance of WSGI. Unfortunately nobody advised me of this mistake.
|
Django signal after whole model has been saved
| 16,020,917 | 3 | 7 | 588 | 0 |
python,django,signals
|
I feel like the only option is to process the data after every m2m_change, since there doesn't appear to be an event or signal that maps to "all related data on this model has finished saving."
If this is high cost, you could handle the processing asynchronously. When I encountered a similar situation, I added a boolean field to the model to handle state as it related to processing (e.g., MyModel.needs_processing) and a separate asynchronous task queue (Celery, in my case) would sweep through every minute and handle the processing/state resetting.
In your case, if m2m_changed and needs_processing is False, you could set needs_processing to True and save the model, marking it for processing by your task queue. Then, even when the second m2m_changed fired for the other m2m field, it wouldn't incur duplicate processing costs.
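A hedged sketch of that flag-setting handler; MyModel, its tags M2M field, and the needs_processing boolean are assumptions standing in for the real models:

```python
from django.db.models.signals import m2m_changed
from django.dispatch import receiver
from myapp.models import MyModel   # hypothetical model with two M2M fields

@receiver(m2m_changed, sender=MyModel.tags.through)   # repeat for the second M2M field
def flag_for_processing(sender, instance, action, **kwargs):
    if action in ("post_add", "post_remove", "post_clear") and not instance.needs_processing:
        instance.needs_processing = True
        instance.save(update_fields=["needs_processing"])
        # A periodic worker (e.g. a Celery task) later picks up flagged rows,
        # processes them once, and resets needs_processing to False.
```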
| 0 | 0 | 0 | 0 |
2013-04-12T12:47:00.000
| 1 | 1.2 | true | 15,971,821 | 0 | 0 | 1 | 1 |
I have a Django model with 2 ManyToMany fields. I want to process the data from the model each time it has been saved.
The post_save signal is sent before it saves the ManyToMany relations, so I can't use that one. Then you have the m2m_changed signal, but since I have 2 ManyToMany fields I cannot be sure on which ManyToMany field I should put the signal.
Isn't there a signal that is triggered after all the ManyToMany fields have been saved?
|
importing module without importing that models imports
| 15,977,341 | -1 | 1 | 1,523 | 0 |
python,import
|
If you want to use from module import * and not include everything imported within module then you should define the variable __all__ in module. This variable should be a list of strings naming the classes, variables, modules, etc. that should be imported.
From the python documentation
If the list of identifiers is replaced by a star (*), all public names defined in the module are bound in the local namespace of the import statement
and
The public names defined by a module are determined by checking the module’s namespace for a variable named __all__; if defined, it must be a sequence of strings which are names defined or imported by that module. The names given in __all__ are all considered public and are required to exist. If __all__ is not defined, the set of public names includes all names found in the module’s namespace which do not begin with an underscore character (_) [...] It is intended to avoid accidentally exporting items that are not part of the API (such as library modules which were imported and used within the module). (emphasis mine)
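A small sketch of that, with f1/f2/f3 matching the names in the question; note that the internal import still executes when the module is loaded, __all__ only controls which names a star-import re-exports:

```python
# moduleX.py
import os                        # internal helper; not re-exported by "from moduleX import *"

__all__ = ["f1", "f2", "f3"]     # the only names bound by a star-import

def f1():
    return os.getcwd()

def f2():
    return "f2"

def f3():
    return "f3"
```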
| 0 | 0 | 0 | 0 |
2013-04-12T17:05:00.000
| 2 | -0.099668 | false | 15,977,072 | 1 | 0 | 1 | 1 |
I am trying to import a Python module without importing the imports of that module. I dug around a bit, but the only way to exclude a command from being run when a file is imported is the if __name__ == "__main__": guard.
But the module is also imported by various other modules that need that module's imports, so I can't place the imports below the if __name__ == "__main__":
Any idea how to solve that?
The reason I don't want to import this module's imports is that those modules also get run from a JAR/Jython environment and import java.lang functions. I just need to access a few functions in that file without the whole thing; importing those modules breaks my script. The functions that I am trying to access don't need any dependencies that the module has.
I import via 'from moduleX import f1,f2,f3'
|
Using sklearn and Python for a large application classification/scraping exercise
| 15,998,577 | 5 | 5 | 940 | 0 |
python,scrapy,classification,scikit-learn
|
Use the HashingVectorizer and one of the linear classification modules that supports the partial_fit API for instance SGDClassifier, Perceptron or PassiveAggresiveClassifier to incrementally learn the model without having to vectorize and load all the data in memory upfront and you should not have any issue in learning a classifier on hundreds of millions of documents with hundreds of thousands (hashed) features.
You should however load a small subsample that fits in memory (e.g. 100k documents) and grid search good parameters for the vectorizer using a Pipeline object and the RandomizedSearchCV class of the master branch. You can also fine-tune the value of the regularization parameter (e.g. C for PassiveAggressiveClassifier or alpha for SGDClassifier) using the same RandomizedSearchCV or a larger, pre-vectorized dataset that fits in memory (e.g. a couple of million documents).
Also linear models can be averaged (average the coef_ and intercept_ of 2 linear models) so that you can partition the dataset, learn linear models independently and then average the models to get the final model.
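A hedged sketch of that out-of-core setup; the mini-batch generator is a made-up stand-in for text streamed from the crawler:

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2 ** 20)   # stateless: nothing to fit up front
clf = SGDClassifier(loss="log")
classes = [0, 1]                                     # e.g. target category vs. everything else

def iter_minibatches():
    # hypothetical generator yielding (texts, labels) batches from the crawl
    yield ["python web framework tutorial"], [1]
    yield ["cheap flights and hotels"], [0]

for texts, labels in iter_minibatches():
    X = vectorizer.transform(texts)
    clf.partial_fit(X, labels, classes=classes)      # classes= is required on the first call
```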
| 0 | 0 | 0 | 0 |
2013-04-13T15:44:00.000
| 2 | 0.462117 | false | 15,989,610 | 0 | 1 | 1 | 1 |
I am working on a relatively large text-based web classification problem and I am planning on using the multinomial Naive Bayes classifier in sklearn in Python and the Scrapy framework for the crawling. However, I am a little concerned that sklearn/Python might be too slow for a problem that could involve classification of millions of websites. I have already trained the classifier on several thousand websites from DMOZ.
The research framework is as follows:
1) The crawler lands on a domain name and scrapes the text from 20 links on the site (of depth no larger than one). (The number of tokenized words here seems to vary between a few thousand to up to 150K for a sample run of the crawler)
2) Run the sklearn multinomial NB classifier with around 50,000 features and record the domain name depending on the result
My question is whether a Python-based classifier would be up to the task for such a large scale application or should I try re-writing the classifier (and maybe the scraper and word tokenizer as well) in a faster environment? If yes what might that environment be?
Or perhaps Python is enough if accompanied with some parallelization of the code?
Thanks
|
In PyCharm, webpages refresh in debug mode, not in run mode
| 30,849,574 | 0 | 1 | 550 | 0 |
python,google-app-engine,google-chrome,pycharm,webapp2
|
It turns out that this wasn't a refresh or caching issue but a timing issue. Under some circumstances, GAE uses an update algorithm that incurs a delay before transactions are applied. In Run mode, the new page was being requested before the update was completed; in Debug mode, enough time passed for the update to be completed.
One solution would have been to change the datastore architecture to eliminate reading an obsolete version of the data, but that caused other, more serious problems.
Another solution was to include a split-second delay, after an update but before displaying the updated record. Not ideal, since it's impossible to know how long that delay has to be, but for now this has been satisfactory.
| 0 | 1 | 0 | 0 |
2013-04-13T17:43:00.000
| 1 | 1.2 | true | 15,990,889 | 0 | 0 | 1 | 1 |
I'm writing a GAE webapp using Python 2.7, webapp2, and Jinja. In development, I run the app under PyCharm 2.7.1 on Mac OS X 10.7.5 (Lion). I'm currently using Chrome 26.0.1410.43 as my browser.
I don't know for sure that this is a PyCharm issue, but that's my best guess. Here's a description:
When I use the "Debug" control to start the app, webpages refresh automatically as I navigate from one page to another. That is, if I start at page A, navigate to page B, take some action that changes what A should look like, and navigate back to A, the change appears.
However, when I use the "Run" control to start the app, with no other changes, webpages do not automatically refresh. In that same scenario, when I navigate back to A, the old version of that webpage appears. I need to click my browser's Refresh control to see the updated page.
Please tell me how to stop the browser from displaying cached pages in Run mode. I haven't tried publishing this to our GAE website yet, and hopefully it won't happen there, but I need Run mode for performance on the video tutorial I'm creating.
Thanks for any suggestions!
|
How to translate a shared Model in Django without breaking uniqueness?
| 16,126,895 | 0 | 1 | 47 | 0 |
python,django,django-models,internationalization
|
Finally I've solved the problem by introducing a new layer, as follows:
There are now the models Area and Resource; both are translatable and are created by admin staff, and regular users aren't allowed to create or modify them. These models will be more abstract to allow doing the matchings.
But, to allow more "precision", two new models have been added: AreaTag and ResourceTag. These two models are created by users, and they aren't translated. So we can have "application" and "applicación" area tags, but as the user has previously chosen an Area, we can do the searches using the Area, which will be common to both cases. So searching at the Tag level won't be necessary to find affinity.
Hope this helps someone trying to do something similar! :)
| 0 | 0 | 0 | 0 |
2013-04-15T14:32:00.000
| 1 | 1.2 | true | 16,017,830 | 0 | 0 | 1 | 1 |
I'm working on a Django (v. 1.5.1) website and I have several models, such as RegisteredUser and Startup. Each of these two models has a Many2ManyField to an Area model. Area contains two fields: name and slug (which is built from name).
This design works well to find users and startups within the same areas, but there is a big problem when trying to internationalizate it. In this design, users create areas and attach them to RegisteredUser and Startup instances by themselves. But, if the same area is created in different languages then an area called computer science will be different from informática (Spanish version).
The idea is that users created areas just as if they were tags, but I think this will bring problems as the one described above.
So I was wondering about possible solutions:
When the user creates an area instance, she has to fill in all the translated versions of the area, and the slug would be calculated from the English version. But this doesn't seem very attractive to the user, does it?
The user can only choose from a list of predefined areas, which have been previously introduced and translated. But what about new ones? This option seems really hard to maintain...
For the translations I'm using django-transmeta.
So, I'd be glad to read your opinions and suggestions on how to deal with this problem.
Thanks a lot!
|
Can we calculate Net Salary with openERp
| 16,029,533 | 1 | 1 | 1,340 | 0 |
python,postgresql,openerp,erp
|
Install hr_payroll module. Flow is as below:
Employee --> Contracts --> Salary Structure --> Salary Rules
In Contract, You can set Working Schedule for that employee with Wage. You need to configure Salary Structure with Salary Rules as per your need. Salary rules for Bonus, expenses, etc.
In that salary structure, you need to add those rules. Now, go to the Employee Payslip menu and select the employee; the related information will be filled in automatically. Click on the Compute sheet button. You will get the salary details in the Salary Computation tab as per the salary rules that you added in your salary structure.
For now, there is no link between attendance and payroll in OpenERP; you would need to customize that. How much time it will take depends on the requirements. Hope this will help you.
| 0 | 0 | 0 | 0 |
2013-04-15T15:25:00.000
| 1 | 0.197375 | false | 16,018,995 | 0 | 0 | 1 | 1 |
In our company we decided to use OpenERP.
We are now working to customize OpenERP for our work... we can use it successfully in the warehouse dept. and sales dept.
My question is how to make OpenERP calculate the monthly net salary,
with deductions if the employee is absent or leaves the job, or additions if we decide to add a bonus,
and whether we can program a new model and add it. How difficult is that, and what is the expected time required to do it?
OR how can we access the fields related to attendance and build our own program to calculate the net salary?
|
Python split mp3 channel
| 21,548,733 | 1 | 0 | 1,282 | 0 |
python,mp3
|
I assume you want to split the channels losslessly, without decoding MP3 and re-encoding it - otherwise you would not have mentioned MP3 at all and would have easily found many tools like Audacity to do that.
There are 4 channel modes of MP3 frames - this means 4 types of MP3 files: simple stereo, joint-stereo, dual-channel, and mono. Joint-stereo files can't be split without loss. Mono files don't need splitting. The rest, stereo and dual-channel, make up less than 0.1% of all MP3 files and technically can be split into 2 files, one per channel, without loss. However there isn't any tool on the Internet to do that - not any command line tool nor any GUI tool, because few need the function.
There aren't any Python libraries for you either. Most libraries abstract MP3 files into common audio which you can manipulate after decoding. pymad is the only one specific to MP3 files, and it can tell if a file is using any of the 4 channel modes, but it does not offer to extract a channel without decoding. If you write a new tool, you will have to work on raw MP3 files or produce a library for it.
And it is not easy to write a tool or library for it. It's one stream with 2 channels, not two streams interleaved at the frame level. You cannot simply work on MP3 frames, dropping some frames and keeping others, and manage to extract a channel that way. It's a task for a professional, and perhaps best done in a decoder project (like lame or libmad) rather than in a file manipulation project (like mp3info or the python eyeD3). In other words, this feature is likely written in C, not Python.
Implementaiton Note:
The task to build such a tool thus suits well for a computer science C-programming language course project:
1. it takes a lot of time to do;
2. it requires every skill learned from C programming course;
3. it can get wrong easily;
4. it is likely built on the work of other projects, a lesson in adapting existing work;
5. it is a damn-hard endeavor that no-one did before and thus very rewarding;
6. it can perhaps be done in 300 difficult lines of code instead of bloated simple Visual Basic code, and thus is a good lesson in modesty and quality;
7. and finally: nobody is waiting in a hurry for a working implementation.
All conditions fit perfectly for a C-programming course project.
Implementation Note 2:
some bit-rates are only possible in mono mode (80kbps), and some bit-rates are only possible in stereo mode (e.g. 320kpbs). Luckily this does not present a problem in this task, because all dual-mp3 bit-rate can be mapped into a fitting mono-mp3 bit-rate -- but not vice versa!
| 0 | 0 | 0 | 0 |
2013-04-15T15:33:00.000
| 1 | 0.197375 | false | 16,019,155 | 0 | 0 | 1 | 1 |
I'd like to separate the channels of an MP3 file in Python and save them in two other files.
Does anybody know a library for this?
Thanks in advance.
|
web2py. no such table error
| 16,026,857 | 3 | 2 | 1,115 | 1 |
python,web2py
|
web2py keeps the structure it thinks the table has in a separate file. If someone has manually dropped the table, web2py will still think it exists, but of course you get an error when you try to actually use the table.
Look for the *.mytable.table file in the databases directory.
| 0 | 0 | 0 | 0 |
2013-04-16T00:21:00.000
| 1 | 1.2 | true | 16,026,776 | 0 | 0 | 1 | 1 |
I get the error no such table: mytable, even though the table is defined in models/tables.py. I use SQLite. Interestingly enough, if I go to admin panel -> my app -> database administration I see a link mytable, but when I click on it I get no such table: mytable.
I don't know how to debug such an error.
Any ideas?
|
Can I set a specific default time for a Django datetime field?
| 54,221,361 | 4 | 30 | 26,326 | 0 |
python,django,datetime,django-models
|
datetime.time(16, 00) does not work.
Use datetime.time(datetime.now()) instead if you are trying to get the current time, or datetime.time(your_date_time),
where your_date_time is a datetime.datetime object.
| 0 | 0 | 0 | 0 |
2013-04-16T01:53:00.000
| 6 | 0.132549 | false | 16,027,516 | 0 | 0 | 1 | 2 |
I have a model for events which almost always start at 10:00pm, but may on occasion start earlier/later. To make things easy in the admin, I'd like for the time to default to 10pm, but be changeable if needed; the date will need to be set regardless, so it doesn't need a default, but ideally it would default to the current date.
I realize that I can use datetime.now to accomplish the latter, but is it possible (and if so, how) to set the time to a specific default value?
Update: I'm getting answers faster than I can figure out which one(s) does what I'm trying to accomplish...I probably should have been further along with the app before I asked. Thanks for the help in the meantime!
|
Can I set a specific default time for a Django datetime field?
| 71,781,104 | 0 | 30 | 26,326 | 0 |
python,django,datetime,django-models
|
What if the forms.TimeField contains a widget?
start_time = forms.TimeField(widget=TimeInput)
I've used the answers above but I can't set up the default value for it. Also bear in mind that I'm using django-bootstrap tags, so I don't have the HTML form input code.
| 0 | 0 | 0 | 0 |
2013-04-16T01:53:00.000
| 6 | 0 | false | 16,027,516 | 0 | 0 | 1 | 2 |
I have a model for events which almost always start at 10:00pm, but may on occasion start earlier/later. To make things easy in the admin, I'd like for the time to default to 10pm, but be changeable if needed; the date will need to be set regardless, so it doesn't need a default, but ideally it would default to the current date.
I realize that I can use datetime.now to accomplish the latter, but is it possible (and if so, how) to set the time to a specific default value?
Update: I'm getting answers faster than I can figure out which one(s) does what I'm trying to accomplish...I probably should have been further along with the app before I asked. Thanks for the help in the meantime!
|
How to easily switch to another SVN branch in PyCharm Django Project
| 16,057,411 | 1 | 0 | 2,174 | 0 |
python,django,svn,version-control,pycharm
|
You can open trunk/branches in multiple PyCharm windows. The project root should be the trunk/branch root.
Normally, you don't need to switch among branches frequently.
| 0 | 0 | 0 | 0 |
2013-04-17T08:32:00.000
| 2 | 0.099668 | false | 16,055,254 | 0 | 0 | 1 | 1 |
I have a PyCharm Django project, versioned with SVN. The project itself was created via the "Checkout from version control" function, and the root of the project is the root of the repository, so it includes trunk and branches.
My questions are:
- How do I easily switch between feature branches?
- Maybe I'm missing something - what, in that case, is good style for working with PyCharm and SVN?
ps
Branches in my case are created frequently - a new one for every specific feature set, and on completion they are reintegrated into trunk.
|
File upload and store, then proccessing with remote worker
| 16,065,078 | 0 | 0 | 229 | 0 |
python,django,ironmq
|
There are a few interesting options.
As an example, you can add an additional "redeploy workers" step to the deploy process. It'll guarantee
consistency between the deployed application and the workers.
Using your own (REST) API is a great idea; I like it even more than sharing models between different applications.
| 0 | 0 | 0 | 0 |
2013-04-17T08:44:00.000
| 3 | 0 | false | 16,055,489 | 0 | 0 | 1 | 2 |
My question is about web application architecture.
I have a website where my users can upload files, and from these files I need to create some kind of reports for the users. When a user uploads a file, it is stored on the server where the website is hosted. The file path is stored in a Django model field. The worker is on another server, and it needs access to my database to process that file. I know how to use the Django ORM by itself, without URLs and other parts of Django.
My question: if I need to create workers on another server which use different Django models from my website, do I need to copy all of the models into every worker?
For example, one worker processes a file and needs the models "Report" and "User". Another worker does other actions and needs the "User" and "Link" models. Every time I change a model on my main website I need to change the same models in my workers; also, different workers can have the same duplicated models. I think that's not good from an architecture point of view.
Any suggestions on how to organize my website and workers?
|
File upload and store, then proccessing with remote worker
| 16,055,809 | 0 | 0 | 229 | 0 |
python,django,ironmq
|
Why do you really need the exact same models in your workers? You can design the worker to have a different model to perform its own actions on your data. Just design APIs for your data and access it separately from your main site.
If it is really necessary, a Django app can be shared across multiple projects. So you can just put some generic code in a separate app (like your shared models) and put it in source control. After an update to your main website, you can easily update the workers as well.
| 0 | 0 | 0 | 0 |
2013-04-17T08:44:00.000
| 3 | 0 | false | 16,055,489 | 0 | 0 | 1 | 2 |
My question is about web application architecture.
I have a website where my users can upload files, and from these files I need to create some kind of reports for the users. When a user uploads a file, it is stored on the server where the website is hosted. The file path is stored in a Django model field. The worker is on another server, and it needs access to my database to process that file. I know how to use the Django ORM by itself, without URLs and the other parts of Django.
My question: if I need to create workers on another server which use different Django models from my website, do I need to copy all of the models into every worker?
For example, one worker processes a file and needs the "Report" and "User" models. Another worker does other actions and needs the "User" and "Link" models. Every time I change a model in my main website, I need to change the same models in my workers; also, different workers can end up with the same duplicate models. I think that's not good from an architecture point of view.
Any suggestions on how to organize my website and workers?
|
django, get an object from a list of filtered objects which has the maximum value for a field
| 16,064,152 | 0 | 0 | 131 | 0 |
python,django
|
objs = Modelname.objects.filter(student__in=malerep).order_by('-Votes', '-student__Percentage')
Then, using objs[0], I was able to access the object of the student with the maximum votes (and the maximum percentage in case of a tie).
| 0 | 0 | 0 | 0 |
2013-04-17T13:57:00.000
| 2 | 0 | false | 16,062,092 | 0 | 0 | 1 | 2 |
Modelname.objects.filter(student__in=malerep).order_by('-Votes','-student__Percentage')
Here I want to specifically get the student who has the maximum number of votes.
What I am trying to do is get the list of students who stood for an election and got some votes, and then sort them based on the number of votes they got. If I can access the first object in this list, I can make the necessary modifications in the database for the student who won the election and also get the details of the rest.
I am stuck at this point. Please help.
Thanks in advance.
|
django, get an object from a list of filtered objects which has the maximum value for a field
| 16,063,068 | 1 | 0 | 131 | 0 |
python,django
|
At this point, you have two choices:
First, use "max" to get the maximum value of a certain field and use that value to retrieve the particular student instance.
Good side: this is easy to implement; bad side: you need to hit the database twice to achieve your goal.
Second, use raw(). You can perform any complicated query you want, and that will solve your problem.
Good side: you have maximal freedom to perform any query; bad side: it requires SQL knowledge.
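As a rough sketch of the first choice, assuming the Modelname/Votes/student fields from the question (two queries, with a percentage ordering to break ties):
from django.db.models import Max
# query 1: highest vote count among the filtered students
top_votes = Modelname.objects.filter(student__in=malerep).aggregate(Max('Votes'))['Votes__max']
# query 2: the student object holding that maximum, best percentage first in case of a tie
winner = Modelname.objects.filter(student__in=malerep, Votes=top_votes).order_by('-student__Percentage')[0]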
| 0 | 0 | 0 | 0 |
2013-04-17T13:57:00.000
| 2 | 1.2 | true | 16,062,092 | 0 | 0 | 1 | 2 |
Modelname.objects.filter(student__in=malerep).order_by('-Votes','-student__Percentage')
Here I want to specifically get the student who has the maximum number of votes.
What I am trying to do is get the list of students who stood for an election and got some votes, and then sort them based on the number of votes they got. If I can access the first object in this list, I can make the necessary modifications in the database for the student who won the election and also get the details of the rest.
I am stuck at this point. Please help.
Thanks in advance.
|
Android Audio API in Python
| 16,287,866 | 1 | 1 | 1,422 | 0 |
android,python,kivy
|
I'm looking for the same thing. I too am looking at Kivy. The possible solutions I can see for audio involve hooking in a 3rd party application as a "recipe" in Kivy.
There is aubio, which apparently can be compiled for iOS/Android (see the Stack Overflow question regarding this), but I believe you have to provide your own audio source for it, which could potentially be handled by the audiostream subproject in Kivy.
Kivy/audiostream appears to import the core libpd project, so you can use the libpd Python bindings. I think this is the path of least resistance, but I had issues when trying to run the examples.
Both of these approaches, I think could work but both need some effort to be able to start using.
| 1 | 0 | 0 | 0 |
2013-04-18T16:48:00.000
| 1 | 0.197375 | false | 16,088,764 | 0 | 0 | 1 | 1 |
I am trying to write a metronome application in Python, and I intend to publish the application for Android and iOS. I have found a few cross-platform frameworks like Kivy, but their audio support is lacking. More specifically, I need very precise audio timing and I can't rely on thread timing or events. I want to write audio data directly to the device's audio output, or create a MIDI file that can be played on the fly. The problem is, I cannot find any suitable framework for this task.
I know that many games have been written for Android in Python, and those games have excellent and precise sound timing. I need help finding either:
a way to create and play MIDI files on the fly in Android with Python,
a Python framework for Android with a suitable audio API to write sound directly to an audio device, or at least play audio with very accurate timing.
Thanks!
|
What is the fastest way to get scraped data from so many web pages?
| 16,131,039 | 0 | 0 | 402 | 1 |
python,mysql,google-app-engine,google-cloud-datastore,web-scraping
|
Based on what I know about your app it would make sense to use memcache. It will be faster, and will automatically take care of things like expiring stale cache entries.
| 0 | 1 | 0 | 0 |
2013-04-19T06:29:00.000
| 1 | 0 | false | 16,098,570 | 0 | 0 | 1 | 1 |
I need to scrape about 40 random web pages at the same time. These pages vary on each request.
I have used RPCs in Python to fetch the URLs and scraped the data using BeautifulSoup. It takes about 25 seconds to scrape all the data and display it on the screen.
To increase the speed, I stored the data in the App Engine datastore so that each item is scraped only once and can be accessed from there quickly.
But the problem is: as the size of the data in the datastore increases, it is taking too long to fetch the data from the datastore (longer than the scraping).
Should I use memcache, or shift to MySQL? Is MySQL faster than the GAE datastore?
Or is there any other better way to fetch the data as quickly as possible?
|
Embed a celery worker in my own code
| 16,154,276 | 0 | 2 | 515 | 0 |
python,django,rabbitmq,celery
|
Replace these names: coordinator with rabbitmq (or some other broker kombu supports) and users with celery workers.
I am pretty sure you can do all you need (and much more) just by configuring celery / kombu and rabbitmq, without writing too many (if any) lines of code.
small note: Celery features scheduled tasks.
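For example, a periodic task could re-assign entities that users did not handle in time; a minimal sketch of a Celery 3.x beat schedule (the task path here is made up for illustration):
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'reassign-stale-entities': {
        'task': 'coordinator.tasks.reassign_stale_entities',  # hypothetical task you would write
        'schedule': timedelta(minutes=5),                      # run every five minutes
    },
}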
| 0 | 1 | 0 | 0 |
2013-04-19T15:49:00.000
| 1 | 0 | false | 16,108,560 | 0 | 0 | 1 | 1 |
I have a service that needs a sort of coordinator component. The coordinator will manage entities that need to be assigned to users, take them away from users if the users do not respond in a timely manner, and also handle user responses if they do respond. The coordinator will also need to contact messaging services to notify the users they have something to handle.
I want the coordinator to be a single-threaded process, as the load is not expected to be too much for the first few years of usage, and I'd much rather postpone all the concurrency issues to when I really need to handle them (if at all).
The coordinator will receive new entities and user responses from a Django webserver. I thought the easiest way to handle this is with Celery tasks - the webserver just starts a task that the coordinator consumes on its own time.
For this to happen, I need the coordinator to contain a celery worker, and replace the current worker mainloop with my own version (one that checks the broker for a new message and handles the scheduling).
How feasible is it? The alternative is to avoid Celery and use RabbitMQ directly. I'd rather not do that.
|
Tracing GET/POST calls
| 16,115,090 | 0 | 0 | 98 | 0 |
php,python,html,networking
|
Chrome provides a built-in tool for seeing the network connections. Press Ctrl+Shift+J to open the JavaScript Console. Then open the Network tab to see all of the GET/POST calls.
| 0 | 0 | 1 | 1 |
2013-04-19T22:23:00.000
| 4 | 0 | false | 16,114,358 | 0 | 0 | 1 | 1 |
Is there a way to trace all the calls made by a web page when loading it? Say, for example, I went to a video-watching site; I would like to trace all the GET calls recursively until I find an mp4/flv file. I know one way to do this would be to follow the URLs recursively, but this solution is not always suitable and is quite limiting (say there are a few thousand links, or the links are in a file which can't be read). Is there a way to do this? Ideally, the implementation could be in Python, but PHP as well as C is fine too.
|
How to Combine Html + CSS code with python function?
| 16,121,984 | 1 | 0 | 3,480 | 0 |
python,html,web-applications,cgi
|
First you will need to understand HTTP. It is a text-based protocol.
I assume by "web site" you mean a User-Agent, like Firefox.
Now, you're talking about an input box; well, this means you've already handled an HTTP request for your content. In most web applications this would have been several requests (one for the dynamically generated application HTML, and more for the static CSS and JS files).
CGI is the most basic way to programmatically inspect already parsed HTTP requests and create HTTP responses from objects you've set.
Now your application is simple enough where you can probably do all the HTTP parsing yourself to gain a basic understanding of what's going on, but you will still need to understand how to develop a server that can listen on a socket.
To avoid all that just find a Python application server that has already implemented all of the above and much more. There are many python application servers to choose from. Use one with a small learning curve for something as simple as above. Some are labeled as "micro-frameworks" in this genre.
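As a hedged illustration of that route (Flask here, with a made-up endpoint, form field and function name), the whole round trip can look like this:
from flask import Flask, request, jsonify

app = Flask(__name__)

def my_function(text):
    # stand-in for your existing Python function that returns a list of strings
    return [text.upper(), text.lower()]

@app.route('/process', methods=['POST'])
def process():
    # the text box in your HTML form posts a field named 'user_input'
    return jsonify(result=my_function(request.form['user_input']))

if __name__ == '__main__':
    app.run()
The HTML form simply POSTs to /process, and the returned JSON list of strings can then be rendered by the page.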
| 0 | 0 | 0 | 0 |
2013-04-20T13:06:00.000
| 2 | 0.099668 | false | 16,120,717 | 0 | 0 | 1 | 1 |
I have zero experience with website development but am working on a group project and am wondering whether it would be possible to create an interaction between a simple html/css website and my python function.
Required functionality:
I have to take in a simple string input from a text box in the website, pass it into my python function which gives me a single list of strings as output. This list of strings is then passed back to the website. I would just like a basic tutorial website to achieve this. Please do not give me a link to the CGI python website as I have already read it and would like a more basic and descriptive view as to how this is done. I would really appreciate your help.
|
Getting POST values from a Django request when they start with the same string
| 32,982,612 | 0 | 0 | 1,678 | 0 |
python,django
|
Yeah, like Daniel said. So I added the following:
posted = [{k.replace('person__',''):v} for k, v in request.POST.items() if k.startswith('person__')]
Then I can use a model form with the posted data.
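Adapted to the 'name' prefix from the question, a one-line sketch would be:
# collect every POST value whose key starts with 'name', keyed by the original field name
names = {k: v for k, v in request.POST.items() if k.startswith('name')}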
| 0 | 0 | 0 | 0 |
2013-04-20T21:25:00.000
| 2 | 0 | false | 16,125,352 | 0 | 0 | 1 | 1 |
I have a form on my site that allows the user to add names to an object. The user can hit a plus button to add another name or a minus button to remove a name. I need to be able to easily pull all POST variables that start with a name.
For example: a user adds two names, so we have two text boxes named 'name0' and 'name1'. Is there a way that I can pull those two values without knowing how many I may have?
One reason I want to do this without knowing is because they could do any number of add and remove functions on the list of names. So I could end up with this:
'name2', 'name10', 'name11'
for the names that come in. I don't want to have to know the exact values. I just want to pull all POST variables that start with 'name'.
Is this possible?
|
processing large cloud storage files in app engine
| 16,135,479 | 0 | 3 | 281 | 0 |
python,google-app-engine,google-cloud-storage
|
The data file doesn't "need to be in memory", and if you try that you will run out of memory.
If you can process it sequentially, open it as a file stream. I've done that with the blobstore; it should be similar.
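A minimal sketch with the GCS client library for App Engine, assuming the library is bundled with the app, the object lives at /your-bucket/your-file, and handle_chunk is a hypothetical per-chunk callback:
import cloudstorage

def process_file(path='/your-bucket/your-file'):
    gcs_file = cloudstorage.open(path)
    try:
        while True:
            chunk = gcs_file.read(1024 * 1024)  # read 1 MB at a time instead of the whole 2 GB
            if not chunk:
                break
            handle_chunk(chunk)  # your own per-chunk processing
    finally:
        gcs_file.close()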
| 0 | 1 | 0 | 0 |
2013-04-21T07:20:00.000
| 2 | 0 | false | 16,128,864 | 0 | 0 | 1 | 1 |
I'm required to process large files, up to 2GB, in GAE (I'm using Python).
Of course I'll be running the code on a backend; however, since local storage isn't available, the data will need to be in memory.
is there a file descriptor like wrapper for boto or other cloud storage supported protocol?
or other recommended technique?
Thanks,
Shay
|
NDB/DB NoSQL Injection Google Datastore
| 16,140,194 | 7 | 2 | 973 | 1 |
python,security,google-app-engine,nosql,google-cloud-datastore
|
Standard SQL injection techniques rely on the fact that SQL has various statements to either query or modify data. The datastore has no such feature. The GQL (the query language for the datastore) can only be used to query, not modify. Inserts, updates, and deletes are done using a separate method that does not use a text expression. Thus, the datastore is not vulnerable to such injection techniques. In the worst case, an attacker could only change the query to select data you did not intend, but never change it.
| 0 | 1 | 0 | 0 |
2013-04-21T18:51:00.000
| 1 | 1.2 | true | 16,134,927 | 0 | 0 | 1 | 1 |
Is there any SQL injection equivalents, or other vulnerabilities I should be aware of when using NoSQL?
I'm using Google App Engine DB in Python2.7, and noticed there is not much documentation from Google about security of Datastore.
Any help would be appreciated!
|
How to determine if user is currently active on site in Django
| 16,141,722 | 0 | 0 | 799 | 0 |
python,django
|
There are several ways to determine user activity:
1) Use JavaScript to send periodic requests to the server.
+: You will be able to determine user activity whether the user is actively working on the site or just keeping the window open.
-: Too many requests.
2) Use Django middleware.
-: Can't determine user activity if the user only keeps the window open.
3) Use an asynchronous framework to keep a long-lived connection, for example Tornado.
This is the cleanest way, but the most labor-intensive.
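A minimal sketch of option 2, assuming a user profile model with a last_activity DateTimeField and this class added to MIDDLEWARE_CLASSES:
from django.utils import timezone

class LastActivityMiddleware(object):
    def process_request(self, request):
        # record the time of the latest request for each authenticated user
        if request.user.is_authenticated():
            profile = request.user.get_profile()  # assumes a profile model is configured
            profile.last_activity = timezone.now()
            profile.save()
Users whose last_activity falls within, say, the last five minutes can then be treated as currently active.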
| 0 | 0 | 0 | 0 |
2013-04-22T03:54:00.000
| 1 | 1.2 | true | 16,139,169 | 0 | 0 | 1 | 1 |
Is there a way to see if a user is currently logged in and active on a site?
For example, I know you can check the authentication token of the user and see if he is still 'actively logged in', but this doesn't tell me much, since the user could technically be logged in for two weeks, though only actively on the site for one minute of that duration. last_login would be equally unhelpful.
What would be a good method in django to check to see if the user is currently active? I have done this in Google Analytics, but was wondering how I could do an entirely-django approach.
|
Django: Risks in handling data directly from request.POST["item"]
| 16,148,219 | 1 | 2 | 159 | 0 |
python,django,security,django-models,django-forms
|
Yes. If these cases were NOT secure they would be security issues and be patched quickly if discovered.
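Even so, it can be worth guarding against malformed input for robustness rather than security; a small sketch assuming the FORMAT constant and MyModel from the question:
from datetime import datetime
from django.http import HttpResponseBadRequest
from django.shortcuts import render

def my_view(request):
    try:
        when = datetime.strptime(request.POST['item'], FORMAT)
    except (KeyError, ValueError):
        return HttpResponseBadRequest('missing or badly formatted date')
    rows = MyModel.objects.filter(name=request.POST.get('item2', ''))  # the ORM parameterizes the value
    return render(request, 'results.html', {'when': when, 'rows': rows})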
| 0 | 0 | 0 | 0 |
2013-04-22T10:06:00.000
| 1 | 1.2 | true | 16,144,431 | 0 | 0 | 1 | 1 |
Sometimes I have a form that is a bit complicated in logic, and needs validation beyond just type checking or regex, so I end up handling data directly from request.POST['item'], like:
datetime.strptime(request.POST['item'], FORMAT)
MyModel.objects.filter(name=request.POST['item2'])
As far as I know, the first example would throw an exception at worst, so no security problems, and for the second example, the Django ORM would prevent SQLi. Is that correct?
I also have regex in the URLConf, so I guess it would be safe to handle the data taken from the URL in views.py because URLConf already validated it with regex, right?
|
automatically extract text from pdf for many files
| 16,153,090 | 1 | 0 | 901 | 0 |
java,python,pdf,text
|
For java: have a look at iText
For python I would use PDFMiner
| 0 | 0 | 0 | 0 |
2013-04-22T17:20:00.000
| 3 | 1.2 | true | 16,152,965 | 0 | 0 | 1 | 1 |
I have about 10,000 PDF files (conference papers) and I need to extract text from a certain section (like the experimental section) of these papers and save it in a file.
Does anyone know a java tool or some python tool which can help me do this?
Thanks in advance
Ayush
|
Is there some sort of way to roll back the initialize_project_db script in pyramid?
| 16,159,421 | 2 | 0 | 139 | 1 |
python,database,pyramid
|
initialize_db is not a migration script. It is for bootstrapping your model and that's that. If you want to tie in migrations with upgrade/rollback support, look at alembic for SQL schema migrations.
| 0 | 0 | 0 | 0 |
2013-04-22T21:36:00.000
| 1 | 1.2 | true | 16,157,144 | 0 | 0 | 1 | 1 |
I am a rails developer that is learning python and I am doing a project using the pyramid framework. I am used to having some sort of way of rolling back the database changes If I change the models in some sort of way. Is there some sort of database rollback that works similar to the initialize_project_db command?
|
special character encoding C# and Ironpython
| 16,273,312 | 0 | 0 | 387 | 0 |
c#,special-characters,ironpython
|
You are probably doing something wrong. There are no issues with encoding and IronPython. Check the encoding of the script you load beforehand.
| 0 | 0 | 0 | 1 |
2013-04-22T22:13:00.000
| 1 | 0 | false | 16,157,636 | 1 | 0 | 1 | 1 |
I am facing an encoding issue while trying to pass a string from two C# modules using Ironpython code as a bridge.
Special characters like € , © gets distorted when the string is received by the recipient module.
Can anyone please advise if its a IronPython issue ? and how to fix this type of issue
Thanks,
Amit
|
Remove django sites app
| 16,158,406 | 1 | 0 | 182 | 0 |
python,django,django-1.5
|
If you are sure that there aren't any dependencies from other files in your project or in your apps, you can safely remove them.
First, comment them out one by one, and every time check the project in your browser to see if it is running correctly. Also check the logs for warnings and errors.
| 0 | 0 | 0 | 0 |
2013-04-22T23:17:00.000
| 1 | 1.2 | true | 16,158,308 | 0 | 0 | 1 | 1 |
I'd like to know if I can just ditch the Sites default app (comment it out from INSTALLED_APPS and so on) without breaking anything?
It's written in the doc that some other parts of django use it (redirects framework, comments, flatpages, syndic, auth, shortcut and view on site), but it's not explicitly said if it's going to break them. Is it?
Django 1.5
|
Differentiating logged-in users as admin and normal user in Google App Engine?
| 16,191,993 | 1 | 0 | 145 | 0 |
python,google-app-engine,google-cloud-datastore
|
Why is what you describe not possible? The object representing the logged-in user is an instance of google.appengine.api.users.User. That object has a user_id property. You can use that ID as a field in your own user model, to which you can add a field determining whether or not they are an admin.
| 0 | 1 | 0 | 0 |
2013-04-24T11:53:00.000
| 1 | 1.2 | true | 16,191,318 | 0 | 0 | 1 | 1 |
I am developing an app in GAE. The application provides a different view depending on whether the logged-in user is an admin or a normal user. I want to use 'Google Apps domain' as the Authentication Type, so that all users of my domain can log into the application and use it.
Here, the application can't differentiate whether a logged-in user is an admin or a normal user. Somehow I should mark a user as admin, and as soon as that user logs in, the application should use the admin view for that user. Is there any way to tell the application that a particular user is an admin?
If we have our own USER table, we can mark any user as admin. Whenever a user logs into the app, app can consult USER table to check if user is admin or not? But in my scenario, it is not possible.
|
How to Connect My Django App with incoming data on a TCP Port
| 16,217,531 | 0 | 0 | 797 | 0 |
python,django,tcp-ip
|
You'll probably have to listen on the port from another process, or thread. Save the incoming data somewhere, whether it be a log file, database, or whatever. Then have Django use this data when it prepares the web page to send in response to requests on the URL.
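A rough sketch of such a standalone listener (the port and log-file path are arbitrary); a Django view can then read the saved data when building its response:
import socket

def listen(port=9000, logfile='/tmp/incoming.log'):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('0.0.0.0', port))
    server.listen(5)
    while True:
        conn, addr = server.accept()
        data = conn.recv(4096)
        with open(logfile, 'a') as f:  # store it somewhere Django can read it
            f.write(data + '\n')
        conn.close()

if __name__ == '__main__':
    listen()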
| 0 | 0 | 0 | 0 |
2013-04-25T10:18:00.000
| 1 | 1.2 | true | 16,212,161 | 0 | 0 | 1 | 1 |
I am trying to connect a Django URL to incoming data on a TCP/IP port.
It would be great if someone could shed some light on this.
|
Adding extra properties to the User class in App Engine datastore?
| 16,215,227 | 2 | 0 | 395 | 0 |
python,google-app-engine,google-cloud-datastore,flask
|
As I explained in your other question, you need a separate class. User is not a model, and it is not stored in the datastore. It's simply a combination of the user_id and email that are obtained from Google's accounts system when you log in. If you want to store something about the user, you need to create your own model class, and store user_id and/or email as fields which you compare against the logged-in user.
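A minimal sketch of such a model using NDB (the class and field names are just examples):
from google.appengine.api import users
from google.appengine.ext import ndb

class UserProfile(ndb.Model):
    user_id = ndb.StringProperty(required=True)  # from users.get_current_user().user_id()
    email = ndb.StringProperty()
    nickname = ndb.StringProperty()              # plus whatever extra fields your app needs

def get_or_create_profile():
    gae_user = users.get_current_user()
    profile = UserProfile.query(UserProfile.user_id == gae_user.user_id()).get()
    if profile is None:
        profile = UserProfile(user_id=gae_user.user_id(), email=gae_user.email())
        profile.put()
    return profile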
| 0 | 1 | 0 | 0 |
2013-04-25T12:28:00.000
| 1 | 1.2 | true | 16,214,803 | 0 | 0 | 1 | 1 |
I am working on an App using Flask and App Engine, which requires extra information to be stored in User Object apart from nickname, email and user_id.
Is it possible to extend User class in datastore?
If not, is there is any workaround? I am planning to have my own User model. So, once user logs into the app(using google authentication), I would collect user info using users.get_current_user() function and also add some other extra fields I require. All these information will get stored in my own User model. Is it the right way to handle this situation?
|
Is it possible to run selenium script from web browser?
| 16,219,793 | 0 | 0 | 114 | 0 |
java,python,selenium
|
The script has to run first and create the browser session. Currently AFAIK there is no way to take webdriver control of a browser that is already open.
| 0 | 0 | 1 | 0 |
2013-04-25T15:43:00.000
| 1 | 0 | false | 16,219,178 | 0 | 0 | 1 | 1 |
I have a script (using Python) that submits to a form on www.example.com/form/info.php
Currently my script will:
- open Firefox
- enter name, age, address
- press submit
What I want to do is have a web form (with name, age, address) on LAMP, and when the user presses submit it adds those options to the Selenium script (to be put into www.example.com/form/info.php) and submits it directly in the browser. Is this possible?
UPDATE:
I know this is possible using mechanize, because I have tested it out, but it doesn't work so well with JavaScript, which is why I am using Selenium.
|
DateTime field OpenErp
| 16,229,121 | 4 | 0 | 422 | 0 |
python,module,field,openerp
|
What you want is currently not possible in OpenERP. But you can use one trick: use two fields, one integer for giving the interval and one char field for giving months, days, etc. You can find an example of this in the Scheduler (the ir.cron object) of OpenERP.
| 0 | 0 | 0 | 0 |
2013-04-25T19:18:00.000
| 1 | 1.2 | true | 16,223,006 | 1 | 0 | 1 | 1 |
I need a field that can contain a number and text, for example "6 months". I've used the datetime field, but it only takes a formatted date; if I use integer or float it takes a number, and char takes only characters. So how can I have an integer and a char in the same field?
|
How to make Striped RML table in openerp?
| 16,248,525 | 1 | 0 | 280 | 0 |
python,openerp,rml
|
That's a bit tricky; you should refer to the "survey" module's report in OpenERP, which has such reports. I hope it helps.
Cheers,
Parthiv
| 0 | 0 | 0 | 0 |
2013-04-26T12:22:00.000
| 1 | 0.197375 | false | 16,236,364 | 0 | 0 | 1 | 1 |
I am going to implement a BlockTable. I have placed a BlockTable inside a
section and I want the <tr> color to be dynamic, like striped (one white & one grey),
but I have no idea how to do it.
Can anyone help me?
|
Authenticate a server versus an AppEngine application
| 16,270,248 | 1 | 0 | 77 | 0 |
python,google-app-engine,authentication,google-cloud-endpoints
|
That is exactly what you need to do. On the server, generate a key (you choose the length) and store it in the datastore. When the other server makes a request, use HTTPS and include the key. It's like an API key (it is one, actually).
| 0 | 1 | 0 | 0 |
2013-04-26T13:30:00.000
| 1 | 1.2 | true | 16,237,742 | 0 | 0 | 1 | 1 |
I cannot see how I could authenticate a server against GAE.
Let's say I have an application on GAE which has some data, and I somehow need this data on another server.
It is easy to enable OAuth authentication on GAE, but here I can't use it since there is no "account" bound to my server.
Plus, GAE doesn't support client certificates.
I could generate a token for each server that needs to access the GAE application, and transfer it to that server. The server would then use it to access the GAE application by adding it to the URL (using HTTPS)...
Any other idea?
|
how to write a client/server app in heroku
| 16,245,012 | 3 | 0 | 569 | 1 |
python,heroku
|
Heroku is for developing Web (HTTP, HTTPS) applications. You can't deploy code that uses socket to Heroku.
If you want to run your app on Heroku, the easier way is to use a web framework (Flask, CherryPy, Django...). They usually also come with useful libraries and abstractions for you to talk to your database.
| 0 | 0 | 0 | 0 |
2013-04-26T20:44:00.000
| 1 | 0.53705 | false | 16,244,924 | 0 | 0 | 1 | 1 |
I am quite new to Heroku and I have reached a bump in my development...
I am trying to write a server/client kind of application... on the server side I will have a DB (I installed PostgreSQL for Python) and I was hoping I could reach the server, for now, via a Python client (for test purposes) and send data/queries and perform basic tasks on the DB.
I am using Python with Heroku; I managed to install the DB and it seems to be working (i.e. I can query, insert, delete, etc...)
Now all I want is to write a server (in Python) that would be my app and would listen on a port, receive messages, and then perform whatever tasks it is asked to do... I thought about using sockets for this and have managed to write a basic server/client locally... however, when I deploy the app on Heroku I cannot connect to the server and my code is basically worthless.
Can somebody please advise on the basic framework for this sort of requirement... surely I am not the first person to want to write a client/server app... if you could point to a tutorial/doc I would be much obliged.
Thanks
|
Django-cms: Can't publish 2 pages with the same slug in different levels
| 16,262,956 | 2 | 4 | 660 | 0 |
python,django,django-cms,slug
|
Try to delete the pages and re-add them afterwards.
Start publishing from the root and then the child.
If it still fails, you can check the db table cms_titles to fix your paths manually, or post them here.
| 0 | 0 | 0 | 0 |
2013-04-27T07:10:00.000
| 1 | 1.2 | true | 16,249,380 | 0 | 0 | 1 | 1 |
I'm having an issue with slugs in child pages. Imagine I have a page called "Something" in the main tree with the slug "something", and another page named "Anything" with the slug "anything". This second page (Anything) has a child page also called "Something", again with the slug "something", which should result in an /anything/something/ url. This was working on django-cms 2.3.5, but it's not working anymore on 2.4.1: I get an error saying I've already used that url (Page 'Something' has the same url 'something' as current page "Something"). It's the only thing stopping me from updating to 2.4.1 (the latest release at the moment). Thank you. Note that it will let me create the duplicate page if it's not published; the problem is when I try to publish them.
|
What happens when you have an infinite loop in Django view code?
| 16,252,661 | 0 | 7 | 2,964 | 0 |
python,django,web,infinite-loop
|
Yes, your analysis is correct. The worker thread/process will keep running. Moreover, if there is no wait/sleep in the loop, it will hog the CPU. Other threads/processes will get very little CPU time, resulting in slow responses across your entire site.
Also, I don't think the server will explicitly send any timeout error to the client. If a TCP timeout is set, the TCP connection will be closed.
The client may also have its own timeout for getting a response, which may come into play.
Not writing such code is the best way to avoid this problem. You can also have a monitoring tool on the server that watches CPU/memory usage and notifies you of abnormal activity so that you can take action.
| 0 | 0 | 0 | 0 |
2013-04-27T13:11:00.000
| 3 | 0 | false | 16,252,538 | 1 | 0 | 1 | 1 |
Something that I just thought about:
Say I'm writing view code for my Django site, and I make a mistake and create an infinite loop.
Whenever someone would try to access the view, the worker assigned to the request (be it a Gevent worker or a Python thread) would stay in a loop indefinitely.
If I understand correctly, the server would send a timeout error to the client after 30 seconds. But what will happen with the Python worker? Will it keep on working indefinitely? That sounds dangerous!
Imagine I've got a server in which I've allocated 10 workers. I let it run and at some point, a client tries to access the view with the infinite loop. A worker will be assigned to it, and will be effectively dead until the next server restart. The dangerous thing is that at first I wouldn't notice it, because the site would just be imperceptibly slower, having 9 workers instead of 10. But then it might happen again and again throughout a long span of time, maybe months. The site would just get progressively slower, until eventually it would be really slow with just one worker.
A server restart would solve the problem, but I'd hate to have my site's functionality depend on server restarts.
Is this a real problem that happens? Is there a way to avoid it?
Update: I'd also really appreciate a way to take a stacktrace of the thread/worker that's stuck in an infinite loop, so I could have that emailed to me so I'll be aware of the problem. (I don't know how to do this because there is no exception being raised.)
Update to people saying things to the effect of "Avoid writing code that has infinite loops": In case it wasn't obvious, I do not spend my free time intentionally putting infinite loops into my code. When these things happen, they are mistakes, and mistakes can be minimized but never completely avoided. I want to know that even when I make a mistake, there'll be a safety net that will notify me and allow me to fix the problem.
|
Run background jobs with elastic beanstalk
| 16,341,391 | 5 | 6 | 2,006 | 0 |
python,amazon-web-services,config,web-worker,amazon-elastic-beanstalk
|
fixed it, just needed to write this command instead:
command: "nohup ./workers.py > foo.out 2> foo.err < /dev/null &"
| 0 | 1 | 0 | 0 |
2013-04-28T09:20:00.000
| 1 | 1.2 | true | 16,261,413 | 0 | 0 | 1 | 1 |
I am trying to start a background job in Elastic Beanstalk. The background job has an infinite loop, so it never returns a response, and so I receive this error: "Some instances have not responded to commands. Responses were not received from [i-ba5fb2f7]."
I am starting the background job in the elastic beanstalk .config file like this:
06_start_workers:
command: "./workers.py &"
Is there any way to do this? I don't want elastic beanstalk to wait for a return value of that process ..
|
GAE: planning for exportability and relational databases
| 16,268,751 | 1 | 0 | 48 | 1 |
google-app-engine,python-2.7,google-cloud-datastore
|
GAE's datastore just doesn't export well to SQL. There's often situations where data needs to be modeled very differently on GAE to support certain queries, ie many-to-many relationships. Denormalizing is also the right way to support some queries on GAE's datastore. Ancestor relationships are something that don't exist in the SQL world.
In order to import export data, you'll need to write scripts specific to your data models.
If you're planning for compatibility with SQL, use CloudSQL instead of the datastore.
In terms of moving data between dev/production, you've already identified the ways to do it. There's no real "easy" way.
| 0 | 1 | 0 | 0 |
2013-04-28T19:40:00.000
| 1 | 1.2 | true | 16,266,979 | 0 | 0 | 1 | 1 |
I'm building a web app in GAE that needs to make use of some simple relationships between the datastore entities. Additionally, I want to do what I can from the outset to make import and exportability easier, and to reduce development time to migrate the application to another platform.
I can see two possible ways of handling relationships between entities in the datastore:
Including the key (or ID) of the related entity as a field in the entity
OR
Creating a unique identifier as an application-defined field of an entity to allow other entities to refer to it
The latter is less integrated with GAE, and requires some kind of mechanism to ensure the unique identifier is in fact unique (which in turn will rely on ancestor queries).
However, the latter may make data portability easier. For example, if entities are created on a local machine they can be uploaded (provided the unique identifier is unique) without problem. By contrast, relying on the GAE defined ID will not work as the ID will not be consistent from the development to the deployed environment.
There may be data exportability considerations too that mean an application-defined unique identifier is preferable.
What is the best way of doing this?
|
Send image as base64 via Google Endpoints
| 16,276,321 | 1 | 1 | 406 | 0 |
google-app-engine,python-2.7,base64,google-cloud-endpoints,protorpc
|
The way it finally worked was to just open the file (open().read()) and save it in the NDB.
The response message was a BytesField, sending the string from open().read() directly, without any encoding.
The console in my browser was not displaying the value of the field in the answer, but it works normally in my app.
| 0 | 0 | 1 | 0 |
2013-04-29T07:22:00.000
| 1 | 1.2 | true | 16,273,227 | 0 | 0 | 1 | 1 |
I have an endpoint that must send an image in the response.
The original image is a file in the server that I open with python (open().read()) and save it in the NDB as BlobProperty (ndb.BlobProperty()).
My protoRPC message is a BytesField.
If I go in the apis-explorer the picture comes with the correct value, but it doesn't work in my JS Client.
I've been trying to just read the file, encode and decode base64 but the JS is still not recognizing it.
Does anyone have an idea how to solve it? How can I send the base64 image via Endpoints?
Thank you!
|
Lucene or Python: Select both "Hilary Clinton" and "Clinton, Hilary" name entries
| 16,290,406 | 1 | 2 | 194 | 1 |
python,regex,neo4j,lucene
|
Can you just use OR? "Hilary Clinton" OR "Clinton, Hilary"?
| 0 | 0 | 0 | 0 |
2013-04-30T00:09:00.000
| 2 | 0.099668 | false | 16,290,237 | 1 | 0 | 1 | 1 |
Let's say I have some free-form entries for names, where some are in the format "Last Name, First Name" and others are in the format "First Name Last Name" (e.g. "Bob MacDonald" and "MacDonald, Bob" are both present).
From what I understand, Lucene indexing does not allow wildcards at the beginning of a term, so what would be some ways in which I could find both? This is for neo4j and py2neo, so solutions in either Lucene pattern matching or Python regex matching are welcome.
|
Installing django on two existing versions on python
| 24,737,475 | 1 | 0 | 66 | 0 |
django,python-2.7,python-3.x
|
The real answer here is that you should be using virtual environments, as they both solve this particular problem, and are the industry standard because they solve many problems.
Simple process:
install python-virtualenv :
$>sudo apt-get install python-virtualenv # on Ubuntu
create a virtual-env against the Python you want, and activate it
$> virtualenv -p /usr/bin/pythonX ve
$> source ve/bin/activate
install django into this virtual-env
$> pip install django
...
PROFIT!
| 0 | 0 | 0 | 0 |
2013-04-30T04:33:00.000
| 2 | 1.2 | true | 16,292,220 | 1 | 0 | 1 | 1 |
I have both Python 2.7 and 3.2 installed on my computer, and I wanted to install Django. When I did, it automatically installed Django 1.4 on Python 3. Is there a way I can install it on Python 2.7?
|
OpenErp - External Id Bulk Update
| 16,336,050 | 1 | 1 | 143 | 0 |
python,postgresql,constraints,openerp
|
You can do this with a module by loading the data through XML or CSV. Have a look at any module with a security\ir.model.access.csv file.
So to load data, create a new module and add a CSV file named after the table you want to load into (e.g. ir.model.data.csv), and add it to the __openerp__.py file under 'update_xml'.
| 0 | 0 | 0 | 0 |
2013-04-30T19:16:00.000
| 1 | 1.2 | true | 16,307,347 | 0 | 0 | 1 | 1 |
I need a way to add some external IDs to the system without having to add them manually or by .csv.
Is there any way to do this with a module, perhaps one that updates the ir.model.data tables of the db?
If so, what module should i look for? Is there any in existence, so i can make a new one based on it?
Thanks in advance
|
Python web app deployment of multiple app instances
| 16,318,079 | 1 | 0 | 289 | 0 |
python,web-applications,deployment,web-deployment
|
The app will be modified in such a way that one instance may serve
multiple customers. Based on the requested domain, it will prepare (load) the
right settings, database connection etc. for that customer.
Is this good idea?
Well, I've used a similar system in production whereby there are n instances of the app, but each instance can serve any customer, based on the HTTP Host header, and it works quite well.
Given a sufficiently large number of customers, it may not be cost-effective, or even practical, to have one instance per customer.
| 0 | 0 | 0 | 0 |
2013-05-01T12:13:00.000
| 1 | 1.2 | true | 16,317,886 | 0 | 0 | 1 | 1 |
I have a Python web app (WSGI) which is deployed using uWSGI and nginx. I am going to provide this app to many users (customers) - each user will have their own settings, database, templates, data folder etc. The code of the app can be shared.
My original idea was to have one uWSGI process per customer. But it is quite a wasteful approach, because currently the app has about a 100MB memory footprint. I expect that most of these instances will be sleeping most of the time (max 500 requests per day).
I came up with this solution:
The app will be modified in such a way that one instance may serve multiple customers. Based on the requested domain, it will prepare (load) the right settings, database connection etc. for that customer.
Is this good idea? Or should I rather focus on lowering the memory footprint?
Thank you for your answers!
|
Is it bad practice to write a whole Flask application in one file?
| 16,321,335 | 3 | 12 | 3,546 | 0 |
python,flask,coding-style,standards,project-structure
|
There is no right or wrong answer to this. One file may be easy to manage if it is a very small project and you probably are the only one working on it. Some of the reasons you split the project into multiple source files however are:
You only change and commit what requires change. What I mean here is if you have a large single file with all your code in it, any change in the file will mean saving/updating the entire file. Imagine if you made a mistake, the entire codebase could get screwed up.
You have a large team possibly with different set of duties and responsibilities. For example, you could have a designer who only takes care of the design/front end (HTML,CSS etc.). If you have all the code in one file, they are exposed to the other stuff that they don't need to worry about. Also, they can independently work on their portion without needing to worry about anything else. You minimize the risk of mistakes by having multiple source files here.
Easier to manage as the codebase gets bigger. Can you imagine looking through 100,000 lines of code in one single file and trying to debug a problem ?
| 0 | 0 | 0 | 0 |
2013-05-01T14:59:00.000
| 3 | 0.197375 | false | 16,320,603 | 0 | 0 | 1 | 2 |
I'm currently writing a web application in Python using the Flask web framework. I'm really getting used to just putting everything in the one file, unlike many other projects I see where they have different directories for classes, views, and so on. However, the Flask example just stuffs everything into the one file, which is what I seem to be going with.
Are there any risks or problems in writing the whole web app in one single file, or is it better to spread out my functions and classes across separate files?
|
Is it bad practice to write a whole Flask application in one file?
| 16,326,032 | -2 | 12 | 3,546 | 0 |
python,flask,coding-style,standards,project-structure
|
As it is a micro framework, you should not rely on it to build full-blown applications, since it's not designed for that.
As long as you keep your project small (a few forms, a few tables and mostly static content) you will be fine. But if you want to have a bigger application, you might "out-program" the capacities of the framework in terms of modularity and reuse of code. In that case you might want to migrate toward a full-blown framework where everything is separated into its own module.
| 0 | 0 | 0 | 0 |
2013-05-01T14:59:00.000
| 3 | -0.132549 | false | 16,320,603 | 0 | 0 | 1 | 2 |
I'm currently writing a web application in Python using the Flask web framework. I'm really getting used to just putting everything in the one file, unlike many other projects I see where they have different directories for classes, views, and so on. However, the Flask example just stuffs everything into the one file, which is what I seem to be going with.
Are there any risks or problems in writing the whole web app in one single file, or is it better to spread out my functions and classes across separate files?
|
Troubleshooting 404 received by python script
| 16,342,227 | 0 | 0 | 52 | 0 |
python,macos
|
So I figured out what the problem was.
The website is returning an erroneous response code for these 6 pages. Even though it's returning a 404, it's also returning the web page. Chrome and Safari seem to ignore the response code and display the page anyway, while my script aborts on the 404.
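If you still want the page content despite the bogus status code, urllib2 exposes the body on the error object; a small sketch (req stands for your existing urllib2.Request):
import urllib2

try:
    body = urllib2.urlopen(req).read()
except urllib2.HTTPError as e:
    body = e.read()  # the server sent a 404 status, but the page body is still available here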
| 0 | 0 | 1 | 0 |
2013-05-02T14:42:00.000
| 2 | 0 | false | 16,341,001 | 0 | 0 | 1 | 1 |
I have a python script that pings 12 pages on someExampleSite.com every 3 minutes. It's been working for a couple months but today I started receiving 404 errors for 6 of the pages every time it runs.
So I tried going to those urls on the pc that the script is running on and they load fine in Chrome and Safari. I've also tried changing the user agent string the script is using and that also didn't change anything. Also I tried removing the ['If-Modified-Since'] header which also didn't change anything.
Why would the server be sending my script a 404 for these 6 pages but on that same computer I can load them in Chrome and Safari just fine? (I made sure to do a hard refresh in Chrome and Safari and they still loaded)
I'm using urllib2 to make the request.
|
Webapp2 Routing and Python inclusion
| 16,353,977 | -1 | 0 | 106 | 0 |
google-app-engine,python-2.7,jinja2,webapp2
|
Copied Here from Comment:
It defines a raw string. Please read docs docs.python.org/2/reference/lexical_analysis.html#literals paying special attention to raw strings. You may use a raw string to more easily define a regular expression as a literal string.
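A tiny illustration of the difference (Python 2 syntax):
print len('\n')   # 1 -- a single newline character
print len(r'\n')  # 2 -- a backslash followed by the letter n
# so a raw string such as r'/page/<id:\d+>' keeps the backslash literal for the regex inside the route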
| 0 | 0 | 0 | 0 |
2013-05-03T03:52:00.000
| 1 | -0.197375 | false | 16,351,319 | 0 | 0 | 1 | 1 |
In the webapp2 URI routing there are some examples using webapp2.Route(r'/', handler='...'), and some aren't using r'/' -- so my question is, what is the R for, and should I be using it?
Also, if you use webapp2_extras.APIs you need to pass the config to the WSGIApplication(), is it possible to define the config lists elsewhere?
As in, is it possible to do config['webapp2_extras.API'] = ['option':'value'] in one file, then include that file inside your "router" and use the variable/list
Thanks in advance!!
|
Firefox produces unsearchable pdfs
| 16,444,728 | 1 | 0 | 338 | 0 |
python,firefox,pdf,selenium,automation
|
Firefox does not control the way your content is being printed to the PDF. Your PDF Printer Driver is responsible for creating the PDF file as a Bitmap snapshot of your page, instead of composing it from the elements in your page. The reason that you find a different behavior in Chrome compared to Firefox, is that Chrome has a built in "Save as PDF" which is different from your installed PDF drivers. So it really comes down to what PDF Printer Driver you are using.
| 0 | 0 | 1 | 0 |
2013-05-03T12:48:00.000
| 1 | 1.2 | true | 16,359,326 | 0 | 0 | 1 | 1 |
Currently I'm writing software for web automation using selenium and autoit.
I've found a strange issue, that for some pages when printing to pdf with firefox I get unsearchable pdfs. I've tried ff 3.5, 4.0, 20, 22, 23 - all have the same issue.
You can reproduce it by printing any linkedin profile - you'll get unsearchable pdf.
Did anyone encounter the same behaviour? How can I bypass it (using python, selenium)?
I've tried the Chrome driver, but it's incredibly slow.
I'm running windows 7 x64 ultimate
It does not depend on the printer used - I have tried a lot of different versions.
By searchable I mean that I should be able to search text in it like in most pdf files.
Update - I still don't understand why it happens. I've tried printing the same web page from IE 9 - it gives exactly the same print dialog as firefox and uses the same pdf printer driver. Nevertheless, it produces searchable pdfs. Guess the problem is related to the way firefox prints documents.
|
Getting near objects. Django + Badoo
| 16,360,039 | 0 | 0 | 259 | 0 |
python,django
|
You keep track of all users, and you show all users whose coordinates are of a similar value.
Yes, that's a generic answer, but it is a very generic question.
I would not be surprised if there are modules for Django doing this already.
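For example, one generic approach is to store each user's latest coordinates and filter by great-circle distance; a sketch assuming a hypothetical UserLocation model with latitude/longitude fields, where my_lat/my_lon are the coordinates sent from iOS:
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    # haversine formula: great-circle distance between two points, in kilometres
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

nearby = [u for u in UserLocation.objects.all()
          if distance_km(my_lat, my_lon, u.latitude, u.longitude) < 5.0]
For larger user counts, GeoDjango or a bounding-box pre-filter in the query would avoid scanning every row.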
| 0 | 0 | 0 | 0 |
2013-05-03T13:17:00.000
| 2 | 0 | false | 16,359,847 | 0 | 0 | 1 | 1 |
I have the following situation.
I keep track of a user's longitude and latitude using ios.
I send the longitude and latitude coordinates to a django server.
How can I use these longitude and latitude coordinates to determine what objects are near?
Basically how can I use these coordinates to determine the list of other users near a user?
|
Consuming from queues based upon external event (event queues)
| 18,451,136 | 0 | 0 | 689 | 0 |
python,erlang,rabbitmq,celery,eventqueue
|
Two more exotic options to consider: (1) define a custom exchange type in the Rabbit layer. This allows you to create routing rules that control which tasks are sent to which queues. (2) define a custom Celery mediator. This allows you to control which tasks move, and when, from queues to worker pools.
| 0 | 1 | 0 | 0 |
2013-05-03T21:40:00.000
| 2 | 0 | false | 16,367,953 | 0 | 0 | 1 | 1 |
I am running into a use case where I would like to have control over how and when celery workers dequeue a task for processing from rabbitmq. Dequeuing will be synchronized with an external event that happens out of celery context, but my concern is whether celery gives me any flexibility to control dequeueing of tasks? I tried to investigate and below are a few possibilities:
Make use of basic.get instead of basic.consume, where basic.get is triggered based upon external event. However, I see celery defaults to basic.consume (push) semantics. Can I override this behavior without modifying the core directly?
Custom remote control the workers as and when the external event is triggered. However, from the docs it isn't very clear to me how remote control commands can help me to control dequeueing of the tasks.
I am very much inclined to continue using celery and possibly keep away from writing a custom queue processing solution on top of AMQP.
|
GAE: Logs from tasks does not appears in dashboard
| 16,476,151 | 0 | 0 | 63 | 0 |
python,google-app-engine
|
Is your application running on the appspot.com domain or your own custom domain? In the former case it should work without you specifiying the target. In the case of a custom domain we are aware of problems with this scenario. Please file a bug in either case.
| 0 | 1 | 0 | 0 |
2013-05-04T01:23:00.000
| 2 | 0 | false | 16,369,685 | 0 | 0 | 1 | 1 |
I'm working with the Google App Engine Tasks Queue feature (Push).
Locally, with the dev server, everything works fine, but once deployed my task fails.
I have put logging calls in it (the Python logging module) but they do not appear in my dashboard logs.
Is there anything I should do to make it work?
Thanks for your help.
|
Evaluate javascript on a local html file (without browser)
| 16,385,053 | 2 | 1 | 1,441 | 0 |
javascript,python,html,screen-scraping,eval
|
Well, in the end I came down to the following possible solutions:
Run Chrome headless and collect the html output (thanks to koenp for the link!)
Run PhantomJS, a headless browser with a javascript api
Run HTMLUnit; same thing but for Java
Use Ghost.py, a Python-based headless browser (that I haven't seen suggested anywhere, for some reason!)
Write a DOM-based javascript interpreter based on Pyv8 (Google v8 javascript engine) and add this to my current "half-solution" with mechanize.
For now, I have decided to either use Ghost.py or my own modification of the PySide/PyQt WebKit (which is how Ghost works) to evaluate the JavaScript, as apparently they can run quite fast if you optimize them to not download images and disable the GUI.
Hopefully others will find this list useful!
| 0 | 0 | 1 | 0 |
2013-05-04T14:16:00.000
| 2 | 0.197375 | false | 16,375,251 | 0 | 0 | 1 | 1 |
This is part of a project I am working on for work.
I want to automate a Sharepoint site, specifically to pull data out of a database that I and my coworkers only have front-end access to.
I FINALLY managed to get mechanize (in python) to accomplish this using Python-NTLM, and by patching part of it's source code to fix a reoccurring error.
Now, I am at what I would hope is my final roadblock: Part of the form I need to submit seems to be output of a JavaScript function :| and lo and behold... Mechanize does not support javascript. I don't want to emulate the javascript functionality myself in python because I would ideally like a reusable solution...
So, does anyone know how I could evaluate the javascript on the local html I download from sharepoint? I just want to run the javascript somehow (to complete the loading of the page), but without a browser.
I have already looked into selenium, but it's pretty slow for the amount of work I need to get done... I am currently looking into PyV8 to try and evaluate the javascript myself... but surely there must be an app or library (or anything) that can do this??
|
Get current locale in Jinja2
| 16,385,180 | 4 | 3 | 4,159 | 0 |
python,jinja2,gettext,python-babel
|
Finally, I used this solution: add the get_locale function, which should be defined anyway, to the Jinja2 globals, and then call it in the template like any other function.
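A minimal sketch of that wiring with Flask-Babel (using the flask.ext import style of that era; the locale-selection logic itself is just an example):
from flask import Flask, request
from flask.ext.babel import Babel

app = Flask(__name__)
babel = Babel(app)

@babel.localeselector
def get_locale():
    # pick the locale from the URL, however your site already does it
    return (request.view_args or {}).get('lang', 'en')

# make the same function callable from every template
app.jinja_env.globals['get_locale'] = get_locale
In the template, {{ get_locale() }} then gives the current language, and the switch link can point to the other one.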
| 0 | 0 | 0 | 0 |
2013-05-05T11:45:00.000
| 3 | 1.2 | true | 16,384,192 | 1 | 0 | 1 | 1 |
My website uses Flask + Jinja2, and Flask-Babel for translation. The site has two languages (depending on the URL), and I want to add a link to switch between them. To do this correctly I need to get the name of the current locale, but I didn't find such a function in the docs. Does it exist at all?
|
Python & Django on a Mac: Illegal hardware instruction
| 16,386,760 | 0 | 4 | 3,921 | 0 |
python,django,homebrew
|
That kind of problem smells like an architecture mess. You may be trying to execute a 64-bit library from a 32-bit interpreter or vice versa… As you're using Homebrew, you should be really careful about which interpreter you're using, what your path is, etc. Maybe you should trace your program to find out more exactly where it fails, so you can pinpoint what is actually failing. It is very unlikely that Django itself fails; it is more likely something that Django uses. For someone to help you, you need to dig closer to the failing point and give more context about what is failing beyond Django.
| 0 | 1 | 0 | 0 |
2013-05-05T16:36:00.000
| 2 | 1.2 | true | 16,386,707 | 0 | 0 | 1 | 2 |
Here is my issue:
I installed Python and Django on my mac. When I run "django-admin.py startproject test1" I get this error:
1 11436 illegal hardware instruction django-admin.py startproject
test1 (the number is always different)
I've tested with multiple Django versions, and this only happens with version 1.4 and higher...1.3 works fine.
I've been searching the web like crazy for the past week, and couldn't find anything regarding this issue with Django, so I assume the problem is not Django itself but something else. This only happens on my Mac at home; at work, where I use Ubuntu, it works fine.
I tried to reinstall my entire system and this are the only things I have installed right now:
- Command line tools
- Homebrew
- Python & pip (w/ Homebrew)
- Git (w/ Homebrew)
- zsh (.oh-my-zsh shell)
I set up my virtualenv and install django 1.5.1 -- the same issue still appears.
I'm out of options for now since nothing I found resolves my problem, I'm hoping someone has some knowledge about this error.
I appreciate all the help, and thanks.
This is the python crash log:
Process: Python [2597] Path: /usr/local/Cellar/python/2.7.4/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
Identifier: Python Version: 2.7.4 (2.7.4) Code Type:
X86-64 (Native) Parent Process: zsh [2245] User ID: 501
Date/Time: 2013-05-05 20:53:19.899 +0200 OS Version: Mac OS
X 10.8.3 (12D78) Report Version: 10
Interval Since Last Report: 16409 sec Crashes Since Last
Report: 2 Per-App Crashes Since Last Report: 1 Anonymous
UUID: D859C141-544F-3473-1A13-F984DB2F8CBE
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_INSTRUCTION (SIGILL) Exception Codes:
0x0000000000000001, 0x0000000000000000
|
Python & Django on a Mac: Illegal hardware instruction
| 68,527,402 | 0 | 4 | 3,921 | 0 |
python,django,homebrew
|
I had the same issue, but worked around it by using Docker/docker-compose.
| 0 | 1 | 0 | 0 |
2013-05-05T16:36:00.000
| 2 | 0 | false | 16,386,707 | 0 | 0 | 1 | 2 |
Here is my issue:
I installed Python and Django on my mac. When I run "django-admin.py startproject test1" I get this error:
1 11436 illegal hardware instruction django-admin.py startproject
test1 (the number is always different)
I've tested with multiple Django versions, and this only happens with version 1.4 and higher...1.3 works fine.
I've been searching the web like crazy for the past week, and couldn't find anything regarding this issue with Django, so I assume the problem is not Django itself but something else. This only happens on my Mac at home; at work, where I use Ubuntu, it works fine.
I tried to reinstall my entire system and this are the only things I have installed right now:
- Command line tools
- Homebrew
- Python & pip (w/ Homebrew)
- Git (w/ Homebrew)
- zsh (.oh-my-zsh shell)
I set up my virtualenv and install django 1.5.1 -- the same issue still appears.
I'm out of options for now since nothing I found resolves my problem, I'm hoping someone has some knowledge about this error.
I appreciate all the help, and thanks.
This is the python crash log:
Process: Python [2597] Path: /usr/local/Cellar/python/2.7.4/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
Identifier: Python Version: 2.7.4 (2.7.4) Code Type:
X86-64 (Native) Parent Process: zsh [2245] User ID: 501
Date/Time: 2013-05-05 20:53:19.899 +0200 OS Version: Mac OS
X 10.8.3 (12D78) Report Version: 10
Interval Since Last Report: 16409 sec Crashes Since Last
Report: 2 Per-App Crashes Since Last Report: 1 Anonymous
UUID: D859C141-544F-3473-1A13-F984DB2F8CBE
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_INSTRUCTION (SIGILL) Exception Codes:
0x0000000000000001, 0x0000000000000000
|
Close GAE channel from server side Python
| 16,394,921 | 0 | 0 | 115 | 0 |
python,google-app-engine,channel-api
|
Since the socket exists on the client you would have to send a close command from the server to the client
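A small sketch of that hand-off on the server side (client_id is whatever id you created the channel with; the client's onmessage handler should call socket.close() when it sees the message):
import json
from google.appengine.api import channel

def ask_client_to_close(client_id):
    # tell the client to shut the channel down on its side
    channel.send_message(client_id, json.dumps({'action': 'close'}))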
| 0 | 0 | 1 | 0 |
2013-05-06T00:42:00.000
| 1 | 0 | false | 16,390,603 | 0 | 0 | 1 | 1 |
On the client side it is possible to close the socket with connection.close(), but is it possible to close it from the server side?
|
python urllib2 - login and specify a form in page
| 16,397,349 | 0 | 0 | 98 | 0 |
python,web
|
9000 said: "I'd try to sniff/track a real exchange between browser and the site; both Chrome and FF have tools for that. I'd also consider using mechanize instead of raw urrlib2"
This is the answer - mechanize is really easy to use and supports multiple forms.
Thanks!
| 0 | 0 | 1 | 0 |
2013-05-06T04:46:00.000
| 2 | 0 | false | 16,392,113 | 0 | 0 | 1 | 1 |
I am trying to log in to a forum using Python/urllib2, but I can't seem to succeed. I think it might be because there are several form objects on the login page and I submit the incorrect one (the same code worked for a different forum with a single form).
Is there a way to specify which form to submit in urllib2?
Thanks.
|
Sync django with south
| 16,395,687 | 0 | 1 | 469 | 0 |
python,django,django-south
|
Suppose that your old project's model definitions are consistent with the database.
And you want to edit some model(s) in app named 'myappName'.
Then your algorithm will be following:
Before making any changes to you model, do:
python manage.py convert_to_south myappName
Modify model(s) in app 'myappName' (see the sketch after these steps).
Create migrations:
python manage.py schemamigration myappName --auto
Apply migrations:
python manage.py migrate myappName
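As an illustration of step 2, a hypothetical new field added to a model in 'myappName'; schemamigration --auto would detect it and migrate would create the column:

    # myappName/models.py (hypothetical model and field)
    from django.db import models

    class Entry(models.Model):
        title = models.CharField(max_length=100)
        # newly added column that --auto should detect:
        views = models.IntegerField(default=0)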
| 0 | 0 | 0 | 0 |
2013-05-06T09:07:00.000
| 1 | 0 | false | 16,395,391 | 0 | 0 | 1 | 1 |
I have installed South in my old Django project.
I have run:
¤ syncdb
¤ convert_to_south myappName
But it did not sync everything.
Then I ran:
¤ migrate myappName
It did not sync 100%; there is still one column that is not found.
Then I ran:
¤ schemamigration -auto myappName
But it still did not sync 100%.
Any ideas?
|
Django changes pub_date when I do .save()
| 16,396,999 | 2 | 0 | 158 | 0 |
python,django,pubdate
|
You can just remove auto_now=True and set the field manually when you want to, in your view.
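A minimal sketch of that approach, assuming a creation view and the Question model from the question; timezone.now() requires Django 1.4 or later:

    from django.utils import timezone
    from myapp.models import Question   # hypothetical import path

    def create_question(request):
        q = Question(flags=0)
        q.pub_date = timezone.now()   # set explicitly only when creating the record
        q.save()
        # later saves that only change q.flags leave pub_date untouched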
| 0 | 0 | 0 | 0 |
2013-05-06T10:39:00.000
| 2 | 0.197375 | false | 16,396,966 | 0 | 0 | 1 | 2 |
I have a model Question with an IntegerField named flags and a DateTimeField called pub_date. pub_date is set with auto_now=True.
I have a view for changing the flags field, and when I change the flags and call .save() on the Question object, its pub_date changes to now.
I want pub_date to be set only when the record is being created, not when I'm changing some data in it. How can I do this?
If you need to see my code, please tell me, but I don't think it's necessary here.
|
Django changes pub_date when I do .save()
| 16,397,041 | 3 | 0 | 158 | 0 |
python,django,pubdate
|
You should set auto_now_add=True instead of auto_now=True.
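A minimal sketch of the field definition with that change, using the model described in the question:

    from django.db import models

    class Question(models.Model):
        flags = models.IntegerField(default=0)
        # set once when the row is first created, never touched on later save() calls:
        pub_date = models.DateTimeField(auto_now_add=True)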
| 0 | 0 | 0 | 0 |
2013-05-06T10:39:00.000
| 2 | 1.2 | true | 16,396,966 | 0 | 0 | 1 | 2 |
I have a model Question with an IntegerField named flags and a DateTimeField called pub_date. pub_date is set with auto_now=True.
I have a view for changing the flags field, and when I change the flags and call .save() on the Question object, its pub_date changes to now.
I want pub_date to be set only when the record is being created, not when I'm changing some data in it. How can I do this?
If you need to see my code, please tell me, but I don't think it's necessary here.
|
Refresh a local web page using Python
| 52,434,003 | 0 | 11 | 66,470 | 0 |
python,html,refresh
|
The LivePage extension for Chrome can do this: you write to a file and LivePage monitors it for you. It can also optionally refresh on imported content like CSS. Chrome will require that you grant the extension permission on local file:// URLs.
(I'm unaffiliated with the project.)
| 0 | 0 | 1 | 0 |
2013-05-06T13:01:00.000
| 8 | 0 | false | 16,399,355 | 0 | 0 | 1 | 1 |
I'm using Python to gather some information, construct a very simple html page, save it locally and display the page in my browser using webbrowser.open('file:///c:/testfile.html'). I check for new information every minute. If the information changes, I rewrite the local html file and would like to reload the displayed page.
The problem is that webbrowser.open opens a new tab in my browser every time I run it. How do I refresh the page rather than reopen it? I tried new=0, new=1 and new=2, but all do the same thing. Using controller() doesn't work any better.
I suppose I could add something like < META HTTP-EQUIV="refresh" CONTENT="60" > to the < head > section of the HTML page to trigger a refresh every minute whether or not the content changed (a sketch of that fallback is below), but I would prefer a better way.
Exact time interval is not important.
Python 2.7.2, chrome 26.0.1410.64 m, Windows 7 64.
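For reference, a minimal sketch of the meta-refresh fallback mentioned above, so the already-open tab reloads itself and webbrowser.open() only has to run once; the path and page body are illustrative:

    import webbrowser

    HTML_PATH = "c:/testfile.html"   # hypothetical local file

    def write_page(body):
        # The meta tag makes the open tab reload itself every 60 seconds.
        html = ('<html><head><meta http-equiv="refresh" content="60"></head>'
                '<body>%s</body></html>' % body)
        with open(HTML_PATH, "w") as f:
            f.write(html)

    write_page("initial content")
    webbrowser.open("file:///" + HTML_PATH)   # open once; the tab refreshes itself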
|
Use Datastore Entity's ID or Key in ProtoRPC.Message
| 16,449,936 | 3 | 1 | 386 | 0 |
python,google-app-engine,google-cloud-datastore,google-cloud-endpoints
|
It depends on what your goal is and whether or not you're using db or nbd.
If you use str(key) you'll get an encoded entity key and will need to construct a new key on the server from that value. With ndb, I would recommend using key.urlsafe() to be explicit and then ndb.Key(urlsafe=value) to recreate the key. Unfortunately the best you can do with db is str(key) and db.Key(string_value).
Using key.id() also depends on ndb or db. If you are using db you know this value will be an integer (and that key.name() will be a string) but if you are using ndb it could be either an integer or a string. In that case, you should use key.integer_id() or key.string_id(). In either case, if you turn integers into strings, this will require manually casting back to an integer before retrieving entities or setting keys; e.g. MyModel.get_by_id(int(value))
If I were to make a recommendation, I would advise you to be explicit about your IDs, pay attention to the way they are allocated and give these opaque values to the user in the API. If you want to let App Engine allocate IDs for you use protorpc.messages.IntegerField to represent these rather than casting to a string.
Also, PLEASE switch from db to ndb if you haven't already.
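A minimal sketch of the recommended ndb approach, passing the key as an opaque string in the ProtoRPC message; the message and entity names are illustrative:

    from google.appengine.ext import ndb
    from protorpc import messages

    class NoteMessage(messages.Message):        # hypothetical message
        content = messages.StringField(1)
        notebook_key = messages.StringField(2)  # opaque reference to another entity

    # Server side: encode a key for the wire, then rebuild it from the request.
    key = ndb.Key('Notebook', 'default')        # hypothetical entity
    encoded = key.urlsafe()
    restored = ndb.Key(urlsafe=encoded)
    notebook = restored.get()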
| 0 | 0 | 0 | 0 |
2013-05-06T14:01:00.000
| 1 | 1.2 | true | 16,400,412 | 0 | 0 | 1 | 1 |
When transmitting references to other Datastore entities using the Message class from ProtoRPC, should I use str(key) or key.id()? The first one is a string, the second one is a long.
Does it make any difference in the end? Are there any restrictions?
It appears that when filtering queries, the same results come out.
Thanks
|
Python Flask running background functions returning values
| 16,738,343 | 0 | 1 | 419 | 0 |
python-2.7,background-process,robotics
|
This may not answer your question directly, as I have a similar question about using Flask with 'background' data.
I just wanted to make sure you know about ROS (ros.org). It has tools to help with what you are doing.
Check out Flask-Script as well, though I'm not sure it will solve your problem. I'm currently using an SQLite DB to pass data between my applications (Flask for user status/interaction, a program that reads the hardware into the DB, and another program that makes decisions based on hardware inputs), but my system is slow and a 1 Hz update rate is plenty for it.
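A minimal sketch of the polling approach described above, assuming the background process writes the latest pose into a hypothetical SQLite table named position; the jQuery side would then request /position every 500 ms:

    import sqlite3
    from flask import Flask, jsonify

    app = Flask(__name__)
    DB_PATH = "robot.db"   # hypothetical DB file written by the Player/Stage reader

    @app.route("/position")
    def position():
        conn = sqlite3.connect(DB_PATH)
        # newest row wins; the background job keeps appending estimates
        row = conn.execute(
            "SELECT x, y, theta FROM position ORDER BY rowid DESC LIMIT 1").fetchone()
        conn.close()
        return jsonify(x=row[0], y=row[1], theta=row[2])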
| 0 | 0 | 0 | 0 |
2013-05-06T17:38:00.000
| 1 | 0 | false | 16,404,100 | 0 | 0 | 1 | 1 |
I have a Python Flask application that I'm planning to use as a monitor for a robot that drives around (using Player/Stage). Now I'm able to connect with the robot and request information and so on.
Player/Stage sends data about the position of the robot every time interval. I'm stuck with the following:
The position information should be displayed in HTML. I was thinking of a jQuery POST that requests the position every 500 ms and then updates the HTML (easy). Is there a better solution?
Player/Stage also sends the actual location estimate of the robot. I want a background process that can save that data so I can display it (like number 1), but I don't see how a Celery background job saves the information it calculates. How can I display the output of a background job and send it to the user (HTML/JSON)?
I actually need to manage a couple of background jobs and let them quit depending on the output of another background job. So for example, when the robot drives to a specific point, I quit a job, start another, display its data to the user, and so on.
I hope my explanation was helpful; I'm looking for advice, code samples and anything related.
Regards,
|
Google App Engine urlfetch loop
| 16,406,284 | 1 | 0 | 81 | 0 |
python,google-app-engine,loops
|
Take a look at the GAE cron functionality.
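A minimal sketch of that setup, assuming a webapp2 handler wired to a cron job; the URL and processing are illustrative, and the cron.yaml entry is shown in a comment:

    import webapp2
    from google.appengine.api import urlfetch

    class FetchHandler(webapp2.RequestHandler):
        def get(self):
            # cron hits this URL on schedule; do the periodic fetch here
            result = urlfetch.fetch("http://example.com/data")   # hypothetical source
            if result.status_code == 200:
                self.response.write("fetched %d bytes" % len(result.content))

    app = webapp2.WSGIApplication([('/tasks/fetch', FetchHandler)])

    # cron.yaml (separate file):
    # cron:
    # - description: fetch the site
    #   url: /tasks/fetch
    #   schedule: every 20 minutes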
| 0 | 1 | 0 | 0 |
2013-05-06T19:46:00.000
| 2 | 0.099668 | false | 16,406,080 | 0 | 0 | 1 | 1 |
Can I make a loop in Google App Engine that fetches information from a site?
I have written a small piece of code that already gets the information I want from the site, but I don't know how to make it run every, let's say, 20 minutes.
Is there a way to do this?
P.S.: I have looked at TaskQueue, but I'm not sure if it is meant for things like this.
|