Dataset schema (column, dtype, observed value range or string lengths):

| Column | Type | Range / lengths |
|---|---|---|
| Title | string | lengths 11–150 |
| A_Id | int64 | 518–72.5M |
| Users Score | int64 | -42–283 |
| Q_Score | int64 | 0–1.39k |
| ViewCount | int64 | 17–1.71M |
| Database and SQL | int64 | 0–1 |
| Tags | string | lengths 6–105 |
| Answer | string | lengths 14–4.78k |
| GUI and Desktop Applications | int64 | 0–1 |
| System Administration and DevOps | int64 | 0–1 |
| Networking and APIs | int64 | 0–1 |
| Other | int64 | 0–1 |
| CreationDate | string | lengths 23–23 |
| AnswerCount | int64 | 1–55 |
| Score | float64 | -1–1.2 |
| is_accepted | bool | 2 classes |
| Q_Id | int64 | 469–42.4M |
| Python Basics and Environment | int64 | 0–1 |
| Data Science and Machine Learning | int64 | 0–1 |
| Web Development | int64 | 1–1 |
| Available Count | int64 | 1–15 |
| Question | string | lengths 17–21k |
web2py cross applications global setting good practice
| 17,322,358 | 0 | 1 | 202 | 0 |
python,django,settings,web2py
|
Although they might not work the same under the hood, you can still put your application-wide global settings in a file such as applications/yourapp/models/0_whatever_name.py
Content in this file is defined before each request reaches your app.
Alternatively, you can simply append your app-wide global definitions to applications/yourapp/models/db.py, which by default already contains many settings for the app.
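A sketch of the pattern described above: a model file that web2py executes (alphabetically first, hence the "0_" prefix) before each request, making its top-level names visible in every controller. All names and values here are illustrative, not part of web2py itself:

```python
# applications/yourapp/models/0_settings.py
# web2py runs files in models/ in alphabetical order before each request,
# so globals defined here are available app-wide.
SECRET_KEY = "change-me"          # app-wide secret (illustrative)
MAX_UPLOAD_MB = 10                # an app-wide constant (illustrative)
settings = {"site_name": "My Site", "debug": False}
```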
| 0 | 0 | 0 | 0 |
2013-03-04T22:29:00.000
| 1 | 0 | false | 15,212,377 | 0 | 0 | 1 | 1 |
I understand there is no such thing as Django's settings.py in web2py. But is there a good place for these global settings in web2py?
I would like to put things like a secret key, global constants, and others.
|
How to catch all the errors in all views?
| 15,218,682 | 5 | 2 | 228 | 0 |
python,django,exception-handling
|
Yes, you can handle all exceptions from any view. Try Googling "django middleware exception"; you'll find many solutions.
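The mechanism the answer points at is a middleware class with a process_exception hook, which Django (1.x era, matching the question) calls for any unhandled exception from any view. The sketch below mirrors that API but returns a plain dict in place of django.http.HttpResponseRedirect("/error/") so it runs without Django installed; class and target names are assumptions:

```python
import logging

class CatchAllExceptionMiddleware(object):
    """Old-style Django middleware; Django calls process_exception for
    every unhandled exception raised by any view."""

    def process_exception(self, request, exception):
        logging.error("view raised %r", exception)
        # Stand-in for HttpResponseRedirect("/error/"):
        return {"status": 302, "location": "/error/"}

# In a real project, register it in settings.py:
# MIDDLEWARE_CLASSES += ("myapp.middleware.CatchAllExceptionMiddleware",)
```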
| 0 | 0 | 0 | 0 |
2013-03-05T08:00:00.000
| 1 | 0.761594 | false | 15,218,566 | 0 | 0 | 1 | 1 |
I know we can use try and except to catch errors. But every day when I monitor Sentry, the system always reports an exception in some view. As usual I put try and except to catch the errors in the views.
My question is: is it possible to catch all the errors from any view in just one function? Then the user would be redirected to another page. Where is the best place to do this? I'm thinking about middleware, but I don't have any knowledge of it.
|
build steps for javascript/css in heroku python app
| 15,227,844 | 0 | 1 | 342 | 0 |
python,web-applications,heroku
|
I would generally recommend your less-than-perfect approach, especially if you have a small number of files.
Simplicity is always better than functionality.
| 0 | 0 | 0 | 0 |
2013-03-05T08:05:00.000
| 2 | 0 | false | 15,218,649 | 0 | 0 | 1 | 1 |
I'm working on a Heroku app built in Python, and I can't find a recommended way to add a step to deployment for concatenating/processing/minifying JavaScript and CSS assets. For example, I might like to use tools like r.js or less.
I've seen something called "collectstatic" that Heroku knows to run for Django apps, but my application is using web.py, not Django.
One less-than-perfect approach would be to use these tools locally, on my development machine, to produce combined/compressed static assets. I could then check those compiled files into the git repository and push them to Heroku.
Is there any support for this kind of step built in to Heroku? What is the best way of handling javascript/css files for Heroku web apps in Python?
|
Get instance of an running Python program
| 15,221,754 | 1 | 0 | 160 | 0 |
python,multithreading,instance
|
You can't (easily) "get an instance of a running program" from outside. You can certainly instrument your program so that it communicates its statistics somehow, e.g. via a socket; or, as an even lower-tech solution, you could have it store the relevant data periodically in a file on disk or in a database, which your web app could read.
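The lower-tech option from the answer can be sketched as follows: the desktop app periodically dumps its stats to a file, and the web app reads the same file. Paths and field names are illustrative:

```python
import json
import os

def write_stats(stats, path):
    # Write to a temp file then rename, so the reader never sees a
    # half-written file (rename is atomic on POSIX filesystems).
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(stats, f)
    os.rename(tmp, path)

def read_stats(path):
    with open(path) as f:
        return json.load(f)
```

The desktop app would call write_stats on a timer; the web2py/Django view would call read_stats on each request.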
| 0 | 0 | 0 | 0 |
2013-03-05T10:13:00.000
| 1 | 0.197375 | false | 15,221,155 | 1 | 0 | 1 | 1 |
Suppose a Python program is running, and an object of a class in that program can give you some stats. If I have to develop a web UI to display those stats, how do I get the instance of that class which is running (as a separate desktop app) and display the stats on the web, using web2py or Django?
|
Can I use uwsgi + (tornado, gevent, etc) at the same time?
| 15,225,962 | 2 | 3 | 1,970 | 0 |
python,django,tornado,uwsgi
|
uWSGI+gevent is a solid combo, while there is currently no way to run uWSGI with the Tornado API (and as uWSGI dropped support for the callback-based approach in 1.9, I think we will never see that combo working).
The problem you need to solve before starting to work with gevent is ensuring that all of your pieces are gevent-friendly (redis and celery are OK; you need to check your database adapter). After that, simply add --gevent <n> to your uWSGI instance, where <n> is the maximum number of concurrent requests per worker.
| 0 | 1 | 0 | 0 |
2013-03-05T13:43:00.000
| 2 | 0.197375 | false | 15,225,320 | 0 | 0 | 1 | 2 |
Why? Because I have a Django project that captures data from the user and consumes many web services, displaying the results to the user in order to compare information; something like aggregator websites that search flight tickets via airline web services and show the results in real time so the tickets can be compared.
Nowadays I'm doing this with a "waiting page", where Celery hits the web services while jQuery asks every 5 seconds whether all results are ready; when they are, the user is redirected to the results page.
What I want is to drop this "waiting page" and feed the results page in real time as the results come in, and I want to do it following best practices; I mean I don't want jQuery to poll every X seconds to feed the table.
I think some coroutine-based Python library can help me with this, but I want to learn from your experience first and see some examples. I am confused because this part of the project was designed to run asynchronously (consuming web services with Celery chords), but not for dispatching the information in real time through the app server.
Actual architecture:
Python 2.7, Django 1.3, PostgreSQL 9, Celery 3 + Redis, uWSGI, nginx, all hosted on AWS.
Thank you in advance.
|
Can I use uwsgi + (tornado, gevent, etc) at the same time?
| 45,253,612 | 0 | 3 | 1,970 | 0 |
python,django,tornado,uwsgi
|
I don't know about uWSGI+gevent, but you can use Tornado with uWSGI. Tornado gives you built-in WSGI support in the tornado.wsgi.WSGIContainer module to make it compatible with other WSGI servers like uWSGI and gunicorn. But it depends on your use case, and I think it's not a good idea to use an asynchronous framework with a synchronous server (like uWSGI). Tornado has this warning about it:
WSGI is a synchronous interface, while Tornado’s concurrency model is based on single-threaded asynchronous execution. This means that running a WSGI app with Tornado’s WSGIContainer is less scalable than running the same app in a multi-threaded WSGI server like gunicorn or uwsgi. Use WSGIContainer only when there are benefits to combining Tornado and WSGI in the same process that outweigh the reduced scalability.
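To make the trade-off concrete, here is a minimal WSGI callable of the kind WSGIContainer wraps. With Tornado installed you would do roughly `tornado.httpserver.HTTPServer(tornado.wsgi.WSGIContainer(app)).listen(8888)`; the sketch itself uses no Tornado imports so it stays self-contained:

```python
def app(environ, start_response):
    # WSGI is synchronous: this function must return the whole body before
    # the server can move on, which is exactly why Tornado's docs warn that
    # WSGIContainer is less scalable than a threaded WSGI server.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from WSGI"]
```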
| 0 | 1 | 0 | 0 |
2013-03-05T13:43:00.000
| 2 | 0 | false | 15,225,320 | 0 | 0 | 1 | 2 |
Why? Because I have a Django project that captures data from the user and consumes many web services, displaying the results to the user in order to compare information; something like aggregator websites that search flight tickets via airline web services and show the results in real time so the tickets can be compared.
Nowadays I'm doing this with a "waiting page", where Celery hits the web services while jQuery asks every 5 seconds whether all results are ready; when they are, the user is redirected to the results page.
What I want is to drop this "waiting page" and feed the results page in real time as the results come in, and I want to do it following best practices; I mean I don't want jQuery to poll every X seconds to feed the table.
I think some coroutine-based Python library can help me with this, but I want to learn from your experience first and see some examples. I am confused because this part of the project was designed to run asynchronously (consuming web services with Celery chords), but not for dispatching the information in real time through the app server.
Actual architecture:
Python 2.7, Django 1.3, PostgreSQL 9, Celery 3 + Redis, uWSGI, nginx, all hosted on AWS.
Thank you in advance.
|
python open web page and get source code
| 15,227,528 | 0 | 0 | 2,468 | 0 |
python,pyqt
|
Have a look at the nltk module; it has some utilities for fetching web pages and extracting text. There's also BeautifulSoup, which is a bit more elaborate. I'm currently using both to scrape web pages for a learning algorithm; they're pretty widely used modules, so you can find lots of hints out there :)
| 0 | 0 | 1 | 0 |
2013-03-05T14:43:00.000
| 3 | 0 | false | 15,226,643 | 0 | 0 | 1 | 1 |
We have developed a web-based application with user login etc., and we developed a Python application that has to get some data from this page.
Is there any way for Python to communicate with the system default browser?
Our main goal is to open a web page with the system browser and get the HTML source code from it. We tried with Python's webbrowser module and opened the web page successfully, but could not get the source code. We also tried with urllib2, but in that case I think we would have to use the system default browser's cookies etc., and I don't want to do that because of security.
|
Deleting Django apps with South dependency
| 15,230,769 | -3 | 3 | 426 | 0 |
python,django,django-south
|
There isn't a great way. This is why I avoid South dependencies at all costs.
Don't use dependencies.
| 0 | 0 | 0 | 0 |
2013-03-05T17:48:00.000
| 2 | 1.2 | true | 15,230,638 | 0 | 0 | 1 | 1 |
Let's say we have two apps in a project: app1 and app2. Both have South migrations and in this particular case, migration app1.0002_something depends on app2.0001_initial. Everything is nice and fine until you decide that app2 is obsolete and should be deleted (since its utility has been put into app3 and app4 a long time ago).
And here lies the problem: after removing app2 from INSTALLED_APPS ./manage.py migrate returns south.exceptions.DependsOnUnmigratedApplication: Migration 'app1:0002_something' depends on unmigrated application 'app2'.
In this case, I'd probably "reset" the migrations of app1 and go on coding, however, I don't see how I can avoid this situation in the future short of not using dependencies at all. So the questions are:
How can I resolve this situation more gracefully than "resetting" migration history?
How do I prevent this situation from happening and still be able to delete obsolete apps?
|
Design of CBVs in Django
| 15,234,241 | 1 | 5 | 233 | 0 |
python,django
|
Any answer to this question is open for discussion. That said, views are just Python classes, so you could overwrite any method to customize things accordingly.
It is also perfectly legit to create an extra method on your class to handle data processing.
| 0 | 0 | 0 | 0 |
2013-03-05T20:54:00.000
| 2 | 0.099668 | false | 15,233,928 | 0 | 0 | 1 | 2 |
I'm currently trying to get into "class-based views" with Django 1.5.
From the design perspective, I wonder where to put the logic that processes data coming from a form in a simple FormView.
I know that all form validation code goes into the form_valid() method. But where do I put things that process the form's data? I read that it's somehow inappropriate to put too much logic into the form_valid() method.
There are the get(), post(), get_context_data(), head(), etc. methods... which should I use in which case?
|
Design of CBVs in Django
| 15,238,942 | 1 | 5 | 233 | 0 |
python,django
|
Form validation, data cleaning, etc. go with the form class, in the clean methods.
Processing of a valid form should go in an overridden form_valid method.
That's it! If your use case is more complicated, you can call out to other methods of your own creation from form_valid...
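The split the answer describes can be sketched with plain classes that mirror Django's form/FormView API (all names here are made up for illustration; with Django, ContactView would subclass FormView and form_valid would return an HttpResponse):

```python
class ContactForm(object):
    """Validation and data cleaning live on the form (Django: clean_<field>
    methods and clean())."""

    def __init__(self, data):
        self.data, self.cleaned_data = data, {}

    def clean(self):
        email = self.data.get("email", "")
        if "@" not in email:
            raise ValueError("invalid email")
        self.cleaned_data["email"] = email
        return self.cleaned_data

class ContactView(object):
    """Processing of an already-valid form lives in form_valid, which can
    delegate to helper methods of your own creation."""

    def form_valid(self, form):
        self.send_welcome(form.cleaned_data["email"])
        return "redirect:/thanks/"   # stand-in for an HTTP redirect

    def send_welcome(self, email):
        self.sent_to = email         # stand-in for the real side effect
```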
| 0 | 0 | 0 | 0 |
2013-03-05T20:54:00.000
| 2 | 1.2 | true | 15,233,928 | 0 | 0 | 1 | 2 |
I'm currently trying to get into "class-based views" with Django 1.5.
From the design perspective, I wonder where to put the logic that processes data coming from a form in a simple FormView.
I know that all form validation code goes into the form_valid() method. But where do I put things that process the form's data? I read that it's somehow inappropriate to put too much logic into the form_valid() method.
There are the get(), post(), get_context_data(), head(), etc. methods... which should I use in which case?
|
Serve Static Pages from S3 using Django
| 15,782,517 | 0 | 2 | 795 | 0 |
python,django,amazon-s3
|
You can just create index.html inside /static-pages/12345/ folder and it will be served.
| 0 | 0 | 0 | 0 |
2013-03-06T02:07:00.000
| 2 | 0 | false | 15,237,706 | 0 | 0 | 1 | 1 |
I'm planning to build a Django app to generate and later serve static pages (probably stored on S3). When users visit a URL like mysite.com/static-pages/12345, the static file in my S3 bucket named 12345.html should be served. That static file might be the static HTML page of a blog post my site has generated for the user, for example.
This is different from including static resources like CSS/Javascript files on a page that is rendered as a Django template since I already know how to use Django templates and SQL databases - what's unfamiliar to me is that my "data" is now a file on S3 rather than an entry in a database AND that I don't actually need to use a template.
How exactly can I retrieve the requested data (i.e. a static page) and return it to the user? I'd like to minimize performance penalties within reason, although of course it would be fastest if users directly requested their static pages from S3 (I don't want them to do this).
A few additional questions:
I've read elsewhere about a django flatpages app which stores html pages in a database, but it seems like static html pages are best stored on a filesystem like S3, no?
Is there a way to have the request come in to my Django application and have S3 serve the file directly while making it appear to have come from my application (i.e. the browser url still says mysite.com/static-pages/12345, but the page did not go through my Django server)?
Thanks very much!
|
Flask: View, model and business logic segration
| 15,255,903 | 1 | 0 | 2,086 | 0 |
python,flask
|
If you need to unit test the code separately from the view, then you should definitely separate it into another module or class.
As there seem to be three parts to your business logic, splitting the view into three functions of a module seems a good place to start.
| 0 | 0 | 0 | 0 |
2013-03-06T17:31:00.000
| 1 | 1.2 | true | 15,254,100 | 0 | 0 | 1 | 1 |
Please help me solve the following task in a "Pythonic" way:
There are several model classes, which are mapped to the DB with the help of SQLAlchemy.
There is a Flask view which handles the "POST" request.
The business logic of this method is complex and includes:
Getting input parameters from the input JSON
Validation
Creation of several different models and saving them to the database.
Is it a good idea to leave this logic in the view? Or would it be much better to separate this logic into different modules or classes, for instance by introducing a business logic class?
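The separation being asked about can be sketched as a thin view delegating to a service function that owns validation and model creation. Names are illustrative; in Flask the view would read flask.request.json and the service would use an SQLAlchemy session (session.add / session.commit) where the comment indicates:

```python
def create_order_service(payload):
    """Business logic: validation plus model creation, testable on its own."""
    if "item" not in payload or payload.get("qty", 0) <= 0:
        raise ValueError("invalid payload")
    # SQLAlchemy model creation and session.commit() would happen here.
    return {"item": payload["item"], "qty": payload["qty"], "id": 1}

def create_order_view(json_body):
    """Thin view: parse input, call the service, shape the response."""
    try:
        order = create_order_service(json_body)
        return 201, order
    except ValueError as exc:
        return 400, {"error": str(exc)}
```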
|
standard way to handle user session in tornado
| 15,265,556 | 14 | 14 | 14,722 | 1 |
python,tornado
|
Tornado is designed to be stateless and doesn't have session support out of the box.
Use secure cookies to store sensitive information like user_id.
Use standard cookies to store non-critical information.
For storing large objects, use the standard scheme: MySQL + memcached.
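The "secure cookie" idea boils down to signing the value so the client cannot tamper with it; Tornado does this for you via set_secure_cookie/get_secure_cookie. A stdlib sketch of the same signing scheme (the secret and token format here are illustrative, not Tornado's actual wire format):

```python
import hashlib
import hmac

SECRET = b"change-me"   # in Tornado this is the cookie_secret setting

def sign(value):
    # Append an HMAC of the value; the client can read it but not forge it.
    sig = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return "%s|%s" % (value, sig)

def verify(signed):
    value, _, sig = signed.rpartition("|")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(sig, expected) else None
```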
| 0 | 1 | 0 | 0 |
2013-03-06T17:55:00.000
| 4 | 1.2 | true | 15,254,538 | 0 | 0 | 1 | 3 |
So, in order to avoid the "no one best answer" problem, I'm going to ask not for the best way, but for the standard or most common way to handle sessions when using the Tornado framework. That is, if we're not using third-party authentication (OAuth, etc.), but rather want to have our own Users table, with secure cookies in the browser but most of the session info stored on the server, what is the most common way of doing this? I have seen some people using Redis, some using their normal database (MySQL or Postgres or whatever), and some using memcached.
The application I'm working on won't have millions of users at a time, or probably even thousands. It will need to eventually get some moderately complex authorization scheme, though. What I'm looking for is to make sure we don't do something "weird" that goes down a different path than the general Tornado community, since authentication and authorization, while it is something we need, isn't something that is at the core of our product and so isn't where we should be differentiating ourselves. So, we're looking for what most people (who use Tornado) are doing in this respect, hence I think it's a question with (in theory) an objectively true answer.
The ideal answer would point to example code, of course.
|
standard way to handle user session in tornado
| 16,320,593 | 17 | 14 | 14,722 | 1 |
python,tornado
|
Here's how it seems other micro frameworks handle sessions (CherryPy, Flask for example):
Create a table holding session_id and whatever other fields you'll want to track on a per session basis. Some frameworks will allow you to just store this info in a file on a per user basis, or will just store things directly in memory. If your application is small enough, you may consider those options as well, but a database should be simpler to implement on your own.
When a request is received (RequestHandler initialize() function I think?) and there is no session_id cookie, set a secure session-id using a random generator. I don't have much experience with Tornado, but it looks like setting a secure cookie should be useful for this. Store that session_id and associated info in your session table. Note that EVERY user will have a session, even those not logged in. When a user logs in, you'll want to attach their status as logged in (and their username/user_id, etc) to their session.
In your RequestHandler initialize function, if there is a session_id cookie, read in what ever session info you need from the DB and perhaps create your own Session object to populate and store as a member variable of that request handler.
Keep in mind sessions should expire after a certain amount of inactivity, so you'll want to check for that as well. If you want a "remember me" type of login, you'll have to use a secure cookie to signal that (read up on this at OWASP to make sure it's as secure as possible, though again it looks like Tornado's secure cookie might help with that), and upon receiving a timed-out session you can re-authenticate the user by creating a new session and transferring whatever associated info into it from the old one.
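The per-request lookup described in the steps above can be sketched with stdlib sqlite3 standing in for the session table (table and column names are made up for illustration; in Tornado this would run in RequestHandler's prepare/initialize with the id coming from get_secure_cookie):

```python
import sqlite3
import time
import uuid

def init(db):
    db.execute("CREATE TABLE IF NOT EXISTS sessions "
               "(session_id TEXT PRIMARY KEY, user_id INTEGER, last_seen REAL)")

def get_or_create_session(db, session_id=None, max_age=3600):
    now = time.time()
    if session_id:
        row = db.execute("SELECT user_id, last_seen FROM sessions "
                         "WHERE session_id=?", (session_id,)).fetchone()
        if row and now - row[1] < max_age:   # known and not expired
            db.execute("UPDATE sessions SET last_seen=? WHERE session_id=?",
                       (now, session_id))
            return session_id
    # Unknown, expired, or first visit: every user gets a session,
    # even before logging in.
    session_id = uuid.uuid4().hex
    db.execute("INSERT INTO sessions VALUES (?, NULL, ?)", (session_id, now))
    return session_id
```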
| 0 | 1 | 0 | 0 |
2013-03-06T17:55:00.000
| 4 | 1 | false | 15,254,538 | 0 | 0 | 1 | 3 |
So, in order to avoid the "no one best answer" problem, I'm going to ask not for the best way, but for the standard or most common way to handle sessions when using the Tornado framework. That is, if we're not using third-party authentication (OAuth, etc.), but rather want to have our own Users table, with secure cookies in the browser but most of the session info stored on the server, what is the most common way of doing this? I have seen some people using Redis, some using their normal database (MySQL or Postgres or whatever), and some using memcached.
The application I'm working on won't have millions of users at a time, or probably even thousands. It will need to eventually get some moderately complex authorization scheme, though. What I'm looking for is to make sure we don't do something "weird" that goes down a different path than the general Tornado community, since authentication and authorization, while it is something we need, isn't something that is at the core of our product and so isn't where we should be differentiating ourselves. So, we're looking for what most people (who use Tornado) are doing in this respect, hence I think it's a question with (in theory) an objectively true answer.
The ideal answer would point to example code, of course.
|
standard way to handle user session in tornado
| 16,346,968 | 4 | 14 | 14,722 | 1 |
python,tornado
|
The key issue with sessions is not where to store them; it is how to expire them intelligently. Regardless of where sessions are stored, as long as the number of stored sessions is reasonable (i.e. only active sessions plus some surplus are kept), all this data is going to fit in RAM and be served fast. If there is a lot of old junk, you can expect unpredictable delays (the need to hit the disk to load a session).
| 0 | 1 | 0 | 0 |
2013-03-06T17:55:00.000
| 4 | 0.197375 | false | 15,254,538 | 0 | 0 | 1 | 3 |
So, in order to avoid the "no one best answer" problem, I'm going to ask not for the best way, but for the standard or most common way to handle sessions when using the Tornado framework. That is, if we're not using third-party authentication (OAuth, etc.), but rather want to have our own Users table, with secure cookies in the browser but most of the session info stored on the server, what is the most common way of doing this? I have seen some people using Redis, some using their normal database (MySQL or Postgres or whatever), and some using memcached.
The application I'm working on won't have millions of users at a time, or probably even thousands. It will need to eventually get some moderately complex authorization scheme, though. What I'm looking for is to make sure we don't do something "weird" that goes down a different path than the general Tornado community, since authentication and authorization, while it is something we need, isn't something that is at the core of our product and so isn't where we should be differentiating ourselves. So, we're looking for what most people (who use Tornado) are doing in this respect, hence I think it's a question with (in theory) an objectively true answer.
The ideal answer would point to example code, of course.
|
Creating a frame that can read Python and convert it to HTML
| 15,256,392 | 0 | 0 | 109 | 0 |
javascript,python,html
|
The simplest way is to create a page with an iframe; call it "result". Then create a simple <form> and give the form tag the attribute target="result". Include a textarea in your form. Your students' code is transferred to the web server via a simple form submit. Then, when your server produces an HTML response, it is rendered in the iframe.
No javascript is needed. KISS!
| 0 | 0 | 0 | 0 |
2013-03-06T19:27:00.000
| 1 | 1.2 | true | 15,256,240 | 0 | 0 | 1 | 1 |
Hi all, this is my first post; I have exhausted Google and am asking for a little bit of help.
I'm a school teacher and I'm creating an e-learning website for my school. At present I'm attempting to create a code window where my students can input some Python syntax and, on click, the code is converted and shown in the adjacent window. Very similar to what W3Schools uses for its examples and what various e-learning websites do.
Does anybody have any knowledge of where I should start, or any links about creating this? I would probably go along the lines of: when the 'submit code' button is pressed, a JavaScript object is created from the user's input code, then rendered into the adjacent result box using some sort of AJAX.
Thanks all for your kindness, regards Andrew
|
How to make Jython work with PIG?
| 20,953,873 | 0 | 2 | 432 | 0 |
python,jython,apache-pig
|
From my short experience with Pig there are two ways of doing this: you can either place the jar in Pig's lib folder, somewhere around /usr/share/pig/lib/, or register the jar from grunt (the Pig shell) using its specific location:
REGISTER /path/to/your/jar/jython.jar;
Once available, register your UDF from grunt using:
REGISTER '/path/to/your/udf/udf.py' USING jython as py_udf;
And you can use it like this: py_udf.my_method(*)
my_method being the name of the Python function you created.
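For completeness, here is what a trivial udf.py of the kind registered above might contain. Pig's Jython support provides an @outputSchema decorator (from pig_util import outputSchema) that declares the return type to Pig; it is commented out here so the sketch also runs under plain CPython, and the function name matches the illustrative my_method used above:

```python
# udf.py
# from pig_util import outputSchema    # available inside Pig's Jython runtime

# @outputSchema("upper:chararray")
def my_method(word):
    # Pig passes None for null fields, so guard against it.
    return word.upper() if word is not None else None
```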
| 0 | 1 | 0 | 0 |
2013-03-06T22:37:00.000
| 1 | 0 | false | 15,259,499 | 0 | 0 | 1 | 1 |
I have the Jython jar and Pig installed on the server, and I have the Pig jars as well.
Can someone help me out with the proper steps to bundle them so that I can use my Python UDFs?
Thanks
|
How can I generate a random url of a certain length every time a page is created?
| 15,261,264 | 1 | 4 | 7,292 | 0 |
python,url,data-structures,pyramid
|
I am new to Python and programming, but here are a few issues I can see with the 'random string' idea:
You will most probably end up generating the same string over and over if you are using shorter strings. On the other hand, if you are using longer strings, the chance of getting the same string is lower. However, you will want to watch out for duplicates in either case. Therefore my suggestion is to estimate how many URLs you will need, and use an optimal string length for that.
The easiest way is to keep these URLs in a list, and use a simple check before registering new ones:
if new_url in url_list:
    generate_new_url()
else:
    url_list.append(new_url)
However, it also sounds like you will want to employ a database to permanently store your URLs. In most SQL-based databases you can set up your URL column to be 'unique'; the database then stops you from having duplicate URLs.
I am not sure, but with the database you can probably do this:
try:
    pass  # insert value into database
except:
    generate_new_url()
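Putting the two suggestions above together, a generate-and-retry helper might look like this. The length, alphabet, and function names are illustrative; in production the membership check would be a unique database column rather than an in-memory set:

```python
import random
import string

ALPHABET = string.ascii_lowercase + string.digits

def random_slug(n=7):
    # 36^7 is about 78 billion combinations, so collisions stay rare
    # for a moderate number of pages.
    return "".join(random.choice(ALPHABET) for _ in range(n))

def unique_slug(existing, n=7, attempts=100):
    # Retry on collision, as suggested above.
    for _ in range(attempts):
        slug = random_slug(n)
        if slug not in existing:
            existing.add(slug)
            return slug
    raise RuntimeError("too many collisions; increase n")
```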
| 0 | 0 | 0 | 0 |
2013-03-07T00:38:00.000
| 2 | 0.099668 | false | 15,261,000 | 0 | 0 | 1 | 1 |
In my python/pyramid app, I let users generate html pages which are stored in an amazon s3 bucket. I want each page to have a separate path like www.domain.com/2cxj4kl. I have figured out how to generate the random string to put in the url, but I am more concerned with duplicates. how can I check each of these strings against a list of existing strings so that nothing is overwritten? Can I just put each string in a dictionary or array and check the ever growing array/dict each time a new one is created? Are there issues with continuing to grow such an object, and will it permanently exist in the app memory somehow? How can I do this?
|
Calling Python script from JAVA MySQLdb imports
| 15,318,731 | 0 | 3 | 534 | 0 |
java,python,jakarta-ee
|
So, I discovered that the issue was with the arguments I was passing in Java to run the Python program.
The first argument was python2.6, but it should rather have been just python, not a version-specific name, because there was a compatibility issue between MySQLdb and that Python version.
I finally decided to use MySQL Connector/Python instead of MySQLdb in the Python code. It worked like a charm and the problems got solved!
| 0 | 1 | 0 | 1 |
2013-03-07T05:33:00.000
| 1 | 1.2 | true | 15,263,854 | 0 | 0 | 1 | 1 |
I am calling a Python script from my Java code. This is the code:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class JavaRunCommand {
    public static void main(String args[]) throws IOException {
        // set up the command and parameter
        String pythonScriptPath = "my-path";
        String[] cmd = new String[2];
        cmd[0] = "python2.6";
        cmd[1] = pythonScriptPath;
        // create runtime to execute external command
        Runtime rt = Runtime.getRuntime();
        Process pr = rt.exec(cmd);
        // retrieve output from python script
        BufferedReader bfr = new BufferedReader(new InputStreamReader(
                pr.getInputStream()));
        String line = "";
        while ((line = bfr.readLine()) != null) {
            // display each output line from python script
            System.out.println(line);
        }
    }
}
python.py which works:
import os
from stat import *

c = 5
print c

python.py which does not work:
import MySQLdb
import os
from stat import *

c = 5
print c
# some database code below
So, I am at a critical stage: I have a deadline for my startup and have to show my MVP project to the client, and I was thinking of calling the Python script like this. It works when I am printing anything without the DB connection and the MySQLdb library. But when I include them, it does not run the Python script. What's wrong here? Isn't it supposed to run the process, handling all the inputs? I have MySQLdb installed and the script runs fine without the Java code.
I know this is not the best way to solve the issue, but to show something to the client I need this thing working. Any suggestions?
|
Does Heroku no longer support Celery?
| 15,734,662 | 3 | 13 | 6,436 | 0 |
python,heroku,celery,amqp
|
I think there are issues with Celery as a background task on Heroku. We tried to create such tasks and they consume all available memory after running for about 20 minutes, even with DEBUG=False, on both Redis and RabbitMQ. Worse still, the memory is NEVER released: every time we have to restart the worker.
The same code runs flawlessly on bare Linux or on a Mac with Foreman.
It happens with very simple tasks, like reading a text file in a loop and writing to a Django model.
| 0 | 1 | 0 | 0 |
2013-03-07T07:21:00.000
| 5 | 0.119427 | false | 15,265,319 | 0 | 0 | 1 | 1 |
I was finally getting to the point where I had some free time and wanted to add Celery to my Python/Flask project on Heroku. However, almost all mentions of Celery are gone from the Heroku docs. There used to be an article with a tutorial in "Getting started with Django", but it's gone.
Will "just doing it" myself work? What's a good AMQP add-on to use as a backend on Heroku?
|
Django Static Setup
| 15,282,357 | 0 | 1 | 528 | 0 |
python,django,python-2.7
|
The created project should have a static folder. Put all resources (images, ...) in there.
Then, in your HTML template, you can reference STATIC_ROOT and add the resource path (relative to the static folder)
| 0 | 0 | 0 | 0 |
2013-03-07T21:41:00.000
| 4 | 0 | false | 15,282,318 | 0 | 0 | 1 | 2 |
I've looked through countless answers and questions trying to find a single definitive guide or way to do this, but it seems that everyone has a different way. Can someone please just explain to me how to serve static files in templates?
Assuming I've just created a brand new project with Django 1.4, what all do I need to do to be able to render images? Where should I put the media and static folders?
|
Django Static Setup
| 15,283,333 | 2 | 1 | 528 | 0 |
python,django,python-2.7
|
Put your static files into <app>/static or add an absolute path to STATICFILES_DIRS
Configure your web server (you should not serve static files with Django) to serve files in STATIC_ROOT
Point STATIC_URL to the base URL the web server serves
Run ./manage.py collectstatic
Be sure to use RequestContext in your render calls and {{ STATIC_URL }} to prefix paths
Coffee and pat yourself on the back
A little bit more about running a web server in front of Django. Django is practically an application server; it has not been designed to be any good at serving static files. That is why it actively refuses to do so when DEBUG=False. Also, the Django development server should not be used in production. This means that there should be something in front of Django at all times. It may be a WSGI server such as gunicorn, or a 'real' web server such as nginx or Apache.
If you are running a reverse proxy (such as nginx or Apache) you can bind /static to a path in the filesystem and pass the rest of the traffic through to Django. That means your STATIC_URL can be a relative path. Otherwise you will need to use an absolute URL.
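The steps above correspond to a handful of settings; here is a sketch of the relevant settings.py fragment. The directory names are illustrative and would be adapted to your project layout:

```python
import os

BASE_DIR = os.path.dirname(os.path.abspath(__file__))

# URL prefix under which the front-end web server serves the files:
STATIC_URL = "/static/"

# Where ./manage.py collectstatic gathers everything for the web server:
STATIC_ROOT = os.path.join(BASE_DIR, "collected_static")

# Extra source directories, beyond each app's own static/ folder:
STATICFILES_DIRS = [os.path.join(BASE_DIR, "assets")]
```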
| 0 | 0 | 0 | 0 |
2013-03-07T21:41:00.000
| 4 | 1.2 | true | 15,282,318 | 0 | 0 | 1 | 2 |
I've looked through countless answers and questions trying to find a single definitive guide or way to do this, but it seems that everyone has a different way. Can someone please just explain to me how to serve static files in templates?
Assuming I've just created a brand new project with Django 1.4, what all do I need to do to be able to render images? Where should I put the media and static folders?
|
Quick writing to log file after http request
| 15,291,684 | 1 | 0 | 457 | 0 |
php,python,performance,logging
|
One option I can think of is a separate logging process, so that your web.py app is shielded from the performance issue. This is a classical way of structuring a logging module. You can use IPC or any other bus/communication infrastructure. With this you will be able to address two issues -
Logging will not be a huge bottleneck for high-capacity call flows.
A separate module can provide a switch-on/off facility.
As such, there would not be any significant extra process memory usage.
However, you should bear in mind the points below -
You need to be sure that logging is restricted to just logging. It must not become a data store for business processing, else you may have many synchronization problems in your business logic.
The logging process (here I mean an actual Unix process) will become critical and slightly complex (i.e. you may have to handle a form of IPC).
HTH!
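As a rough illustration of the idea, Python's standard library already ships queue-based logging primitives. The sketch below runs the producer and consumer in one process for brevity; in a real deployment the consuming side could be a separate process or host, and the names here are illustrative assumptions:

```python
# Sketch: decouple request handling from log writing with a queue.
# Shown in one process for brevity; in production the consuming side
# could be a separate process, shielding the web workers from disk I/O.
import io
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

log_queue = queue.Queue(-1)  # request handlers only enqueue, never touch disk

logger = logging.getLogger("post_data")
logger.setLevel(logging.INFO)
logger.addHandler(QueueHandler(log_queue))

# Consumer side: a listener thread drains the queue and does the slow
# writing; swap the StreamHandler for a FileHandler in practice.
sink = io.StringIO()
listener = QueueListener(log_queue, logging.StreamHandler(sink))
listener.start()

logger.info("key1=value1 key2=value2")  # fast: just a queue.put()
listener.stop()  # joins the listener thread and flushes remaining records

print(sink.getvalue().strip())
```

Because the handler only enqueues, the request path never blocks on the file system, which is exactly the decoupling described above.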
| 0 | 1 | 0 | 0 |
2013-03-08T10:01:00.000
| 1 | 0.197375 | false | 15,291,294 | 0 | 0 | 1 | 1 |
I recently finished building a web server whose main responsibility is simply to take the contents of the body data in each HTTP POST request and write it to a log file. The post data is obfuscated when received, so I de-obfuscate it and write it to a log file on the server. The contents, once de-obfuscated, are a series of key/value pairs that differ between requests. It is not fixed data.
The server is running Linux with 2.6+ kernel. Server is configured to handle heavy traffic (open files limit 32k, etc). The application is written in Python using web.py framework. The http server is Gunicorn behind Nginx.
After using Apache Benchmark to do some load testing, I noticed that it can handle up to about 600-700 requests per second without any log-writing issues. Linux natively does a good job at buffering. Problems start to occur when more than this many requests per second attempt to write to the same file at the same moment. Data will not get written and information will be lost. I know that the "writing directly to a file" design might not have been the right solution from the get-go.
So i'm wondering if anyone can propose a solution that I can implement quickly without altering too much infrastructure and code that can overcome this problem?
I have read about in-memory storage like Redis, but I have realized that if data is sitting in memory during a server failure then that data is lost. I have read in the docs that Redis can be configured as a persistent store; there just needs to be enough memory on the server for Redis to do it. This solution would mean that I would have to write a script to dump the data from Redis (memory) to the log file at a certain interval.
I am wondering if there is even a quicker solution? Any help would be greatly appreciated!
|
When to transition from Datastore to NDB?
| 15,311,815 | 0 | 1 | 151 | 0 |
google-app-engine,python-2.7,google-cloud-datastore,gql
|
To add to Dan's correct answer, remember ndb and the older db are just APIs, so you can seamlessly begin switching to ndb without worrying about schema changes etc. Your question asks about switching from the datastore to NDB, but you're not leaving the datastore; NDB still uses the datastore underneath. Make sense?
| 0 | 1 | 0 | 0 |
2013-03-08T17:35:00.000
| 2 | 0 | false | 15,299,975 | 0 | 0 | 1 | 1 |
From what I have heard, it is better to move to NDB from Datastore. I would be doing that eventually since I hope my website will be performance intensive. The question is when. My project is in its early stages.
Is it better to start in NDB itself? Does NDB take care of Memcache also. So I don't need to have an explict Memcache layer?
|
Difference between ManyToOneRel and ForeignKey?
| 17,047,519 | 47 | 32 | 23,393 | 0 |
python,database,django
|
Django's relations model exposes (and documents) only OneToOneField, ForeignKey and ManyToManyField, which correspond to the internal relation classes:
OneToOneField -> OneToOneRel
ForeignKey -> ManyToOneRel
ManyToManyField -> ManyToManyRel
See source of django.db.models.fields.related for further details.
| 0 | 0 | 0 | 0 |
2013-03-08T18:04:00.000
| 2 | 1 | false | 15,300,422 | 0 | 0 | 1 | 2 |
In django, what's the difference between a ManyToOneRel and a ForeignKey field?
|
Difference between ManyToOneRel and ForeignKey?
| 15,300,763 | 41 | 32 | 23,393 | 0 |
python,database,django
|
ManyToOneRel is not a django.db.models.fields.Field, it is a class that is used inside Django but not in the user code.
| 0 | 0 | 0 | 0 |
2013-03-08T18:04:00.000
| 2 | 1.2 | true | 15,300,422 | 0 | 0 | 1 | 2 |
In django, what's the difference between a ManyToOneRel and a ForeignKey field?
|
Use MongoDB with Django but also use relational database
| 15,498,874 | 0 | 0 | 258 | 1 |
python,django,mongodb
|
You could use django-nonrel, which is a fork of Django and will let you use the same ORM.
If you don't want a forked Django, you could use MongoEngine, which has a similar syntax, or otherwise just raw pymongo.
| 0 | 0 | 0 | 0 |
2013-03-09T18:00:00.000
| 1 | 0 | false | 15,314,025 | 0 | 0 | 1 | 1 |
I'm working on a Django application that needs to interact with a mongoDB instance ( preferably through django's ORM) The meat of the application still uses a relational database - but I just need to interact with mongo for a single specific model.
Which mongo driver/subdriver for python will suite my needs best ?
|
using python and node.js
| 15,316,002 | 0 | 2 | 4,502 | 0 |
python,node.js
|
I think you're thinking about this problem backwards. Node.js lets you run browser Javascript without a browser. You won't find it useful in your Python programming. You're better off, if you want to stick with Python, using a framework such as Pyjamas to write Javascript with Python or another framework such as Flask or Twisted to integrate the Javascript with Python.
| 0 | 0 | 1 | 0 |
2013-03-09T21:12:00.000
| 3 | 0 | false | 15,315,984 | 0 | 0 | 1 | 1 |
I use to program on python. I have started few months before, so I am not the "guru" type of developer. I also know the basics of HTML and CSS.
I see few tutorials about node.js and I really like it. I cannot create those forms, bars, buttons etc with my knowledge from html and css.
Can I use node.js to create what user see on browser and write with python what will happen if someone push the "submit" button? For example redirect, sql write and read etc.
Thank you
|
How do I program an Android App with Python?
| 27,331,662 | 0 | 28 | 91,831 | 0 |
android,python
|
Dr. Python is a good IDE/editor, and it's simple to use. If you're using Linux, you can install it from the software center. I don't know how to install it on OS X/Windows.
| 1 | 0 | 0 | 0 |
2013-03-10T04:46:00.000
| 5 | 0 | false | 15,319,018 | 0 | 0 | 1 | 1 |
I will be in charge of teaching a small group of middle school students how to program phone applications. After much research I found that Python might be the best way to go.
I am a website development senior at my university, but I have not used Python before. I understand both ActionScript and Javascript and I think their logic might be beneficial for learning Python. For the web languages I am familiar with writing I use Sublime2, Dreamweaver, or Flash to code them.
So my questions are:
Which program do I use to code Python?
How do I use the code created in Python to work on Android phones?
|
django-admin.py doesn't work properly
| 27,304,554 | 0 | 0 | 314 | 0 |
python,django,powershell
|
When the window pops-up and then disappears that means an error is occurring inside of the action being called, but no error message is being returned for display in the command line. In my case, I was trying to run manage.py (and any command) and it required MySQL-python to be installed first.
| 0 | 0 | 0 | 0 |
2013-03-10T15:46:00.000
| 2 | 0 | false | 15,324,095 | 0 | 0 | 1 | 2 |
I can use django-admin.py startproject foo and it works fine, however now when I try for instance django-admin.py shell or django-admin.py help, nothing happens.
By nothing happens, I mean (for the django-admin.py shell example) the console doesn't open up the shell command like manage.py would, but instead will pop up and immediately close a new window as it does when I double-click a python file. There isn't any error message, the console just doesn't output anything.
Hopefully what I am trying to say makes sense. It's kind of hard for me to explain. I used to be able to use the django-admin.py shell command, so I don't know what happened.
Anyone know whats going on? Thanks in advance, and if I need to try to clarify something feel free to ask and I will try.
|
django-admin.py doesn't work properly
| 15,324,273 | 1 | 0 | 314 | 0 |
python,django,powershell
|
Did you upgrade your version of Python/Django? Sometimes there's a compatibility issues and you have to uninstall the previous version.
| 0 | 0 | 0 | 0 |
2013-03-10T15:46:00.000
| 2 | 0.099668 | false | 15,324,095 | 0 | 0 | 1 | 2 |
I can use django-admin.py startproject foo and it works fine, however now when I try for instance django-admin.py shell or django-admin.py help, nothing happens.
By nothing happens, I mean (for the django-admin.py shell example) the console doesn't open up the shell command like manage.py would, but instead will pop up and immediately close a new window as it does when I double-click a python file. There isn't any error message, the console just doesn't output anything.
Hopefully what I am trying to say makes sense. It's kind of hard for me to explain. I used to be able to use the django-admin.py shell command, so I don't know what happened.
Anyone know whats going on? Thanks in advance, and if I need to try to clarify something feel free to ask and I will try.
|
Python/Django - Avoid saving passwords in source code
| 59,843,984 | 0 | 40 | 15,090 | 0 |
python,django,security,version-control,django-settings
|
Having something like this in your settings.py:
db_user = 'my_db_user'
db_password = 'my_db_password'
hard-codes valuable information in your code and poses a security risk. An alternative is to store your valuable information (API keys, database passwords, etc.) on your local machine as environment variables. E.g. on Linux you could add:
export DB_USER="my_db_user"
export DB_PASS="my_db_password"
to your .bash_profile. Or there is usually an option with your hosting provider to set environment variables, e.g. with AWS Elastic Beanstalk you can add env variables under your configuration in the console.
Then to retrieve your information, import os:
import os
db_user = os.environ.get('DB_USER')
db_password = os.environ.get('DB_PASS')
| 0 | 0 | 0 | 0 |
2013-03-10T21:20:00.000
| 4 | 0 | false | 15,327,776 | 0 | 0 | 1 | 1 |
I use Python and Django to create web applications, which we store in source control. The way Django is normally set up, the passwords are in plain text within the settings.py file.
Storing my password in plain text would open me up to a number of security problems, particularly because this is an open source project and my source code would be version controlled (through git, on Github, for the entire world to see!)
The question is, what would be the best practice for securely writing a settings.py file in a a Django/Python development environment?
|
Recommendations for backend image processing in Python
| 15,331,517 | 1 | 3 | 1,031 | 0 |
python,image-processing,python-imaging-library
|
If all you want to do is overlay text, I suggest you simply use imagemagick.
| 0 | 0 | 0 | 0 |
2013-03-11T04:04:00.000
| 1 | 0.197375 | false | 15,331,031 | 1 | 0 | 1 | 1 |
On my site I'm using simple text overlay. Inputs come from textboxes and then javascript makes an AJAX call with the inputs that are then processed in the backend by PIL (Python Imaging Library).
Thing is, I'm not happy about the quality of PIL's text overlays - it's not possible to do a nice looking stroke (e.g. white font color + black stroke) and I'm thinking about switching to a different solution than PIL. I want to stay with Python though.
What would you recommend for image processing in Python? Which library offers the best quality?
Thanks!
Best,
Tom
|
Pyramid server not serving flash files
| 15,347,393 | 1 | 1 | 138 | 0 |
python,flash,pyramid
|
I possibly ran into a similar problem on my Pyramid app. I'm using TinyMCE and had placed the files in the static folder. Everything worked on my dev server, but when I moved to test and prod the static .html files related to TinyMCE couldn't be found.
My web host had me add a symlink; basically, I think, it hardcodes in the server software (nginx in this case) the mapping from the web address of my static HTML to the server path, and that worked.
I'll have to check out the mimetypes thing, though, too.
| 0 | 0 | 0 | 1 |
2013-03-11T04:05:00.000
| 2 | 0.099668 | false | 15,331,039 | 0 | 0 | 1 | 1 |
I am running this python pyramid server. Strangely, when I moved my server code to a different machine, pserve stopped serving flash videos in my static folder. Whereas it serves other static files, like images, fine ! What could be a reason for this ?
|
Is a Python Decorator the same as Java annotation, or Java with Aspects?
| 49,356,738 | 29 | 63 | 19,358 | 0 |
java,python,python-decorators,java-annotations
|
This is a very valid question that anyone dabbling in both these languages simultaneously, can get. I have spent some time on python myself, and have recently been getting myself up to speed with Java and here's my take on this comparison.
Java annotations are - just that: annotations. They are markers; containers of additional metadata about the underlying object they are marking/annotating. Their mere presence doesn't change execution flow of the underlying, or doesn't add encapsulation/wrapper of some sort on top of the underlying. So how do they help? They are read and processed by - Annotation Processors. The metadata they contain can be used by custom-written annotation processors to add some auxiliary functionality that makes lives easier; BUT, and again, they NEITHER alter execution flow of an underlying, NOR wrap around them.
The stress on "not altering execution flow" will be clear to someone who has used python decorators. Python decorators, while being similar to Java annotations in look and feel, are quite different under the hood. They take the underlying and wrap themselves around it in any which way, as desired by the user, possibly even completely avoiding running the underlying itself as well, if one chooses to do so. They take the underlying, wrap themselves around it, and replace the underlying with the wrapped ones. They are effectively 'proxying' the underlying!
Now that is quite similar to how Aspects work in Java! Aspects per se are quite evolved in terms of their mechanism and flexibility. But in essence what they do is - take the 'advised' method (I am talking in spring AOP nomenclature, and not sure if it applies to AspectJ as well), wrap functionality around them, along with the predicates and the likes, and 'proxy' the 'advised' method with the wrapped one.
Please note these musings are at a very abstract and conceptual level, to help get the big picture. As you start delving deeper, all these concepts - decorators, annotations, aspects - have quite involved scopes. But at an abstract level, they are very much comparable.
TLDR
In terms of look and feel, python decorators can be considered similar to Java annotations, but under the hood, they work very very similar to the way Aspects work in Java.
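The wrapping/proxying behaviour described above can be shown in a few lines. This toy decorator (names are illustrative) replaces the function with a wrapper that adds behaviour around every call, something a passive Java annotation cannot do on its own:

```python
# A decorator replaces the underlying function with a wrapper: it can add
# behaviour around the call, or even skip the original entirely, which a
# passive Java annotation cannot do by itself.
import functools

def audited(func):
    @functools.wraps(func)  # keep the original name/docstring on the proxy
    def wrapper(*args, **kwargs):
        wrapper.calls += 1            # behaviour woven around the target
        return func(*args, **kwargs)  # the "advised" call, AOP-style
    wrapper.calls = 0
    return wrapper

@audited
def add(a, b):
    return a + b

add(1, 2)
add(3, 4)
print(add.calls)  # -> 2: `add` now refers to the proxying wrapper
```

After decoration, the name `add` is bound to `wrapper`, not the original function, which is precisely the "take the underlying, wrap, and replace" mechanism compared to Spring AOP above.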
| 0 | 0 | 0 | 1 |
2013-03-11T19:40:00.000
| 4 | 1 | false | 15,347,136 | 1 | 0 | 1 | 1 |
Are Python Decorators the same or similar, or fundamentally different to Java annotations or something like Spring AOP, or Aspect J?
|
Create an listener on a 3rd party's web page
| 15,348,563 | 0 | 0 | 68 | 0 |
php,python,web
|
There is no 'clean' way to do this.
You must rely on cURL or file_get_contents() with context options in order to simply get data from a URL, and, in order to be notified when the content of a URL changes, you must store snapshots of the page you are watching in a database. Later you compare the new version of the crawled content with the earlier snapshot and, if a change in significant parts of the DOM is detected, that is the trigger for your mail-notifier function.
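The snapshot-compare step can be as simple as hashing the fetched content. A minimal Python sketch (the fetch itself is stubbed out here, and in practice you would hash only the relevant page fragment, since ads and timestamps change on every load):

```python
# Sketch of the snapshot-compare approach. Fetching is stubbed out;
# in practice you would pull the page with urllib/cURL on a schedule
# and hash only the fragment you care about, not the whole page.
import hashlib

def snapshot(content):
    """Reduce page content to a short fingerprint we can store."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

def has_changed(old_fingerprint, new_content):
    return snapshot(new_content) != old_fingerprint

stored = snapshot("<html>listing A</html>")           # saved on the last poll
print(has_changed(stored, "<html>listing A</html>"))  # False: nothing new
print(has_changed(stored, "<html>listing B</html>"))  # True: send the email
```

Storing only the fingerprint (rather than the full page) keeps the database small even when monitoring many pages.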
| 0 | 0 | 1 | 0 |
2013-03-11T20:49:00.000
| 1 | 1.2 | true | 15,348,326 | 0 | 0 | 1 | 1 |
I want to write a little program that will give me an update whenever a webpage changes. Like I want to see if there is a new ebay listing under a certain category and then send an email to myself. Is there any clean way to do this? I could set up a program and run it on a server somewhere and have it just poll ebay.com every couple of minutes or seconds indefinitely but I feel like there should be a better way. This method could get dicey too if I wanted to monitor a variety of pages for updates.
|
Web server which generates PDF files
| 15,352,337 | 0 | 2 | 368 | 0 |
python,http,pdf,web-applications,latex
|
Can you send it by email, just like Amazon does? Send the LaTeX file to the server, and when the PDF has finished compiling, the server sends it back by email.
| 0 | 0 | 0 | 1 |
2013-03-11T21:10:00.000
| 1 | 0 | false | 15,348,710 | 0 | 0 | 1 | 1 |
I'm building a website using Python which uses LaTeX to generate PDF files. But I want to put most of the website on Google App Engine, and I can't run LaTeX on that. So I want to do the LaTeX part on another server.
It seemed like a simple problem at first---I thought the best way to do it would be to POST the LaTeX to the server and have it respond with the PDF. But LaTeX files can take a while to compile sometimes if they're long, so I'm starting to think this isn't the best way to do it. What's the standard way of doing something like this? It must be a pretty common problem.
|
Change openerp 7 header/footer with rml
| 15,353,498 | 2 | 3 | 2,744 | 0 |
python,header,report,footer,openerp
|
You can find the header/footer of RML reports in the res_company_view.xml file on the server side.
The file path is: server/openerp/addons/base/res/res_company_view.xml
And the default value of this header/footer is set from:
server/openerp/addons/base/res/res_company.py
Regards
| 0 | 0 | 1 | 0 |
2013-03-12T02:21:00.000
| 1 | 1.2 | true | 15,352,183 | 0 | 0 | 1 | 1 |
I'm currently searching what is the rml file generating the header in openerp 7. I can't find it...
I have found server/openerp/addons/base/report/corporate_defaults.xml but no... Or maybe there is a cache caching the rml befort the report generation ?
Thanks by advance !
|
how to add different base templates for different languages of same page in django cms
| 15,381,624 | 3 | 0 | 404 | 0 |
python,django,content-management-system,django-cms
|
You need to create different page trees per language.
Every page has only one template. Use {% trans %} and {% blocktrans %} for translating string in it. Or {% if request.LANGUAGE == "en" %}.
If the templates really differ that much: don't add other languages to existing pages, but create separate page trees, each with only one language.
| 0 | 0 | 0 | 0 |
2013-03-12T21:20:00.000
| 1 | 0.53705 | false | 15,372,361 | 0 | 0 | 1 | 1 |
How can i add different base templates for different languages of same page in django cms?
I am trying to set a page and show it in different languages. And for all the languages, i need to use a different base template.
I am completely new django cms. Please help.
|
best way to make a page available in multiple languages in django cms
| 15,378,057 | 1 | 2 | 735 | 0 |
python,django,content-management-system,django-cms
|
Use Django internationalization; it is built into Django.
First create a "locale" folder in your Django project, then include the folder path in settings.py,
e.g. LOCALE_PATHS = ("projectpath/locale",)
Add 'django.middleware.locale.LocaleMiddleware' to your middleware in settings.py,
and set USE_I18N = True in settings.py.
After that, include "django.core.context_processors.i18n" in your template context processors in settings.py.
For HTML files:
First load the internationalization template tags, then you can use them around all the static text in the file.
E.g.
{% load i18n %} (put this at the top of your HTML file)
and wrap static text like this:
{% trans "put your static text here" %} wherever static text appears on that page
For template variables in Django you can use:
{% blocktrans %}this is your translated {{ object }}{% endblocktrans %}
For Django views:
from django.utils.translation import ugettext as _
def view(request):
    output = _("this is translated text")
    return HttpResponse(output)
| 0 | 0 | 0 | 0 |
2013-03-12T21:54:00.000
| 1 | 1.2 | true | 15,372,971 | 0 | 0 | 1 | 1 |
What is the best way to make a page available in multiple languages in Django? I have followed the documentation and used LANGUAGES but I can't see the translated page.
I am stuck. Should I manage the /en, /de, etc urls by myself?
Thanks in advance.
|
What is the changed file size in plone during version control?
| 15,378,810 | 1 | 0 | 85 | 0 |
python,plone
|
If you are referring to the working-copy-support option: files are excluded from the possibility of being checked out, probably exactly because of the reasons you raise. Versioning attached files easily bloats the DB, because every version of the file would be kept in the DB and thus (to answer your question) the object's size would be the sum of all of them.
Also, the standard versioning history is not applied to files.
| 0 | 0 | 0 | 0 |
2013-03-13T05:35:00.000
| 1 | 0.197375 | false | 15,377,808 | 0 | 0 | 1 | 1 |
I have a file uploaded to plone say size 1 MB. After checkout and downloading it, I edit and then upload and check-in. What is the size of the new uploaded file in plone site. It is the original size + added size OR the size of the new uploaded file?
|
no module found for an installed app
| 15,380,907 | 0 | 0 | 56 | 0 |
python,django,django-manage.py,manage.py
|
I don't know what manage.py tasks is; it must be from some extension you have installed. However, what you are asking doesn't make much sense. An app, at a minimum, has a models.py file, which presumably your templates folder doesn't have. But there is of course no reason at all to treat your templates folder as an app. Put the folder into TEMPLATE_DIRS instead.
| 0 | 0 | 0 | 0 |
2013-03-13T08:26:00.000
| 1 | 0 | false | 15,380,190 | 0 | 0 | 1 | 1 |
I have made my templates folder an app in my Python web application,
and I mentioned it in my INSTALLED_APPS.
I also have the __init__ file in the templates folder, but when I run manage.py tasks, there is an error and manage.py doesn't find this app, even though there is no error when I put the app into the INSTALLED_APPS setting.
any help?
|
OAuth to authenticate my app and allow it to access data at Google App Engine
| 15,399,334 | 0 | 2 | 556 | 0 |
python,google-app-engine,rest,mobile,oauth
|
I see you say you're not using Endpoints, but not why. It's likely the solution you want, as it's designed precisely for same-party (i.e. you own the backend and the client application) use cases, which is exactly what you've described.
| 0 | 1 | 0 | 0 |
2013-03-13T10:02:00.000
| 2 | 1.2 | true | 15,382,094 | 0 | 0 | 1 | 1 |
I have a web server at Google App Engine and I want to protect my data. My mobile app will access the server to get this data.
The idea is with OAuth authenticate my app, when it requests some data via REST. After the first authentication, the app will always be able to access the content.
I don't want user's data, as Google Account or Facebook. My mobile app will assume the role of user to my services.
Is it possible? Does someone have another idea for creating this structure?
I'm not using Google End Point and my GAE is developed with Python.
Thank you in advance!
Regards,
Mario Jorge Valle
|
Internet app calls intranet page
| 15,391,677 | 2 | 0 | 301 | 0 |
javascript,jquery,python,html,intranet
|
Yes. Any page that the user can browse to normally can be loaded in an iframe.
| 0 | 0 | 1 | 0 |
2013-03-13T16:55:00.000
| 2 | 0.197375 | false | 15,391,618 | 0 | 0 | 1 | 2 |
This is not so much of a specific question, but more a general one. I'm redoing one of my old projects and to make it more user friendly I want the following:
The project will be running on my home server, with a flask/python backend. User using my website will be coming from my companies intranet. Would it be possible to load an intranet page in a iframe on my website.
So in short, is it possible to load an intranet page from an internet-page that has no access to said intranet.
|
Internet app calls intranet page
| 15,391,680 | 3 | 0 | 301 | 0 |
javascript,jquery,python,html,intranet
|
Of course you can load it in an iframe, you don't need access to the page from the internet for that - the client needs it. Yet, the intranet application might request not to be viewed in a frame.
| 0 | 0 | 1 | 0 |
2013-03-13T16:55:00.000
| 2 | 1.2 | true | 15,391,618 | 0 | 0 | 1 | 2 |
This is not so much of a specific question, but more a general one. I'm redoing one of my old projects and to make it more user friendly I want the following:
The project will be running on my home server, with a flask/python backend. User using my website will be coming from my companies intranet. Would it be possible to load an intranet page in a iframe on my website.
So in short, is it possible to load an intranet page from an internet-page that has no access to said intranet.
|
Pyramid: How to specify the base URL for the application
| 15,411,610 | 1 | 2 | 1,246 | 0 |
python,config,pyramid
|
Yes, a proper reverse proxy will forward along the appropriate headers to your wsgi server. See the pyramid cookbook for an nginx recipe.
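As a concrete illustration, here is a minimal nginx sketch of such a proxy (the host name, port, and paths are assumptions, not a definitive setup); the forwarded headers are what let Pyramid's request.route_url reconstruct the external URL:

```nginx
server {
    listen 80;
    server_name www.example.com;

    # Everything is proxied to the WSGI server (e.g. pserve/gunicorn)
    location / {
        proxy_pass http://127.0.0.1:6543;
        # These headers let Pyramid build URLs for the public host
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With the Host header forwarded like this, request.route_url generates www.example.com URLs without any base URL appearing in the Pyramid .ini file.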
| 0 | 0 | 0 | 0 |
2013-03-14T06:08:00.000
| 2 | 0.099668 | false | 15,402,355 | 0 | 0 | 1 | 1 |
Let's say my app is served at the domain www.example.com.
How (where?) should I specify this in the Pyramid configuration file so that functions like request.route_url would automatically pick it and generate the correct URL.
(I think [server:main] is not the place for this)
|
How to customize Page not found (404) in django?
| 15,406,550 | 11 | 5 | 3,695 | 0 |
python,html,django,templates
|
Just create a 404.html file in your project's root-level templates directory.
| 0 | 0 | 0 | 0 |
2013-03-14T10:19:00.000
| 2 | 1 | false | 15,406,507 | 0 | 0 | 1 | 1 |
How I customize the error page in Django and where do I put my html for this page.
|
cannot import name LOOKUP_SEP
| 15,408,439 | 0 | 5 | 2,388 | 0 |
python,django
|
django_roa is not yet compatible with django 1.5. I'm afraid it only works with django 1.3.
| 0 | 0 | 0 | 0 |
2013-03-14T11:41:00.000
| 3 | 1.2 | true | 15,408,255 | 0 | 0 | 1 | 2 |
I'm using django and I'm trying to set up django-roa but when i'm trying to start my webserver I have this error cannot import name LOOKUP_SEP
If I remove django_roa from my INSTALLEDS_APP it's okay but I want django-roa working and I don't know how resolve this problem.
And I don't know what kind of detail I can tell to find a solution.
Thanks
|
cannot import name LOOKUP_SEP
| 18,352,809 | 0 | 5 | 2,388 | 0 |
python,django
|
I downgraded from 1.5.2 to 1.4.0 and my app started working again. Via pip:
pip install django==1.4
Hope that helps.
| 0 | 0 | 0 | 0 |
2013-03-14T11:41:00.000
| 3 | 0 | false | 15,408,255 | 0 | 0 | 1 | 2 |
I'm using django and I'm trying to set up django-roa but when i'm trying to start my webserver I have this error cannot import name LOOKUP_SEP
If I remove django_roa from my INSTALLEDS_APP it's okay but I want django-roa working and I don't know how resolve this problem.
And I don't know what kind of detail I can tell to find a solution.
Thanks
|
Heroku truncates HTTP responses?
| 16,584,784 | 1 | 78 | 2,763 | 0 |
python,json,heroku,flask,gunicorn
|
I know I may be considered a little off the wall here but there is another option.
We know that from time to time a bug happens in transit. We know that there is not much we can do right now to stop the problem. If you are only providing the API then stop reading; however, if you write the client too, keep going.
The error is a known case with a known cause. An empty return value means that something went wrong, even though the value was available and was fetched, calculated, whatever... My instinct as a developer would be to treat an empty result as an HTTP error and request that the data be resent. You could then track the resend requests and see how often this happens.
I would suggest (although you strike me as the kind of developer to think of this too) that you count the requests and set a sane value for responding "network error" to the user. My instinct would be to retry right away and then to wait a little while before retrying some more.
From what you describe the first retry would probably pick up the data properly. Of course this could mean keeping older requests hanging about in cache for a few minutes or running the request a second time depending on what seemed most appropriate.
This would also route around any number of other point-to-point networking errors and leave the app far more robust even in the face of connectivity problems.
I know our instinct as developers is to fix the known fault but sometimes it is better to work towards a system that is able to operate despite faults. That said it never hurts to log errors and problems and try to fix them anyway.
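The retry idea sketched above is only a few lines on the client side. Here is a minimal Python version (fetch() stands in for the real HTTP call, and the names are illustrative; an empty body is treated as a transient failure with exponential backoff between attempts):

```python
# Sketch of retry-with-backoff for the empty-response case.
# fetch() is a stand-in for the real HTTP call; an empty body is
# treated as a transient failure worth retrying.
import time

def fetch_with_retry(fetch, retries=3, base_delay=0.0):
    for attempt in range(retries):
        body = fetch()
        if body:                                 # non-empty response: success
            return body
        time.sleep(base_delay * (2 ** attempt))  # back off before retrying
    raise RuntimeError("network error: empty response after retries")

# Fake transport that fails once, then succeeds -- mimics the flaky router
responses = iter([b"", b'{"credits_balance": 0}'])
print(fetch_with_retry(lambda: next(responses)))
```

In a real client you would use a non-zero base_delay and surface the final RuntimeError to the user as the "network error" message suggested above.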
| 0 | 0 | 0 | 0 |
2013-03-14T14:04:00.000
| 1 | 0.197375 | false | 15,411,498 | 0 | 0 | 1 | 1 |
I am running a Flask/Gunicorn Python app on a Heroku Cedar dyno. The app returns JSON responses to its clients (it's an API server, really).
Once in a while clients get 0-byte responses. It's not me returning them, however. Here is a snippet of my app's log:
Mar 14 13:13:31 d.0b1adf0a-0597-4f5c-8901-dfe7cda9bce0 app[web.1]
[2013-03-14 13:13:31 UTC] 10.104.41.136 apisrv -
api_get_credits_balance(): session_token=[MASKED]
The first line above is me starting to handle the request.
Mar 14 13:13:31 d.0b1adf0a-0597-4f5c-8901-dfe7cda9bce0 app[web.1]
[2013-03-14 13:13:31 UTC] 10.104.41.136 apisrv 1252148511
api_get_credits_balance(): returning [{'credits_balance': 0}]
The second line is me returning a value (to Flask -- it's a Flask "Response" object).
Mar 14 13:13:31 d.0b1adf0a-0597-4f5c-8901-dfe7cda9bce0 app[web.1]
"10.104.41.136 - - [14/Mar/2013:13:13:31] "POST
/get_credits_balance?session_token=MASKED HTTP/1.1" 200 22 "-"
"Appcelerator Titanium/3.0.0.GA (iPhone/6.1.2; iPhone OS; en_US;)"
The third line is Gunicorn's, where you can see that Gunicorn got the 200 status and a 22-byte HTTP body ("200 22").
However, the client got 0 bytes. Here is the Heroku router log:
Mar 14 13:13:30 d.0b1adf0a-0597-4f5c-8901-dfe7cda9bce0 heroku[router]
at=info method=POST path=/get_credits_balance?session_token=MASKED
host=matchspot-apisrv.herokuapp.com fwd="66.87.116.128" dyno=web.1
queue=0 wait=0ms connect=1ms service=19ms status=200 bytes=0
Why does Gunicorn return 22 bytes, but Heroku sees 0, and indeed passes back 0 bytes to the client? Is this a Heroku bug?
|
How can i include my template from the static folder in django
| 15,423,962 | 0 | 0 | 519 | 0 |
python,django
|
Make sure that the folder is included in the TEMPLATE_DIRS setting (in settings.py).
| 0 | 0 | 0 | 0 |
2013-03-15T02:49:00.000
| 4 | 0 | false | 15,423,863 | 0 | 0 | 1 | 2 |
I have the django app structure like
myproject/myapp
myproject/myapp/static/site/app/index.html
I want to include that template in my file like this
url(r'^public$', TemplateView.as_view(template_name="/static/site/app/index.html")),
but it says template not found
|
How can i include my template from the static folder in django
| 15,427,726 | 0 | 0 | 519 | 0 |
python,django
|
By default Django checks the app/templates directory for a template match, or project_root/templates/. You should have the directory listed in the TEMPLATE_DIRS setting as others have stated.
| 0 | 0 | 0 | 0 |
2013-03-15T02:49:00.000
| 4 | 0 | false | 15,423,863 | 0 | 0 | 1 | 2 |
I have the Django app structure like
myproject/myapp
myproject/myapp/static/site/app/index.html
I want to include that template in my file like this
url(r'^public$', TemplateView.as_view(template_name="/static/site/app/index.html")),
but it says template not found
|
Strip a dependency of unused functions
| 15,433,922 | 3 | 1 | 101 | 0 |
python
|
No, there's no generally applicable way of doing that for Python. There are some heuristics for simple modules, but they're going to fail miserably.
In the specific case of NumPy you'd have to first find out which parts of its underlying C and Fortran code are needed and which aren't, which is a pretty difficult problem in its own right. Even if you can solve that, the fact that NumPy also uses __import__ in several places, including in compiled extension modules, makes it nearly impossible to determine which parts of the code are going to be imported.
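One of the "simple heuristics" alluded to above is static import analysis with the stdlib `modulefinder` module; as noted, it will miss anything loaded via `__import__` or other dynamic mechanisms, so this is only a rough sketch of the approach:

```python
import os
import tempfile
from modulefinder import ModuleFinder

# Write a tiny throwaway script to analyze (stands in for the app).
script = os.path.join(tempfile.mkdtemp(), "app.py")
with open(script, "w") as f:
    f.write("import json\nimport math\n")

# Statically trace the modules the script imports (and their imports).
finder = ModuleFinder()
finder.run_script(script)

found = sorted(finder.modules)  # module names statically reachable
print(found)
```

Anything not in that reachable set is a candidate for stripping, but dynamic imports make the result unreliable for packages like NumPy.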
| 0 | 0 | 0 | 0 |
2013-03-15T13:35:00.000
| 1 | 1.2 | true | 15,433,845 | 0 | 0 | 1 | 1 |
I'm just deploying a Django application which uses Matplotlib and Numpy as dependencies. It's a small app, and in the end, the dependency code outweighs the app code by a lot. I'm also getting lots of errors in setting the dependencies in the production environment for methods I'm not directly using in the app.
Is there a method for stripping down a dependecy for it to contain only the things necessary for the app to work?
|
django get_choices_display() displaying label in shell and value in view
| 15,440,329 | 0 | 0 | 192 | 0 |
python,django
|
If you want the label, I believe you want to use instance.label.
| 0 | 0 | 0 | 0 |
2013-03-15T17:47:00.000
| 1 | 0 | false | 15,438,947 | 0 | 0 | 1 | 1 |
When I try instance.get_choices_display() in the shell, it shows the label.
But when I do the same thing in the view, it prints the value.
What can be the reason?
I want the label to be returned/printed in the view.
|
python - simpledb - how to shard/chunk a big string into several <1kb values?
| 15,460,747 | 0 | 0 | 255 | 0 |
python,amazon-simpledb
|
I opted to go with storing large text documents in Amazon S3 (retrieval seems to be quick); I'll be implementing an EC2 instance for caching the documents, with S3 as a failover.
| 0 | 0 | 0 | 0 |
2013-03-15T22:13:00.000
| 2 | 1.2 | true | 15,442,919 | 0 | 0 | 1 | 1 |
I've been reading up on SimpleDB and one downfall (for me) is the 1kb max per attribute limit. I do a lot of RSS feed processing and I was hoping to store feed data in SimpleDB (articles) and from what I've read the best way to do this is to shard the article across several attributes. The typical article is < 30kb of plain text.
I'm currently storing article data in DynamoDB (gzip compressed) without any issues, but the cost is fairly high. Was hoping to migrate to SimpleDB for cheaper storage with still fast retrievals. I do archive a json copy of all rss articles on S3 as well (many years of mysql headaches make me wary of db's).
Does anyone know how to shard a string into < 1kb pieces? I'm assuming an identifier would need to be appended to each chunk for ordered reassembly.
Any thoughts would be much appreciated!
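A rough sketch of the chunking idea described in the question: split the text into pieces under 1 KB, keying each chunk with a zero-padded index so the attribute names sort back into order (the attribute naming scheme is illustrative, not a SimpleDB convention):

```python
def shard(text, chunk_size=1023):
    # Map zero-padded chunk names to <1 KB slices of the text.
    return {
        "chunk_%04d" % i: text[off:off + chunk_size]
        for i, off in enumerate(range(0, len(text), chunk_size))
    }

def reassemble(chunks):
    # Zero-padding makes lexicographic sort equal to numeric order
    # (up to 10,000 chunks with this width).
    return "".join(chunks[k] for k in sorted(chunks))
```

Each dict entry would become one SimpleDB attribute; reassembly just sorts the attribute names and concatenates.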
|
Persistent in-memory Python object for nginx/uwsgi server
| 45,383,617 | 1 | 8 | 4,976 | 0 |
python,optimization,nginx,redis,uwsgi
|
"python in-memory data will NOT be persisted across all requests to my knowledge, unless I'm terribly mistaken."
you are mistaken.
the whole point of using uwsgi over, say, the CGI mechanism is to persist data across threads and save the overhead of initialization for each call. you must set processes = 1 in your .ini file, or, depending on how uwsgi is configured, it might launch more than 1 worker process on your behalf. log the env and look for 'wsgi.multiprocess': False and 'wsgi.multithread': True, and all uwsgi.core threads for the single worker should show the same data.
you can also see how many worker processes, and "core" threads under each, you have by using the built-in stats-server.
that's why uwsgi provides lock and unlock functions for manipulating data stores by multiple threads.
you can easily test this by adding a /status route in your app that just dumps a json representation of your global data object, and view it every so often after actions that update the store.
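A minimal sketch of the per-worker in-process cache this answer describes: a module-level object persists across requests within one worker and is refreshed only when it goes stale. The names, TTL, and the `_fetch_raw` stub (standing in for the Redis/Riak read) are assumptions for illustration:

```python
import json
import time

_CACHE = {"data": None, "loaded_at": 0.0}
_TTL = 300  # refresh every 5 minutes instead of on every request

def _fetch_raw():
    # Placeholder for the Redis/Riak read; returns JSON text.
    return '{"chunk": [1, 2, 3]}'

def get_static_data():
    now = time.time()
    if _CACHE["data"] is None or now - _CACHE["loaded_at"] > _TTL:
        _CACHE["data"] = json.loads(_fetch_raw())  # deserialize once
        _CACHE["loaded_at"] = now
    return _CACHE["data"]
```

Within one worker, every request between refreshes gets the same already-deserialized object back, which is the whole saving the question is after.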
| 0 | 1 | 0 | 1 |
2013-03-15T23:31:00.000
| 4 | 0.049958 | false | 15,443,732 | 0 | 0 | 1 | 2 |
I doubt this is even possible, but here is the problem and proposed solution (the feasibility of the proposed solution is the object of this question):
I have some "global data" that needs to be available for all requests. I'm persisting this data to Riak and using Redis as a caching layer for access speed (for now...). The data is split into about 30 logical chunks, each about 8 KB.
Each request is required to read 4 of these 8KB chunks, resulting in 32KB of data read in from Redis or Riak. This is in ADDITION to any request-specific data which would also need to be read (which is quite a bit).
Assuming even 3000 requests per second (this isn't a live server so I don't have real numbers, but 3000/s is a reasonable assumption, could be more), this means roughly 96MB/s of transfer from Redis or Riak in ADDITION to the already not-insignificant other calls being made from the application logic. Also, Python is parsing the JSON of these 8KB objects thousands of times every second.
All of this - especially Python having to repeatedly deserialize the data - seems like an utter waste, and a perfectly elegant solution would be to just have the deserialized data cached in an in-memory native object in Python, which I can refresh periodically as and when all this "static" data becomes stale. Once in a few minutes (or hours), instead of 3000 times per second.
But I don't know if this is even possible. You'd realistically need an "always running" application for it to cache any data in its memory. And I know this is not the case in the nginx+uwsgi+python combination (versus something like node) - python in-memory data will NOT be persisted across all requests to my knowledge, unless I'm terribly mistaken.
Unfortunately this is a system I have "inherited" and therefore can't make too many changes in terms of the base technology, nor am I knowledgeable enough of how the nginx+uwsgi+python combination works in terms of starting up Python processes and persisting Python in-memory data - which means I COULD be terribly mistaken with my assumption above!
So, direct advice on whether this solution would work + references to material that could help me understand how the nginx+uwsgi+python would work in terms of starting new processes and memory allocation, would help greatly.
P.S:
Have gone through some of the documentation for nginx, uwsgi etc but haven't fully understood the ramifications per my use-case yet. Hope to make some progress on that going forward now
If the in-memory thing COULD work out, I would chuck Redis, since I'm caching ONLY the static data I mentioned above, in it. This makes an in-process persistent in-memory Python cache even more attractive for me, reducing one moving part in the system and at least FOUR network round-trips per request.
|
Persistent in-memory Python object for nginx/uwsgi server
| 45,384,113 | 1 | 8 | 4,976 | 0 |
python,optimization,nginx,redis,uwsgi
|
You said nothing about writing this data back; is it static? In this case, the solution is very simple, and I have no clue what is up with all the "it's not feasible" responses.
Uwsgi workers are always-running applications. So data absolutely gets persisted between requests. All you need to do is store stuff in a global variable, that is it. And remember it's per-worker, and workers do restart from time to time, so you need proper loading/invalidation strategies.
If the data is updated very rarely (rarely enough to restart the server when it does), you can save even more. Just create the objects during app construction. This way, they will be created exactly once, and then all the workers will fork off the master, and reuse the same data. Of course, it's copy-on-write, so if you update it, you will lose the memory benefits (same thing will happen if python decides to compact its memory during a gc run, so it's not super predictable).
| 0 | 1 | 0 | 1 |
2013-03-15T23:31:00.000
| 4 | 0.049958 | false | 15,443,732 | 0 | 0 | 1 | 2 |
I doubt this is even possible, but here is the problem and proposed solution (the feasibility of the proposed solution is the object of this question):
I have some "global data" that needs to be available for all requests. I'm persisting this data to Riak and using Redis as a caching layer for access speed (for now...). The data is split into about 30 logical chunks, each about 8 KB.
Each request is required to read 4 of these 8KB chunks, resulting in 32KB of data read in from Redis or Riak. This is in ADDITION to any request-specific data which would also need to be read (which is quite a bit).
Assuming even 3000 requests per second (this isn't a live server so I don't have real numbers, but 3000ps is a reasonable assumption, could be more), this means 96KBps of transfer from Redis or Riak in ADDITION to the already not-insignificant other calls being made from the application logic. Also, Python is parsing the JSON of these 8KB objects 3000 times every second.
All of this - especially Python having to repeatedly deserialize the data - seems like an utter waste, and a perfectly elegant solution would be to just have the deserialized data cached in an in-memory native object in Python, which I can refresh periodically as and when all this "static" data becomes stale. Once in a few minutes (or hours), instead of 3000 times per second.
But I don't know if this is even possible. You'd realistically need an "always running" application for it to cache any data in its memory. And I know this is not the case in the nginx+uwsgi+python combination (versus something like node) - python in-memory data will NOT be persisted across all requests to my knowledge, unless I'm terribly mistaken.
Unfortunately this is a system I have "inherited" and therefore can't make too many changes in terms of the base technology, nor am I knowledgeable enough of how the nginx+uwsgi+python combination works in terms of starting up Python processes and persisting Python in-memory data - which means I COULD be terribly mistaken with my assumption above!
So, direct advice on whether this solution would work + references to material that could help me understand how the nginx+uwsgi+python would work in terms of starting new processes and memory allocation, would help greatly.
P.S:
Have gone through some of the documentation for nginx, uwsgi etc but haven't fully understood the ramifications per my use-case yet. Hope to make some progress on that going forward now
If the in-memory thing COULD work out, I would chuck Redis, since I'm caching ONLY the static data I mentioned above, in it. This makes an in-process persistent in-memory Python cache even more attractive for me, reducing one moving part in the system and at least FOUR network round-trips per request.
|
Creating links to private S3 files which still requires authentication
| 15,503,402 | 1 | 1 | 415 | 0 |
python,file,amazon-s3,boto
|
No, there really isn't any way to do this without putting some sort of service between the people clicking on the links and the S3 objects.
The reason is that access to the S3 content is determined by your AWS access_key and secret_key. There is no way to "login" with these credentials and logging into the AWS web console uses a different set of credentials that are only useful for the console. It does not authenticate you with the S3 service.
| 0 | 0 | 1 | 0 |
2013-03-19T09:42:00.000
| 1 | 0.197375 | false | 15,495,949 | 0 | 0 | 1 | 1 |
I'm having trouble with S3 files. I have some Python code using boto that uploads files to S3, and I want to write links to the created files to a log file for future reference.
I can't seem to find a way to generate a link that works only for authenticated people. I can create a link using the generate_url method, but then anybody who clicks on that link can access the file. Any other way of creating the URL produces a link that doesn't work even if I'm logged in (I get an XML "access denied" response).
Does anybody know of a way of doing this? Preferably permanent links, but I can make do with temporary links that expire after a given time
Thanks,
Ophir
|
while downloading app from google app engine its throwing error <400>
| 15,499,700 | 0 | 0 | 304 | 0 |
python,google-app-engine,sdk
|
@Bharadwaj Please check whether the version number you have specified in the command actually exists in App Engine.
Also make sure that you are providing the right App Engine credentials.
| 0 | 1 | 0 | 0 |
2013-03-19T09:48:00.000
| 2 | 0 | false | 15,496,065 | 0 | 0 | 1 | 1 |
My app name is nfcVibe, but I am still getting the error below. Can anyone suggest how to download my app? I think I gave the command correctly, but I don't know where it is going wrong.
C:\Program Files\Google\google_appengine>appcfg.py download_app -A nfcVibe -V 1
"e:\nfcvibe1"
03:11 PM Host: appengine.google.com
03:11 PM Fetching file list...
Error 400: --- begin server output ---
Client Error (400)
The request is invalid for an unspecified reason.
--- end server output ---
|
How can I extract the list of urls obtained during a HTML page render in python?
| 15,513,793 | 0 | 2 | 996 | 0 |
python,http,http-headers
|
I guess you will have to create a list of all known file extensions that you do NOT want, and then scan the content of the HTTP response, checking with "if substring not in nono_list:".
The problem is all the hrefs ending with TLDs, forward slashes, URL-delivered variables and so on, so I think it would be easier to check for the stuff you know you don't want.
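A rough sketch of scanning a response body for referenced URLs with the stdlib HTML parser (shown in Python 3). As the question itself points out, this catches `src`/`href` attributes but not resources referenced from CSS:

```python
from html.parser import HTMLParser

class URLCollector(HTMLParser):
    """Collect every src/href attribute value seen in the markup."""
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value:
                self.urls.append(value)

html = '<html><img src="/a.png"><link href="style.css"><a href="/next">n</a></html>'
c = URLCollector()
c.feed(html)
print(c.urls)
```

From here you could filter the collected values against the unwanted-extension list described above.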
| 0 | 0 | 1 | 0 |
2013-03-20T01:28:00.000
| 2 | 0 | false | 15,513,699 | 0 | 0 | 1 | 1 |
I want to be able to get the list of all URLs that a browser will do a GET request for when we try to open a page. For eg: if we try to open cnn.com, there are multiple URLs within the first HTTP response which the browser recursively requests for.
I'm not trying to render a page but I'm trying to obtain a list of all the urls that are requested when a page is rendered. Doing a simple scan of the http response content wouldn't be sufficient as there could potentially be images in the css which are downloaded. Is there anyway I can do this in python?
|
Is it possible to build Android apps without Java?
| 15,514,369 | 3 | 1 | 3,672 | 0 |
java,android,c++,python,django
|
You might want to check out the likes of PhoneGap, Scala, Groovy, Mirah, Rhodes, and Clojure.
| 1 | 0 | 0 | 0 |
2013-03-20T02:38:00.000
| 2 | 0.291313 | false | 15,514,294 | 0 | 0 | 1 | 1 |
I want to learn how to program apps for Android, but I am not very fond of Java. I read that you can build Android apps with Python and C++. So can I build apps completely without using Java? Also, what are the advantages of C++, Python, and Java when building for Android? Another question: will the Django framework work for Android? Thank you for your time.
|
how to check the request is https in app engine python
| 44,672,760 | 0 | 1 | 428 | 0 |
google-app-engine,python-2.x
|
If you are using GAE Flex (where the secure: directive doesn't work), the only way I've found to detect this (to redirect http->https myself) is to check if request.environ['HTTP_X_FORWARDED_PROTO'] == 'https'
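A small framework-agnostic sketch of that check against a WSGI environ dict; note the header name depends on the proxy in front of the app, and trusting it is only safe when the proxy always sets it:

```python
def is_https(environ):
    # Behind a proxy (e.g. GAE Flex), trust X-Forwarded-Proto;
    # otherwise fall back to the WSGI-reported scheme.
    proto = (environ.get("HTTP_X_FORWARDED_PROTO")
             or environ.get("wsgi.url_scheme", "http"))
    return proto == "https"
```

In a webapp2.RequestHandler this would be called with `self.request.environ`, rejecting the request (e.g. with a 403) when it returns False.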
| 0 | 1 | 1 | 0 |
2013-03-20T04:24:00.000
| 3 | 0 | false | 15,515,299 | 0 | 0 | 1 | 1 |
I would like to know if there is a way to validate that a request (say a POST or a GET) was made over HTTPS.
I need to check this in a webapp2.RequestHandler to invalidate every request that is not sent via https
best regards
|
AppEngine 1.7.6 and Django 1.4.2 release
| 15,525,796 | 0 | 0 | 154 | 0 |
python,django,google-app-engine,python-2.7,django-nonrel
|
The django library built into GAE is straight up normal django that has an SQL ORM. So you can use this with Cloud SQL but not the HRD.
django-nonrel is up to 1.4.5 according to the messages on the newsgroup. The documentation, unfortunately, is sorely behind.
| 0 | 1 | 0 | 0 |
2013-03-20T07:33:00.000
| 2 | 0 | false | 15,517,766 | 0 | 0 | 1 | 1 |
AppEngine 1.7.6 has promoted Django 1.4.2 to GA.
I wonder how and if people are using this. The reason for my question is that Django-nonrel seems to be stuck on Django 1.3 and there are no signs of an updated release.
What I would like to use from Django are controllers, views and especially form validations.
|
Web front-end for c++ code
| 15,520,011 | 0 | 0 | 959 | 0 |
php,c++,python,flask
|
You can use sockets: start listening on some port from the C++ program, then from PHP you can connect and send/receive data to/from your program.
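For the Flask/Python route the question also mentions, a simpler common pattern is to shell out to the compiled binary from the request handler and return its output. A hedged sketch; the binary path and its argument convention are placeholders, not the asker's actual program:

```python
import subprocess

def run_comparison(binary, path_a, path_b):
    # Invoke the external comparison program and capture its stdout.
    # `binary` would be e.g. "./image_compare" in the real app.
    result = subprocess.run(
        [binary, path_a, path_b],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

The Flask view would save the uploaded image to a temp file, call this function, and render the returned result back to the browser.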
| 1 | 0 | 0 | 0 |
2013-03-20T09:40:00.000
| 3 | 0 | false | 15,519,904 | 0 | 0 | 1 | 1 |
I have C++ code that can be compiled under Linux, Windows or Mac OS. The code compares two images. I would like to have its front end running in a browser and make it available on the web.
I am familiar with hosting and dns and that is not the issue. what I can't seem to figure out is:
How do I invoke the script once the image is uploaded by users?
The results from the code need to be displayed back in the browser. How can a callback be set up for this?
Is there a php solution? Or python (with flask)?
|
Scrapy Case : Incremental Update of Items
| 25,308,766 | 2 | 3 | 1,365 | 0 |
python,screen-scraping,scrapy
|
Before trying to give you an idea...
I must say I would try your database option first. Databases are made just for that and, even if your DB gets really big, this should not slow the crawling down significantly.
And one lesson I have learned: "First do the dumb implementation. After that, you try to optimize." Most of the time when you optimize first, you just optimize the wrong part.
But, if you really want another idea...
Scrapy's default is not to crawl the same URL twice. So, before starting the crawl you can put the already scraped URLs (from 3 days before) into the list Scrapy uses to know which URLs were already visited. (I don't know how to do that.)
Or, simpler, in your item parser you can just check if the url was already scraped and return None or scrape the new item accordingly.
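A sketch of that second, simpler idea: keep the already scraped URLs in an in-memory set (loaded from the database once at spider start, rather than queried per item) and skip anything seen before. The class below stands in for a scrapy.Spider subclass; the names are illustrative:

```python
class IncrementalSpider:
    """Skips items whose URL was already scraped in a previous run."""

    def __init__(self, seen_urls):
        self.seen = set(seen_urls)  # loaded from the DB once at startup

    def parse_item(self, url):
        if url in self.seen:
            return None              # already scraped: skip
        self.seen.add(url)
        return {"url": url}          # new item to pipeline/store
```

This trades one bulk DB read at startup for the per-item queries the asker wants to avoid; memory use grows with the number of known URLs.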
| 0 | 0 | 1 | 0 |
2013-03-20T17:03:00.000
| 1 | 0.379949 | false | 15,530,071 | 0 | 0 | 1 | 1 |
Please help me solve following case:
Imagine a typical classified category page: a page with a list of items. When you click on an item you land on its internal page. Currently my crawler scrapes all these URLs, then scrapes each one to get the details of the item, and checks whether the initial seed URL has a next page. If it has, it goes to the next page and does the same. I am storing these items in a SQL database.
Let's say 3 days later there are new items in the seed URL and I want to scrape only the new items. Possible solutions are:
At the time of scraping each item, I check in the database to see if the URL is already scraped. If it has, I simply ask Scrapy to stop crawling further.
Problem: I don't want to query the database each time. My database is going to be really large and it will eventually make crawling super slow.
I store the last scraped URL and pass it in at the beginning, and the moment the crawler finds this last_scraped_url it simply stops.
Not possible: given the asynchronous nature of crawling, URLs are not scraped in the same order they are received from seed URLs.
( I tried all methods to make it in orderly fashion - but that's not possible at all )
Can anybody suggest any other ideas ? I have been struggling over it for past three days.
Appreciate your replies.
|
Windows Error in Google App Engine
| 15,538,956 | 2 | 2 | 1,412 | 0 |
google-app-engine,python-2.7
|
I updated the GAE SDK from 1.7.5 to 1.7.6; since then I started getting this error. I reverted back to 1.7.5 and the application is functioning normally :)
| 0 | 1 | 0 | 0 |
2013-03-20T17:39:00.000
| 5 | 0.07983 | false | 15,530,866 | 0 | 0 | 1 | 3 |
This is my first program in GAE. I'm working with the latest GAE SDK and Python 2.7 on Windows XP 32-bit. All was working fine, but to my surprise I'm getting the following error:
2013-03-20 22:48:26 Running command: "['C:\\Python27\\pythonw.exe', 'C:\\Program Files\\Google\\google_appengine\\dev_appserver.py', '--skip_sdk_update_check=yes', '--port=9080', '--admin_port=8001', u'B:\\AppEngg\\huddle-up']"
INFO 2013-03-20 22:48:27,236 devappserver2.py:401] Skipping SDK update check.
WARNING 2013-03-20 22:48:27,253 api_server.py:328] Could not initialize images API; you are likely missing the Python "PIL" module.
INFO 2013-03-20 22:48:27,283 api_server.py:152] Starting API server at: http://localhost:1127
INFO 2013-03-20 22:48:27,299 api_server.py:517] Applying all pending transactions and saving the datastore
INFO 2013-03-20 22:48:27,299 api_server.py:520] Saving search indexes
Traceback (most recent call last):
File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 194, in
_run_file(__file__, globals())
File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 190, in _run_file
execfile(script_path, globals_)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 545, in
main()
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 538, in main
dev_server.start(options)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 513, in start
self._dispatcher.start(apis.port, request_data)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\dispatcher.py", line 95, in start
servr.start()
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\server.py", line 827, in start
self._watcher.start()
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\win32_file_watcher.py", line 74, in start
raise ctypes.WinError()
WindowsError: [Error 6] The handle is invalid.
2013-03-20 22:48:27 (Process exited with code 1)
I Googled it, but it seems that most of the people getting this error have something wrong in their PATH config or are on x64 Windows.
|
Windows Error in Google App Engine
| 25,406,846 | 0 | 2 | 1,412 | 0 |
google-app-engine,python-2.7
|
I got exactly the same problem with SDK 1.99 on Windows 8.
I was running a test script .yaml and .go file from Google Go's own working directory.
Moving my code to its own subdirectory solved the problem.
| 0 | 1 | 0 | 0 |
2013-03-20T17:39:00.000
| 5 | 0 | false | 15,530,866 | 0 | 0 | 1 | 3 |
This is my first program in GAE. I'm working with the latest GAE SDK and Python 2.7 on Windows XP 32-bit. All was working fine, but to my surprise I'm getting the following error:
2013-03-20 22:48:26 Running command: "['C:\\Python27\\pythonw.exe', 'C:\\Program Files\\Google\\google_appengine\\dev_appserver.py', '--skip_sdk_update_check=yes', '--port=9080', '--admin_port=8001', u'B:\\AppEngg\\huddle-up']"
INFO 2013-03-20 22:48:27,236 devappserver2.py:401] Skipping SDK update check.
WARNING 2013-03-20 22:48:27,253 api_server.py:328] Could not initialize images API; you are likely missing the Python "PIL" module.
INFO 2013-03-20 22:48:27,283 api_server.py:152] Starting API server at: http://localhost:1127
INFO 2013-03-20 22:48:27,299 api_server.py:517] Applying all pending transactions and saving the datastore
INFO 2013-03-20 22:48:27,299 api_server.py:520] Saving search indexes
Traceback (most recent call last):
File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 194, in
_run_file(__file__, globals())
File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 190, in _run_file
execfile(script_path, globals_)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 545, in
main()
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 538, in main
dev_server.start(options)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 513, in start
self._dispatcher.start(apis.port, request_data)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\dispatcher.py", line 95, in start
servr.start()
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\server.py", line 827, in start
self._watcher.start()
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\win32_file_watcher.py", line 74, in start
raise ctypes.WinError()
WindowsError: [Error 6] The handle is invalid.
2013-03-20 22:48:27 (Process exited with code 1)
I Googled it; but it seems that most of the people getting this error have something wrong in there PATH config or in x64 Windows.
|
Windows Error in Google App Engine
| 15,578,142 | 1 | 2 | 1,412 | 0 |
google-app-engine,python-2.7
|
I had the same issue with GAE SDK 1.7.6, downgrading to 1.7.5 solved it for me too.
| 0 | 1 | 0 | 0 |
2013-03-20T17:39:00.000
| 5 | 0.039979 | false | 15,530,866 | 0 | 0 | 1 | 3 |
This is my first program in GAE. I'm working with the latest GAE SDK and Python 2.7 on Windows XP 32-bit. All was working fine, but to my surprise I'm getting the following error:
2013-03-20 22:48:26 Running command: "['C:\\Python27\\pythonw.exe', 'C:\\Program Files\\Google\\google_appengine\\dev_appserver.py', '--skip_sdk_update_check=yes', '--port=9080', '--admin_port=8001', u'B:\\AppEngg\\huddle-up']"
INFO 2013-03-20 22:48:27,236 devappserver2.py:401] Skipping SDK update check.
WARNING 2013-03-20 22:48:27,253 api_server.py:328] Could not initialize images API; you are likely missing the Python "PIL" module.
INFO 2013-03-20 22:48:27,283 api_server.py:152] Starting API server at: http://localhost:1127
INFO 2013-03-20 22:48:27,299 api_server.py:517] Applying all pending transactions and saving the datastore
INFO 2013-03-20 22:48:27,299 api_server.py:520] Saving search indexes
Traceback (most recent call last):
File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 194, in
_run_file(__file__, globals())
File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 190, in _run_file
execfile(script_path, globals_)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 545, in
main()
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 538, in main
dev_server.start(options)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 513, in start
self._dispatcher.start(apis.port, request_data)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\dispatcher.py", line 95, in start
servr.start()
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\server.py", line 827, in start
self._watcher.start()
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\win32_file_watcher.py", line 74, in start
raise ctypes.WinError()
WindowsError: [Error 6] The handle is invalid.
2013-03-20 22:48:27 (Process exited with code 1)
I Googled it, but it seems that most of the people getting this error have something wrong in their PATH config or are on x64 Windows.
|
Move data from Raspberry pi to a synology diskstation to present in a webpage
| 15,534,482 | 1 | 2 | 699 | 0 |
python,service,web
|
I'm no expert on this topic, but what I would do is set up a database in between (on the Synology rather than on the Raspberry Pi). Let's call your Synology the server, and the Raspberry Pi a sensor client.
I would host a database on the server and push the data from the sensor client. The data could be pushed either through a webservice API, or something more low-level if you need it faster (some code needed on the server side for this); or, since the client computer is under your control, it could push directly into the database.
Your concrete choice between a database, a webservice or another API depends on:
How much data has to be pushed?
How fast does the data have to be pushed?
How much do you trust your network?
How much do you trust your sensor client?
I've never used it, but I suggest you use SQLAlchemy for connecting to the database (from both sides).
If in some use case the remote server can be down, the sensor client would store sensor data in a local file and push it when the server comes back online.
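The buffering idea in that last paragraph can be sketched like this; the `send` callable is a placeholder for whatever transport is chosen (webservice call or database insert), and the names are assumptions:

```python
class BufferedPusher:
    """Queue readings locally; flush the backlog when the server is up."""

    def __init__(self, send):
        self.send = send      # callable: send(reading), raises OSError when down
        self.backlog = []

    def push(self, reading):
        self.backlog.append(reading)
        delivered = []
        try:
            while self.backlog:
                self.send(self.backlog[0])
                delivered.append(self.backlog.pop(0))
        except OSError:
            pass              # server down: keep the backlog for later
        return len(delivered)
```

In the real client the backlog would be persisted to a local file so readings survive a reboot of the Pi as well.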
| 0 | 0 | 0 | 1 |
2013-03-20T20:42:00.000
| 1 | 0.197375 | false | 15,534,297 | 0 | 0 | 1 | 1 |
I'm looking for ideas, on how to display sensor data in a webpage, hosted by a Synology Diskstation, where the data comes from sensors connected to a Raspberry pi. This is going to be implemented in Python.
I have put together the sensors and connected them to the Raspberry Pi. I also have the Python code, so I can read the sensors. And I have a webpage up and running on the DiskStation using Python. But how do I get the data from the Pi to the DiskStation? The reading is only done when the webpage is displayed.
I guess some kind of webservice on the Pi? I have looked at Pyro4, but it doesn't look like it can be installed on the DiskStation. And I would prefer not to install a whole web server framework on the Pi.
Do you have a suggestion?
|
How should an application add/remove scopes to an existing grant?
| 15,593,836 | 1 | 1 | 140 | 0 |
python,google-app-engine,google-drive-api,google-oauth,oauth2client
|
Sorry, you can't do that. You will need to re-authorize the user. I agree that it would be nice to incrementally add scopes, but you will still need to show an authorization page, so I think you won't gain much doing that.
| 0 | 0 | 0 | 0 |
2013-03-21T05:48:00.000
| 1 | 1.2 | true | 15,540,360 | 0 | 0 | 1 | 1 |
Tried adding additional scopes using oauth2client's OAuth2DecoratorFromClientSecrets via the scopes parameter.
I believe users of an application would prefer to gradually expand privileges as needed, and as trust forms...
What is the best way to add/expand/remove scopes when the application has an existing grant? Revoke and reauthorize?
|
Can I use ModelForm to filter a Date by range?
| 15,551,326 | 0 | 0 | 57 | 0 |
python,django
|
Your form will be more complicated than a simple ModelForm.
Maybe you could subclass ModelForm and populate a new DateTimeField for each DateTimeField in the model...
As for building the query, that will take some work too.
Consider hardcoding the extra DateField if you only want to filter one model.
| 0 | 0 | 0 | 0 |
2013-03-21T15:02:00.000
| 1 | 1.2 | true | 15,551,092 | 0 | 0 | 1 | 1 |
I want to filter some models in a list.
I know I can use ModelForm and filter in my view.
But my question is, how can I take advantage of ModelForms to filter a date field by range?
Also, I wish my form would generate two date widgets for my date field, one for start date and another for end date.
|
How to stop flask application without using ctrl-c
| 63,349,977 | 0 | 154 | 204,744 | 0 |
python,flask,flask-extensions
|
Google Cloud VM instance + Flask App
I hosted my Flask Application on Google Cloud Platform Virtual Machine.
I started the app using python main.py, but the problem was that Ctrl+C did not work to stop the server.
The command $ sudo netstat -tulnp | grep :5000 finds the process listening on port 5000; killing that PID terminates the server.
My Flask app runs on port 5000 by default.
Note: My VM instance is running on Linux 9.
It works for this. Haven't tested for other platforms.
Feel free to update or comment if it works for other versions too.
| 0 | 0 | 0 | 0 |
2013-03-22T03:55:00.000
| 16 | 0 | false | 15,562,446 | 0 | 0 | 1 | 3 |
I want to implement a command which can stop a Flask application using Flask-Script.
I have searched for a solution for a while. Because the framework doesn't provide an app.stop() API, I am curious how to code this. I am working on Ubuntu 12.10 and Python 2.7.3.
|
How to stop flask application without using ctrl-c
| 56,710,034 | -6 | 154 | 204,744 | 0 |
python,flask,flask-extensions
|
For Windows, it is quite easy to stop/kill flask server -
Go to Task Manager
Find flask.exe
Select and End process
| 0 | 0 | 0 | 0 |
2013-03-22T03:55:00.000
| 16 | -1 | false | 15,562,446 | 0 | 0 | 1 | 3 |
I want to implement a command which can stop flask application by using flask-script.
I have searched the solution for a while. Because the framework doesn't provide app.stop() API, I am curious about how to code this. I am working on Ubuntu 12.10 and Python 2.7.3.
|
How to stop flask application without using ctrl-c
| 59,755,698 | 6 | 154 | 204,744 | 0 |
python,flask,flask-extensions
|
If you're working on the CLI and only have one Flask app/process running (or rather, you just want to kill any Flask process running on your system), you can kill it with:
kill $(pgrep -f flask)
| 0 | 0 | 0 | 0 |
2013-03-22T03:55:00.000
| 16 | 1 | false | 15,562,446 | 0 | 0 | 1 | 3 |
I want to implement a command which can stop flask application by using flask-script.
I have searched the solution for a while. Because the framework doesn't provide app.stop() API, I am curious about how to code this. I am working on Ubuntu 12.10 and Python 2.7.3.
|
Store file on server for a short time period
| 15,587,348 | 0 | 0 | 79 | 0 |
python,webapp2,temporary-files
|
I would suggest using a client-created UUID; if the server already has a file stored under that name, it sends back an error, forcing the client to retry with a new name. Under most circumstances, the UUID will be completely unique and won't collide with anything already stored. If it does, the client can pick a new name and try again. If you want to make this slightly better, wait a random number of milliseconds between retries to reduce the likelihood of repeated collisions.
That'd be my approach to this specific, insecure, short-term storage problem.
As for removal, I'd leave the server responsible for removing the files at intervals, basically checking whether any file is more than 5 minutes old and deleting it. As long as in-progress downloads keep the file open, deletion shouldn't interrupt them.
If you want to leave the client in control, you will not have an easy way to enforce deletion when the client is offline, so I'd suggest keeping a list of the files in date order and delete them:
in a background thread as necessary if you expect to be running a long time
at startup (which will require persisting these to disk)
at shutdown (doesn't require persisting to disk)
However, all of these mechanisms are prone to leaving unnecessary files on the server if you crash or lose the persistent information, so I'd still recommend making the deletion the responsibility of the server.
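A minimal stdlib sketch of this scheme (with one simplification: the server generates the UUID here; the collision handling for a client-supplied name is analogous, and the paths are illustrative):

```python
import os
import time
import uuid
import tempfile

STORE = tempfile.mkdtemp()  # stand-in for the real storage directory
MAX_AGE = 5 * 60            # seconds; "about 5 minutes" per the question

def save_blob(data):
    """Store bytes under a fresh UUID name and return that name."""
    name = uuid.uuid4().hex  # effectively collision-free
    with open(os.path.join(STORE, name), "wb") as f:
        f.write(data)
    return name

def fetch_blob(name):
    """Return the stored bytes, or None if expired/unknown (error response)."""
    try:
        with open(os.path.join(STORE, name), "rb") as f:
            return f.read()
    except FileNotFoundError:
        return None

def prune(max_age=MAX_AGE):
    """Server-side sweep: delete anything older than max_age seconds."""
    cutoff = time.time() - max_age
    for name in os.listdir(STORE):
        path = os.path.join(STORE, name)
        if os.path.getmtime(path) < cutoff:
            os.remove(path)
```

prune() would be called from a periodic task (cron, a background thread, or the request handler itself).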
| 0 | 0 | 0 | 0 |
2013-03-22T20:45:00.000
| 1 | 0 | false | 15,579,545 | 0 | 0 | 1 | 1 |
What's the best way to name and store a generated file on a server, such that if the user requests the file in the next 5 minutes or so, you return it, otherwise, return an error code? I am using Python and Webapp2 (although this would work with any WSGI server).
|
Django + postgreSQL: find near points
| 15,593,621 | 0 | 0 | 689 | 1 |
python,django,postgresql,postgis,geodjango
|
You're probably right, PostGIS/GeoDjango is probably overkill, but making your own Django app would not be too much trouble for your simple task. Django offers a lot in terms of templating, etc. and with the built in admin makes it pretty easy to enter single records. And GeoDjango is part of contrib, so you can always use it later if your project needs it.
| 0 | 0 | 0 | 0 |
2013-03-24T00:09:00.000
| 3 | 0 | false | 15,593,572 | 0 | 0 | 1 | 1 |
For my app, I need to determine the nearest points to some other point and I am looking for a simple but relatively fast (in terms of performance) solution. I was thinking about using PostGIS and GeoDjango but I think my app is not really that "geographic" (I still don't really know what that means though). The geographic part (around 5 percent of the whole) is that I need to keep coordinates of objects (people and places) and then there is this task to find the nearest points. To put it simply, PostGIS and GeoDjango seems to be an overkill here.
I was also thinking of django-haystack with Solr or Elasticsearch because I am going to need strong text search capabilities, and these engines also have these "geographic" features. But I'm not sure about that either, as I am afraid of core db <-> search engine db synchronisation and the hardware requirements for these engines. At the moment I am more inclined to use PostgreSQL trigrams and some custom way to solve the "find near points" problem. Is there a good one?
|
Sorting OpenERP table by functional field
| 15,632,547 | 1 | 3 | 1,523 | 0 |
python,openerp
|
The reason for storing the field is that you delegate sorting to sql, that gives you more performance than any other subsequent sorting, for sure.
| 0 | 0 | 0 | 0 |
2013-03-25T17:24:00.000
| 2 | 0.099668 | false | 15,621,013 | 0 | 0 | 1 | 1 |
On search screens, users can sort the results by clicking on a column header. Unfortunately, this doesn't work for all columns. It works fine for regular fields like name and price that are stored on the table itself. It also works for many-to-one fields by joining to the referenced table and using the default sort order for that table.
What doesn't work is most functional fields and related fields. (Related fields are a type of functional field.) When you click on the column, it just ignores you. If you change the field definition to be stored in the database, then you can sort by it, but is that necessary? Is there any way to sort by a functional field without storing its values in the database?
|
Matching Month and Year In Python with datetime
| 15,625,871 | 1 | 1 | 483 | 0 |
python,django,date,datetime
|
What you're looking for is probably covered by post_date__year=year and post_date__month=month in Django.
Nevertheless, all this seems a little bit weird for GET parameters. Do you have any constraint at the database level that forbids two posts with the same title in the same month of a given year?
| 0 | 0 | 0 | 0 |
2013-03-25T22:01:00.000
| 2 | 0.099668 | false | 15,625,662 | 1 | 0 | 1 | 2 |
I'm working on a blog using Django and I'm trying to use get() to retrieve the post from my database with a certain post_title and post_date. I'm using datetime for the post_date, and although I can use post_date = date(year, month, day) to fetch a post made on that specific day, I don't know how to get it to ignore the day parameter. I can't pass only two arguments to date(), and since only integers can be used I don't think there's any kind of wildcard I can use for the day. How would I go about doing this?
To clarify, I'm trying to find a post using the year in which it was posted, the month, and its title, but not the day. Thanks in advance for any help!
|
Matching Month and Year In Python with datetime
| 15,625,840 | 1 | 1 | 483 | 0 |
python,django,date,datetime
|
you could use post_date__year and post_date__month
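A stand-in showing what the post_date__year / post_date__month lookups match: only the year and month components are compared, the day is ignored. In Django itself this would be Post.objects.get(post_title=title, post_date__year=year, post_date__month=month); the in-memory version below just illustrates the semantics.

```python
from datetime import date

# Hypothetical posts: (title, post_date) pairs standing in for model rows.
posts = [
    ("hello", date(2013, 3, 25)),
    ("hello", date(2013, 4, 2)),
]

def get_post(title, year, month):
    """Mimic get(): exactly one match on title + year + month, day ignored."""
    matches = [p for p in posts
               if p[0] == title and p[1].year == year and p[1].month == month]
    if len(matches) != 1:
        raise LookupError("get() expects exactly one match")
    return matches[0]
```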
| 0 | 0 | 0 | 0 |
2013-03-25T22:01:00.000
| 2 | 1.2 | true | 15,625,662 | 1 | 0 | 1 | 2 |
I'm working on a blog using Django and I'm trying to use get() to retrieve the post from my database with a certain post_title and post_date. I'm using datetime for the post_date, and although I can use post_date = date(year, month, day) to fetch a post made on that specific day, I don't know how to get it to ignore the day parameter, I can't only pass two arguments to date() and since only integers can be used I don't think there's any kind of wildcard I can use for the day. How would I go about doing this?
To clarify, I'm trying to find a post using the year in which it was posted, the month, and it's title, but not the day. Thanks in advance for any help!
|
How to remove Create and Edit... from many2one field.?
| 44,493,866 | 2 | 10 | 19,810 | 0 |
python,xml,one-to-many,openerp,many-to-one
|
In the XML file:
Please add options="{'no_create': True}" to your field which will remove the create button
| 0 | 0 | 0 | 0 |
2013-03-26T05:33:00.000
| 10 | 0.039979 | false | 15,630,054 | 0 | 0 | 1 | 2 |
Please advise me: how can I remove "Create and Edit..." from a many2one field?
That item shows below in the many2one fields which I filtered with the domain option.
OpenERP version 7
|
How to remove Create and Edit... from many2one field.?
| 15,630,138 | 18 | 10 | 19,810 | 0 |
python,xml,one-to-many,openerp,many-to-one
|
I don't have much idea; maybe for that you would have to make changes in the web addons.
But an alternative solution is that you can render that many2one field as a selection. Add the widget="selection" attribute in your XML:
<field name="Your_many2one_field" widget="selection">
| 0 | 0 | 0 | 0 |
2013-03-26T05:33:00.000
| 10 | 1.2 | true | 15,630,054 | 0 | 0 | 1 | 2 |
Please advise me: how can I remove "Create and Edit..." from a many2one field?
That item shows below in the many2one fields which I filtered with the domain option.
OpenERP version 7
|
GAE: Data is lost after dev server restart
| 15,641,028 | 2 | 0 | 322 | 0 |
python,google-app-engine,google-cloud-datastore
|
This is answered, but to explain a little further: the local datastore, by default, writes to the temporary file system on your computer. The temporary file is emptied any time you restart the computer, hence your datastore is emptied too. If you don't restart your computer, your datastore should remain. To keep data across reboots, you can point the dev server at a persistent location (e.g. via its --datastore_path flag).
| 0 | 1 | 0 | 0 |
2013-03-26T11:27:00.000
| 2 | 0.197375 | false | 15,635,888 | 0 | 0 | 1 | 1 |
I'm running App Engine with Python 2.7 on OS X. Once I stop the development server all data in the data store is lost. Same thing happens when I try to deploy my app. What might cause this behaviour and how to fix it?
|
Can I capture which templates have been used/not used during an XSL transformation?
| 16,332,834 | 0 | 0 | 36 | 0 |
python,xslt,lxml,libxslt
|
lxml's transform methods allow you to profile a transformation, and obtain the results as an XML document which shows how many times a pattern/mode/named-template was used. It should be possible to then perform an XPath across the XSL files to obtain all the comparative patterns/modes/named-templates and compare the two lists to see which templates are most/least used.
| 0 | 0 | 0 | 0 |
2013-03-26T13:58:00.000
| 1 | 0 | false | 15,638,937 | 0 | 0 | 1 | 1 |
Is it possible to log/capture which XSL templates are used and/or not used during an XML transform using lxml? I'm looking to report on and prune unused templates to reduce "technical debt".
|
Django Not Reflecting Updates to Javascript Files?
| 15,641,548 | 34 | 21 | 17,601 | 0 |
python,django
|
I believe your browser is caching your JS.
You could force-refresh your browser, or clear the browser cache.
On Chrome it is Ctrl+F5 or Shift+F5;
I believe on Firefox it is Ctrl+Shift+R.
| 0 | 0 | 0 | 0 |
2013-03-26T15:51:00.000
| 3 | 1.2 | true | 15,641,474 | 0 | 0 | 1 | 2 |
I have javascript files in my static folder. Django finds and loads them perfectly fine, so I don't think there is anything wrong with my configuration of the static options. However, sometimes when I make a change to a .js file and save it, the Django template that uses it does NOT reflect those changes -- inspecting the javascript with the browser reveals the javascript BEFORE the last save. Restarting the server does nothing, though restarting my computer has sometimes solved the issue. I do not have any code that explicitly deals with caching. Has anyone ever experienced anything like this?
|
Django Not Reflecting Updates to Javascript Files?
| 67,354,933 | 0 | 21 | 17,601 | 0 |
python,django
|
For me, opening Incognito Mode in Chrome let the browser show the recent changes in my .js static files.
| 0 | 0 | 0 | 0 |
2013-03-26T15:51:00.000
| 3 | 0 | false | 15,641,474 | 0 | 0 | 1 | 2 |
I have javascript files in my static folder. Django finds and loads them perfectly fine, so I don't think there is anything wrong with my configuration of the static options. However, sometimes when I make a change to a .js file and save it, the Django template that uses it does NOT reflect those changes -- inspecting the javascript with the browser reveals the javascript BEFORE the last save. Restarting the server does nothing, though restarting my computer has sometimes solved the issue. I do not have any code that explicitly deals with caching. Has anyone ever experienced anything like this?
|
Admin.sites.url password transmission
| 15,641,560 | 1 | 1 | 43 | 0 |
python,django
|
It is sent in the clear, then the server hashes it. You would need to use https to prevent eavesdropping.
| 0 | 0 | 0 | 0 |
2013-03-26T15:53:00.000
| 1 | 1.2 | true | 15,641,529 | 0 | 0 | 1 | 1 |
When you login to django, does the password get hashed and then transmitted or is it transmitted in the clear and the server does the hashing?
This is within the context of not using https.
|
How to access the file shared between different accessors?
| 15,642,897 | 2 | 1 | 53 | 0 |
java,python,concurrency
|
Your best bet might be to ditch the use of a file and use sockets. The Java program generates and caches the output until a Python script is listening. The Python script then accepts the data, and handles it.
Alternatively, you could use IPC signalling between the two processes, although this seems a lot more messy than sockets, IMHO.
Otherwise, a .lock file seems like your best bet.
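A minimal loopback sketch of the socket approach, with one thread standing in for the Java producer and the main thread for the Python consumer (in reality the two programs would agree on a fixed port instead of an OS-assigned one, and the record format is illustrative):

```python
import socket
import threading

def serve(server):
    """Producer side: accept one consumer and stream the cached records."""
    conn, _ = server.accept()
    with conn:
        conn.sendall(b"record-1\nrecord-2\n")

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve, args=(server,))
t.start()

# Consumer side: connect, read until the producer closes the connection.
data = b""
client = socket.create_connection(("127.0.0.1", port))
while True:
    chunk = client.recv(4096)
    if not chunk:  # EOF: transfer complete
        break
    data += chunk
client.close()
t.join()
server.close()

records = data.decode().splitlines()
```

Because each record is handed over exactly once on the wire, the file-locking problem disappears entirely.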
| 0 | 0 | 0 | 0 |
2013-03-26T16:43:00.000
| 3 | 0.132549 | false | 15,642,665 | 1 | 0 | 1 | 3 |
I have code written in Java which writes all data to a file, and a Python script which handles this data.
They run completely separately; the Python script can be run on a schedule, but it also removes handled records from the file.
The question is how to implement access to the file when the Java code in the first process tries to write something while the Python code in the second process tries to remove a handled record.
My first thought was to have a .lock file physically created when one of the processes is updating the file, but perhaps there are other solutions to consider?
Thank you.
|
How to access the file shared between different accessors?
| 15,642,755 | 0 | 1 | 53 | 0 |
java,python,concurrency
|
Make sure that both the Java and Python methods close the file when they are done.
One possibility is to convert your Python script to Jython. If both processes are running in the JVM then you should be able to use standard Java concurrency techniques to make sure you do not have both threads modifying the file simultaneously.
| 0 | 0 | 0 | 0 |
2013-03-26T16:43:00.000
| 3 | 0 | false | 15,642,665 | 1 | 0 | 1 | 3 |
I have code written in Java which writes all data to a file, and a Python script which handles this data.
They run completely separately; the Python script can be run on a schedule, but it also removes handled records from the file.
The question is how to implement access to the file when the Java code in the first process tries to write something while the Python code in the second process tries to remove a handled record.
My first thought was to have a .lock file physically created when one of the processes is updating the file, but perhaps there are other solutions to consider?
Thank you.
|
How to access the file shared between different accessors?
| 15,642,744 | 0 | 1 | 53 | 0 |
java,python,concurrency
|
One mechanism would be to have the producer roll the file to a new name (maybe with HHMMSS suffix) every so often, and have the consumer only process the file once it has been rolled to the new name. Maybe every 5 minutes?
Another mechanism would be to have the consumer roll the file itself and have the producer notice that the file has rolled and to re-open the original file name. So the consumer is always consuming from output.consume and the producer is always writing to output or something.
Every time a line is written to the file, the producer makes sure that output exists.
When a consumer is ready to read the file, he renames output to output.consume or something.
The producer notices that the file output no longer exists so he reopens it for output.
Once the output file is re-created, the consumer can process the output.consume file.
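The rename-based handoff above can be sketched like so (a temp directory stands in for the real location, and the file names follow the output / output.consume convention described):

```python
import os
import tempfile

workdir = tempfile.mkdtemp()
out_path = os.path.join(workdir, "output")
consume_path = os.path.join(workdir, "output.consume")

# Producer: append a record, creating `output` if it is missing.
with open(out_path, "a") as f:
    f.write("record-1\n")

# Consumer: claim the file atomically by renaming it, then process at leisure.
os.rename(out_path, consume_path)
with open(consume_path) as f:
    records = f.read().splitlines()

# Producer: notices `output` is gone and simply re-creates it on the next write.
with open(out_path, "a") as f:
    f.write("record-2\n")
```

The rename is atomic on the same filesystem, so the producer never sees a half-consumed file.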
| 0 | 0 | 0 | 0 |
2013-03-26T16:43:00.000
| 3 | 0 | false | 15,642,665 | 1 | 0 | 1 | 3 |
I have code written in Java which writes all data to a file, and a Python script which handles this data.
They run completely separately; the Python script can be run on a schedule, but it also removes handled records from the file.
The question is how to implement access to the file when the Java code in the first process tries to write something while the Python code in the second process tries to remove a handled record.
My first thought was to have a .lock file physically created when one of the processes is updating the file, but perhaps there are other solutions to consider?
Thank you.
|
How do I install Mezzanine as a Django app?
| 27,972,009 | 13 | 16 | 4,399 | 0 |
python,django,mezzanine
|
If you are like me, you will find that the FAQ is sorely lacking in its description of how to get Mezzanine working as an app. So here is what I did (after a painful half day of hacking) to get it integrated (somewhat):
Download the repo and copy it into your project
Run setup.py for the package
cd to the package and run the mezzanine command to create a new app (mezzanine-project <project_name>); let's say you use the name blog as your <project_name>.
In either the local_settings.py or settings.py file, set the DATABASES dict to use your project's database.
Run the createdb command from the mezzanine manage.py file
Now it's time to start the hack-fest:
In your project's settings.py file, add blog to INSTALLED_APPS
Add some configuration variables to settings.py that Mezzanine is expecting:
PACKAGE_NAME_FILEBROWSER = "filebrowser_safe"
PACKAGE_NAME_GRAPPELLI = "grappelli_safe"
GRAPPELLI_INSTALLED = False
ADMIN_REMOVAL = []
RATINGS_RANGE = range(1, 5)
TESTING = False
BLOG_SLUG = ''
COMMENTS_UNAPPROVED_VISIBLE = True
COMMENTS_REMOVED_VISIBLE = False
COMMENTS_DEFAULT_APPROVED = True
COMMENTS_NOTIFICATION_EMAILS = ",".join(ALL_EMAILS)
COMMENT_FILTER = None
Add some middleware that Mezzanine is expecting:
...
"mezzanine.core.request.CurrentRequestMiddleware",
"mezzanine.core.middleware.RedirectFallbackMiddleware",
"mezzanine.core.middleware.TemplateForDeviceMiddleware",
"mezzanine.core.middleware.TemplateForHostMiddleware",
"mezzanine.core.middleware.AdminLoginInterfaceSelectorMiddleware",
"mezzanine.core.middleware.SitePermissionMiddleware",
# Uncomment the following if using any of the SSL settings:
# "mezzanine.core.middleware.SSLRedirectMiddleware",
"mezzanine.pages.middleware.PageMiddleware",
....
Add some INSTALLED_APPS that Mezzanine is expecting:
....
"mezzanine.boot",
"mezzanine.conf",
"mezzanine.core",
"mezzanine.generic",
"mezzanine.blog",
"mezzanine.forms",
"mezzanine.pages",
"mezzanine.galleries",
"mezzanine.twitter",
....
Add references to the template folders of mezzanine to your TEMPLATE_DIRS tuple
os.path.join(BASE_PARENT, '<path to mezzanine>/mezzanine/mezzanine'),
os.path.join(BASE_PARENT, '<path to mezzanine>/mezzanine/mezzanine/blog/templates'),
Finally, if you're like me, you'll have to override some of the extends paths in the Mezzanine templates, the most obvious being in "blog_post_list.html", which just extends base.html; instead you want it to extend the Mezzanine-specific base file. So go to that file and replace the {% extends "base.html" %} with {% extends "core/templates/base.html" %}.
| 0 | 0 | 0 | 0 |
2013-03-27T19:19:00.000
| 2 | 1 | false | 15,667,578 | 0 | 0 | 1 | 1 |
I already have an existing Django website. I have added a new url route '/blog/' where I would like to have a Mezzanine blog. If it possible to installed Mezzanine as an app in an existing Django site as opposed to a standalone blog application.
|
Datastore vs spreadsheet for provisioning Google apps
| 15,671,792 | 0 | 0 | 248 | 1 |
python,google-app-engine,google-sheets,google-cloud-datastore
|
If you use the Datastore API, you will also need to build out a way to manage users' data in the system.
If you use Spreadsheets, they will serve as your way to manage users' data, so in that way managing the data would be taken care of for you.
The benefit of using the Datastore API would be a seamless integration of user-data management into your application; Spreadsheet integration would remain separate from your main application.
| 0 | 1 | 0 | 0 |
2013-03-27T23:37:00.000
| 1 | 0 | false | 15,671,591 | 0 | 0 | 1 | 1 |
In my company we want to build an application in Google App Engine which will manage user provisioning to Google Apps, but we do not really know what data source to use.
We made two propositions:
a spreadsheet which will contain users' data; we will use the Spreadsheet API to get this data and use it for user provisioning
the Datastore, which will also contain users' data; this time we will use the Datastore API.
Please note that my company has 3493 users and we do not know many of the advantages and disadvantages of each solution.
Any suggestions please?
|
ubuntu django run managements much faster( i tried renice by setting -18 priority to python process pid)
| 20,323,986 | 0 | 0 | 171 | 0 |
python,django,ubuntu
|
Just in case, did you run the command renice -20 -p {pid} instead of renice --20 -p {pid}? In the first case it will be given the lowest priority.
| 0 | 0 | 0 | 0 |
2013-03-28T09:22:00.000
| 2 | 0 | false | 15,678,119 | 0 | 0 | 1 | 1 |
I am using Ubuntu. I have some management commands which, when run, do lots of database manipulations, so they take nearly 15 min.
My system monitor shows that my system has 4 CPUs and 6 GB RAM. But this process is not utilising all the CPUs. I think it is using only one of the CPUs, and very little RAM too. I think that if I am able to make it use all the CPUs and most of the RAM, the process will be completed in much less time.
I tried renice, by setting priority to -18 (meaning very high), but the speed is still low.
Details:
it's a Python script with a loop count of nearly 10,000, and nearly ten such loops. In every loop, it saves to a Postgres database.
|
Making a web app which allows the user to view the server desktop
| 15,681,587 | 0 | 0 | 54 | 0 |
java,python,web-applications
|
To allow the user to interact with the desktop in real time, you need to run the application in the user's web browser. Interaction with a webserver would just be too slow to do anything meaningful. I do not know of any way to execute Python in a web browser, so I would rule it out. Some of your options for client-side code execution are:
Javascript (the recent addition of Canvas and WebSocket made it suitable for this kind of problem)
Java Applets (felt out of favor recently due to security problems)
ActiveX (IE- and Windows only, very rarely used in a public context nowadays)
Flash (a popular but dying technology)
| 0 | 0 | 0 | 0 |
2013-03-28T12:01:00.000
| 1 | 1.2 | true | 15,681,266 | 0 | 0 | 1 | 1 |
I did a pretty fair bit of scouring, yet could not find anything useful which answers my questions. Either that or I am asking the wrong questions.
I am trying to make a web application which gives a user a graphical view of the server desktop. I have understood that somewhere in here X engine has to be invoked and I have also understood that this is not something that php can accomplish primarily because its a language which processes before sending requests, please correct me if I am wrong in this regard.
You may say that what I am trying to accomplish is something akin to what teamviewer does only on the web. My dilemma is whether I should be using python or java for this task, both would be pretty apt for the task, but which one would be better?
Please give your suggestions
|
API Key for GCM from GAE
| 17,506,596 | 4 | 4 | 907 | 0 |
google-app-engine,python-2.7,google-cloud-messaging
|
You can check the IP easily by doing a ping from the command line to the domain name, as in "ping appspot.com". With this you will obtain the response from the real IP. Unfortunately this IP will change over time and won't make your GCM service work.
In order to make it work you only need to leave the allowed IPs field blank.
| 0 | 1 | 1 | 0 |
2013-03-28T16:13:00.000
| 2 | 1.2 | true | 15,686,853 | 0 | 0 | 1 | 1 |
I have implemented GCM using my own sever. Now I'm trying to do the same using Python 2.7 in Google App Engine. How can I get the IP address for the server hosting my app? (I need it for API Key). Is IP-LookUp only option? And if I do so will the IP address remain constant?
|
Template to forms
| 15,704,671 | 1 | 2 | 83 | 0 |
python,forms,parsing,templates,jinja2
|
I think Jinja makes sense for building this, in particular because it contains a full-on lexer and parser. You can leverage those to derive your own versions of this that do what you need.
| 0 | 0 | 0 | 0 |
2013-03-29T00:09:00.000
| 1 | 1.2 | true | 15,694,341 | 0 | 0 | 1 | 1 |
I'd like to do somehow the contrary to what a template is used for: I want to write templates and programmatically derive a representation of the different tags and placeholders present in the template, to ultimately generate a form.
To put it another way, when you usually have the data and populate the template with it, I want to have the template and ask the user the right data to fill it.
Example (with pseudo-syntax): Hello {{ name_of_entity only-in ['World', 'Universe', 'Stackoverflow'] }}!
With that I could programatically derive that I should generate a form with a select tag named 'name_of_entity' and having 3 options ('World', 'Universe', 'Stackoverflow').
I looked into Jinja2, and it seems I can reach my goal using it and extending it (even if it's made to do things the other way). But I am still unsure how I should do in some cases, eg.:
if I want to represent that {{ weekday }} has values only in ['Mo', 'Tu', ...]
if I want to represent in the template that the {{ amount }} variable is accepting only integers...
Is Jinja a good base to reach these goals? If yes, how would you recommend to do that?
|
Noob questions about upload & security
| 15,735,772 | 0 | 0 | 83 | 0 |
python,google-app-engine
|
Since you have app in C:\myap you need to run appcfg.py update C:\myap. It's just a path to you app on your machine.
In windows command line. For example, "C:\Program Files (x86)\Google\google_appengine\appcfg.py" update C:\myap
No, appcfg uses SSL while uploading. It's safe.
If you mean to call application uploading - it's not really safe. I don't know why you need this. You can add app developers in App Engine admin console, so they will be able to deploy application from their accounts.
| 0 | 1 | 0 | 0 |
2013-03-29T14:05:00.000
| 1 | 0 | false | 15,704,873 | 0 | 0 | 1 | 1 |
I have the myapp.py and app.yaml in my windows C:\myap directory. The docs say to use:
appcfg.py update myapp/
to upload the app.
I've downloaded/installed Python and the Google python kit.
Sorry, for these noobish questions, but:
Does the myapp/ listed above refer to C:\myapp on my Windows machine? Or is it the name of my app on the Google side?
How/where do I type the appcfg.py to upload my directory?
Are there any security issues associated with using my gmail account and email address?
I'd like anybody from Second Life to be able to call this from in-world. There will be about a dozen calls a week. Are they going to have to authenticate with my email/password to use it?
Thanks for any help you can provide!
|
Why do Python MVC web frameworks use views.py to contain route functions?
| 15,712,144 | 2 | 1 | 583 | 0 |
python,model-view-controller,web-frameworks
|
A view, from Django's perspective, is what content is presented on a page; the template is how it is presented.
A Django view is not exactly a controller equivalent. In some of those other frameworks, the controller decides how a function gets called; in Django, that is part of the framework itself.
Technically, there is nothing preventing you from renaming your views to controllers. The URL routing scheme takes either the function or the string path to the function, so as long as you can send the appropriate string (or the function itself), you can call your view whatever you want. However, for the reason stated in the paragraph above, and to meet the expectations of other people who work with Django, you should not really have files called controller.py.
It's just a matter of getting used to it. Hang in there for a bit.
| 0 | 0 | 0 | 0 |
2013-03-29T20:51:00.000
| 1 | 1.2 | true | 15,711,233 | 0 | 0 | 1 | 1 |
I've developed many applications using the MVC pattern in Zend and Symfony. Now that I'm in Pythonland, I find that many frameworks such as Flask, Django and Pyramid use a file called views.py to contain functions which implement the routes. But, these "views" are really controllers in other MVC frameworks I've used before. Why are they called views in Python web frameworks? And, can I change them to controller.py without tearing a hole in the Python universe?
|
Trigger Django module on Database update
| 15,712,350 | 1 | 1 | 2,861 | 0 |
python,database,django,sqlite,triggers
|
A better way would be to have the application that modifies the records call yours, or at least add a Celery queue entry, so that you don't have to query the database too often to see if something changed.
But if that is not an option, letting Celery query the database to find out whether something changed is probably the next best option (surely better than the other possible option of calling a web service from the database as a trigger, which you should really avoid).
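As a sketch of the polling approach with an assumed schema (an events table with an auto-incrementing id; names are illustrative), a Celery task could run a body like this on a schedule, remembering the highest rowid it has already handled:

```python
import sqlite3

# In-memory stand-in for the real SQLite file written by the other application.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("INSERT INTO events (payload) VALUES ('first')")
conn.commit()

def poll_new_rows(conn, last_seen):
    """Return rows inserted after `last_seen` plus the new high-water mark."""
    rows = conn.execute(
        "SELECT id, payload FROM events WHERE id > ? ORDER BY id", (last_seen,)
    ).fetchall()
    return rows, (rows[-1][0] if rows else last_seen)
```

The high-water mark returned by each poll is persisted and passed to the next one, so each inserted row is handled exactly once.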
| 0 | 0 | 0 | 0 |
2013-03-29T21:28:00.000
| 2 | 0.099668 | false | 15,711,677 | 0 | 0 | 1 | 1 |
I want to develop an application that monitors the database for new records and allows me to execute a method in the context of my Django application when a new record is inserted.
I am planning to use an approach where a Celery task checks the database for changes since the last check and triggers the above method.
Is there a better way to achieve this?
I'm using SQLite as the backend and tried apsw's setupdatehook API, but it doesn't seem to run my module in Django context.
NOTE: The updates are made by a different application outside Django.
|
Python CGI - Script outputs source of generated page
| 15,726,928 | 1 | 0 | 426 | 0 |
python,html,apache,cgi
|
The default Content-Type is text, and if you forget to send the appropriate header in your CGI file, you will end up with what you are seeing.
| 0 | 0 | 0 | 1 |
2013-03-31T06:01:00.000
| 2 | 1.2 | true | 15,726,843 | 0 | 0 | 1 | 2 |
I have a CGI script that I wrote in Python to use as the home page of the website I am creating. Everything works properly, except that when you view the page, instead of seeing the page that it outputs you see its source code. Why is this? I don't mean that it shows me the source code of the .py file; it shows me all the printed information, as if I were looking at a .htm file in Notepad.
|
Python CGI - Script outputs source of generated page
| 15,726,936 | 2 | 0 | 426 | 0 |
python,html,apache,cgi
|
Add the following before you print anything else, followed by a blank line:
print "Content-type: text/html"
Probably your script is not getting executed.
Is your python script executable?
Check whether you have the script under cgi-bin directory.
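A minimal sketch of what a well-formed CGI response looks like; the key detail is the blank line separating the headers from the body, without which the browser falls back to plain text and shows your markup as source:

```python
# Build the response as: header block, blank line, then the HTML body.
header = "Content-Type: text/html"
body = "<html><body><h1>Hello from CGI</h1></body></html>"
response = header + "\r\n\r\n" + body
print(response)
```

In a real CGI script you would simply print the header line, then an empty line, then the page.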
| 0 | 0 | 0 | 1 |
2013-03-31T06:01:00.000
| 2 | 0.197375 | false | 15,726,843 | 0 | 0 | 1 | 2 |
I have a CGI script that I wrote in Python to use as the home page of the website I am creating. Everything works properly, except that when you view the page, instead of seeing the page that it outputs you see its source code. Why is this? I don't mean that it shows me the source code of the .py file; it shows me all the printed information, as if I were looking at a .htm file in Notepad.
|