Dataset columns (name, dtype, min, max):

    Column                             Dtype          Min        Max
    Title                              stringlengths  11         150
    A_Id                               int64          518        72.5M
    Users Score                        int64          -42        283
    Q_Score                            int64          0          1.39k
    ViewCount                          int64          17         1.71M
    Database and SQL                   int64          0          1
    Tags                               stringlengths  6          105
    Answer                             stringlengths  14         4.78k
    GUI and Desktop Applications       int64          0          1
    System Administration and DevOps   int64          0          1
    Networking and APIs                int64          0          1
    Other                              int64          0          1
    CreationDate                       stringlengths  23         23
    AnswerCount                        int64          1          55
    Score                              float64        -1         1.2
    is_accepted                        bool           2 classes
    Q_Id                               int64          469        42.4M
    Python Basics and Environment      int64          0          1
    Data Science and Machine Learning  int64          0          1
    Web Development                    int64          1          1
    Available Count                    int64          1          15
    Question                           stringlengths  17         21k

Title: Issue with Sites system in Django
Tags: python,django,django-views,django-sites
A_Id: 23,903,821 | Users Score: 0 | Q_Score: 0 | ViewCount: 58
Answer:
Add SITE_ID = 1 to your settings.py and run python manage.py syncdb to create the corresponding tables if they do not exist. Then you can log into your admin site and click Sites to change the default example.com to your own domain. The site record is used when you edit an object: a "View on site" button is shown if you have defined a get_absolute_url function in your models.py.
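For illustration, a minimal sketch of the two changes this answer describes; the Article model and its URL scheme are made up, not from the question:

    # settings.py
    SITE_ID = 1  # id of the row in the django_site table to use

    # models.py
    from django.db import models

    class Article(models.Model):          # hypothetical model
        slug = models.SlugField()

        def get_absolute_url(self):
            # enables the "View on site" button in the admin
            return "/articles/%s/" % self.slug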
Topics: Web Development
CreationDate: 2014-05-27T23:55:00.000 | AnswerCount: 2 | Score: 0 | is_accepted: false
Q_Id: 23,900,813 | Available Count: 1
Question:
I have an issue with the Sites system in Django. When running Site.objects.get_current() on some pages on the initial load of the development server, I get an exception saying Site matching query does not exist. However, if I load the main page of the site, it loads just fine, and then I can go back to any of the pages that raised the exception and they load fine as well. Has anyone come across this issue before? Thanks, Nick

Title: Python wsgi OSError: [Errno 10] No child process
Tags: python,linux,mod-wsgi,wsgi
A_Id: 23,920,537 | Users Score: 1 | Q_Score: 0 | ViewCount: 1,426
Answer:
OSError: [Errno 10] No child processes can mean the program ran but took too much memory and died. Starting jobs within Apache is fine. Running as root is a bit sketchy, but isn't that big of a deal. Note that the root account's setup, like PATH, might be different from your account's; this would explain why it runs from the shell but not from Apache. In your program, log the current directory. If the script requires a certain module in a certain location, that could cause weird problems. Also, root tends not to have the current directory (i.e. ".") on sys.path.
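A possible sketch of the logging step suggested here, dropped near the top of the WSGI script; the log path is arbitrary:

    import os, sys, logging

    logging.basicConfig(filename='/tmp/wsgi-debug.log', level=logging.INFO)
    logging.info("cwd=%s", os.getcwd())
    logging.info("PATH=%s", os.environ.get('PATH'))
    logging.info("sys.path=%s", sys.path)

Comparing this output under Apache with the same output from a root shell should expose any PATH or working-directory differences.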
Topics: System Administration and DevOps, Web Development
CreationDate: 2014-05-28T13:56:00.000 | AnswerCount: 1 | Score: 0.197375 | is_accepted: false
Q_Id: 23,913,689 | Available Count: 1
Question:
I have a Python WSGI script that is attempting to make a call to generate an openssl script. Using subprocess.check_call(args), the process throws an OSError: [Errno 10] No child processes. The owner of the openssl bin is root:root; could this be the problem? Or does Apache not allow child processes? Using just subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE) seems to work fine; I just want to wait and make sure the process finishes before moving on. communicate() and wait() both fail with the same error. Running it outside of WSGI, the code works fine. This is Python 2.6, by the way.

Title: When do I need to restart database server in Django?
Tags: python,mysql,django,postgresql
A_Id: 23,920,627 | Users Score: -1 | Q_Score: 0 | ViewCount: 237
Answer:
Usually when the settings that are controlling the application are changed then the server has to be restarted.
Topics: Database and SQL, Web Development
CreationDate: 2014-05-28T19:41:00.000 | AnswerCount: 3 | Score: -0.066568 | is_accepted: false
Q_Id: 23,920,481 | Available Count: 3
Question:
I was wondering in which cases I need to restart the database server for a Django app in production. Whether it is Postgres or MySQL, do we need to restart the database server at all? If we do, when and why? Any explanation would be really helpful. Cheers!

Title: When do I need to restart database server in Django?
Tags: python,mysql,django,postgresql
A_Id: 23,920,963 | Users Score: 2 | Q_Score: 0 | ViewCount: 237
Answer:
You will not NEED to restart your database in production due to anything you've done in Django. You may need to restart it to change your database security or configuration settings, but that has nothing to do with Django and in a lot of cases doesn't even need a restart.
Topics: Database and SQL, Web Development
CreationDate: 2014-05-28T19:41:00.000 | AnswerCount: 3 | Score: 0.132549 | is_accepted: false
Q_Id: 23,920,481 | Available Count: 3
Question:
I was wondering in which cases I need to restart the database server for a Django app in production. Whether it is Postgres or MySQL, do we need to restart the database server at all? If we do, when and why? Any explanation would be really helpful. Cheers!

Title: When do I need to restart database server in Django?
Tags: python,mysql,django,postgresql
A_Id: 23,920,777 | Users Score: 1 | Q_Score: 0 | ViewCount: 237
Answer:
You shouldn't really ever need to restart the database server. You probably do need to restart - or at least reload - the web server whenever any of the code changes. But the db is a separate process, and shouldn't need to be restarted.
Topics: Database and SQL, Web Development
CreationDate: 2014-05-28T19:41:00.000 | AnswerCount: 3 | Score: 1.2 | is_accepted: true
Q_Id: 23,920,481 | Available Count: 3
Question:
I was wondering in which cases I need to restart the database server for a Django app in production. Whether it is Postgres or MySQL, do we need to restart the database server at all? If we do, when and why? Any explanation would be really helpful. Cheers!

Title: Web scraping without knowledge of page structure
Tags: python,web-scraping,beautifulsoup,web-crawler
A_Id: 23,922,228 | Users Score: 2 | Q_Score: 8 | ViewCount: 3,298
Answer:
You're basically asking "how do I write a search engine." This is... not trivial. The right way to do this is to just use Google's (or Bing's, or Yahoo!'s, or...) search API and show the top n results. But if you're just working on a personal project to teach yourself some concepts (not sure which ones those would be, exactly), then here are a few suggestions:
- Search the text content of the appropriate tags (<p>, <div>, and so forth) for relevant keywords (duh).
- Use the relevant keywords to check for the presence of tags that might contain what you're looking for. For example, if you're looking for a list of things, then a page containing <ul> or <ol> or even <table> might be a good candidate.
- Build a synonym dictionary and search each page for synonyms of your keywords too. Limiting yourself to "US" might mean an artificially low ranking for a page containing just "America".
- Keep a list of words which are not in your set of keywords and give a higher ranking to pages which contain the most of them. These pages are (arguably) more likely to contain the answer you're looking for.
Good luck (you'll need it)!
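A rough sketch of the keyword-scoring idea above, assuming BeautifulSoup is available; the weights are arbitrary illustrations, not a tested ranking function:

    from bs4 import BeautifulSoup

    def relevance(html, keywords):
        soup = BeautifulSoup(html)
        text = soup.get_text().lower()
        # count keyword occurrences in the visible text
        score = sum(text.count(word.lower()) for word in keywords)
        # bonus for list-like tags that might hold the kind of answer sought
        if soup.find(['ul', 'ol', 'table']):
            score += 5
        return score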
Topics: Networking and APIs, Web Development
CreationDate: 2014-05-28T21:13:00.000 | AnswerCount: 2 | Score: 0.197375 | is_accepted: false
Q_Id: 23,921,986 | Available Count: 1
Question:
I'm trying to teach myself a concept by writing a script. Basically, I'm trying to write a Python script that, given a few keywords, will crawl web pages until it finds the data I need. For example, say I want to find a list of venomous snakes that live in the US. I might run my script with the keywords list,venomous,snakes,US, and I want to be able to trust with at least 80% certainty that it will return a list of snakes in the US. I already know how to implement the web spider part; I just want to learn how I can determine a web page's relevancy without knowing a single thing about the page's structure. I have researched web scraping techniques, but they all seem to assume knowledge of the page's HTML tag structure. Is there a certain algorithm out there that would allow me to pull data from the page and determine its relevancy? Any pointers would be greatly appreciated. I am using Python with urllib and BeautifulSoup.

Title: Django: relation "django_site" does not exist
Tags: python,django
A_Id: 25,240,502 | Users Score: 11 | Q_Score: 35 | ViewCount: 30,101
Answer:
You may be calling a Site object before the Site model's table has been created (i.e. before syncdb or migrate), e.g. site = Site.objects.get(id=settings.SITE_ID)
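A sketch of that failure mode and the usual workaround: resolve the Site lazily, inside a function, instead of at import time:

    from django.conf import settings
    from django.contrib.sites.models import Site

    # Problematic: runs at import time, possibly before the django_site
    # table exists:
    # SITE = Site.objects.get(id=settings.SITE_ID)

    def current_site():
        # Runs only when called, after syncdb/migrate has created the table
        return Site.objects.get(id=settings.SITE_ID)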
Topics: Web Development
CreationDate: 2014-05-29T04:31:00.000 | AnswerCount: 12 | Score: 1 | is_accepted: false
Q_Id: 23,925,726 | Available Count: 5
Question:
I am running a test Django server on AWS and I just installed django-userena. When I try to sign up a user and click submit, I get the following message: relation "django_site" does not exist LINE 1: ..."django_site"."domain", "django_site"."name" FROM "django_si... I am not really sure what went wrong here. I did some research and added 'django.contrib.sites' to my installed apps, but I am still getting the error. I think there is an extra step I am missing. Any suggestions or advice?

Title: Django: relation "django_site" does not exist
Tags: python,django
A_Id: 62,523,888 | Users Score: 1 | Q_Score: 35 | ViewCount: 30,101
Answer:
If you are getting this error when deploying your Django app to Heroku, make sure you have run: heroku run python manage.py migrate. This worked for me.
Topics: Web Development
CreationDate: 2014-05-29T04:31:00.000 | AnswerCount: 12 | Score: 0.016665 | is_accepted: false
Q_Id: 23,925,726 | Available Count: 5
Question:
I am running a test Django server on AWS and I just installed django-userena. When I try to sign up a user and click submit, I get the following message: relation "django_site" does not exist LINE 1: ..."django_site"."domain", "django_site"."name" FROM "django_si... I am not really sure what went wrong here. I did some research and added 'django.contrib.sites' to my installed apps, but I am still getting the error. I think there is an extra step I am missing. Any suggestions or advice?

Title: Django: relation "django_site" does not exist
Tags: python,django
A_Id: 70,516,679 | Users Score: -1 | Q_Score: 35 | ViewCount: 30,101
Answer:
I just restarted my computer and the problem disappeared :) (restarting docker-compose was not enough).
Topics: Web Development
CreationDate: 2014-05-29T04:31:00.000 | AnswerCount: 12 | Score: -0.016665 | is_accepted: false
Q_Id: 23,925,726 | Available Count: 5
Question:
I am running a test Django server on AWS and I just installed django-userena. When I try to sign up a user and click submit, I get the following message: relation "django_site" does not exist LINE 1: ..."django_site"."domain", "django_site"."name" FROM "django_si... I am not really sure what went wrong here. I did some research and added 'django.contrib.sites' to my installed apps, but I am still getting the error. I think there is an extra step I am missing. Any suggestions or advice?

Title: Django: relation "django_site" does not exist
Tags: python,django
A_Id: 34,530,826 | Users Score: 1 | Q_Score: 35 | ViewCount: 30,101
Answer:
Horrible code led to this error for me. I had a global variable to get the current site, SITE = Site.objects.get(pk=1); it was evaluated during migration and led to the error.
Topics: Web Development
CreationDate: 2014-05-29T04:31:00.000 | AnswerCount: 12 | Score: 0.016665 | is_accepted: false
Q_Id: 23,925,726 | Available Count: 5
Question:
I am running a test Django server on AWS and I just installed django-userena. When I try to sign up a user and click submit, I get the following message: relation "django_site" does not exist LINE 1: ..."django_site"."domain", "django_site"."name" FROM "django_si... I am not really sure what went wrong here. I did some research and added 'django.contrib.sites' to my installed apps, but I am still getting the error. I think there is an extra step I am missing. Any suggestions or advice?

Title: Django: relation "django_site" does not exist
Tags: python,django
A_Id: 60,034,840 | Users Score: 1 | Q_Score: 35 | ViewCount: 30,101
Answer:
Going to leave this here for future me: python manage.py makemigrations allauth. This worked for me; I forgot why, and it took me too long to figure out how I fixed this the first time. Edit: makemigrations sometimes doesn't cover third-party apps like allauth, which some of my projects use, so I have to specify those ones explicitly.
Topics: Web Development
CreationDate: 2014-05-29T04:31:00.000 | AnswerCount: 12 | Score: 0.016665 | is_accepted: false
Q_Id: 23,925,726 | Available Count: 5
Question:
I am running a test Django server on AWS and I just installed django-userena. When I try to sign up a user and click submit, I get the following message: relation "django_site" does not exist LINE 1: ..."django_site"."domain", "django_site"."name" FROM "django_si... I am not really sure what went wrong here. I did some research and added 'django.contrib.sites' to my installed apps, but I am still getting the error. I think there is an extra step I am missing. Any suggestions or advice?

Title: Jython throws java.lang.ExceptionInInitializerError intermittently
Tags: python,jython
A_Id: 23,934,381 | Users Score: 0 | Q_Score: 0 | ViewCount: 913
Answer:
The first thing to do is to read the exception. About seven lines in, your exception says "Caused by: java.lang.NullPointerException". I would focus on that. Where is the null coming from? Also note that your stack trace is missing some lines at the end, where it says "... 7 more". This makes it hard to read the exception, because we don't know what the missing lines say. See if you can find a way to show the missing lines, in case they are helpful.
Topics: Web Development
CreationDate: 2014-05-29T11:12:00.000 | AnswerCount: 2 | Score: 0 | is_accepted: false
Q_Id: 23,932,002 | Available Count: 1
Question:
I am a Java person not well versed in Jython or Python, so pardon my ignorance if this is a basic question. I am using Jython 2.5, Python 2.5 and JRE 1.7. Intermittently, the Jython interpreter fails to start, throwing an error like:

    Exception in thread "main" java.lang.ExceptionInInitializerError
        at java.lang.J9VMInternals.initialize(J9VMInternals.java:258)
        at org.python.core.PySystemState.initStaticFields(PySystemState.java:912)
        at org.python.core.PySystemState.doInitialize(PySystemState.java:882)
        at org.python.core.PySystemState.initialize(PySystemState.java:800)
        at org.python.core.PySystemState.initialize(PySystemState.java:750)
        at org.python.core.PySystemState.initialize(PySystemState.java:743)
        at org.python.util.jython.run(jython.java:150)
        at org.python.util.jython.main(jython.java:129)
    Caused by: java.lang.NullPointerException
        at org.python.core.PyObject._cmpeq_unsafe(PyObject.java:1362)
        at org.python.core.PyObject._eq(PyObject.java:1456)
        at org.python.core.PyObject.equals(PyObject.java:244)
        at java.util.HashMap.put(HashMap.java:475)
        at java.util.HashSet.add(HashSet.java:217)
        at org.python.core.PyType.fromClass(PyType.java:1317)
        at org.python.core.PyType.fromClass(PyType.java:1275)
        at org.python.core.PyEllipsis.(PyEllipsis.java:14)
        at java.lang.J9VMInternals.initializeImpl(Native Method)
        at java.lang.J9VMInternals.initialize(J9VMInternals.java:236)
        ... 7 more

I did search the net, but I did not find any helpful information. If anyone has solved this issue, please share. Thanks, Ashoka

Title: Pass a Python dict to a Java map possibly with JSON?
Tags: java,python,json,dictionary,map
A_Id: 23,943,045 | Users Score: 0 | Q_Score: 0 | ViewCount: 1,082
Answer:
You could pass it as JSON, and parse the info out of the JSON in Java. If the data is more simple, you could make every other String in Java's String[] args parameter represent a key or value, and then have your code loop through those and add them to a map.
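The Python side of the JSON hand-off might look like the following sketch; MyJavaApp is a placeholder for the real class or jar being invoked, and the Java side would parse args[0] with any JSON library:

    import json
    import subprocess

    params = {"host": "example.com", "retries": 3}   # illustrative dict
    subprocess.check_call(["java", "MyJavaApp", json.dumps(params)])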
Topics: Python Basics and Environment, Web Development
CreationDate: 2014-05-29T21:07:00.000 | AnswerCount: 2 | Score: 0 | is_accepted: false
Q_Id: 23,943,024 | Available Count: 1
Question:
I am calling a Java file within Python code. Currently, I am passing it several parameters which Java sees in its String[] args array. I would prefer to pass just one parameter that is a Python dictionary (dict) which Java can understand and make into a map. I know I will probably have to pass the python dictionary as a string. How can I do this? Should I use JSON?

Title: Making the stack levels in Django HTML email reports collapsable
Tags: python,django,email
A_Id: 23,959,164 | Users Score: 1 | Q_Score: 7 | ViewCount: 123
Answer:
As J. C. Leitão pointed out, the Django error debug page has JavaScript and CSS (most CSS doesn't work in email). But all that CSS and JS is inline: the debug page is a single HTML file with no external resources. At my company, we include the HTML as an attachment to the report email. When the plain-text traceback is not clear enough, we download the HTML page and open it. The user experience is not as good as Sentry's, but much better than the plain-text-only version.
Topics: Web Development
CreationDate: 2014-05-30T14:40:00.000 | AnswerCount: 3 | Score: 0.066568 | is_accepted: false
Q_Id: 23,957,232 | Available Count: 1
Question:
Django has an awesome debug page that shows up whenever there's an exception in the code. That page shows all levels in the stack compactly, and you can expand any level you're interested in. It shows only in debug mode. Django also has an awesome feature for sending email reports when errors are encountered on the production server by actual users. These reports have a stacktrace with much less information than the styled debug page. There's an awesome optional setting 'include_html': True that makes the email include all the information of the debug page, which is really useful. The problem with this setting is that the HTML comes apparently unstyled, so all those levels of the stack are expanded to show all the data they contain. This results in such a long email that GMail usually can't even display it without sending you to a dedicated view. But the real problem is that it's too big to navigate in and find the stack level you want. What I want: I want Django to send that detailed stacktrace, but I want the levels of the stack to be collapsable just like in the debug page. How can I do that? (And no, I don't want to use Sentry.)

Title: Secure access of webassets with Flask and AWS S3
Tags: python,amazon-web-services,amazon-s3,flask
A_Id: 23,986,820 | Users Score: 1 | Q_Score: 0 | ViewCount: 890
Answer:
Make the request to your Flask application, which will authenticate the user and then issue a redirect to the S3 object. The trick is that the redirect should be to a signed temporary URL that expires in a minute or so, so it can't be saved and used later or by others. You can use boto.s3.key.generate_url function in your Flask app to create the temporary URL.
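A sketch of that flow with Flask and boto 2.x; the bucket name and the auth check are placeholders for whatever the app actually uses:

    import boto
    from flask import Flask, abort, redirect

    app = Flask(__name__)

    @app.route('/images/<path:filename>')
    def image(filename):
        if not user_is_authenticated():          # stand-in for your auth check
            abort(403)
        bucket = boto.connect_s3().get_bucket('my-private-bucket')
        key = bucket.get_key(filename)
        if key is None:
            abort(404)
        # signed URL that expires in 60 seconds
        return redirect(key.generate_url(expires_in=60))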
Topics: Web Development
CreationDate: 2014-06-02T00:19:00.000 | AnswerCount: 1 | Score: 0.197375 | is_accepted: false
Q_Id: 23,985,795 | Available Count: 1
Question:
I am trying to serve files securely (images in this case) to my users. I would like to do this using Flask and preferably Amazon S3, although I would be open to another cloud storage solution if required. I have managed to get my Flask static files like CSS and such onto S3, but this is all non-secure: everyone who has the link can open the static files. This is obviously not what I want for secure content. I can't seem to figure out how I can make a file available to just the authenticated user that 'owns' the file. For example: when I log into my Dropbox account and copy a random file's download link, then go over to another computer and use this link, it denies me access, even though I am still logged in and the download link was available to the user on the latter PC.

Title: How to access Django DB and ORM outside of Django
Tags: python,django,orm
A_Id: 23,987,194 | Users Score: 1 | Q_Score: 2 | ViewCount: 849
Answer:
I chose option 1 when I set up my environment, which does much of the same stuff. I have a JSON interface that's used to pass data back to the server. Since I'm on a well-protected VLAN, this works great. The biggest benefit, like you say, is the Django ORM. A simple address call with proper data is all that's needed. I also think this is the simplest method. The "blocking on the DB" issue should be non-existent. I suppose that it would depend on the DB backend, but really, that's one of the benefits of a DB. For example, a single-threaded file-based sqlite instance may not work. I keep things in Django as much as I can. This could also help with DB security/integrity, since it's only ever accessed in one place. If your client accesses the DB directly, you'll need to ship username/password with the Client. My recommendation is to go with 1. It will make your life easier, with fewer lines of code. Besides, as long as you code Client properly, it should be easy to modify DB access later on down the road.
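For reference, a minimal sketch of standing up the ORM in a standalone ingest script (Django 1.7+ style; the settings module and the Reading model are hypothetical):

    import os
    import django

    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
    django.setup()

    from myapp.models import Reading   # hypothetical model

    Reading.objects.create(host='client-1', cpu=0.42, ram_mb=512)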
Topics: Database and SQL, Web Development
CreationDate: 2014-06-02T03:41:00.000 | AnswerCount: 2 | Score: 0.099668 | is_accepted: false
Q_Id: 23,987,050 | Available Count: 1
Question:
So in my spare time, I've been developing a piece of network monitoring software that essentially can be installed on a bunch of clients, and the clients report data back to the server (RAM/CPU/storage/network usage, and the like). For the administrative console as well as reporting, I've decided to use Django, which has been a learning experience in itself. The clients report to the server asynchronously, with whatever data they happen to have (as of right now, it's just received and dumped, not stored in a DB). I need to access this data in Django, and I have already created the models to match my needs. However, I don't know how to go about getting the actual data into the Django DB safely. What is the way to go about doing this? I thought of a few options, but they all had some drawbacks: give the Django app a reference to the server, and just start a thread that continuously checks for new data and writes it to the DB; or have the server access the Django DB directly and write its data there. The problem with option 1 is that I'm even more tightly coupling the server with the Django app, but the upside is that I can use the ORM to write the data nicely. The problem with option 2 is that I can't use the ORM to write data, and I'm not sure if it could cause blocking on the DB in some cases. Is there some obvious good option I'm missing? I'm sorry if this question is vague. I'm very new to Django, and I don't want to write myself into a corner.

Title: Is it possible to show multiple form views or tree views of the same object in Openerp?
Tags: python-2.7,openerp
A_Id: 23,993,609 | Users Score: 0 | Q_Score: 3 | ViewCount: 3,944
Answer:
Yes, it is possible. You can create two views for the same table, with a separate menu and action for each view.
Topics: Web Development
CreationDate: 2014-06-02T11:29:00.000 | AnswerCount: 2 | Score: 0 | is_accepted: false
Q_Id: 23,993,475 | Available Count: 1
Question:
I need multiple form views of the same object in my module. I created multiple forms, but OpenERP shows only one form related to the object; the other forms are hidden. I looked in the documentation but there is no answer. If anybody knows, please help. Thanks in advance.

Title: How to use bower package manager in Django App?
Tags: python,django,bower
A_Id: 24,024,759 | Users Score: 2 | Q_Score: 18 | ViewCount: 8,184
Answer:
There is no recommended way; it depends on your project. If you are using bower and node for more than the Django project, it might make sense to place it in your project root (above Django) so that it may be reused elsewhere. If it's purely for Django's static files, then it might make sense to place it in a src/ directory outside of the staticfiles system, which builds into the static directory that is exported via collectstatic.
Topics: Web Development
CreationDate: 2014-06-03T19:12:00.000 | AnswerCount: 5 | Score: 0.07983 | is_accepted: false
Q_Id: 24,023,131 | Available Count: 2
Question:
I'm new to the Django framework and I have read that 'static' files like CSS and JS must be inside the 'static' directory, but my question is: given that the bower package manager installs its dependencies in a new directory called bower_components in the current directory, must bower.json be created in the 'static' Django directory? And if so, isn't bower.json then exported with the collectstatic command (something we might not want)? What is the recommended way to work with bower and the Django framework? Update: Thanks Yuji 'Tomita' Tomita, your answer gives more perspective. I want to use bower just to manage front-end dependencies like jQuery, Bootstrap and so on; by that logic it must live inside the static/ Django directory, but doing it that way can cause bower.json to be treated as a static resource, something we might not want.

Title: How to use bower package manager in Django App?
Tags: python,django,bower
A_Id: 25,576,643 | Users Score: 2 | Q_Score: 18 | ViewCount: 8,184
Answer:
If you're afraid of the bower.json being included, the collectstatic command has an --ignore option that you can use to exclude whatever you want.
Topics: Web Development
CreationDate: 2014-06-03T19:12:00.000 | AnswerCount: 5 | Score: 0.07983 | is_accepted: false
Q_Id: 24,023,131 | Available Count: 2
Question:
I'm new to the Django framework and I have read that 'static' files like CSS and JS must be inside the 'static' directory, but my question is: given that the bower package manager installs its dependencies in a new directory called bower_components in the current directory, must bower.json be created in the 'static' Django directory? And if so, isn't bower.json then exported with the collectstatic command (something we might not want)? What is the recommended way to work with bower and the Django framework? Update: Thanks Yuji 'Tomita' Tomita, your answer gives more perspective. I want to use bower just to manage front-end dependencies like jQuery, Bootstrap and so on; by that logic it must live inside the static/ Django directory, but doing it that way can cause bower.json to be treated as a static resource, something we might not want.

Title: Exporting figures from Bokeh as svg or pdf?
Tags: python,bokeh
A_Id: 24,030,090 | Users Score: 0 | Q_Score: 29 | ViewCount: 17,265
Answer:
It seems that since bokeh uses html5 canvas as a backend, it will be writing things to static html pages. You could always export the html to pdf later.
Topics: Data Science and Machine Learning, Web Development
CreationDate: 2014-06-03T23:25:00.000 | AnswerCount: 3 | Score: 0 | is_accepted: false
Q_Id: 24,026,618 | Available Count: 1
Question:
Is it possible to output individual figures from Bokeh as pdf or svg images? I feel like I'm missing something obvious, but I've checked the online help pages and gone through the bokeh.objects api and haven't found anything...

Title: Using Mechanize for python, need to be able to right click
Tags: python,selenium,web-scraping,mechanize
A_Id: 24,027,994 | Users Score: 0 | Q_Score: 1 | ViewCount: 194
Answer:
I would try to watch Chrome's network tab, and try to imitate the final request to get the image. If it turned out to be too difficult, then I would use selenium as you suggested.
Topics: Networking and APIs, Web Development
CreationDate: 2014-06-04T02:12:00.000 | AnswerCount: 2 | Score: 0 | is_accepted: false
Q_Id: 24,027,928 | Available Count: 1
Question:
My script logs in to my account and navigates the links it needs to, but I need to download an image. This seems to be easy enough to do using urlretrieve. The problem is that the src attribute for the image contains a link which points to the page that initiates a download prompt, so my only foreseeable option is to right click and select 'save as'. I'm using mechanize, and from what I can tell Mechanize doesn't have this functionality. My question is: should I switch to something like Selenium?

Title: Going back to the main page after closing the pop up window
Tags: python-2.7,selenium,selenium-webdriver,robotframework,automated-tests
A_Id: 24,085,441 | Users Score: 0 | Q_Score: 0 | ViewCount: 1,622
Answer:
I have seen this issue and found that there is a recovery period where Selenium does not work correctly for a short time after closing a window. Try using a fixed delay or poll with Wait Until Keyword Succeeds combined with a keyword from Selenium2Library.
Topics: Networking and APIs, Web Development
CreationDate: 2014-06-04T08:20:00.000 | AnswerCount: 3 | Score: 0 | is_accepted: false
Q_Id: 24,032,359 | Available Count: 1
Question:
I am having a problem on handling the pop up windows in robot framework. The process I want to automate is : When the button is clicked, the popup window appears. When the link from that popup window is clicked, the popup window is closed automatically and go back to the main page. While the popup window appears, the main page is disabled, and it can be enabled only when the link from the pop up window is clicked. The problem I have here is that I cannot go back to the main page after clicking the link from the popup window. I got the following error. 20140604 16:04:24.160 : FAIL : NoSuchWindowException: Message: u'Unable to get browser' I hope you guys can help me solve this problem. Thank you!

Title: How do you test the consistency models and race conditions on Google App Engine / NDB?
Tags: python,google-app-engine,app-engine-ndb
A_Id: 32,898,743 | Users Score: 1 | Q_Score: 1 | ViewCount: 261
Answer:
I am answering this over a year after it was asked. The only way to test these sorts of things is by deploying an app on GAE. What I sometimes do when I run across these challenges is to just whip up a quick application that is tailor-made to test the scenario under consideration. And then, as you put it, you just have to script the doing of stuff using some combination of tasks, cron, and client-side curl-type operations. The particular tradeoff in the original question is write throughput versus consistency. This is actually pretty straightforward once you get the hang of it: a strongly consistent query requires that the entities are in the same entity group, and at the same time, a given entity group may only sustain approximately one write per second. So you have to look at your needs and usage pattern to figure out whether you can use an entity group.
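A small sketch of the entity-group tradeoff described here; the model names are illustrative:

    from google.appengine.ext import ndb

    class Account(ndb.Model):
        pass

    class Event(ndb.Model):
        payload = ndb.StringProperty()

    parent_key = ndb.Key(Account, 'account-1')
    # all writes to this group share the roughly one-write-per-second limit
    Event(parent=parent_key, payload='x').put()

    # ancestor queries are strongly consistent
    events = Event.query(ancestor=parent_key).fetch(10)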
Topics: System Administration and DevOps, Web Development
CreationDate: 2014-06-05T01:20:00.000 | AnswerCount: 4 | Score: 0.049958 | is_accepted: false
Q_Id: 24,050,155 | Available Count: 3
Question:
Setup: Python, NDB, the GAE datastore. I'm trying to make sure I understand some of the constraints around my data model and its tradeoffs for consistency and max write rate. Is there any way to test/benchmark this on staging or on my dev box, or should I just bite the bullet, push to prod on a shadow site of sorts, and write a bunch of scripts?

Title: How do you test the consistency models and race conditions on Google App Engine / NDB?
Tags: python,google-app-engine,app-engine-ndb
A_Id: 24,050,936 | Users Score: 1 | Q_Score: 1 | ViewCount: 261
Answer:
I am not sure it can be tested. The inconsistencies are inconsistent. I think you just have to know that datastore operations have inconsistencies, and code around them. You don't want to plan on observations from your tests being dependable in the future.
Topics: System Administration and DevOps, Web Development
CreationDate: 2014-06-05T01:20:00.000 | AnswerCount: 4 | Score: 0.049958 | is_accepted: false
Q_Id: 24,050,155 | Available Count: 3
Question:
Setup: Python, NDB, the GAE datastore. I'm trying to make sure I understand some of the constraints around my data model and its tradeoffs for consistency and max write rate. Is there any way to test/benchmark this on staging or on my dev box, or should I just bite the bullet, push to prod on a shadow site of sorts, and write a bunch of scripts?

Title: How do you test the consistency models and race conditions on Google App Engine / NDB?
Tags: python,google-app-engine,app-engine-ndb
A_Id: 24,050,360 | Users Score: 2 | Q_Score: 1 | ViewCount: 261
Answer:
You really need to do testing in the real environment; at best the dev environment is an approximation of production. You certainly can't draw any conclusions at all about performance by just using the SDK. In many cases the SDK is faster (startup times) and slower (queries on large datasets), and eventual consistency is emulated, not 100% the same as production.
Topics: System Administration and DevOps, Web Development
CreationDate: 2014-06-05T01:20:00.000 | AnswerCount: 4 | Score: 0.099668 | is_accepted: false
Q_Id: 24,050,155 | Available Count: 3
Question:
Setup: Python, NDB, the GAE datastore. I'm trying to make sure I understand some of the constraints around my data model and its tradeoffs for consistency and max write rate. Is there any way to test/benchmark this on staging or on my dev box, or should I just bite the bullet, push to prod on a shadow site of sorts, and write a bunch of scripts?

Title: Django admin panel interface missing
Tags: python,django
A_Id: 24,057,408 | Users Score: 1 | Q_Score: 0 | ViewCount: 449
Answer:
You need to run collectstatic in your live environment. Have you set up your static folders and placed the appropriate declarations in your nginx config? If yes, then just run: ./manage.py collectstatic
Topics: Web Development
CreationDate: 2014-06-05T10:05:00.000 | AnswerCount: 1 | Score: 0.197375 | is_accepted: false
Q_Id: 24,057,239 | Available Count: 1
Question:
I am using the default Django admin panel. I have just moved my Django site to my live server and found that the admin panel has no styling, while on my local server everything is fine. I should mention that I am using nginx. While investigating, I checked the path /usr/local/lib/python2.7/site-packages/django/contrib/ and found that there is no /django/contrib/ directory in my virtual environment. Is that the reason the Django admin panel interface is missing?

Title: How can I get XML data in Python/Django passed in POST request?
Tags: python,xml,django,post
A_Id: 24,061,391 | Users Score: 2 | Q_Score: 0 | ViewCount: 2,986
Answer:
I'm not quite sure what you're asking. Do you just want to know how to access the POST data? You can get that via request.body, which will contain your XML as a string. You can then use your favourite Python XML parser on it.
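A minimal sketch of that in a Django view; the element name is illustrative:

    import xml.etree.ElementTree as ET
    from django.http import HttpResponse

    def receive_xml(request):
        root = ET.fromstring(request.body)       # raw POST body as a string
        name = root.findtext('name', default='')
        return HttpResponse('got <%s> with name=%r' % (root.tag, name))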
Topics: Networking and APIs, Web Development
CreationDate: 2014-06-05T13:18:00.000 | AnswerCount: 1 | Score: 1.2 | is_accepted: true
Q_Id: 24,061,320 | Available Count: 1
Question:
There is XML data passed in a POST request as a simple string, not inside a name=value pair, with the HTTP header (optionally) set to 'Content-Type: text/xml'. How can I get this data in Python (natively, or with Django's tools)?

Title: How to search freebase data dump
Tags: python,rdf,freebase
A_Id: 25,625,683 | Users Score: 1 | Q_Score: 3 | ViewCount: 964
Answer:
The Freebase dump is in RDF format. The easiest way to query it is to dump it (or a subset of it) into an RDF store. It'll be quicker to query, but you'll need to pay the database load time up front first.
Topics: Networking and APIs, Web Development
CreationDate: 2014-06-05T20:31:00.000 | AnswerCount: 1 | Score: 0.197375 | is_accepted: false
Q_Id: 24,069,711 | Available Count: 1
Question:
I downloaded the freebase data dump and I want to use it to get information about a query just like how I do it using the web API. How exactly do I do it? I tried using a simple zgrep but the result was a mess and takes too much time. Any graceful way to do it (preferably something that plays nicely with python)?

Title: Somthing wrong with using CSV as database for a webapp?
Tags: python,csv,web-applications,flask
A_Id: 64,239,216 | Users Score: -1 | Q_Score: 3 | ViewCount: 3,024
Answer:
I am absolutely baffled by how many people discourage using CSV as a database storage back-end format.

Concurrency: there is NO reason why CSV cannot be used with concurrency. Just as a database thread can write to one area of a binary file at the same time that another thread writes to another area of the same binary file, databases can do EXACTLY the same thing with CSV files. And just as a journal is used to maintain the atomic nature of individual transactions, the same thing can be done with CSV.

Speed: why on earth would a database read and write a WHOLE file at a time, when it can do what it does for ALL other storage formats: look up the starting byte of a record in an index file and SEEK to it in constant time, overwrite the data, comment out anything left over and record the free space for later use in a separate index file, just as it could zero out the bytes of any unneeded areas of a binary "row" and record the free space in a separate index file? I just do not understand this hostility to non-binary formats, when everything that can be done with one format can be done with the other, except perhaps raw binary data compression, depending on the particular CSV syntax in use (special binary comments, etc.).

Emergency access: the added benefit of CSV is that when the database dies, which inevitably happens, you are left with a CSV file that can still be accessed quickly in an emergency, which is the primary reason I never use binary storage for essential data that should stay quickly accessible even when the database breaks due to incompetent programming. Yes, the CSV file would have to be re-indexed every time you made changes to it in a spreadsheet program, but that is no different from having to re-index a binary database after the index/table gets corrupted/deleted/out-of-sync/etc.
Topics: Database and SQL, Web Development
CreationDate: 2014-06-06T00:11:00.000 | AnswerCount: 4 | Score: -0.049958 | is_accepted: false
Q_Id: 24,072,231 | Available Count: 2
Question:
I am using Flask to make a small webapp to manage a group project; in this website I need to manage attendance and also meeting reports. I don't have the time to get into SQLAlchemy, so I need to know what might be the bad things about using CSV as a database.

Title: Somthing wrong with using CSV as database for a webapp?
Tags: python,csv,web-applications,flask
A_Id: 47,320,760 | Users Score: 1 | Q_Score: 3 | ViewCount: 3,024
Answer:
I think there's nothing wrong with that as long as you abstract away from it, i.e. make sure you have a clean separation between what you write and how you implement it. That will bloat your code a bit, but it will make sure you can swap out your CSV storage in a matter of days. In other words, pretend that you can persist your data as if you were keeping it in memory. Don't write "openCSVFile" in your Flask app; use initPersistence(). Don't write "csvFile.appendRecord()"; use "persister.saveNewReport()". When and if you actually find CSV to be a bottleneck, you can just write a new persister plugin. There are added benefits, like not having to use a mock library in tests to make them faster: you just provide another persister.
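One way such an abstraction could look, with the names from the answer adapted to Python; a sketch, not a full design:

    import csv

    class CsvPersister(object):
        def __init__(self, path):
            self.path = path

        def save_new_report(self, row):
            # the only place in the app that knows reports live in a CSV
            with open(self.path, 'a') as f:
                csv.writer(f).writerow(row)

    def init_persistence():
        # swap this single line for a database-backed persister later
        return CsvPersister('reports.csv')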
Topics: Database and SQL, Web Development
CreationDate: 2014-06-06T00:11:00.000 | AnswerCount: 4 | Score: 0.049958 | is_accepted: false
Q_Id: 24,072,231 | Available Count: 2
Question:
I am using Flask to make a small webapp to manage a group project; in this website I need to manage attendance and also meeting reports. I don't have the time to get into SQLAlchemy, so I need to know what might be the bad things about using CSV as a database.

Title: How do I run flask on codebox IDE?
Tags: python,flask
A_Id: 24,072,542 | Users Score: 0 | Q_Score: 0 | ViewCount: 256
Answer:
What exactly is happening when Run is clicked? It runs app.py on the Codebox VM, presumably as python app.py. To pick up an update, first save your changes (Ctrl-S), then stop the running app and run it again.
Topics: Python Basics and Environment, Web Development
CreationDate: 2014-06-06T00:33:00.000 | AnswerCount: 1 | Score: 0 | is_accepted: false
Q_Id: 24,072,413 | Available Count: 1
Question:
When I create a Python box, I run the command pip install -r requirements.txt, then I click the Run button on the side and I can see the existing app run. What exactly is happening when Run is clicked? I've updated the existing file app.py so that def hello(): returns something new; however, it seems like Codebox does not pick up the update and I still see "hello world". I'm trying to follow the Mega Flask Tutorial but haven't been able to make the Flask server return the correct value; it always returns "hello world".

Title: Is os.path.join necessary?
Tags: python,filepath
A_Id: 24,072,745 | Users Score: 4 | Q_Score: 11 | ViewCount: 2,005
Answer:
If you are presenting a filename to a user for any reason, it's better if that filename follows the usual OS conventions. Windows has been able to use the / for path separators for as long as there have been paths - this was a DOS feature.
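A quick illustration of the portability point:

    import os
    import posixpath

    print(os.path.join('path', 'to', 'some', 'file.ext'))
    # 'path/to/some/file.ext' on Unix; backslash-separated on Windows

    print(posixpath.join('path', 'to', 'some', 'file.ext'))
    # 'path/to/some/file.ext' on every platform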
Topics: Web Development
CreationDate: 2014-06-06T01:16:00.000 | AnswerCount: 2 | Score: 0.379949 | is_accepted: false
Q_Id: 24,072,713 | Available Count: 1
Question:
Currently I use os.path.join almost always in my Django project for cross-OS support; the only places where I don't currently use it are for template names and for URLs. So in situations where I want the path '/path/to/some/file.ext' I use os.path.join('path', 'to', 'some', 'file.ext'). However, I just tested my project on Windows to see whether that worked fine or was necessary, and it seems Windows will happily accept '/' or '\\' (or '\' when working outside of Python), and since all UNIX systems use '/', it seems like there is no reason to ever use '\\', in which case is it necessary to use os.path.join anywhere? Is there a situation in which adding a '/' or using posixpath will cause problems on certain operating systems (not including XP or below, as they are no longer officially supported)? If not, I think I will just use posixpath or add a '/' when joining variables with other variables or variables with strings, and not separate out string paths (so leave it as '/path/to/some/file.ext'), unless there is another reason not to do that beyond it breaking things. To avoid this being closed as primarily opinion-based, I would like to clarify that my specific question is whether not using os.path.join will ever cause a Python program to not work as intended on a supported operating system.

Title: Google App Engine Check Success of backup programmatically
Tags: python,google-app-engine
A_Id: 24,088,125 | Users Score: 1 | Q_Score: 0 | ViewCount: 69
Answer:
Unfortunately there is not currently a well-supported way to do this. However, with the disclaimer that this is likely to break at some point in the future, as it depends on internal implementation details, you can fetch the relevant _AE_Backup_Information and _AE_DatastoreAdmin_Operation entities from your datastore and inspect them for information regarding the backup. In particular, _AE_DatastoreAdmin_Operation has the fields active_jobs, completed_jobs, and status.
Topics: System Administration and DevOps, Web Development
CreationDate: 2014-06-06T08:05:00.000 | AnswerCount: 1 | Score: 1.2 | is_accepted: true
Q_Id: 24,077,041 | Available Count: 1
Question:
I am taking a backup of the datastore using task queues. I want to check whether the backup completed successfully or not. I can detect the end of the backup job by checking the task queue, but how can I check whether the backup succeeded or failed due to some error?

Title: very high number of queries in a second.
Tags: php,python,mysql,google-cloud-messaging
A_Id: 24,094,254 | Users Score: 1 | Q_Score: 2 | ViewCount: 90
Answer:
From what I understand, if you want to store unstructured data and retrieve it really fast, you should be looking at the NoSQL segment for storage and do a proof of concept with a few of the available solutions on the market. I would suggest giving the Aerospike NoSQL DB a try; it has a track record of easily doing 1 million TPS on a single machine.
Topics: Web Development
CreationDate: 2014-06-07T05:12:00.000 | AnswerCount: 2 | Score: 0.099668 | is_accepted: false
Q_Id: 24,093,913 | Available Count: 1
Question:
I need to design a mobile application which requires a lot of database queries; a lot meaning a peak value of around 1 million per second. I don't know which database or which backend to use. On the client side, I will be using PhoneGap for Android and iOS, and I will also need a web interface for PC. My doubts are: I am planning to host the system online and use Google Cloud Messaging to push data to users. Can online hosting handle this much traffic? I am planning to use PHP as the backend, or maybe Python? The software does not need a lot of calculation, but it does need a lot of queries. And which database system should I use: MySQL or Google Cloud SQL? Also tell me about using Hadoop or other technologies like load balancers. I may be totally wrong about the question itself. Thank you very much in advance.

Title: Google App Engine SDK Fatal Error
Tags: python,google-app-engine
A_Id: 24,108,384 | Users Score: 0 | Q_Score: 0 | ViewCount: 194
Answer:
App Engine does not support Python 3.x. Do you still have 2.x installed? Go to Google App Engine Launcher > Preferences, and make sure you have the proper Python path to your 2.x version. It should be something like /usr/bin/python2.7. From Terminal, type whereis python to help find it. If you know you were using version 2.7, try: whereis python2.7
Topics: System Administration and DevOps, Web Development
CreationDate: 2014-06-08T16:27:00.000 | AnswerCount: 1 | Score: 0 | is_accepted: false
Q_Id: 24,108,241 | Available Count: 1
Question:
I installed Python 3.4 on my Mac (OS 10.9.3) and my command for running Google App Engine from the terminal via /usr/local/dev_appengine stopped working. I then (stupidly) did some rather arbitrary things from online help forums and now my Google App Engine itself stopped working as well. When I open it, it says: Sorry, pieces of GoogleAppEngineLauncher.app appear missing or corrupted, or I can't run python2.5 properly. Output was: I have tried to delete the application and all related files and reinstall, but nothing has worked for me. It now fails to make the command symlinks as well so when I try to run from terminal I get /usr/local/bin/dev_appserver.py: No such file or directory.

Title: Long running task scalablity EC2
Tags: python,amazon-web-services,amazon-ec2,flask,scalability
A_Id: 24,115,986 | Users Score: 0 | Q_Score: 0 | ViewCount: 63
Answer:
Autoscaling is tailor-made for situations like these. You could run an initial diagnostic to see what the CPU usage usually is when a single server is running it's maximum allowable tasks (let's say it's above X%). You can then set up an autoscaling rule to spin up more instances once this threshold is crossed. Your rule could ensure a new instance is created every time one instance crosses X%. Further, you can also add a rule to scale down (setting the minimum instances to 1) based on a similar usage threshold.
Topics: System Administration and DevOps, Other, Web Development
CreationDate: 2014-06-09T04:15:00.000 | AnswerCount: 1 | Score: 0 | is_accepted: false
Q_Id: 24,113,602 | Available Count: 1
Question:
There is a long-running task (20m to 50m) which is invoked via an HTTP call to a web server. Now, since this task is compute intensive, the web server cannot take on more than 4-5 tasks in parallel (on m3.medium). How can this be scaled? Can the auto-scaling feature of EC2 be used in this scenario? Are there any other frameworks available which can help with scaling up and down, preferably on AWS EC2?

Title: Custom domain routing to Flask server with custom domain always showing in address bar
Tags: python,web-services,dns,flask,tornado
A_Id: 24,125,918 | Users Score: 5 | Q_Score: 6 | ViewCount: 2,809
Answer:
I managed to solve it myself, but I'll add this as an answer since evidently someone thought it was a worthwhile question. It turns out it was just me not understanding how DNS works and what the difference between DNS and domain forwarding is. At most domain hosts you can configure "domain forwarding", which sounds like precisely what you need but is NOT. Rather, for the simple use case above, I went into the DNS Zone Records options and created a DNS zone record of type A that pointed xyz.com to a.b.c.d. The change does not seem to have propagated entirely yet, but on some devices I can already see it working exactly how I want it to, so I will consider this issue resolved.
Topics: System Administration and DevOps, Web Development
CreationDate: 2014-06-09T15:19:00.000 | AnswerCount: 1 | Score: 1.2 | is_accepted: true
Q_Id: 24,123,389 | Available Count: 1
Question:
I have a small home-server running Flask set up at IP a.b.c.d. I also have a domain name xyz.com. Now I would like it so that when going to xyz.com, the user is served the content from a.b.c.d, with xyz.com still showing in the address bar. Similarly, when going to xyz.com/foo the content from a.b.c.d/foo should be shown, with xyz.com/foo showing in the address bar. I have path forwarding activated at my domain name provider, so xyz.com/foo is correctly forwarded to a.b.c.d/foo, but when going there a.b.c.d/foo is shown in the address bar. I'm currently running tornado, but I can switch to another server if it is necessary. Is it possible to set up this kind of solution? Or is my only option to buy some kind of hosting?

Title: How to find and replace 6 digit numbers within HREF links from map of values across site files, ideally using SED/Python
Tags: python,html,regex,bash,sed
A_Id: 24,127,504 | Users Score: 0 | Q_Score: 1 | ViewCount: 95
Answer:
I will write an outline of the code in pseudocode, since I don't remember Python well enough to quickly write it out. First find which type the link is (if it contains N=0 it is type 3; if it contains "+" it is type 2; otherwise type 1) and get a list of the strings containing "N=..." by splitting (PHP's explode) on the "+" sign. The first loop is over links. The second loop is over each N= number. The third loop looks in the map file and finds the replacement value; load the map file's data into a variable before all the loops, since file reading is the slowest operation you have in programming. You replace the value in the third loop, then join (PHP's implode) the list of new strings into a new link when returning to the first loop. You probably have several files with links, so you need another loop over the files. When dealing with repeated codes, you need a while loop until a spare number is found, and you need to keep a list of the numbers that are already used.
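Since the answer is explicitly pseudocode, here is one way its core could look in Python; the map-file layout and the N= pattern are guessed from the question:

    import re

    def load_map(path):
        # map file: a header line "OLD NEW", then one "old new" pair per line
        mapping = {}
        with open(path) as f:
            next(f)
            for line in f:
                old, new = line.split()
                mapping[old] = new
        return mapping

    def rewrite_link(url, mapping):
        def repl(match):
            values = match.group(1).split('+')
            out = [mapping[v] + 'CHANGED' if v in mapping else v
                   for v in values]
            return 'N=' + '+'.join(out)
        return re.sub(r'N=([0-9+]+)', repl, url)

    def strip_markers(url):
        # final pass once every link has been rewritten
        return url.replace('CHANGED', '')

The CHANGED suffix plays the role of the flag in the question: a value that has already been replaced is never matched against the map again, and a last pass strips the markers.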
Topics: Networking and APIs, Python Basics and Environment, Web Development
CreationDate: 2014-06-09T18:44:00.000 | AnswerCount: 1 | Score: 0 | is_accepted: false
Q_Id: 24,126,783 | Available Count: 1
Question:
I need to create a Bash script, ideally using sed, to find and replace value lists in HREF URL link constructs within HTML site files, looking the values up in a map (old to new values). There are around 25K site files to look through, and the map has around 6,000 entries to search through. All old and new values have 6 digits. The URL constructs are as follows.

One value:

    HREF=".*\.jsp\?.*N=[0-9]{1,}.*"

List of values:

    HREF=".*\.jsp\?.*N=[0-9]{1,}+N=[0-9]{1,}+N=[0-9]{1,}...*"

The list of values is delimited by the + (plus) symbol, and the list can be 1 to n values long. I want to ignore a construct such as the following, i.e. where the list is only N=0:

    HREF=".*\.jsp\?.*N=0.*"

Effectively I'm only interested in URLs that include one or more values that are in the map file and are not flagged with CHANGED, i.e. lists that require updating. Please note: in the above construct examples, .* means any character that isn't a digit; I'm just interested in any 6-digit values in the list of values after N=. I'm trying to isolate the N= list from the rest of the URL construct, and it should be noted that this N= list can appear anywhere within the URL construct.

Initially, I want to create a script that reports all links fulfilling the above criteria that have a 6-digit OLD value in the map file, together with the file path, to get an understanding of the links impacted. E.g.:

    Filename  link
    filea.jsp /jsp/search/results.jsp?N=204200+731&Ntx=mode+matchallpartial&Ntk=gensearch&Ntt=
    filea.jsp /jsp/search/BROWSE.jsp?Ntx=mode+matchallpartial&N=213890+217867+731&
    fileb.jsp /jsp/search/results.jsp?N=0+450+207827+213767&Ntx=mode+matchallpartial&Ntk=gensearch&Ntt=

Lastly, I'd like to find and replace all 6-digit numbers within the URL construct lists, as outlined above, as efficiently as possible (I'd like it to be reasonably fast, as there could be around 25K files, with 6K values to look up, and potentially multiple values per list).

PLEASE NOTE: there is an additional issue when finding and replacing: an old value could have been assigned a new value that is itself an old value elsewhere in the map, which may also have to be replaced. E.g., if the map file is:

    MAP-FILE.txt
    OLD    NEW
    214865 218494
    214866 217854
    214867 214868
    214868 218633
    ...    ...

and there is an HREF link such as:

    /jsp/search/results.jsp?Ntx=mode+matchallpartial&Ntk=gensearch&N=0+450+214867+214868

then 214867 changes to 214868, which would need to be flagged so that it is not replaced again; otherwise what was 214867 would become 218633, since every 214868 would be changed to 218633. Hope this makes sense. The flagged link would look like:

    /jsp/search/results.jsp?Ntx=mode+matchallpartial&Ntk=gensearch&N=0+450+214868CHANGED+218633CHANGED

and I would then need to run through the file and strip the CHANGED flag from all 6-digit numbers that had been marked, unless there's a better way to manage these in-file changes. Could someone please help me with this? I'm not an expert with these kinds of changes, so help would be massively appreciated. Many thanks in advance, Alex

Title: django-allauth: how to modify email confirmation url?
Tags: python,django,email,nginx,django-allauth
A_Id: 24,129,038 | Users Score: 6 | Q_Score: 8 | ViewCount: 2,367
Answer:
Django gets the hostname and port from the HTTP headers. Add proxy_set_header Host $http_host; to your nginx configuration, before the proxy_pass option.
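In context, the relevant nginx block might look like the following; this is a sketch of a typical proxy configuration with the port taken from the question, not the asker's actual config:

    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:8001;
    }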
Topics: Web Development
CreationDate: 2014-06-09T20:26:00.000 | AnswerCount: 2 | Score: 1.2 | is_accepted: true
Q_Id: 24,128,433 | Available Count: 1
Question:
I'm running django on port 8001, while nginx is handling webserver duties on port 80. nginx proxies views and some REST api calls to Django. I'm using django-allauth for user registration/authentication. When a new user registers, django-allauth sends the user an email with a link to click. Because django is running on port 8001, the link looks like http://machine-hostname:8001/accounts/confirm-email/xxxxxxxxxxxxxx How can I make the url look like http://www.example.com/accounts/confirm-email/xxxxxxxx ? Thanks!

Title: Google Appengine runs Cron tasks even if they are no cron.yaml file defined
Tags: google-app-engine,python-2.7,cron
A_Id: 49,346,121 | Users Score: 1 | Q_Score: 1 | ViewCount: 645
Answer:
The tip from @Greg above solved the problem for me. Note the full sequence of events:
1. A past version of the application included a cron.yaml file that ran every hour.
2. In a later version, I removed the cron.yaml file, thinking that was enough, but then I discovered that the cron jobs were still running!
3. Uploading an EMPTY cron.yaml file didn't change things either.
4. Uploading a cron.yaml file with only "cron:" in it did it; the cron jobs just stopped.
From the above I reckon that the way things work is this: when a cron.yaml file is found it is parsed, and if the syntax is correct, its cron jobs are loaded into the app server. Apparently, just removing the cron.yaml file in a later version, or uploading an empty (hence unparseable) one, doesn't remove the cron jobs. The only way to remove them is to upload a new, PARSEABLE cron.yaml file, i.e. one with the "cron:" line but with no actual jobs after it. And the proof of the pudding is that after you do that, you can remove cron.yaml from later versions, and those old cron jobs will not come back.
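The file that finally cleared the jobs is, per step 4 above, just a parseable cron.yaml with no entries:

    cron: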
Topics: System Administration and DevOps, Web Development
CreationDate: 2014-06-10T08:07:00.000 | AnswerCount: 1 | Score: 1.2 | is_accepted: true
Q_Id: 24,135,908 | Available Count: 1
Question:
I have started receiving errors from the cron service even though I do not have a single cron.yaml file defined. The cron task runs every 4 hours. I really don't know where to look in order to correct this behaviour. Please tell me what kind of information is needed to correct the error.

First cron error:

    Cron job: /admin/push/feedbackservice/process - Query APNS Feedback service and remove inactive devices
    Schedule/Last Run/Last Status (all times are UTC): every 4 hours (UTC), 2014/06/10 07:00:23 on time, Failed

Second cron error:

    Cron job: /admin/push/notifications/cleanup - Remove no longer needed records of processed notifications
    Schedule/Last Run/Last Status (all times are UTC): every day 04:45 (America/New_York), 2014/06/09 04:45:01 on time, Failed

Console log:

    2014-06-10 09:00:24.064 /admin/push/feedbackservice/process 404 626ms 0kb AppEngine-Google; (+http://code.google.com/appengine) module=default version=1
    0.1.0.1 - - [10/Jun/2014:00:00:24 -0700] "GET /admin/push/feedbackservice/process HTTP/1.1" 404 113 - "AppEngine-Google; (+http://code.google.com/appengine)" "xxx-dev.appspot.com" ms=627 cpu_ms=353 cpm_usd=0.000013 queue_name=__cron task_name=471b6c0016980883f8225c35b96 loading_request=1 app_engine_release=1.9.5 instance=00c61b17c3c8be02ef95578ba43
    I 2014-06-10 09:00:24.063 This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. This request may thus take longer and use more CPU than a typical request for your application.

Title: Can two programs, written in different languages, connect to the same SQL database?
Tags: sql,node.js,python-2.7
A_Id: 24,157,502 | Users Score: 0 | Q_Score: 1 | ViewCount: 1,076
Answer:
Yes, it's possible. Two applications in different languages using one database is almost exactly the same as one application using several connections to it, so you are probably already doing it. All the possible problems are exactly the same. The database won't even know whether the connections are made from one application or the other.
Topics: Database and SQL, Web Development
CreationDate: 2014-06-11T07:25:00.000 | AnswerCount: 3 | Score: 0 | is_accepted: false
Q_Id: 24,156,992 | Available Count: 2
Question:
The question is pretty much in the header, but here are the specifics. For my senior design project we are going to be writing software to control some hardware and display diagnostics info on a web front end. To accomplish this, I'm planning to use a combination of Python and Node.js. Theoretically, a Python service script will communicate with the hardware via BACnet/IP and log diagnostic info in an SQL database. The Node.js server code will respond to front-end requests by querying the database and returning the relevant info. I'm pretty new to SQL, so my primary question is: is this possible? My second and more abstract questions would be: is this wise? And is there any obvious advantage to using the same language on both ends, whether that language is Python, Java, or something else?

Title: Can two programs, written in different languages, connect to the same SQL database?
Tags: sql,node.js,python-2.7
A_Id: 24,158,115 | Users Score: 1 | Q_Score: 1 | ViewCount: 1,076
Answer:
tl;dr: you can use any programming language that provides a client for the database server of your choice. To the database server, as long as the client is communicating per the server's requirements (that is, it is using the server's library, protocol, etc.), there is no difference in which programming language or system is being used. Database drivers provide a common abstraction layer, guaranteeing that the database server and the client speak the same language. The programming language's interface to the database driver takes care of the language specifics, for example providing syntax that conforms to the language, while on the opposite side the driver ensures that all commands are sent in the protocol that the server expects. Since drivers are such a core requirement, there are usually multiple drivers available for each database; and because good database access is a core requirement for programmers, each language strives to have a "standard" API for all databases. For example, Java has JDBC, Python has the DB-API, and .NET has ODBC (and ADO, I believe, but I am not a .NET expert). These are what the database drivers conform to, so that it doesn't matter which database server you are using: you have one standard way to connect, one standard way to execute queries and one standard way to fetch results, in effect making your life as a programmer easier. In most cases there is a reference driver (and API/library) provided by the database vendor. It is usually in C, and it is also what the "native" client for the database uses. For example, the mysql client for the MySQL database server uses the MySQL C driver to connect, and the same driver is used by the Python MySQLdb driver, which conforms to the Python DB-API.
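For illustration, the Python DB-API side of this, using the MySQLdb driver mentioned above; the table and credentials are placeholders:

    import MySQLdb

    conn = MySQLdb.connect(host='localhost', user='app',
                           passwd='secret', db='monitoring')
    cur = conn.cursor()
    cur.execute("SELECT host, cpu FROM readings WHERE cpu > %s", (0.9,))
    for host, cpu in cur.fetchall():
        print(host, cpu)
    conn.close()

A Node.js client would issue the same SQL through its own driver, and the server could not tell the two apart.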
Topics: Database and SQL, Web Development
CreationDate: 2014-06-11T07:25:00.000 | AnswerCount: 3 | Score: 1.2 | is_accepted: true
Q_Id: 24,156,992 | Available Count: 2
Question:
The question is pretty much in the header, but here are the specifics. For my senior design project we are going to be writing software to control some hardware and display diagnostics info on a web front end. To accomplish this, I'm planning to use a combination of Python and Node.js. Theoretically, a Python service script will communicate with the hardware via BACnet/IP and log diagnostic info in an SQL database. The Node.js server code will respond to front-end requests by querying the database and returning the relevant info. I'm pretty new to SQL, so my primary question is: is this possible? My second and more abstract questions would be: is this wise? And is there any obvious advantage to using the same language on both ends, whether that language is Python, Java, or something else?

Title: Which has the better memory footprint, ImageMagick or Pillow (PIL)?
Tags: python,django,heroku,imagemagick,pillow
A_Id: 24,158,927 | Users Score: 2 | Q_Score: 3 | ViewCount: 1,421
Answer:
I have had a similar experience (alas, in Java) which might help you make a decision. Calling the ImageMagick library binding from Java (using JNI) seemed like a good idea, but turned out to leak memory by the ton. We ended up moving to an external command-line invocation of ImageMagick, which worked a lot better, for the reason you mentioned: it guarantees the release of memory.
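A sketch of the external-process variant being considered, shelling out to ImageMagick's convert so the resize memory is reclaimed when the child process exits:

    import subprocess

    def resize(src, dst, size='600x600'):
        # '-resize' keeps the aspect ratio within the bounding box
        subprocess.check_call(['convert', src, '-resize', size, dst])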
Topics: Other, Web Development
CreationDate: 2014-06-11T09:00:00.000 | AnswerCount: 1 | Score: 0.379949 | is_accepted: false
Q_Id: 24,158,704 | Available Count: 1
Question:
Our Heroku-hosted Django app does some simple image processing on images our users upload to Amazon S3—mostly resizing to the sizes we will display on the site. For this we use Pillow (the fork of the Python Imaging Library), running in a Celery task. We have seen the time for this operation change from a fraction of a second to half a minute or more. My best guess for why is that we are now often getting memory-quota (R14) conditions (just because the application is bigger), which I would naïvely expect to make resizing particularly slow. So I am considering refactoring the tasks to use an external ImageMagick process to do the processing rather than in-memory PIL. The thinking is that this will at least guarantee that memory used during resizing is released when the convert process terminates. So my question is, is this going to help? Is ImageMagick’s convert going to have a smaller memory footprint than Pillow?
Django Haystack Elastic Search maximum value for indexable CharField
24,213,412
0
0
169
0
python,django,elasticsearch,full-text-search,django-haystack
The problem is that there was not enough memory. Upgrading from 2GB to 4GB in my VM fixed the problem.
0
0
0
0
2014-06-11T10:11:00.000
1
1.2
true
24,160,264
0
0
1
1
I was wondering what the maximum value is for an indexable CharField in Django Haystack with Elasticsearch? I am asking this because I am getting a timeout when I try to index a specific CharField that has a size of at least 736,166 characters. This seems pretty big to me, but is there a way for me to avoid that timeout, or am I not supposed to be using fields this big? Thanks in advance.
Django call_command and popen. When to use what?
24,169,550
2
0
437
0
python,django,django-views,subprocess,popen
You don't say what "command" you want to call. call_command is for calling Django management commands only - e.g. syncdb, runserver, or any custom management commands. It calls the command directly, without shelling out to the system. popen and the various functions in subprocess etc. are for shelling out to call any other executable file.
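A short sketch of the two cases side by side (the command names are just examples):

```python
# Django management command, executed in-process:
from django.core.management import call_command
call_command("syncdb", interactive=False)

# Any other executable, run as a separate OS process:
import subprocess
proc = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
output, _ = proc.communicate()
```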
0
0
0
0
2014-06-11T16:37:00.000
1
1.2
true
24,168,152
0
0
1
1
I want to start a command from django management/commands from a Django view, but I don't know which one to use. Can someone explain the properties and differences of both? Thanks in advance.
http POSTs over a certain size failing when authentication is enabled
24,169,289
2
0
119
0
python,json,authentication,flask,werkzeug
It goes as follows: your client POSTs JSON data without authentication; the server receives the request (not necessarily in one long chunk - it might come in parts); the server evaluates the request, finds it is not providing credentials, and so decides to stop processing the request and replies 401. With a short POST the server consumes it all and does not have time to break off the POST request in the middle. With growing POST size, the chance of the server interrupting the unauthorized POST request mid-stream is higher. Your client has two options: either start sending credentials right away, or catch the broken pipe and react to it by forming a proper Digest-based request. The first feeling is that something is broken, but it is actually a reasonable approach - imagine, someone could post a huge POST request, consuming resources on your server while not being authorized to do so. The reaction of the server seems reasonable in this context.
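For the first option, a sketch with the requests library, which knows how to complete the Digest challenge/response for you (the URL and credentials are placeholders):

```python
import json
import requests
from requests.auth import HTTPDigestAuth

payload = {"key": "value"}   # stand-in for the real JSON body
resp = requests.post(
    "http://example.com/endpoint",
    data=json.dumps(payload),
    headers={"Content-Type": "application/json"},
    auth=HTTPDigestAuth("user", "secret"),
)
resp.raise_for_status()
```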
0
0
0
0
2014-06-11T17:27:00.000
1
0.379949
false
24,169,037
0
0
1
1
I've developed a fairly simple web service using Flask (Python 2.7, current Flask and dependencies), where clients POST a hunk of JSON to the server and get a response. This is working 100% of the time when no authentication is enabled; straight up POST to my service works great. Adding HTTP Digest authentication to the endpoint results in the client producing a 'Broken Pipe' error after the 401 - Authentication Required response is sent back... but only if the JSON hunk is more than about 22k. If the JSON hunk being transmitted in the POST is under ~22k, the client gets its 401 response and cheerfully replies with authentication data as expected. I'm not sure exactly what the size cut-off is... the largest I've tested successfully with is 21766 bytes, and the smallest that's failed is 43846 bytes. You'll note that 32k is right in that range, and 32k might be a nice default size for a buffer... and this smells like a buffer size problem. The problem has been observed using a Python client (built with the 'requests' module) and a C++ client (using one of Qt's HTTP client classes). The problem is also observed both when running the Flask app "stand-alone" (that is, via app.run()) and when running behind Apache via mod_wsgi. No SSL is enabled in either case.
Flask can't find directory.. returns a 404?
24,183,349
0
0
212
0
python,flash,flask,client,directory
Flask doesn't produce directory views; that's something generic file-serving HTTP servers like Apache and nginx can do (if so configured) but it is not functionality that Flask offers out of the box. You'd have to code this yourself.
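A minimal hand-rolled directory view might look like the sketch below (the route name is arbitrary, and it does no access control, so treat it as a starting point only):

```python
import os
from flask import Flask

app = Flask(__name__)

@app.route("/game/")
def game_index():
    # Flask will not list directories for you, so build the listing by hand.
    game_dir = os.path.join(app.static_folder, "game")
    files = sorted(os.listdir(game_dir))
    links = ['<a href="/static/game/%s">%s</a>' % (name, name) for name in files]
    return "<br>".join(links)
```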
0
0
0
0
2014-06-12T04:04:00.000
1
0
false
24,176,266
0
0
1
1
Directory Structure:

static/
    game/
        something1.swf
        something2.swf
        something3.swf
templates/
    something.html
main.py

I have a small web game application that interacts with Flash .SWF files to show the images and everything back to the website. If I call ./static/game/something1.swf, it loads. It loads any other file that is being called specifically, however my application needs to call the whole directory in general for whatever reason. If I call ./static/game/ or ./static/game, for some reason, Flask returns a 404 error. It's like the directory does not exist at all to Flask. My hypothesis is that Flask sees game as a file, when it isn't.
Show Django errors on console rather than Web Browser
24,186,117
0
1
2,805
0
python,django,rest,extjs
In settings.py, set DEBUG = True to make Django include the full error information in the web response. With DEBUG = False you have to print the error to the console manually using print, or export it to a log with a logging tool.
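With DEBUG = False, one way to get tracebacks on the development server console is a LOGGING configuration along these lines in settings.py (a minimal sketch; tune handlers and levels to taste):

```python
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        # Unhandled exceptions in views are logged under django.request.
        "django.request": {
            "handlers": ["console"],
            "level": "ERROR",
            "propagate": True,
        },
    },
}
```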
0
0
0
0
2014-06-12T13:33:00.000
3
0
false
24,185,800
0
0
1
1
I'm developing a REST application using ExtJS4, and using Django 1.6.5 to create a simple mock API for it; for now I only want to save some data in the SQLite db and output some other data to the console. While testing the GET methods is fairly simple, when I have a problem with POST, PUT and DELETE I can't see the error returned by Django in the browser. Is there any way to make these errors show up in Django's development server console instead?
Using Google App Engine to update files on Google Compute Engine
24,194,583
0
0
273
0
python,google-app-engine,file-transfer,google-compute-engine
The most straightforward approach seems to be: a user submits a form on the App Engine instance; the App Engine instance makes a POST call to a handler on the GCE instance with the new data; the GCE instance updates its own file and processes it.
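On the App Engine side, the second step could be a urlfetch call along these lines (the GCE address and handler path are placeholders):

```python
from google.appengine.api import urlfetch

def push_to_gce(user_text):
    # POST the submitted text to a handler running on the GCE instance.
    return urlfetch.fetch(
        url="http://203.0.113.10/update-file",  # hypothetical GCE endpoint
        payload=user_text,
        method=urlfetch.POST,
    )
```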
0
1
0
0
2014-06-12T21:28:00.000
4
0
false
24,194,217
0
0
1
2
I am working on a project that involves using an Google App Engine (GAE) server to control one or more Google Compute Engine (GCE) instances. The rest of the project is working well, but I am having a problem with one specific aspect: file management. I want my GAE to edit a file on my GCE instance, and after days of research I have come up blank on how to do that. The most straightforward example of this is: Step 1) User enters text into a GAE form. Step 2) User clicks a button to indicate they would like to "submit" the text to GCE Step 3) GAE replaces the contents of a particular (hard-coded path) text file on the GCE with the user's new content. Step 4) (bonus step) GCE notices that the file has changed (either by detecting a change or by way of GAE alerting it when the new content is pushed) and runs a script to process the new file. I understand that this is easy to do using SCP or other terminal commands. I have already done that, and that works fine. What I need is a way for GAE to send that content directly, without my intervention. I have full access to all instances of GAE and GCE involved in this project, and can set up whatever code is needed on either of the platforms. Thank you in advance for your help!
Using Google App Engine to update files on Google Compute Engine
24,215,374
0
0
273
0
python,google-app-engine,file-transfer,google-compute-engine
You can set an action URL in your form to point to the GCE instance (it can be load-balanced if you have more than one). Then all data will be uploaded directly to the GCE instance, and you don't have to worry about transferring data from your App Engine instance to GCE instance.
0
1
0
0
2014-06-12T21:28:00.000
4
0
false
24,194,217
0
0
1
2
I am working on a project that involves using an Google App Engine (GAE) server to control one or more Google Compute Engine (GCE) instances. The rest of the project is working well, but I am having a problem with one specific aspect: file management. I want my GAE to edit a file on my GCE instance, and after days of research I have come up blank on how to do that. The most straightforward example of this is: Step 1) User enters text into a GAE form. Step 2) User clicks a button to indicate they would like to "submit" the text to GCE Step 3) GAE replaces the contents of a particular (hard-coded path) text file on the GCE with the user's new content. Step 4) (bonus step) GCE notices that the file has changed (either by detecting a change or by way of GAE alerting it when the new content is pushed) and runs a script to process the new file. I understand that this is easy to do using SCP or other terminal commands. I have already done that, and that works fine. What I need is a way for GAE to send that content directly, without my intervention. I have full access to all instances of GAE and GCE involved in this project, and can set up whatever code is needed on either of the platforms. Thank you in advance for your help!
Avoid Python CGI browser timeout
24,201,579
1
0
678
0
python,sqlalchemy
You will have to set the timeout in the HTTP server's (Apache, for example) configuration. The default should be more than 120 seconds, if I remember correctly.
0
0
0
1
2014-06-13T09:01:00.000
1
0.197375
false
24,201,497
0
0
1
1
I have a Python CGI that I use along with SQLAlchemy, to get data from a database, process it and return the result in Json format to my webpage. The problem is that this process takes about 2mins to complete, and the browsers return a time out after 20 or 30 seconds of script execution. Is there a way in Python (maybe a library?) or an idea of design that can help me let the script execute completely ? Thanks!
What's the proper Tornado response for a log in success?
24,232,201
1
1
246
0
python,ios,tornado
You can send your response with either self.write() or self.finish(). (The main difference is that with write() you can assemble your response in several pieces, while finish() can only be called once. You also have to call finish() once if you're using asynchronous functions that are not coroutines, but in most cases it is done automatically.) As for what to send, it doesn't really matter if it's a non-browser application that only looks at the status code, but I generally send an empty JSON dictionary for cases like this so there is a well-defined space for future expansion.
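A minimal sketch of that pattern in a Tornado handler (check_credentials is a hypothetical database lookup); writing a dict makes Tornado serialize it as JSON and set the Content-Type header for you:

```python
import tornado.web

class LoginHandler(tornado.web.RequestHandler):
    def post(self):
        username = self.get_argument("username")
        password = self.get_argument("password")
        if not check_credentials(username, password):  # hypothetical MySQL check
            raise tornado.web.HTTPError(401)
        self.write({})  # empty JSON dict, sent with status 200
```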
0
1
0
0
2014-06-14T21:38:00.000
1
1.2
true
24,224,539
0
0
1
1
So far I have a pretty basic server (I haven't built in any security features yet, like cookie authentication). What I've got so far is an iOS app where you enter a username and password and those arguments are plugged into a URL and passed to a server. The server checks to see if the username is in the database and then sends a confirmation to the app. Pretty basic but what I can't figure out is what the confirmation should look like? The server is a Python Tornado server with a MySQL dbms.. What I'm unsure of is what Tornado should/can send in response? Do I use self.write or self.response or self.render? I don't think it's self.render because I'm not rendering an HTML file, I'm just sending the native iOS app a confirmation response which, once received by the app, will prompt it to load the next View Controller. After a lot of googling I can't seem to find the answer (probably because I don't know how to word the question correctly). I'm new to servers so I appreciate your patience.
How to use Google Cloud Datastore Statistics
24,227,785
1
0
68
1
python,google-app-engine,google-cloud-datastore
You can't, that's not what it's for at all. It's only for very broad-grained statistics about the number of each types in the datastore. It'll give you a rough estimate of how many Person objects there are in total, that's all.
0
1
0
0
2014-06-15T07:34:00.000
2
0.099668
false
24,227,510
0
0
1
1
How can I use Google Cloud Datastore stats object (in Python ) to get the number of entities of one kind (i.e. Person) in my database satisfying a given constraint (i.e. age>20)?
Does Google App Engine support Python 3?
45,396,993
1
50
20,010
0
python,google-app-engine
YES! Google App Engine supports Python 3 - you need to set up a flexible environment. I got a chance to deploy my application on App Engine, and it's using the Python 3.6 runtime and works smoothly... :)
0
1
0
0
2014-06-15T11:37:00.000
7
0.028564
false
24,229,203
1
0
1
1
I started learning Python 3.4 and would like to start using libraries as well as Google App Engine, but the majority of Python libraries only support Python 2.7 and the same with Google App Engine. Should I learn 2.7 instead or is there an easier way? (Is it possible to have 2 Python versions on my machine at the same time?)
Caching in Django: Redis + Django & Varnish
24,230,332
2
0
739
0
python,django,caching,varnish
Varnish is a caching HTTP reverse proxy. It always sits in front of your server. Redis, however, is a key-value store. So they are not located at the same level. For me, I use Redis to store built objects and the results of DB queries, and Varnish for static pages. (Don't cache your dynamic content with Varnish - this will cause a lot of trouble.)
0
0
0
0
2014-06-15T13:38:00.000
1
1.2
true
24,230,147
0
0
1
1
I've read up on some documentation and had a few questions. I am aware we can use redis as a cache backend for Django. We can then use the decorators in Django's cache framework to cache certain views. I understand to this point but I've learned about a HTTP accelerator called Varnish. How does Varnish work if used with redis + django cache? What is the difference between Varnish and Django + redis cache using in the in-built cache framework? Can these two things work side by side because having a web accelerator sounds really good actually?
Django large variable storage
24,231,580
4
2
118
0
python,django,python-2.7,networkx
You could simply install it as a global variable. Call the function that loads it in a module-level context and import that module when you need it (or use a singleton pattern that loads it on first access, but it's basically the same thing). You should never use a global variable in a webapp if you expect to alter the contents on the fly, but for static content there's nothing wrong with them. Just be aware that if you put the import inside a function, then that import will run for the first time when that function is run, which means that the first time someone accesses a specific server after reboot they'll have to wait for the data to load. If you instead put the import in a module-level context so that it's loaded on app start, then your app will take four seconds (or whatever) longer to start in the first place. You'll have to pick one of those two performance hits -- the latter is probably kinder to users.
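A sketch of the lazy module-level variant with networkx (the file name and edge-list format are assumptions):

```python
# graph_store.py
import networkx as nx

_GRAPH = None

def get_graph():
    """Parse the edge list once per process and reuse it afterwards."""
    global _GRAPH
    if _GRAPH is None:  # lazy singleton: pay the ~4s cost on first access only
        _GRAPH = nx.read_edgelist("edges.txt")  # hypothetical source file
    return _GRAPH
```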
0
0
1
0
2014-06-15T16:05:00.000
1
1.2
true
24,231,504
0
0
1
1
The scenario: I have a NetworkX network with around 120,000 edges which I need to query each time a user requests a page or clicks something on the page, so a lot of calls. I could load and parse the network on each call, but that would be a waste of time, as it would take around 4 seconds each time (excluding the querying). I was hoping I could store this network object (which is static) somewhere globally and just query it when needed, but I can't find an easy way to do so. Putting all the edges in a DB is not an option, as it doesn't eliminate the time needed for parsing.
Flask send large csv to angularJS frontend to download
24,302,497
0
0
540
0
python,angularjs,flask
You should give the user just a link to download this data, and do the conversion to CSV on your backend. Don't do it on the frontend, because you shouldn't be doing post-processing work there. Make a URL for your query view with a '/csv' postfix and write a view for it.
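A sketch of such a view in Flask (the URL, column names and query helper are placeholders); streaming the rows keeps the whole CSV out of memory:

```python
from flask import Flask, Response

app = Flask(__name__)

@app.route("/query/csv")
def query_csv():
    def generate():
        yield "col_a,col_b\n"
        for a, b in run_query():          # hypothetical query function
            yield "%s,%s\n" % (a, b)
    # Content-Disposition prompts the browser to download rather than render.
    headers = {"Content-Disposition": "attachment; filename=export.csv"}
    return Response(generate(), mimetype="text/csv", headers=headers)
```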
0
0
0
0
2014-06-16T16:03:00.000
1
1.2
true
24,247,829
0
0
1
1
I have a Flask application performs an API query to a database, which returns a typically large JSON file (around 20-50MB). My intentions are to convert this JSON response into csv data, and return this back to the frontend, which should then prompt the user to download the file. The conversion part I have handled, but what is the best way to transfer this CSV file to the client for download? Should I stream it to avoid any memory overload on the browser? Any advice would be appreciated.
Using Django as server (backend) for Titanium mobile application?
24,252,267
0
0
222
0
android,python,django,twitter-bootstrap,titanium
Titanium is a platform in which you create mobile apps using JavaScript as your main language. However, it doesn't use HTML and CSS to render the UI of your app. You create the various view objects by calling Ti.UI.create* methods, which Titanium translates to native objects. This way the look & feel of your app will be as close as possible to native apps. If you really want to use Bootstrap to style your mobile app, you should take a look at PhoneGap, or just create a responsive layout which is accessible through a mobile browser.
0
0
0
0
2014-06-16T18:07:00.000
1
0
false
24,249,626
0
0
1
1
I am working on building a web interface, Android and iOS applications for the same application. I have my web interface in Django. There is a multi agent system built using Python in the backend, to which I send and receive messages using zeromq from the django application. The Django application uses bootstrap for the frontend. I am looking at Titanium to build the mobile applications. Is there a way to use Titanium for the front end, and use Django as my server for the mobile applications as well? Also, I would like to know if I can use the same bootstrap theme I use for the web interface, in my Titanium project as well? I am new to Titanium, and I am just reading documents now just to get an idea of how it works. This could be a naive question, but I am total newbie and would like to get this information from this forum.
Copying Django project to root server is not working
24,258,210
0
0
103
0
python,django,deployment
I know some of these tips may be obvious, but you never know:
Did you update all your settings in settings.py? (paths to static files, path to project...)
Which server are you using? The Django dev server? Apache? nginx?
Do you have permissions on all files in the project? You should check that the owner of the files is your user, not root. If the owner is root you'll have this permissions problem with every file that is owned by root.
Are you using uwsgi?
Have you installed all the apps you had in your VM?
Have you installed the SAME versions you had in your VM?
When I move a project from a VM to a real server I go over these steps:
Review settings.py and update paths
Check permissions in the folders the web server may use
I keep a list of the packages and versions in a txt file, let's call it packages.txt
I install all those packages using pip install -r packages.txt
I always use apache/nginx, so I have to update the virtualhost to the new paths
If I'm using uwsgi, update the uwsgi settings
To downgrade some pip packages you may need to delete the egg files, because if you uninstall a package and reinstall it, even though you're using pip install package==VERSION, if you have a package already downloaded, pip will install that one, even if the VERSION is different. To check the actual versions of pip packages use pip freeze. To export all pip packages to a file, to import them in another place: pip freeze > packages.txt, and to install packages from this file: pip install -r packages.txt
0
1
0
0
2014-06-17T01:00:00.000
2
0
false
24,254,300
0
0
1
1
I hope you can help me. I have been building this webshop for the company I work for with Django and Lightning Fast Shop. It's basically finished now and I have been running it off a virtual Ubuntu machine on my PC. Since it got annoying leaving my PC on the entire time so others could access the site, I wanted to deploy it on a root server. So I got a JiffyBox and installed Ubuntu on it. I managed to get Gnome working on it and to connect to it with VNC. I then uploaded my finished project via FTP to the server. Now I thought I would only need to download Django-LFS, create a new project and replace the project files with my finished ones. This worked when I tested it on my virtual machine. To my disappointment it did not work on the root server. When I tried running "bin/django runserver" I got an error message saying "bash: bin/django: Permission denied", and when I try it with 'sudo' I get "sudo: bin/django: command not found". I then realized that I had downloaded a newer version of Django-LFS and tried it with the same version, to no avail. I am starting to get really frustrated and would appreciate it very much if somebody could help me with my problem. Greetings, Krytos.
DistutilsOptionError: must supply either home or prefix/exec-prefix -- not both
37,644,135
12
147
56,894
0
python,google-app-engine,installation,pip,distutils
Another solution* for Homebrew users is simply to use a virtualenv. Of course, that may remove the need for the target directory anyway - but even if it doesn't, I've found --target works by default (as in, without creating/modifying a config file) when in a virtual environment. *I say solution; perhaps it's just another motivation to meticulously use venvs...
0
1
0
0
2014-06-17T07:16:00.000
8
1
false
24,257,803
0
0
1
3
I usually install Python packages through pip. For Google App Engine, I need to install packages to a different target directory. I've tried: pip install -I flask-restful --target ./lib but it fails with: must supply either home or prefix/exec-prefix -- not both. How can I get this to work?
DistutilsOptionError: must supply either home or prefix/exec-prefix -- not both
48,954,780
2
147
56,894
0
python,google-app-engine,installation,pip,distutils
If you're using virtualenv*, it might be a good idea to double check which pip you're using. If you see something like /usr/local/bin/pip you've broken out of your environment. Reactivating your virtualenv will fix this: VirtualEnv: $ source bin/activate VirtualFish: $ vf activate [environ] *: I use virtualfish, but I assume this tip is relevant to both.
0
1
0
0
2014-06-17T07:16:00.000
8
0.049958
false
24,257,803
0
0
1
3
I usually install Python packages through pip. For Google App Engine, I need to install packages to a different target directory. I've tried: pip install -I flask-restful --target ./lib but it fails with: must supply either home or prefix/exec-prefix -- not both. How can I get this to work?
DistutilsOptionError: must supply either home or prefix/exec-prefix -- not both
45,668,067
24
147
56,894
0
python,google-app-engine,installation,pip,distutils
On OS X (Mac), assuming a project folder called /var/myproject:

cd /var/myproject

Create a file called setup.cfg and add:

[install]
prefix=

Run pip install <packagename> -t .
0
1
0
0
2014-06-17T07:16:00.000
8
1
false
24,257,803
0
0
1
3
I usually install Python packages through pip. For Google App Engine, I need to install packages to a different target directory. I've tried: pip install -I flask-restful --target ./lib but it fails with: must supply either home or prefix/exec-prefix -- not both. How can I get this to work?
SL4A Scripting on P.C
24,258,557
0
0
621
0
android,python,sl4a,ase
The ADT can make transferring between PC and Android easy. But you could also use a wifi keyboard or some app like that, and write on Android directly. Also, debugging on the PC is a pain: emulated Android is slow and error-prone for third-party tools like SL4A; the best/fastest approach is remote control, while running the script on Android. I use an ssh server on Android (you'll find at least 3 free ones on Play) and Vim from the PC, and it is the fastest and most comfortable way.
0
0
0
0
2014-06-17T07:47:00.000
2
1.2
true
24,258,400
0
0
1
1
Is there a way to write Python scripts for SL4A (ASE) on a PC and then deploy them on the hardware? I'm new to this and I could write the basic scripts on the device, but writing long scripts on the phone is tedious. Is there support for that? Thanks in advance!
Writing Django forms and views
24,264,757
0
0
74
0
django,python-2.7,django-forms,django-views
There is no "right" here. Forms can be defined in either views.py or forms.py (calling it "forms.py" is merely a naming convention that has been widely adopted). It really comes down to personal preference. Larger projects will likely benefit by keeping forms separate (if only to keep things less cluttered).
0
0
0
0
2014-06-17T11:45:00.000
1
1.2
true
24,262,946
0
0
1
1
Is it right to write Django forms inside views.py, or should I keep two separate files, views.py and forms.py? Which is the right convention? I have seen different projects following each of these two conventions.
Can scrapy step through a website by following "next" button
24,275,158
0
0
124
0
python,web-scraping,scrapy
It sure can; there are multiple ways: add an SgmlLinkExtractor to follow that next link, or make a Request in your parse function, like yield Request(url). In your case url = erthdata.---.com/full_record.do?product=UA&search_mode=GeneralSearch&qid=21&SID=d89sduisd&excludeEventConfig=ExcludeIfFromFullRecPage&page=1&doc=4&cacheurlFromRightClick=no
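The second option might look like this sketch (it assumes a reasonably recent Scrapy; the start URL and the CSS selector for the next link are assumptions and need to match the real page):

```python
import scrapy

class ResultsSpider(scrapy.Spider):
    name = "results"
    start_urls = ["http://example.com/search?page=1"]  # placeholder

    def parse(self, response):
        # ... extract/save the 50 entries on this page here ...
        next_href = response.css("a.next::attr(href)").extract_first()
        if next_href:
            # Follow only the "next" link and parse it with this same method.
            yield scrapy.Request(response.urljoin(next_href), callback=self.parse)
```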
0
0
0
0
2014-06-17T16:46:00.000
2
0
false
24,269,245
0
0
1
1
Can scrapy save a web page, follow the next button and save the next web page etc., in a series of search results? It always needs to follow the "next" button and nothing else. This is the url (obfuscated) that links to the next page: erthdata.---.com/full_record.do?product=UA&search_mode=GeneralSearch&qid=21&SID=d89sduisd&excludeEventConfig=ExcludeIfFromFullRecPage&page=1&doc=4&cacheurlFromRightClick=no Thanks John
Handling Cache with Constant Change of Social Network
24,293,403
0
1
83
0
python,django,caching,django-rest-framework
One technique is to key the URLs on the content of the media they refer to. For example, if you're hosting images then use the sha hash of the image file in the URL: /images/<sha>. You can then set far-future cache expiry headers on those URLs. If the image changes then you also update the URL referring to it, and a request is made for an image that is no longer cached. You can use this technique for regular database models as well as images and other media, so long as you recompute the hash of the object whenever any of its fields changes.
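A sketch of the content-keyed URL idea (the URL prefix is arbitrary; sha1 is used here, but any stable hash works):

```python
import hashlib

def media_url(path):
    # A changed file gets a new digest and therefore a new URL, so the old
    # URL can carry far-future cache headers without ever serving stale data.
    with open(path, "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    return "/images/%s" % digest
```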
0
0
0
1
2014-06-17T18:36:00.000
1
0
false
24,271,006
0
0
1
1
Using Python's Django (w/ Rest framework) to create an application similar to Twitter or Instagram. What's the best way to deal with caching content (JSON data, images, etc.) considering the constantly changing nature of a social network? How to still show updated state a user creates a new post, likes/comments on a post, or deletes a post while still caching the content for speedy performance? If the cache is to be flushed/recreated each time a user takes an action, then it's not worth having a cache because the frequency of updates will be too rapid to make the cache useful. What are some techniques of dealing with this problem. Please feel free to share your approach and some wisdom you learned while implementing your solution. Any suggestions would be greatly appreciated. :)
Difference among Mongoengine, flask-MongoEngine and Django-MongoEngine?
24,280,473
1
1
1,114
0
python,django,mongoengine,flask-mongoengine,django-mongodb-engine
Django-MongoEngine's aim is to provide better integration with Django - however, currently (June 2014) it's not stable and the readme says DO NOT CLONE UNTIL STABLE. So beware!
0
0
0
0
2014-06-18T07:19:00.000
3
0.066568
false
24,279,336
0
0
1
2
What are the differences between the Mongoengine, flask-MongoEngine and Django-MongoEngine projects? I am using Mongoengine in my Django project. Will I get any benefits if I use Django-MongoEngine instead?
Difference among Mongoengine, flask-MongoEngine and Django-MongoEngine?
58,571,661
0
1
1,114
0
python,django,mongoengine,flask-mongoengine,django-mongodb-engine
The Django framework provides a unified interface to connect to a database backend, which is usually an SQL-based database such as SQLite or PostgreSQL. That means the developer does not have to worry about writing code specific to the database technology used, but instead defines Models and performs transactions and runs all kinds of queries using the Django database interface. Flask does the same. Django does not support MongoDB out of the box. To interact with MongoDB databases, collections and documents using Python, one would use the PyMongo package, which has different syntax and paradigms than Django models or Flask's. MongoEngine wraps PyMongo in a way that provides a Django-like database interface for MongoDB. Django-MongoEngine tries to let Django web-app developers use a Mongo database as the web-app backend, to provide the Django admin, users, authentication and other database-related features that are usually available in Django with an SQL backend. Flask-MongoEngine tries to let Flask web-app developers use a Mongo database as the web-app backend. Personally, I prefer using a structured SQL database for the web-app essentials, and PyMongo or MongoEngine to interface with any further Mongo databases where unstructured big data might reside...
0
0
0
0
2014-06-18T07:19:00.000
3
0
false
24,279,336
0
0
1
2
What are the differences between the Mongoengine, flask-MongoEngine and Django-MongoEngine projects? I am using Mongoengine in my Django project. Will I get any benefits if I use Django-MongoEngine instead?
How do I convert RESTful POST call to Ajax in Tornado?
48,689,932
-1
0
168
0
javascript,python,ajax,rest,tornado
Use a URL pattern with a capture group, such as /add/(\d+) for numeric values (a name segment would need something broader, like ([^/]+)). Then make a post function with def post(self, id): - the argument id receives the value captured by the group in the URL pattern. Hope it's helpful.
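A sketch of how the pieces fit together in Tornado (the handler and pattern are illustrative only):

```python
import tornado.web

class AddHandler(tornado.web.RequestHandler):
    def post(self, name):
        # `name` receives whatever the regex group captured from the URL.
        self.write({"added": name})

application = tornado.web.Application([
    (r"/add/([^/]+)", AddHandler),   # the group becomes the post() argument
])
```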
0
0
0
0
2014-06-18T12:57:00.000
1
-0.197375
false
24,286,218
0
0
1
1
I am learning Tornado and my app does just following: localhost:8000/add/name : adds name to the database localhost:8000/delete/name: deletes name from database As of now I type in browser address bar /add/name and manually adding names. How do I make use of HTML forms for this request? Is this the right way: I create a field box with a id, using JS I get the value from that id, construct the RESTful POST url and on clicking submit, it goes to the constructed url. Now I want to turn above thing to AJAX call so that there is no page refresh. All the examples I found uses form where it sends the 'value' as request parameter not as RESTful. Any help regarding this is appreciated. Thank you! PS: I know I can use get_argument in Tornado and get the value. But I want this in REST, sending the value in URL.
Low level file processing in ruby/python
24,299,151
0
0
162
0
python,r,dataset,fortran,data-processing
Is the file human-readable text or in the native format of the computer (sometimes called binary)? If the files are text, you could reduce the processing load and file size by switching to native format. Converting from the internal representation of floating-point numbers to human-readable numbers is CPU intensive. If the files are in native format then it should be easy to skip through the file, since each record will be 16 bytes. In Fortran, open the file with an open statement that includes form="unformatted", access="direct", recl=16. Then you can read an arbitrary record X without reading intervening records via rec=X in the read statement. If the file is text, you can also read it with direct IO, but it might not be the case that each pair of numbers always uses the same number of characters (bytes). You can examine your files and answer that question. If the records are always the same length, then you can use the same technique, just with form="formatted". If the records vary in length, then you could read a large chunk and locate your numbers within the chunk.
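In Python (which the asker prefers), the native-format case is a seek per record; a sketch assuming two little-endian 64-bit doubles per record and a hypothetical file name:

```python
import struct

RECORD = struct.Struct("<dd")   # two 64-bit doubles = 16 bytes per record
STEP = 1000                     # keep every 1000th record

with open("data.bin", "rb") as f:
    f.seek(0, 2)                             # seek to end to measure the file
    n_records = f.tell() // RECORD.size
    for i in range(0, n_records, STEP):
        f.seek(i * RECORD.size)              # jump straight to record i
        x, y = RECORD.unpack(f.read(RECORD.size))
```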
0
0
0
1
2014-06-18T20:21:00.000
3
0
false
24,294,371
0
0
1
1
So I hope this question hasn't already been answered, but I can't seem to figure out the right search term. First some background: I have text data files that are tabular and can easily climb into the tens of GBs. The computer processing them is already heavily loaded from the hours-long data collection (at up to 30-50 MB/s), as it is doing device processing and control. Therefore, disk space and access are at a premium. We haven't moved from spinning disks to SSDs due to space constraints. However, we are looking to do something with the just-collected data that doesn't need every data point. We were hoping to decimate the data and collect every 1000th point. However, loading these files (gigabytes each) puts a huge load on the disk, which is unacceptable as it could interrupt the live collection system. I was wondering if it was possible to use a low-level method to access every nth byte (or some other method) in the file (like a database does), because the file is very well defined (two 64-bit doubles in each row). I understand too-low-level access might not work because the hard drive might be fragmented, but what would the best approach/method be? I'd prefer a solution in Python or Ruby because that's what the processing will be done in, but in theory R, C, or Fortran could also work. Finally, upgrading the computer or hardware isn't an option; setting up the system took hundreds of man-hours, so only software changes can be performed. However, it would be a longer-term project, but if a text file isn't the best way to handle these files, I'm open to other solutions too. EDIT: We generate (depending on usage) anywhere from 50,000 lines (records)/sec to 5 million lines/sec; databases aren't feasible at this rate regardless.
How to select "Load more results" button when scraping using Python & lxml
24,305,212
4
3
2,379
0
python,web-scraping,lxml
Even JavaScript uses HTTP requests to get the data, so one method would be to investigate which requests provide the data when the user asks to "Load more results", and to emulate those requests. This is not traditional scraping, which is based on plain or rendered HTML content and detecting further links, but it can be a working solution. Next actions: visit the page in Google Chrome or Firefox; press F12 to start up Developer Tools or Firebug; switch to the "Network" tab; click "Load more results"; check which HTTP requests served the data for loading more results and what data they return; try to emulate these requests from Python. Note that the data does not necessarily come in HTML or XML form - it could be JSON. But Python provides enough tools to process that format too.
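Once the request behind the button is identified, emulating it from Python is usually a few lines with the requests library (the URL and parameters below are placeholders for whatever the Network tab shows):

```python
import requests

resp = requests.get(
    "http://example.com/api/results",        # hypothetical endpoint
    params={"offset": 50, "limit": 50},      # hypothetical paging parameters
)
data = resp.json()   # such endpoints frequently return JSON rather than HTML
```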
0
0
1
0
2014-06-19T10:42:00.000
2
1.2
true
24,304,640
0
0
1
2
I am scraping a webpage. The webpage consists of 50 entries. After 50 entries it shows a "Load more results" button. I need to select it automatically. How can I do that? For scraping I am using Python and lxml.
How to select "Load more results" button when scraping using Python & lxml
24,304,877
1
3
2,379
0
python,web-scraping,lxml
You can't do that. The functionality is provided by javascript, which lxml will not execute.
0
0
1
0
2014-06-19T10:42:00.000
2
0.099668
false
24,304,640
0
0
1
2
I am scraping a webpage. The webpage consists of 50 entries. After 50 entries it shows a "Load more results" button. I need to select it automatically. How can I do that? For scraping I am using Python and lxml.
Java - audio to values/variables?
24,438,142
0
0
107
0
java,python,audio
Yes, it's possible to get the actual audio samples from the audio; this is a very common operation and I'm sure you can do it in many languages. A good audio library to use in C# (.NET) is the NAudio library; it has many features and it is relatively easy to use.
0
0
0
0
2014-06-19T17:25:00.000
1
0
false
24,312,753
0
0
1
1
I'm generally looking for any language in which I can do this, be it Java/Python/.NET. I'm looking to programmatically convert audio to values. I know it's possible to render the waveform of audio using Java. Can I convert the audio to values? For example, the part in the song with the highest amplitude would have the greatest value in this array.
Move package with migrations in django
24,321,591
1
0
78
0
python,django,migration,django-south
As far as I know there is no automatic way to do that, so you'll have to do the following by hand:
Move your package to the new place
Reflect this change in your settings.py INSTALLED_APPS
In all the migration files of your package, edit the module path, table names and complete_apps list
In your database table south_migrationhistory, edit the app_name column
Rename the app table(s) in the database
That's all. To check that everything is working properly you can type python manage.py schemamigration your_new_app_name --auto, and if you did everything properly it will say that nothing has changed. Now you can continue working with your app as usual.
0
0
0
0
2014-06-20T05:27:00.000
2
1.2
true
24,320,514
0
0
1
2
How do I move a package from one place to another in Django (1.4) with South? The package has applied migrations.
Move package with migrations in django
24,322,566
0
0
78
0
python,django,migration,django-south
Another solution is to just move the package to another namespace/place but not change the package name.
0
0
0
0
2014-06-20T05:27:00.000
2
0
false
24,320,514
0
0
1
2
How do I move a package from one place to another in Django (1.4) with South? The package has applied migrations.
Only one user cannot log into an app via Google ID Authentication
24,454,976
2
2
108
0
python,django,oauth
If it's only one user I'd say it's fairly safe to assume the problem has something to do with that user's credentials. It's hard to say without an error log but if it were me I'd first check to make sure the information the user is entering is the same as what oauth is expecting. Good luck and hope this helps!
0
0
0
0
2014-06-20T07:39:00.000
2
0.197375
false
24,322,264
0
0
1
2
At work we run a python application where users log in via their google account. One user gets an "Error logging in" message on any instance, this doesn't replicate on any other instance. The app was made by a third party and they can't tell us why this happens. Is there a debugging tool or something that comes with Google auth that could be used to trace where the failure is happening? Thanks in advance. If any more technical details are needed please let me know. I'm not very familiar with how all this works.
Only one user cannot log into an app via Google ID Authentication
24,472,076
0
2
108
0
python,django,oauth
Worked this out. Whoever set up the user's ID originally had a capital letter in the ID - but not in the email address, so this wasn't showing up anywhere.
0
0
0
0
2014-06-20T07:39:00.000
2
1.2
true
24,322,264
0
0
1
2
At work we run a python application where users log in via their google account. One user gets an "Error logging in" message on any instance, this doesn't replicate on any other instance. The app was made by a third party and they can't tell us why this happens. Is there a debugging tool or something that comes with Google auth that could be used to trace where the failure is happening? Thanks in advance. If any more technical details are needed please let me know. I'm not very familiar with how all this works.
Retrieving AMQP routing key information using pika
41,400,921
2
5
3,298
0
python,amqp,pika
I would like to write the answer down because this question predates the documentation on Google. The consumer is set up like this:

def amqmessage(ch, method, properties, body):
    print(method.routing_key)  # the routing key is available here

channel.basic_consume(amqmessage, queue=queue_name, no_ack=True)
channel.start_consuming()

The routing key can be found with: method.routing_key
0
0
0
1
2014-06-20T18:22:00.000
1
0.379949
false
24,333,423
0
0
1
1
New to RabbitMQ and I am trying to determine a way in which to retrieve the routing key information of an AMQP message. Has anyone really tried this before? I am not finding a lot of documentation that explicitly states how to query AMQP using pika (Python). This is what I am trying to do: basically I have a Consumer class, for example:

channel.exchange_declare(exchange='test', type='topic')
channel.queue_declare(queue='topic_queue', auto_delete=True)
channel.queue_bind(queue='topic_queue', exchange='test', routing_key='#')

I set up a queue and I bind to an exchange and all the routing_keys (or binding keys I suppose) being passed through that exchange. I also have a function:

def amqmessage(ch, method, properties, body):
    channel.basic_consume(amqmessage, queue=queue_name, no_ack=True)
    channel.start_consuming()

I think that the routing_key should be "method.routing_key" from the amqmessage function but I am not certain how to get it to work correctly.
Python/Flask: Application is running after closing
24,350,506
0
3
3,981
0
python,eclipse,web-applications,flask,pydev
I've had a very similar thing happen to me. I was using CherryPy rather than Flask, but my solution might still work for you. Oftentimes browsers save webpages locally so that they don't have to re-download them every time the website is visited. This is called caching, and although it's very useful for the average web user, it can be a real pain to app developers. If you're frequently generating new versions of the application, it's possible that your browser is displaying an old version of the app that it has cached instead of the most up to date version. I recommend clearing that cache every time you restart your application, or disabling the cache altogether.
0
0
0
0
2014-06-22T08:07:00.000
3
0
false
24,349,335
0
0
1
1
I'm working on a simple Flask web application. I use Eclipse/PyDev. When I'm working on the app, I have to restart it very often because of code changes. And that's the problem. When I run the app, I can see the page on my localhost, which is good. But when I want to close the app, I just click on the red square which should stop applications in Eclipse; sometimes (often), the old version of the application keeps running so I can't test the new version. In this case the only thing that helps is to force-close every process in the Windows Task Manager. Will you give me any advice on how to manage this problem? Thank you in advance. EDIT: This may help: many times, I have to run the app twice, otherwise I can't connect.
How do i get started with Amazon Web Services for this scenario?
24,373,021
0
0
186
1
java,python,amazon-web-services,amazon-ec2
First-time:
Create a Postgres db - depending on size (small or large), you might want RDS or Redshift
Connect to the Amazon server - EC2
Download code to the server - upload your programs to an S3 bucket
Once a month:
Download the large data file to the server - move the data to S3; if using Redshift, data can be loaded directly from S3 to Redshift
Run code (written in Python) to load the database with data
Run code (written in Java) to create a Lucene search index file from data in the database - you might want to look into EMR for this
Continuously:
Run Java code in a servlet container; this will use the Lucene search index file but DOES NOT require access to the database - if you have a Java WAR file, you can host this using Elastic Beanstalk
In order to connect to your database, you must make sure the security group allows for this connection, and for an EC2 instance you must make sure port 22 is open to your IP to connect to it. It sounds like the security group for RDS isn't opening up port 3306.
0
0
0
0
2014-06-23T13:40:00.000
1
1.2
true
24,367,485
0
0
1
1
I'm used to having a remote server I can use via ssh, but I am looking at using Amazon Web Services for a new project to give me better performance and resilience at reduced cost, and I'm struggling to understand how to use it. This is what I want to do:
First-time: create a Postgres db; connect to the Amazon server; download code to the server.
Once a month: download a large data file to the server; run code (written in Python) to load the database with data; run code (written in Java) to create a Lucene search index file from the data in the database.
Continuously: run Java code in a servlet container; this will use the Lucene search index file but DOES NOT require access to the database.
Note: technically I could do the database population locally; the trouble is the resultant Lucene index file is about 5GB and I don't have a good enough Internet connection to upload a file of that size to Amazon.
All that I have managed to do so far is create a Postgres database, but I don't understand how to connect to it or get an ssh/telnet connection to my server (I requested a Direct Connect but this seems to be a different service).
Update so far, FYI: I created a Postgres database using RDS; I created an Ubuntu Linux installation using EC2; I connected to the Linux installation using ssh; I installed the required software (using apt-get); I downloaded the data file to my Linux installation. According to the instructions I should be able to connect to my Postgres db from my EC2 instance and even from my local machine, however in both cases it just times out.
Update 2: Probably security related, but I cannot for the life of me understand what I'm meant to do with security groups and why they don't make the EC2 instance able to talk to my database by default. I've checked that both RDS and EC2 have the same VPC id, and both are in the same availability zone. Postgres is using port 5432 (not 3306) but I haven't been able to access it yet. So, taking my working EC2 instance as the starting point, should I create a new security group before creating a database, and if so what values do I need to put into it so I can access the db with psql from within my EC2 ssh session? That's all that is holding me up for now, and all I need to do.
Update 3: At last I have access to my database. My database had three security groups (I think the other two were created when I created a new EC2 instance). I removed two of them, and in the remaining one, on the inbound tab, I set the rule to: All Traffic, Ports 0-65535, Protocol All, IP address 0.0.0.0/0. (The outbound tab already had the same rule.) And it worked! I realize this is not the most secure setup, but at least it's progress. I assume that to only allow access from my EC2 instance I can change the IP address of the inbound rule, but I don't know how to calculate the CIDR for that IP address. My new problem is that, having successfully downloaded my data file to my EC2 instance, I am unable to unzip it because I do not have enough disk space. I assume I have to use S3. I've created a bucket, but how do I make it visible as disk space from my EC2 instance so I can: move my data file to it; unzip the data file into it; run my code against the unzipped data file to load the database. (Note the data file is in an XML format and has to be processed with custom code to get it into the database; it cannot just be loaded directly into the database using some generic tool.)
Update 4: S3 is the wrong solution for me. Instead I can use EBS, which is basically disk storage accessible not as a service but by clicking Volumes in the EC2 console. Ensure you create the volume in the same availability zone as the instance; there may be more than one in each location - for example my EC2 instance was created in eu-west-1a but the first time I created a volume it was in eu-west-1b and therefore could not be used. Then attach the volume to the instance. But I cannot see the volume from the Linux command line; it seems something else is required.
Update 5: Okay, you have to format the disk and mount it in Linux for it to work. I now have my code for uploading the data to the database working, but it is running incredibly slowly - much slower than my cheap local server at home. I'm guessing that because the data is being loaded one record at a time, the bottleneck is not the micro database but my micro instance; it looks like I need to redo this with a more expensive instance.
Update 6: Upgraded to a large compute instance - still very slow. I'm now thinking the issue is the network latency between the server and the database; perhaps I need to install a Postgres server directly onto my instance to cut that part out.
How to get non_field_errors on template when using FormView and ModelForm
24,379,552
1
0
1,090
0
django,python-2.7,django-forms,django-templates,django-views
First of all, we have to make sure whether it's a non-field error or a field error. Where have you raised ValidationError in the ModelForm you have defined? If it's raised in the form's def clean(), then it will be present in non_field_errors and can be accessed via form.non_field_errors in the template. If it is raised in def clean_<field_name>(), then it will be a field error and can be accessed via form.errors or form.<field_name>.errors in the template. Please decide for yourself where you want to raise it. Note: a ModelForm can work with FormView, but ideally there are CreateView and UpdateView for that.
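A sketch showing both placements (the model and field names are made up):

```python
from django import forms
from myapp.models import Account   # hypothetical model

class RegistrationForm(forms.ModelForm):
    class Meta:
        model = Account
        fields = ["email", "name"]

    def clean_email(self):
        # Raised here -> field error, rendered via form.email.errors
        email = self.cleaned_data["email"]
        if Account.objects.filter(email=email).exists():
            raise forms.ValidationError("This email is already registered.")
        return email

    def clean(self):
        # Raised here -> non-field error, rendered via form.non_field_errors
        cleaned = super(RegistrationForm, self).clean()
        if not cleaned.get("name"):
            raise forms.ValidationError("Something is wrong with the form as a whole.")
        return cleaned
```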
0
0
0
0
2014-06-23T17:17:00.000
2
1.2
true
24,371,646
0
0
1
1
I'm using FormView with a ModelForm to process a registration form. In case of a duplicated email I'm raising ValidationError. But this error message is not available in the registration template as non_field_errors. When I inspected form.errors in the form_invalid method of RegistrationView, it showed the expected errors, but somehow they are not getting passed to the template.
Django Dev/Prod Deployment using Mercurial
24,384,040
0
1
172
0
python,django,mercurial,fabric
Two options: use two branches for the different environments (with env-specific changes in each branch, and thus an additional merge before each deploy), or use the MQ extension - keep "clean" code in the changesets and an MQ patch for every environment on top of a single branch (taking care with applying/unapplying the patches).
0
0
0
0
2014-06-24T05:57:00.000
1
0
false
24,379,275
0
0
1
1
I have a development and production Django setup that I can't figure out how to deploy in a simple way. Here's the setup:
/srv/www/projectprod contains my production code, served at www.domain.com
/srv/www/projectbeta contains my development code, served at www.dev.domain.com
Prod and dev are also split into two different virtualenvs, to isolate their various Python packages, just in case. What I want to do here is to make a bunch of changes in dev, then push to my Mercurial server, and then re-pull those changes in production when stable. But there are a few things making this complicated:
wsgi.py contains the activate_this.py call for the virtualenv, but the path is scoped to either prod or dev, so that needs to be edited before deployment.
manage.py has a shebang at the top to define the correct python path for the virtualenv. (This is currently #!/srv/ve/.virtualenvs/project-1.2/bin/python so I'm wondering if I can just remove this to simplify things.)
settings.py contains paths to the templates, staticfiles, media root, etc., which are all stored under /srv/www/project[prod|dev]/*
I've looked into Fabric, but I don't see anything in it that would rewrite these files for me prior to doing the Mercurial push/pull. Does anyone have any tips for simplifying this, or a way to automate this deployment?
Getting “Error loading MySQLdb module: No module named MySQLdb” in django-cms
24,380,525
3
1
9,227
1
python,mysql,django,django-cms
This is an error message you get if MySQLdb isn't installed on your computer. The easiest way to install it would be by entering pip install MySQL-python into your command line.
0
0
0
0
2014-06-24T06:59:00.000
2
0.291313
false
24,380,269
0
0
1
1
I can't connect to MySQL and I can't run "python manage.py syncdb" against it. How do I connect to MySQL in Django and django-cms without any errors?
Passing data from Jinja back to Flask
24,381,042
5
1
1,433
0
javascript,python,flask,jinja2
Note: by HTML I mean HTML including JavaScript etc. The flow is:
1. The Python web app receives an HTTP request to render a page.
2. Python code in the controller asks the Python model to prepare data for rendering the HTML page with Jinja2.
3. The Jinja2 template renders the HTML page.
4. The Python web app sends the resulting page back to the client.
5. The client clicks on some element on the page. This could result in a new HTTP request for a completely new HTML page, or it can be an AJAX request (an HTTP request performed asynchronously, initiated from JavaScript on the HTML page in the browser), which asks the web app for new data or provides the web app with new information.
6. The web app (Python) receives the request, possibly makes changes to the model content, and can return a response back to the JavaScript.
7. The JavaScript receives the new data and uses it to update the HTML page in the browser.
As you can see, the Jinja template is only a tool for rendering the HTML page. Its only direct interaction with the web app is providing the rendered HTML content; there is no chance to include any user interaction in that content at that moment, as the client has not seen the page yet - it has to be provided by Python code. The only way something in a Jinja template can inform the Python code about user interaction is indirectly, via the round trip described above.
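On the Flask side, the endpoint hit in step 6 can be as small as this sketch (the route and parameter names are arbitrary):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/choose", methods=["POST"])
def choose():
    item_id = request.form.get("item")   # value sent by the JavaScript click handler
    # ... update the model here ...
    return jsonify(status="ok", item=item_id)
```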
0
0
0
0
2014-06-24T07:02:00.000
1
1.2
true
24,380,332
0
0
1
1
How do I pass info from a Jinja-templated page back to Flask? Say I print some list of items. The user chooses an item, and I can catch that via JavaScript. What is the best practice for passing the chosen item as an argument to the function that will generate that item's own page?
Alternatives to creating and iterating through a list for "bad" values each time clean is called in django?
24,395,620
-1
1
54
0
python,sql,database,django,web
Hard-coding the list into the clean function, and displaying the offending words to the user, is the best means. Test whether any words are in the banned_words list, and show them to the user as an error: "Sorry, the following words are not allowed: foo, bar, foobar".
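A sketch of what that clean method could look like (the word list and form are illustrative):

```python
from django import forms

BANNED_WORDS = {"foo", "bar", "foobar"}   # hypothetical banned list

class TitleForm(forms.Form):
    title = forms.CharField(max_length=200)

    def clean_title(self):
        title = self.cleaned_data["title"]
        found = sorted(w for w in BANNED_WORDS if w in title.lower().split())
        if found:
            raise forms.ValidationError(
                "Sorry, the following words are not allowed: %s" % ", ".join(found)
            )
        return title
```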
0
0
0
0
2014-06-24T20:08:00.000
2
-0.099668
false
24,395,368
1
0
1
1
Let's say I have a group of words that I don't want to allow my users to include in their titles that they are going to be submitting. What are some alternatives on how to store those values besides hardcoding the list into the clean function? I thought about creating a new model that would contain all of these words that aren't allowed but I am not sure whether or not querying the database each time clean was called for that function would be slower/faster or more/less secure than just creating a separate list for the names. I do think it would be more readable if the list would get too long though.
Define models in Django
24,396,885
3
2
85
0
python,sql,django
To do this, I would recommend breaking down each individual relationship. Your relationships seem to be: authoring and following.
For authoring, the details are: each Question is authored by one User; each User can author many Questions. As such, this is a one-to-many relationship between the two. The best way to model this is a foreign key from the Question to the User, since there can only be one author.
For following, the details are: each Question can have many following Users; each User can be following many Questions. As such, this is a many-to-many relationship. The many-to-many field in Django is a perfect candidate for this. Django will let you define the field through another model, but in this case that is not needed, as you have no other information associated with the fact that a user is following a question (e.g. a personal score/ranking).
With both of these relationships, Django will create the lists of related items for you, so you do not have to worry about that. You may need to define a related_name for these fields, as there are multiple relations to User on the Question model, and by default Django would name the related set for both question_set.
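A sketch of the resulting models (field names and related_names are just one possible choice; written against Django 1.x, where ForeignKey takes no on_delete):

```python
from django.contrib.auth.models import User
from django.db import models

class Question(models.Model):
    title = models.CharField(max_length=200)
    # One author per Question, many Questions per User:
    author = models.ForeignKey(User, related_name="posted_questions")
    # Many following Users per Question, many followed Questions per User:
    followers = models.ManyToManyField(User, related_name="followed_questions")
```

With this, user.posted_questions.all() and user.followed_questions.all() give the two lists the question asks for.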
0
0
0
0
2014-06-24T21:25:00.000
2
1.2
true
24,396,591
0
0
1
1
I'm new to Django and I'm trying to create a simple app! I basically want to create something like StackOverflow, I have many User and many Question. I don't know how I should define the relationship between these two Models. My Requirements: I want each Question to have a single author User, and a list of User that followed the Question. I want each User to have a list of posted Question and a list of followed Question. I'm pretty much lost, and I don't know how to define my relationship. Is this a many-to-many relationship? If so, how do I have like 2 lists of Question in my User model (Posted/Followed)? Please help!
Django: When to use multiple apps
24,408,309
1
6
2,868
0
python,django,web-applications
For example, if you have an admin interface and a user interface, you can separate them as an admin app and a user app.
0
0
0
0
2014-06-25T12:06:00.000
1
0.197375
false
24,408,233
0
0
1
1
When are multiple apps actually used? I've been trying to find a concrete example of when multiple apps might be used, but haven't found anything. I've been reading through the docs and following the tutorial, and it says that an app has a single functionality - what does this mean? This is open to interpretation depending on the level of detail: it could refer to the individual components of a blog perhaps (i.e. the menu bar, the individual blog entries, the comments section); it could refer to the pages the visitors see, and the pages writers use to create posts; it could even refer to two separate websites running within the same server. Can someone give an example of a project which uses more than one application?
Reusable Django apps + Ansible provisioning
24,422,753
0
1
280
0
python,django,vagrant,ansible
You should probably think of it slightly differently. You create a Vagrantfile which specifies Ansible as a provisioner. In that Vagrantfile you also specify which playbook to use for the vagrant provision portion. If your playbooks are written in an idempotent way, running them multiple times will skip steps that already match the desired state. You should also think about what the desired end state of a VM should look like, and write playbooks to accomplish that. Unless I'm misunderstanding something, all your playbook actions should be happening inside the VM, not directly on your local machine.
0
0
0
0
2014-06-25T22:42:00.000
1
0
false
24,419,793
1
0
1
1
I'm a long-time Django developer and have just started using Ansible, after using Vagrant for the last 18 months. Historically I've created a single VM for development of all my projects, and symlinked the reusable Django apps (Python packages) I create, to the site-packages directory. I've got a working dev box for my latest Django project, but I can't really make changes to my own reusable apps without having to copy those changes back to a Git repo. Here's my ideal scenario: I checkout all the packages I need to develop as Git submodules within the site I'm working on I have some way (symlinking or a better method) to tell Ansible to setup the box and install my packages from these Git submodules I run vagrant up or vagrant provision It reads requirements.txt and installs the remaining packages (things like South, Pillow, etc), but it skips my set of tools because it knows they're already installed I hope that makes sense. Basically, imagine I'm developing Django. How do I tell Vagrant (via Ansible I assume) to find my local copy of Django, rather than the one from PyPi? Currently the only way I can think of doing this is creating individual symlinks for each of those packages I'm developing, but I'm sure there's a more sensible model. Thanks!
Loading data from a (MySQL) database into Django without models
44,363,554
1
2
1,496
1
python,mysql,django,webproject
Django has a feature called inspectdb for exactly this kind of legacy database: it creates models automatically by inspecting your MySQL tables and writes them into your app's models.py, so you don't need to type every column manually. Do read the documentation carefully before using the generated models, though: by default they are created with managed = False, so Django will leave the existing tables untouched unless you change that. I hope this is useful for you.
0
0
0
0
2014-06-26T06:19:00.000
3
0.066568
false
24,423,645
0
0
1
1
This might sound like a bit of an odd question - but is it possible to load data from a (in this case MySQL) table to be used in Django without the need for a model to be present? I realise this isn't really the Django way, but given my current scenario, I don't really know how better to solve the problem. I'm working on a site, which for one aspect makes use of a table of data which has been bought from a third party. The columns of interest are liklely to remain stable, however the structure of the table could change with subsequent updates to the data set. The table is also massive (in terms of columns) - so I'm not keen on typing out each field in the model one-by-one. I'd also like to leave the table intact - so coming up with a model which represents the set of columns I am interested in is not really an ideal solution. Ideally, I want to have this table in a database somewhere (possibly separate to the main site database) and access its contents directly using SQL.
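Besides inspectdb, Django can also run raw SQL with no model at all. A minimal sketch for Django 1.7 or newer; the table and column names are placeholders:

```python
# Raw SQL access without a model; names are placeholders.
from django.db import connection


def fetch_rows():
    with connection.cursor() as cursor:
        cursor.execute("SELECT col_a, col_b FROM third_party_table")
        columns = [col[0] for col in cursor.description]
        # Return each row as a dict keyed by column name.
        return [dict(zip(columns, row)) for row in cursor.fetchall()]
```

If the table lives in a separate database, add a second alias to the DATABASES setting and use django.db.connections["legacy"].cursor() instead of the default connection.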
Django 1.6: Clear data in one table
24,431,142
2
3
3,762
0
python,database,django,models
In the admin interface, you can go to the list page for that model, select all rows, and use the Delete selected ... action at the top of the table. Remember that, in whatever way you delete the data, Django's foreign keys default to ON DELETE CASCADE, so any model with a foreign key to a model you want to delete will be deleted as well. The admin interface will give you a complete overview of the objects that will be deleted. If you'd rather do it programmatically, a sketch follows this entry.
0
0
0
0
2014-06-26T12:40:00.000
4
0.099668
false
24,430,817
0
0
1
1
I have a table named UGC and would like to clear all the data inside that table. I don't want to reset the entire app, which would delete all the data in all the other models as well. Is it possible to clear only one single model? I also have South configured with my app, if that would help.
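If the admin route is inconvenient, the same one-table wipe can be done from a shell; a minimal sketch, assuming the model really is named UGC (the app path is a placeholder):

```python
# Run inside `python manage.py shell`.  Deletes every UGC row and, via
# Django's default cascade, anything holding a foreign key to those rows.
from myapp.models import UGC  # "myapp" is a placeholder for your app

UGC.objects.all().delete()
```

South is unaffected, since this removes rows rather than changing the schema.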
Django + Amazon S3 not loading static files on production
24,622,541
0
0
1,241
0
python,django,amazon-web-services,amazon-s3,django-staticfiles
I think my problem was related to bucket policies. I am not sure, as I tried many different things, but I would bet that is the change that made it work.
0
0
0
0
2014-06-26T17:42:00.000
1
1.2
true
24,436,952
0
0
1
1
So I have my Django site and I am trying to host my static files on S3, but I am getting an ERR_INSECURE_RESPONSE error when my site is on a production server. If I click on the link and accept it, the page then loads. I am using django-storages, and on my local machine everything works fine (my S3 credentials are OK), but when I deploy to production I get the error. Do I need to have HTTPS enabled on my site to be able to serve static files through S3? What should I do? Thanks
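For reference, a minimal settings sketch for HTTPS static files with the django-storages boto backend of that era; the credentials and bucket name are placeholders. Note that a bucket name containing dots will not match Amazon's *.s3.amazonaws.com wildcard certificate, which is a common cause of ERR_INSECURE_RESPONSE:

```python
# settings.py sketch for django-storages (s3boto backend); values are placeholders.
STATICFILES_STORAGE = "storages.backends.s3boto.S3BotoStorage"
AWS_ACCESS_KEY_ID = "..."
AWS_SECRET_ACCESS_KEY = "..."
AWS_STORAGE_BUCKET_NAME = "mybucket"  # avoid dots in the name when using HTTPS
AWS_S3_SECURE_URLS = True             # emit https:// URLs for static assets
```

This is a sketch under those assumptions rather than the answerer's exact fix.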
Gunicorn doesn't work
24,469,221
2
3
1,863
0
python,django,heroku,virtualenv,gunicorn
One of the changes in later versions of gunicorn is that it no longer logs to stdout/stderr by default. Add the argument --log-file=XXX, then examine that log file to see what port it's running on.
0
0
0
0
2014-06-28T04:30:00.000
3
0.132549
false
24,463,587
0
0
1
1
I'm trying to deploy my Django app on Heroku. After following the steps in the official document, the dyno I launch always crashes. I went through the whole process again, and I think the problem lies in the gunicorn part. Following the instructions, I set the Procfile to 'web: gunicorn hellodjango.wsgi', and when I run $ foreman start, it only shows "21:21:07 web.1 | started with pid 77969". It doesn't say where the web process is listening. Then I tried to test whether gunicorn itself works, so I ran "$ gunicorn hellodjango.wsgi:application", and it indeed doesn't work. I think the path is correct, because the current folder contains a hellodjango folder with the file wsgi.py inside. What might be the problem?
HTTPSConnectionPool(host='s3-us-west-1b.amazonaws.com', port=443): Max retries exceeded with url
42,559,759
0
4
11,516
0
python,amazon-web-services,amazon-s3,aws-cli
One possible issue is that a proxy has not been set in your instance's environment. Configure the environment to point to your proxy servers via HTTP_PROXY / HTTPS_PROXY (since the error above shows port 443, HTTPS_PROXY is the relevant one).
0
0
1
0
2014-06-30T09:56:00.000
3
0
false
24,487,444
0
0
1
1
I am trying to copy a file from my AWS EC2 instance to an S3 bucket folder, but I am getting an error. Here is the command: aws s3 cp /home/abc/icon.jpg s3://mybucket/myfolder This is the error I am getting: upload failed: ./icon.jpg to s3://mybucket/myfolder/icon.jpg HTTPSConnectionPool(host='s3-us-west-1b.amazonaws.com', port=443): Max retries exceeded with url: /mybucket/myfolder/icon.jpg (Caused by : [Errno -2] Name or service not known) I have already configured the config file for the AWS CLI. Please suggest a solution to this problem.
Google App Engine NDB Query on Many Locations
24,501,164
0
0
144
1
javascript,python,google-maps,google-app-engine
You didn't say how frequently the data points are updated, but assuming 1) they're updated infrequently and 2) there are only hundreds of points, consider just querying them all once and storing them, sorted, in memcache. Your handler function would then fetch from memcache and filter in memory (a sketch follows this entry). This wouldn't scale indefinitely, but it would likely be cheaper than querying the Datastore every time, due to the way App Engine pricing works.
0
1
0
0
2014-06-30T19:09:00.000
2
0
false
24,497,219
0
0
1
1
I am developing a web app based on Google App Engine. It has some hundreds of places (name, latitude, longitude) stored in the Datastore. My aim is to show them on a Google map. Since there are many of them, I have registered a JavaScript function on the idle event of the map; when executed, it posts the map boundaries (minLat, maxLat, minLng, maxLng) to a request handler, which should retrieve from the Datastore only the places within those boundaries. The problem is that the Datastore doesn't allow me to use more than one inequality filter in a query (i.e. I cannot combine Place.lat > minLat and Place.lng > minLng). How should I do that, while also minimizing the number of required queries?
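A sketch of the answer's memcache approach, assuming a Place model with FloatProperty lat/lng fields and a name field; the model, key, and field names are illustrative:

```python
# Fetch all points once, cache them, and apply the bounding box in memory,
# since the Datastore allows inequality filters on only one property.
from google.appengine.api import memcache

from models import Place  # the asker's model; adjust the import path


def places_in_bounds(min_lat, max_lat, min_lng, max_lng):
    places = memcache.get("all_places")
    if places is None:
        places = [(p.name, p.lat, p.lng) for p in Place.query()]
        memcache.set("all_places", places, time=3600)
    return [
        (name, lat, lng)
        for name, lat, lng in places
        if min_lat <= lat <= max_lat and min_lng <= lng <= max_lng
    ]
```

With a few hundred points, this is one Datastore query per hour instead of one per map pan.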
How do I terminate a long-running Django request if the XHR gets an abort()?
32,511,257
-1
11
1,318
0
python,ajax,django
Just think of the Web as a platform for building easy-to-use, distributed, loosely coupled systems, with no guarantee about the availability of resources, as the 404 status code suggests. Creating tightly coupled solutions such as your idea goes against web principles and the usage of REST. xhr.abort() is client-side programming; it's completely different from the server side, and it is bad practice to tie client-side technology to server-side internal behaviour. Not only is this a waste of resources, but there is also no guarantee about the processing status of the request on the web server, and it may lead to data inconsistency. If your request generates no server-side effects for which the client can be held responsible, it is better just to ignore the abort, since such requests do not change server state and the response is usually cached for better performance. If your request could cause changes in server state or data, then for the sake of data consistency you can check via an API whether the changes have taken effect; if they have, try to roll back using another API.
0
0
0
0
2014-06-30T19:10:00.000
2
-0.099668
false
24,497,239
0
0
1
2
I initiate a request client-side, then I change my mind and call xhr.abort(). How does Django react to this? Does it terminate the thread somehow? If not, how do I get Django to stop wasting time trying to respond to the aborted request? How do I handle it gracefully?
How do I terminate a long-running Django request if the XHR gets an abort()?
52,607,897
1
11
1,318
0
python,ajax,django
Due to how HTTP works, and because you usually have a frontend in front of your Django/gunicorn app processes (or uWSGI, etc.), your cancelled request is buffered by nginx. The gunicorn workers don't get a signal; they just finish processing and then write whatever output they have to the HTTP socket. But if that socket has been closed, they will get an error (which is caught as a closed connection, and the worker moves on). So it's easy to DoS a server if you can find a way to spawn many of these requests. But to answer your question: it depends on the backend; with gunicorn, processing will keep going until the timeout.
0
0
0
0
2014-06-30T19:10:00.000
2
0.099668
false
24,497,239
0
0
1
2
I initiate a request client-side, then I change my mind and call xhr.abort(). How does Django react to this? Does it terminate the thread somehow? If not, how do I get Django to stop wasting time trying to respond to the aborted request? How do I handle it gracefully?
Why do I need to set environment variables for Python to make Scrapy work?
24,516,837
4
5
1,891
0
python,environment-variables,scrapy
Windows uses the environment variable called PATH to resolve a command typed at the command prompt to the folder in which the command's executable lives. For instance, when you install Python, it appends its location to the PATH variable, so that when you call it in cmd (by typing python), Windows knows where to look and runs the appropriate program at that location. The same has to be true for the folder containing the scrapy executable (a small check is sketched below).
0
0
0
0
2014-07-01T18:10:00.000
1
0.664037
false
24,516,745
1
0
1
1
I have just got Scrapy set up on my machine (Windows Vista 64-bit, Python.org version 2.7, 64-bit shell). I have tried running the command 'scrapy startproject myproject' and got the seemingly standard error message 'scrapy is not a recognised command'. A lot of the other people who have asked this question have been advised that they need to set up environment variables for Python in Windows. I'm not entirely sure why I am supposed to do this, to be honest. Could someone please explain?
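A quick way to see what the answer above describes: from a Python shell, check whether the Scripts folder (where pip/setuptools place scrapy.exe on Windows) is actually on PATH. A sketch, Python 2.7-compatible:

```python
# Check whether this interpreter's Scripts directory is on the PATH.
import os
import sys

scripts_dir = os.path.join(sys.prefix, "Scripts")
entries = [p.lower() for p in os.environ["PATH"].split(os.pathsep)]
print("%s on PATH: %s" % (scripts_dir, scripts_dir.lower() in entries))
```

If it prints False, adding that folder to the PATH environment variable is exactly the setup the other answers recommend.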
Gunicorn, Django, Gevent: Spawned threads are blocking
24,544,667
-2
8
5,406
0
python,django,multithreading,gunicorn,gevent
I have settled on using a synchronous (standard) worker and making use of the multiprocessing library. This seems to be the easiest solution for now (a sketch follows this entry). I have also implemented a global pool, (ab)using a memcached cache to provide locks, so that only two tasks can run at once.
0
1
0
0
2014-07-02T01:42:00.000
3
1.2
true
24,521,661
0
0
1
2
We recently switched to Gunicorn using the gevent worker. On our website, we have a few tasks that take a while to do. Longer than 30 seconds. Preamble We did the whole celery thing already, but these tasks are run so rarely that it's just not feasible to keep celery and redis running all the time. We just do not want that. We also do not want to start celery and redis on demand. We want to get rid of it. (I'm sorry for this, but I want to prevent answers that go like: "Why don't you use celery, it's great!") The tasks we want to run asynchronously I'm talking about tasks that perform 3000 SQL queries (inserts) that have to be performed one after the other. This is not done all too often. We also limited it to running only 2 of these tasks at once. They should take like 2-3 minutes. The approach Now, what we are doing is taking advantage of the gevent worker: we gevent.spawn the task and return the response. The problem I found that the spawned threads are actually blocking. As soon as the response is returned, the task starts running and no other requests get processed until the task stops running. The task will be killed after 30s, the gunicorn timeout. In order to prevent that, I use time.sleep() after every other SQL query, so the server gets a chance to respond to requests, but I don't feel like this is the point. The setup We run gunicorn, django and use gevent. The behaviour described occurs in my dev environment with 1 gevent worker. In production, we will also run only 1 worker (for now). Also, running 2 workers did not seem to help in serving more requests while a task was blocking. TLDR We consider it feasible to use a gevent thread for our 2 minute task (over celery) We use gunicorn with gevent and wonder why a thread spawned with gevent.spawn is blocking Is the blocking intended or is our setup wrong? Thank you!
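A minimal sketch of the multiprocessing approach the answerer settled on; the view and task names are illustrative, and the child process must open its own database connection:

```python
# Hand the slow job to a separate process and answer the request immediately.
import multiprocessing

from django.db import connection
from django.http import HttpResponse


def run_inserts():
    connection.close()  # force this child to open a fresh DB connection
    # ... perform the ~3000 sequential SQL inserts here ...


def start_task(request):
    proc = multiprocessing.Process(target=run_inserts)
    proc.start()
    return HttpResponse("started", status=202)
```

Unlike a greenlet, the child is a real OS process, so a blocking database driver cannot stall the web worker.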
Gunicorn, Django, Gevent: Spawned threads are blocking
24,769,760
0
8
5,406
0
python,django,multithreading,gunicorn,gevent
It would appear no one here gave an actual answer to your question: is the blocking intended, or is your setup wrong? There is something wrong with your setup. SQL queries are almost entirely I/O-bound and should not be blocking any greenlets. You are either using a SQL/ORM library that is not gevent-friendly, or something else in your code is causing the blocking (a monkey-patching sketch follows this entry). You should not need to use multiprocessing for this kind of task. Unless you are explicitly doing a join on the greenlets, the server response should not be blocking.
0
1
0
0
2014-07-02T01:42:00.000
3
0
false
24,521,661
0
0
1
2
We recently switched to Gunicorn using the gevent worker. On our website, we have a few tasks that take a while to do. Longer than 30 seconds. Preamble We did the whole celery thing already, but these tasks are run so rarely that it's just not feasible to keep celery and redis running all the time. We just do not want that. We also do not want to start celery and redis on demand. We want to get rid of it. (I'm sorry for this, but I want to prevent answers that go like: "Why don't you use celery, it's great!") The tasks we want to run asynchronously I'm talking about tasks that perform 3000 SQL queries (inserts) that have to be performed one after the other. This is not done all too often. We also limited it to running only 2 of these tasks at once. They should take like 2-3 minutes. The approach Now, what we are doing is taking advantage of the gevent worker: we gevent.spawn the task and return the response. The problem I found that the spawned threads are actually blocking. As soon as the response is returned, the task starts running and no other requests get processed until the task stops running. The task will be killed after 30s, the gunicorn timeout. In order to prevent that, I use time.sleep() after every other SQL query, so the server gets a chance to respond to requests, but I don't feel like this is the point. The setup We run gunicorn, django and use gevent. The behaviour described occurs in my dev environment with 1 gevent worker. In production, we will also run only 1 worker (for now). Also, running 2 workers did not seem to help in serving more requests while a task was blocking. TLDR We consider it feasible to use a gevent thread for our 2 minute task (over celery) We use gunicorn with gevent and wonder why a thread spawned with gevent.spawn is blocking Is the blocking intended or is our setup wrong? Thank you!
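If this answer is right and the database driver is the blocker, the usual fix with MySQL is a pure-Python driver that gevent's monkey-patching (which gunicorn's gevent worker applies for you) can make cooperative. A sketch assuming MySQL; for PostgreSQL, psycogreen plays the same role for psycopg2:

```python
# Put this at the very top of wsgi.py (or settings.py), before Django
# loads its database backend.  PyMySQL is pure Python, so the sockets it
# opens are covered by gevent's monkey-patched socket module; C-extension
# drivers like MySQLdb bypass the patching and block the event loop.
import pymysql

pymysql.install_as_MySQLdb()  # drop-in replacement for MySQLdb
```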
Console output delay with Python but not Java using PsExec
24,534,244
0
0
156
0
java,python,psexec
Are you sure the remote Python script flushes its stdout? When stdout is a terminal it is typically flushed on every new line, but when it is redirected through something like PsExec it is block-buffered, so either call sys.stdout.flush() explicitly or run the script with python -u (a sketch follows below).
0
1
0
0
2014-07-02T14:00:00.000
1
1.2
true
24,533,128
0
0
1
1
I have two files on a remote machine that I am running with PsExec: one is a Java program and the other a Python script. For the Python file, any outputs to screen (print() or sys.stdout.write()) are not sent back to my local machine until the script has terminated; for the Java program, I see the output (System.out.println()) on my local machine as soon as it is created on the remote machine. If anyone can explain to me why there is this difference, and how to see the Python outputs as they are created, I would be very grateful! (Python 3.1, Remote Machine: Windows Server 2012, Local: Windows 7 32-bit)
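A sketch of the fix implied by the answer: when stdout is a pipe, as under PsExec, CPython block-buffers it, so flush after each message (works on Python 3.1):

```python
# Emit progress lines that PsExec relays immediately instead of at exit.
import sys
import time

for i in range(5):
    sys.stdout.write("step %d done\n" % i)
    sys.stdout.flush()  # push the buffered line through the pipe now
    time.sleep(1)
```

Alternatively, launching the script with python -u disables the buffering wholesale, with no code changes.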