Column schema (type, observed range): Title: string (length 11 to 150); A_Id: int64 (518 to 72.5M); Users Score: int64 (-42 to 283); Q_Score: int64 (0 to 1.39k); ViewCount: int64 (17 to 1.71M); Database and SQL: int64 (0 to 1); Tags: string (length 6 to 105); Answer: string (length 14 to 4.78k); GUI and Desktop Applications: int64 (0 to 1); System Administration and DevOps: int64 (0 to 1); Networking and APIs: int64 (0 to 1); Other: int64 (0 to 1); CreationDate: string (length 23); AnswerCount: int64 (1 to 55); Score: float64 (-1 to 1.2); is_accepted: bool (2 classes); Q_Id: int64 (469 to 42.4M); Python Basics and Environment: int64 (0 to 1); Data Science and Machine Learning: int64 (0 to 1); Web Development: int64 (1 to 1); Available Count: int64 (1 to 15); Question: string (length 17 to 21k).

Title | A_Id | Users Score | Q_Score | ViewCount | Database and SQL | Tags | Answer | GUI and Desktop Applications | System Administration and DevOps | Networking and APIs | Other | CreationDate | AnswerCount | Score | is_accepted | Q_Id | Python Basics and Environment | Data Science and Machine Learning | Web Development | Available Count | Question
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
manipulating javascript code with BeautifulSoup
| 31,250,400 | 1 | 0 | 744 | 0 |
python,html,beautifulsoup,dom-manipulation
|
Beautiful Soup is a Python library for pulling data out of HTML and XML files. You can't use it directly on AngularJS/JavaScript code.
| 0 | 0 | 0 | 0 |
2015-07-06T15:53:00.000
| 2 | 0.099668 | false | 31,250,284 | 0 | 0 | 1 | 1 |
I have HTML code embedded with JavaScript related to AngularJS. Later I realized that the rows and columns of the HTML need to be interchanged. Since I have a bunch of HTML files, I decided to use a Python script and tried BeautifulSoup 4.x. I was able to interchange the rows and columns, but when writing back to disk I noticed that a few JavaScript tags were missing.
My question is: can I use Beautiful Soup on AngularJS code? If yes, a code snippet would be extremely helpful.
Thanks
|
Django i18n Problems
| 31,278,044 | 0 | 0 | 59 | 0 |
python,django,python-3.x,internationalization
|
I just needed to add 'django.middleware.locale.LocaleMiddleware' to the MIDDLEWARE_CLASSES section of my settings.py file. I had figured that if internationalization was already on, this wouldn't be necessary.
| 0 | 0 | 0 | 0 |
2015-07-07T08:04:00.000
| 1 | 0 | false | 31,263,032 | 0 | 0 | 1 | 1 |
I have a Django 1.8 project that I would like to internationalize. I have added the code to do so in the application, and when I change the LANGUAGE_CODE tag, I can successfully see the other language used, but when I leave it on en-us, no other languages show up. I have changed my computer's language to the language in question (German), but calls to the site are still in English. What am I doing wrong?
Other things:
USE_I18N = True
LOCALE_PATHS works correctly (since changing the
LANGUAGE_CODE works)
I have also tried setting the LANGUAGES attribute, although I don't think I have to.
EDIT: I have also confirmed that the GET call has the header: Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4, which contains de like I want. My locale folder has a folder de in it.
|
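A minimal sketch of what the accepted answer describes, in Django 1.8-style settings. The locale path and language list below are assumptions, not taken from the question:

```python
# settings.py -- sketch only; adjust paths and languages to your project
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

USE_I18N = True
LANGUAGE_CODE = 'en-us'
LANGUAGES = [
    ('en', 'English'),
    ('de', 'German'),
]
LOCALE_PATHS = [os.path.join(BASE_DIR, 'locale')]

MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    # LocaleMiddleware goes after SessionMiddleware and before CommonMiddleware
    'django.middleware.locale.LocaleMiddleware',
    'django.middleware.common.CommonMiddleware',
    # ... the rest of your middleware ...
)
```

With LocaleMiddleware in place, Django picks the language from the Accept-Language header (such as the de-DE shown in the question) instead of always falling back to LANGUAGE_CODE.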
Do I really need to use virtualenv with Django?
| 41,454,312 | 0 | 1 | 1,146 | 0 |
python,django,vagrant,virtualenv
|
No, in your case, you don't need to bother with virtualenv. Since you're using a dedicated virtual machine it's just a layer of complexity you, as a noob, don't really need.
Virtualenv is pretty simple, in concept and usage, so you'll layer it on simply enough when the need arises. But, imho, there is added value in learning how a python installation is truly laid out before adding indirection. When you hit a problem that it can solve, then go for it. But for now, keep it simple: don't bother.
| 0 | 0 | 0 | 0 |
2015-07-07T08:49:00.000
| 5 | 0 | false | 31,263,904 | 1 | 0 | 1 | 5 |
This may well stink of newbie but...
I'm working on my first Django project, and am reading a lot that using virtualenv is considered Best Practice. I understand that virtualenv sandboxes all my python dependencies but I just don't know if this is necessary if I'm working in sandboxed VM's anyway? I'm developing in Vagrant, and won't be using these VM's for anything else, and I'll be deploying to a VM server that will only have this Django project on it. Is it possible that in the future further Django apps in this project will require different dependencies and so need to be in different virtualenv's? (Not sure if it works like that tbh?)
Am I just showing my inexperience and shortsightedness?
|
Do I really need to use virtualenv with Django?
| 31,268,157 | 0 | 1 | 1,146 | 0 |
python,django,vagrant,virtualenv
|
If you develop multiple projects with different Django versions, virtualenv is simply a must; there is no other way that I know of. Once you have experienced dependency hell, a virtualenv feels like heaven. Even if you develop only one project, I would recommend coding inside a virtualenv, because you never know what comes next. Back in the day my old laptop was almost crashing because of so many dependency problems; after I discovered virtualenv, it felt like a brand new laptop to me.
| 0 | 0 | 0 | 0 |
2015-07-07T08:49:00.000
| 5 | 0 | false | 31,263,904 | 1 | 0 | 1 | 5 |
This may well stink of newbie but...
I'm working on my first Django project, and am reading a lot that using virtualenv is considered Best Practice. I understand that virtualenv sandboxes all my python dependencies but I just don't know if this is necessary if I'm working in sandboxed VM's anyway? I'm developing in Vagrant, and won't be using these VM's for anything else, and I'll be deploying to a VM server that will only have this Django project on it. Is it possible that in the future further Django apps in this project will require different dependencies and so need to be in different virtualenv's? (Not sure if it works like that tbh?)
Am I just showing my inexperience and shortsightedness?
|
Do I really need to use virtualenv with Django?
| 31,264,702 | 1 | 1 | 1,146 | 0 |
python,django,vagrant,virtualenv
|
There are many benefits to working with a virtual environment on your development machine:
You can switch to any version of any supported module to check for issues.
Your project runs in a separate environment without conflicting with your system-wide modules and settings.
Testing is easy.
Multiple versions of the same project can co-exist.
| 0 | 0 | 0 | 0 |
2015-07-07T08:49:00.000
| 5 | 0.039979 | false | 31,263,904 | 1 | 0 | 1 | 5 |
This may well stink of newbie but...
I'm working on my first Django project, and am reading a lot that using virtualenv is considered Best Practice. I understand that virtualenv sandboxes all my python dependencies but I just don't know if this is necessary if I'm working in sandboxed VM's anyway? I'm developing in Vagrant, and won't be using these VM's for anything else, and I'll be deploying to a VM server that will only have this Django project on it. Is it possible that in the future further Django apps in this project will require different dependencies and so need to be in different virtualenv's? (Not sure if it works like that tbh?)
Am I just showing my inexperience and shortsightedness?
|
Do I really need to use virtualenv with Django?
| 31,264,104 | 5 | 1 | 1,146 | 0 |
python,django,vagrant,virtualenv
|
I would always recommend you use a virtualenv as a matter of course. There is almost no overhead in doing so, and it just makes things easier. In conjunction with virtualenvwrapper you can easily just type workon myproject to activate and cd to your virtualenv in one go. You avoid any issues with having to use sudo to install things, as well as any possible version incompatibilities with system-installed packages. There's just no reason not to, really.
| 0 | 0 | 0 | 0 |
2015-07-07T08:49:00.000
| 5 | 0.197375 | false | 31,263,904 | 1 | 0 | 1 | 5 |
This may well stink of newbie but...
I'm working on my first Django project, and am reading a lot that using virtualenv is considered Best Practice. I understand that virtualenv sandboxes all my python dependencies but I just don't know if this is necessary if I'm working in sandboxed VM's anyway? I'm developing in Vagrant, and won't be using these VM's for anything else, and I'll be deploying to a VM server that will only have this Django project on it. Is it possible that in the future further Django apps in this project will require different dependencies and so need to be in different virtualenv's? (Not sure if it works like that tbh?)
Am I just showing my inexperience and shortsightedness?
|
Do I really need to use virtualenv with Django?
| 31,264,041 | 3 | 1 | 1,146 | 0 |
python,django,vagrant,virtualenv
|
I don't have any knowledge on Vagrant but I use virtualenvs for my Django projects. I would recommend it for anyone.
With that said, if you're only going to be using one Django project on a virtual machine you don't need to use a virtualenv. I haven't come across a situation where apps in the same project have conflicting dependencies. This could be a problem if you have multiple projects on the same machine however.
| 0 | 0 | 0 | 0 |
2015-07-07T08:49:00.000
| 5 | 1.2 | true | 31,263,904 | 1 | 0 | 1 | 5 |
This may well stink of newbie but...
I'm working on my first Django project, and am reading a lot that using virtualenv is considered Best Practice. I understand that virtualenv sandboxes all my python dependencies but I just don't know if this is necessary if I'm working in sandboxed VM's anyway? I'm developing in Vagrant, and won't be using these VM's for anything else, and I'll be deploying to a VM server that will only have this Django project on it. Is it possible that in the future further Django apps in this project will require different dependencies and so need to be in different virtualenv's? (Not sure if it works like that tbh?)
Am I just showing my inexperience and shortsightedness?
|
Django-nose test html error
| 31,290,685 | 0 | 0 | 61 | 0 |
python,django,unit-testing,code-coverage,django-nose
|
I fixed this by uninstalling coverage.py with pip and installing it using easy_install.
| 0 | 0 | 0 | 0 |
2015-07-07T11:19:00.000
| 1 | 1.2 | true | 31,267,147 | 0 | 0 | 1 | 1 |
I'm testing a web application using django-nose to monitor code coverage. At first it worked perfectly well, but when trying to generate the HTML report it fails with the error:
ImportError: No module named copy_reg
It started happening after a few runs (until then it worked). I tried it on a computer with freshly installed Django, django-nose and coverage, and the very same code works fine. Re-installing Django and django-nose didn't help.
Any suggestions? Should I re-install any library or something?
Thank you in advance!
|
Async Tasks for Django and Gunicorn
| 31,272,086 | 1 | 1 | 420 | 0 |
python,django,multithreading,celery
|
I'm assuming you don't want to wait because you are using an external service (outside of your control) for sending email. If that's the case, then set up a local SMTP server as a relay. Many services such as Amazon SES, SendGrid, and Mandrill/Mailchimp have directions on how to do it. The application will only have to wait on the delivery to localhost (which should be fast and is within your control). The final delivery is forwarded on asynchronously, outside the request/response cycle. SMTP servers are already built to handle delivery failures with retries, which is what you might otherwise gain by moving to Celery.
| 0 | 1 | 0 | 0 |
2015-07-07T12:24:00.000
| 1 | 1.2 | true | 31,268,494 | 0 | 0 | 1 | 1 |
I have a use case where I have to send an email to the user in my views. Currently the user who submitted the form will not receive an HTTP response until the email has been sent. I do not want to make the user wait on send_mail, so I want to send the mail asynchronously without caring about email errors. I am using Celery for sending mail asynchronously, but I have read that it may be overkill for simpler tasks like this. How can I achieve the above without using Celery?
|
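A sketch of the relay approach from the accepted answer, assuming a local SMTP relay (for example Postfix, or an SES/SendGrid relay) is already listening on localhost:25; the form field and addresses are invented:

```python
# settings.py
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'localhost'   # the local relay; final delivery happens asynchronously
EMAIL_PORT = 25

# views.py
from django.core.mail import send_mail
from django.http import HttpResponse

def signup(request):
    recipient = request.POST.get('email')              # hypothetical form field
    # Delivery to localhost returns quickly, so the HTTP response is not
    # blocked on the remote mail provider; the relay handles retries.
    send_mail('Welcome', 'Thanks for signing up.',
              'noreply@example.com', [recipient],
              fail_silently=True)                       # ignore mail errors, as asked
    return HttpResponse('OK')
```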
Import Error related to Yahoo Finance tool / html5lib install
| 32,193,407 | 1 | 0 | 1,702 | 0 |
python,pandas,scrape,yahoo-finance
|
I had the same error about html5lib with Python 3.4 in PyCharm 4.5.3, even though I installed html5lib. When I restarted PyCharm console (where I run the code), the error disappeared and options loaded correctly.
| 0 | 0 | 1 | 0 |
2015-07-07T19:12:00.000
| 1 | 0.197375 | false | 31,277,368 | 1 | 0 | 1 | 1 |
I am trying to get stock data from Yahoo! Finance. I have it installed (c:\ pip install yahoo-finance), but the import in the iPython console is not working. This is the error I get: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x83 in position 4: invalid start byte.
I am using Python 3.4 and Spyder 2.3.1.
Has anyone else encountered this?
Update:
The unicode error during import is gone, but it has been replaced with the following when trying to use the yahoo_finance tool...
ImportError: html5lib not found, please install it
However, html5lib is listed when I run help('modules').
|
How to display a quote in django template
| 31,280,349 | 0 | 3 | 1,608 | 0 |
python,django
|
If you're passing data to JavaScript, serialize it as JSON.
| 0 | 0 | 0 | 0 |
2015-07-07T22:17:00.000
| 2 | 0 | false | 31,280,321 | 0 | 0 | 1 | 1 |
Passing "'2015/07/01'" to a Django template renders as &#39;2015/07/01&#39; when I view it in my browser.
I need the output to be '2015/07/01' so I can include it in a JavaScript function; the browser doesn't interpret &#39;2015/07/01&#39; as '2015/07/01' in the JavaScript.
How can I print '2015/07/01'?
|
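A sketch of the JSON suggestion above (view and variable names are made up): serialize the value in the view so it arrives as a valid JavaScript string literal.

```python
# views.py
import json
from django.shortcuts import render

def calendar(request):
    context = {
        # json.dumps emits a quoted, escaped JS string literal: "2015/07/01"
        'start_date_js': json.dumps('2015/07/01'),
    }
    return render(request, 'calendar.html', context)
```

In the template the value is then emitted inside the script with the safe filter (for example var startDate = {{ start_date_js|safe }};), so Django's autoescaping does not turn the quotes into HTML entities.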
Django: modifying data with user input through custom template tag?
| 31,282,108 | 0 | 0 | 333 | 0 |
python,django,django-templates
|
This type of logic does not belong in a template tag. It belongs in a view that responds to AJAX requests and returns a JsonResponse. You'll also need some JavaScript to make the request based on the input.
| 0 | 0 | 0 | 0 |
2015-07-07T23:29:00.000
| 1 | 1.2 | true | 31,281,119 | 0 | 0 | 1 | 1 |
Is it possible to modify data through custom template tag in Django? More specifically, I have a model named Shift whose data I want to display in a calendar form. I figured using a custom inclusion tag is the best way to go about it, but I also want users to be able to click on a shift and buy/sell the shift (thus modifying the database). My guess is that you can't do this with an inclusion tag, but if I were to write a different type of custom template tag from the ground up, would this be possible? If so, can you direct me to a few resources that address how to write such a tag?
Thank you in advance.
|
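A minimal sketch of the AJAX view the accepted answer points to; the Shift model comes from the question, but the field names and URL wiring are assumptions:

```python
# views.py
from django.http import JsonResponse
from django.views.decorators.http import require_POST

from .models import Shift  # hypothetical import path

@require_POST
def buy_shift(request, shift_id):
    """Called from JavaScript when the user clicks a shift in the calendar."""
    try:
        shift = Shift.objects.get(pk=shift_id)
    except Shift.DoesNotExist:
        return JsonResponse({'ok': False, 'error': 'unknown shift'}, status=404)
    shift.owner = request.user            # assumed field on Shift
    shift.save()
    return JsonResponse({'ok': True, 'shift_id': shift_id})
```

The calendar itself can still be rendered by an inclusion tag; only the buy/sell click goes through an endpoint like this.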
Convert to PDF's Last Page using GraphicsMagick with Python
| 31,310,493 | 0 | 1 | 191 | 0 |
python,pdf,graphicsmagick
|
Future readers: if you're experiencing the same dilemma in GraphicsMagick, here's the easy solution:
Simply write a big number to represent the "last page".
That is, something like:
convert file.pdf[4-99999] +adjoin file%02d.jpg
will convert from the 5th PDF page through the last PDF page into JPGs.
Note: "+adjoin" and "%02d" have to do with getting all the images rather than just the last one. You'll see what I mean if you try it.
| 0 | 0 | 0 | 0 |
2015-07-08T04:14:00.000
| 1 | 0 | false | 31,283,419 | 0 | 0 | 1 | 1 |
Converting a range of, say, the 1st to 5th pages of a multipage PDF into single images is fairly straightforward using:
convert file.pdf[0-4] file.jpg
But how do I convert, say, the 5th page to the last page when I don't know the number of pages in the PDF?
In ImageMagick "-1" represents the last page, so:
convert file.pdf[4--1] file.jpg works, great stuff,
but it doesn't work in GraphicsMagick.
Is there a way of doing this easily, or do I need to find the number of pages?
PS: I need to use GraphicsMagick instead of ImageMagick.
Thank you so much in advance.
|
Multiple verify_password callbacks on flask-httpauth
| 31,305,421 | 6 | 1 | 228 | 0 |
python,flask,flask-httpauth
|
The way I intended that to be handled is by creating two HTTPAuth objects. Each gets its own verify_password callback, and then you can decorate each route with the decorator that is appropriate.
| 0 | 0 | 0 | 0 |
2015-07-08T05:31:00.000
| 1 | 1.2 | true | 31,284,225 | 0 | 0 | 1 | 1 |
Working on a Flask application which will have separate classes of routes to be authenticated against: user routes and host routes(think Airbnb'esque where users and hosts differ substantially).
Creating a single verify_password callback and login_required combo is extremely straightforward, however that isn't sufficient, since some routes will need host authentication and others routes will necessitate user authentication. Essentially I will need to have one verify_password/login_required for user and one for host, but I can't seem to figure out how that would be done since it appears that the callback is global in respect to auth's scope.
|
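A sketch of the accepted two-auth-objects approach; the credential-checking helpers are placeholders, not part of flask-httpauth:

```python
from flask import Flask
from flask_httpauth import HTTPBasicAuth

app = Flask(__name__)
user_auth = HTTPBasicAuth()   # guards user routes
host_auth = HTTPBasicAuth()   # guards host routes

@user_auth.verify_password
def verify_user(username, password):
    return check_user_credentials(username, password)   # hypothetical helper

@host_auth.verify_password
def verify_host(username, password):
    return check_host_credentials(username, password)   # hypothetical helper

@app.route('/user/trips')
@user_auth.login_required
def user_trips():
    return 'user-only data'

@app.route('/host/listings')
@host_auth.login_required
def host_listings():
    return 'host-only data'
```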
Django 1.8 and Python 2.7 using PostgreSQL DB help in fetching
| 31,309,910 | 0 | 0 | 78 | 1 |
python,django,postgresql,python-2.7,django-1.8
|
I think you have misunderstood what inspectdb does. It creates a model for an existing database table. It doesn't copy or replicate that table; it simply allows Django to talk to that table, exactly as it talks to any other table. There's no copying or auto-fetching of data; the data stays where it is, and Django reads it as normal.
| 0 | 0 | 0 | 0 |
2015-07-08T14:15:00.000
| 1 | 1.2 | true | 31,295,352 | 0 | 0 | 1 | 1 |
I'm making an application that will fetch data from an external PostgreSQL database with multiple tables.
Any idea how I can use inspectdb on only a SINGLE table? (I only need that one table.)
Also, the data in the database will be changing continuously. How do I manage that? Do I have to run inspectdb continuously? And what happens to junk values then?
|
Query by empty JsonField in django
| 31,305,709 | -1 | 6 | 2,302 | 0 |
python,django,django-jsonfield
|
Try MyModel.objects.filter(myjsonfield='[]').
| 0 | 0 | 0 | 0 |
2015-07-08T23:04:00.000
| 4 | -0.049958 | false | 31,305,340 | 0 | 0 | 1 | 2 |
I need to query a model by a JSONField; I want to get all records that have an empty value ([]).
I used MyModel.objects.filter(myjsonfield=[]) but it's not working: it returns 0 results even though there are records with myjsonfield=[].
|
Query by empty JsonField in django
| 37,578,853 | 0 | 6 | 2,302 | 0 |
python,django,django-jsonfield
|
The JSONField should have default={}, i.e. a dictionary, not a list.
| 0 | 0 | 0 | 0 |
2015-07-08T23:04:00.000
| 4 | 0 | false | 31,305,340 | 0 | 0 | 1 | 2 |
I need to query a model by a JSONField; I want to get all records that have an empty value ([]).
I used MyModel.objects.filter(myjsonfield=[]) but it's not working: it returns 0 results even though there are records with myjsonfield=[].
|
Sending data from JavaScript to Python LOCALLY
| 31,307,321 | 1 | 0 | 78 | 0 |
javascript,python,html,forms,local
|
The browser's security model prevents sending data to local processes. Your options are:
Write a browser extension that calls a Python script.
Run a local webserver. Most Python web development frameworks include a simple one.
| 0 | 0 | 1 | 0 |
2015-07-09T02:35:00.000
| 1 | 0.197375 | false | 31,307,147 | 0 | 0 | 1 | 1 |
I am writing a program that opens an html form in a browser window. From there, I need to get the data entered in the form and use it in python code. This has to be done completely locally. I do not have access to a webserver or I would be using PHP. I have plenty of experience with Python but not as much experience with JavaScript and no experience with AJAX. Please help! If you need any more information to answer the question, just ask. All answers are greatly appreciated.
|
How to change amazon aws S3 time zone setting for a bucket
| 31,314,635 | 4 | 1 | 8,668 | 0 |
python,django,amazon-web-services,amazon-s3,boto
|
This has nothing to do with the time zone of the machine or the S3 bucket; your machine's clock is incorrect. If the machine time is off by more than 15 minutes, AWS rejects the request for security reasons. Just check that the time on the machine is correct.
| 0 | 0 | 1 | 0 |
2015-07-09T08:37:00.000
| 1 | 1.2 | true | 31,312,292 | 0 | 0 | 1 | 1 |
My server time zone is set to Asia/India. So whenever I try to post an image to the S3 bucket I get the following error:
RequestTimeTooSkewed: The difference between the request time and the current time is too large. RequestTime: Thu, 09 Jul 2015 17:53:21 GMT, ServerTime: 2015-07-09T08:23:22Z, MaxAllowedSkewMilliseconds: 900000, RequestId: 68B8486508D2695A, HostId: g6EfiNV8uJi8JY/Y2JWCIBi7fROEa/Uw2Yaw3fw3pfAbI+ZtaFZV7PnHhZ6Yxw07
How can I change the AWS S3 bucket time as IST?
|
Error when installing through Pip - Windows
| 31,645,802 | 0 | 1 | 1,486 | 0 |
python,django,pip
|
Make sure that python.exe is allowed in your firewall settings and in any antivirus firewall settings. I had the same problem and had to allow the program under my AVG firewall settings, because it still wouldn't work even after I had allowed it in the Windows firewall.
| 0 | 0 | 0 | 0 |
2015-07-09T10:16:00.000
| 1 | 0 | false | 31,314,540 | 0 | 0 | 1 | 1 |
When trying to install Django through pip we get an error message.
It's a protocol error, and since his system is in Swedish it says something like:
"an attempt was made to access a socket in a way forbidden by its access permissions"...
It seems like we need admin access or something? We tried to run the command prompt as an administrator by marking the "run as administrator" box in the command prompt settings. We are lost; any help is greatly appreciated.
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after connection broken by 'ProtocolError('Connection aborted.', error(10013, 'Ett f\xf6rs\xf6k gjordes att f\xe5 \xe5tkomst till en socket p\xe5 ett s\xe4tt som \xe4r f\xf6rbjudet av \xe5tkomstbeh\xf6righeterna'))': /simple/django/
Could not find a version that satisfies the requirement django (from versions: )
No matching distribution found for django
|
Django forms - Post
| 31,315,193 | 1 | 0 | 50 | 0 |
python,django,forms,post
|
You're getting the ID of the related object.
Since you say you're using a form, you shouldn't be accessing the data via request.POST but via form.cleaned_data, which does the work of translating it into the actual object.
| 0 | 0 | 0 | 0 |
2015-07-09T10:40:00.000
| 1 | 1.2 | true | 31,315,046 | 0 | 0 | 1 | 1 |
One of my form fields is a ForeignKey drop-down. When the form is submitted I need to get the selected value in views.py.
However, instead of getting the value (using request.POST.get('value', False)), I am getting a number (which seems to be arbitrary).
How can I get the selected value?
Thanks in advance!
|
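A sketch of what the accepted answer means; BookingForm and its room field are invented stand-ins for the form with the ForeignKey drop-down:

```python
# views.py
from django.shortcuts import redirect, render

from .forms import BookingForm   # hypothetical ModelForm with a "room" ForeignKey field

def create_booking(request):
    if request.method == 'POST':
        form = BookingForm(request.POST)
        if form.is_valid():
            # request.POST['room'] is only the raw pk string from the <select>;
            # cleaned_data resolves it to the related model instance.
            room = form.cleaned_data['room']
            print(room.pk, room)          # a real Room object, not just a number
            form.save()
            return redirect('booking_list')
    else:
        form = BookingForm()
    return render(request, 'booking_form.html', {'form': form})
```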
Email Notification for test results by using Python in Robot Framework
| 33,709,506 | 2 | 0 | 9,967 | 0 |
python-2.7,robotframework
|
You can use Jenkins to run your Robot Framework test cases. There is an auto-generated mail option in Jenkins to send mail with the test results.
| 0 | 0 | 0 | 1 |
2015-07-09T14:19:00.000
| 4 | 0.099668 | false | 31,320,273 | 0 | 0 | 1 | 1 |
We are using the Robot Framework to execute automation test cases.
Could anyone please guide me on writing a script for e-mail notification of test results?
Note:
I have e-mail server details.
Regards,
-kranti
|
Best practice for polling an AWS SQS queue and deleting received messages from queue?
| 31,322,766 | 25 | 25 | 19,484 | 0 |
python,amazon-web-services,boto,amazon-sqs
|
The long-polling capability of the receive_message() method is the most efficient way to poll SQS. If that returns without any messages, I would recommend a short delay before retrying, especially if you have multiple readers. You may want to even do an incremental delay so that each subsequent empty read waits a bit longer, just so you don't end up getting throttled by AWS.
And yes, you do have to delete the message after you have read it, or it will reappear in the queue. This can actually be very useful in the case of a worker reading a message and then failing before it can fully process it. In that case, it would be re-queued and read by another worker. You also want to make sure the invisibility timeout of the messages is set long enough that the worker has enough time to process the message before it automatically reappears on the queue. If necessary, your workers can adjust the timeout as they are processing if it is taking longer than expected.
| 0 | 0 | 1 | 0 |
2015-07-09T15:31:00.000
| 3 | 1.2 | true | 31,321,996 | 0 | 0 | 1 | 1 |
I have an SQS queue that is constantly being populated by a data consumer and I am now trying to create the service that will pull this data from SQS using Python's boto.
The way I designed it is that I will have 10-20 threads all trying to read messages from the SQS queue and then doing what they have to do on the data (business logic), before going back to the queue to get the next batch of data once they're done. If there's no data they will just wait until some data is available.
I have two areas I'm not sure about with this design
Is it a matter of calling receive_message() with a long time_out value and if nothing is returned in the 20 seconds (maximum allowed) then just retry? Or is there a blocking method that returns only once data is available?
I noticed that once I receive a message, it is not deleted from the queue, do I have to receive a message and then send another request after receiving it to delete it from the queue? seems like a little bit of an overkill.
Thanks
|
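A sketch of the poll/process/delete loop described in the accepted answer. It uses boto3 rather than the legacy boto library mentioned in the question, and the queue URL and process() function are placeholders:

```python
import time
import boto3

sqs = boto3.client('sqs', region_name='us-east-1')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'  # placeholder

def worker():
    while True:
        # Long poll: returns as soon as messages arrive, or after 20 s with none
        resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                                   MaxNumberOfMessages=10,
                                   WaitTimeSeconds=20)
        messages = resp.get('Messages', [])
        if not messages:
            time.sleep(1)                  # small extra back-off on empty reads
            continue
        for msg in messages:
            process(msg['Body'])           # hypothetical business-logic function
            # Delete only after successful processing; otherwise the message
            # reappears once its visibility timeout expires.
            sqs.delete_message(QueueUrl=QUEUE_URL,
                               ReceiptHandle=msg['ReceiptHandle'])
```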
what is the difference between django html_message and message in send mail
| 31,349,275 | 0 | 3 | 905 | 0 |
django,python-2.7,django-templates,django-views
|
message is plain text, while html_message is a message formatted using HTML. If you set both, your email client is probably displaying the HTML version.
| 0 | 0 | 0 | 0 |
2015-07-10T19:48:00.000
| 2 | 1.2 | true | 31,349,125 | 0 | 0 | 1 | 1 |
When I try to send a mail with Django's send_mail, only the HTML message arrives and not the plain-text message; I want to know the difference between the two.
Is there a good way to send mails through a template or HTML files, because I want a common mailing system in my app?
Note: the difference between the two is what matters most to me.
THIS IS WHAT I DID
msg_html = '<b>HELLLOOOOO</b>'
msg_plain = 'Normalallalaa'
send_mail("titleeeee", msg_plain, "sender@test", ["reciever@tese",], html_message=msg_html)
My mail contained only the Hello in bold.
Where did my plain message go?
|
sql.h not found when installing PyODBC on Heroku
| 61,108,863 | 0 | 83 | 62,898 | 1 |
python,heroku,pyodbc
|
RedHat/CentOS:
dnf install -y unixODBC-devel
along with unixODBC installation
| 0 | 0 | 0 | 0 |
2015-07-11T03:31:00.000
| 7 | 0 | false | 31,353,137 | 0 | 0 | 1 | 4 |
I'm trying to install PyODBC on Heroku, but I get fatal error: sql.h: No such file or directory in the logs when pip runs. How do I fix this error?
|
sql.h not found when installing PyODBC on Heroku
| 59,790,771 | 1 | 83 | 62,898 | 1 |
python,heroku,pyodbc
|
I recently saw this error in Heroku. To fix this problem I took the following steps:
Add an Aptfile to the root folder, containing the following:
unixodbc
unixodbc-dev
python-pyodbc
libsqliteodbc
Commit that
Run heroku buildpacks:clear
Run heroku buildpacks:add --index 1 heroku-community/apt
Push to Heroku
For me the problem was that I had previously installed the buildpack for Python, which was not needed. By running heroku buildpacks:clear I removed all unneeded buildpacks, then added back the one I needed. So if you follow these steps, be sure to make note of the buildpacks you need. To view your buildpacks, run heroku buildpacks before following these steps.
| 0 | 0 | 0 | 0 |
2015-07-11T03:31:00.000
| 7 | 0.028564 | false | 31,353,137 | 0 | 0 | 1 | 4 |
I'm trying to install PyODBC on Heroku, but I get fatal error: sql.h: No such file or directory in the logs when pip runs. How do I fix this error?
|
sql.h not found when installing PyODBC on Heroku
| 47,557,567 | 1 | 83 | 62,898 | 1 |
python,heroku,pyodbc
|
The other answers are more or less correct; you're missing the unixodbc-dev[el] package for your operating system; that's what pip needs in order to build pyodbc from source.
However, a much easier option is to install pyodbc via the system package manager. On Debian/Ubuntu, for example, that would be apt-get install python-pyodbc. Since pyodbc has a lot of compiled components and interfaces heavily with the UnixODBC OS-level packages, it is probably a better fit for a system package rather than a Python/pip-installed one.
You can still list it as a dependency in your requirements.txt files if you're making code for distribution, but it'll usually be easier to install it via the system PM.
| 0 | 0 | 0 | 0 |
2015-07-11T03:31:00.000
| 7 | 0.028564 | false | 31,353,137 | 0 | 0 | 1 | 4 |
I'm trying to install PyODBC on Heroku, but I get fatal error: sql.h: No such file or directory in the logs when pip runs. How do I fix this error?
|
sql.h not found when installing PyODBC on Heroku
| 31,358,757 | 8 | 83 | 62,898 | 1 |
python,heroku,pyodbc
|
You need the unixODBC devel package. I don't know what distro you are using but you can google it and build from source.
| 0 | 0 | 0 | 0 |
2015-07-11T03:31:00.000
| 7 | 1 | false | 31,353,137 | 0 | 0 | 1 | 4 |
I'm trying to install PyODBC on Heroku, but I get fatal error: sql.h: No such file or directory in the logs when pip runs. How do I fix this error?
|
PyCharm 4.5.3 Django manage.py task not working after update
| 31,439,322 | 0 | 0 | 902 | 0 |
python,django,intellij-idea,pycharm
|
For version 4.5.3 only: the issue is with the Django settings file, in TEMPLATES. It's a bug with the DIRS entry of TEMPLATES (see the bug report). I commented out TEMPLATES and it worked for me.
| 0 | 0 | 0 | 0 |
2015-07-11T20:54:00.000
| 1 | 1.2 | true | 31,361,738 | 1 | 0 | 1 | 1 |
After upgrading to 4.5.3, my Tools->Run manage.py task no longer works. Clicking on the link does nothing; no window pops up. Going to "Python Console" does not work either: nothing happens.
Going to Django in "Languages and Frameworks" and deselecting django allows the "python console" to properly work as a python interpreter. Reselecting Django reproduces the above results.
Settings for Django:
Django Project Root is the same folder which contains manage.py
Settings is my settings file
Manage script is manage.py
Settings for Django Console:
proper python interpreter
working directory is the same directory where manage.py is.
Project runs fine
Again, it all worked fine, updated to 4.5.3, and it stopped working. Any ideas?
Other issues, caused by the same thing, is that my test configuration doesn't run.
Edit: youtrack.jetbrains.com/issue/PY-16434 - Added
Edit: It's a bug, being fixed
|
Social login in using django-allauth without leaving the page
| 31,370,619 | 0 | 2 | 610 | 0 |
python,django,oauth-2.0,google-oauth,django-allauth
|
One option is that the primary form pops up social auth in a new window then uses AJAX to poll for whether the social auth has completed. As long as you are fine with the performance characteristics of this (it hammers your server slightly), then this is probably the simplest solution.
| 0 | 0 | 0 | 0 |
2015-07-12T17:25:00.000
| 2 | 0 | false | 31,370,534 | 0 | 0 | 1 | 2 |
I'm using django 1.8.3 and django-allauth 0.21.0 and I'd like the user to be able to log in using e.g. their Google account without leaving the page. The reason is that there's some valuable data from the page they're logging in from that needs to be posted after they've logged in. I've already got this working fine using local account creation, but I'm having trouble with social because many of the social networks direct the user away to a separate page to ask for permissions, etc. Ideally, I'd have all this happening in a modal on my page, which gets closed once authentication is successful.
The only possible (though not ideal) solution I can think of at the moment is to force the authentication page to open up in another tab (e.g. using target="_blank" in the link), then prompting the user to click on something back in the original window once the authentication is completed in the other tab.
However, the problem here is that I can't think of a way for the original page to know which account was just created by the previously-anonymous user without having them refresh the page, which would cause the important data that needs to be posted to be lost.
Does anyone have any ideas about how I could accomplish either of the two solutions I've outlined above?
|
Social login in using django-allauth without leaving the page
| 32,250,705 | 1 | 2 | 610 | 0 |
python,django,oauth-2.0,google-oauth,django-allauth
|
I ended up resolving this by using Django's session framework. It turns out that the session ID is automatically passed through the oauth procedure by django-allauth, so anything that's stored in request.session is accessible on the other side after login is complete.
| 0 | 0 | 0 | 0 |
2015-07-12T17:25:00.000
| 2 | 1.2 | true | 31,370,534 | 0 | 0 | 1 | 2 |
I'm using django 1.8.3 and django-allauth 0.21.0 and I'd like the user to be able to log in using e.g. their Google account without leaving the page. The reason is that there's some valuable data from the page they're logging in from that needs to be posted after they've logged in. I've already got this working fine using local account creation, but I'm having trouble with social because many of the social networks direct the user away to a separate page to ask for permissions, etc. Ideally, I'd have all this happening in a modal on my page, which gets closed once authentication is successful.
The only possible (though not ideal) solution I can think of at the moment is to force the authentication page to open up in another tab (e.g. using target="_blank" in the link), then prompting the user to click on something back in the original window once the authentication is completed in the other tab.
However, the problem here is that I can't think of a way for the original page to know which account was just created by the previously-anonymous user without having them refresh the page, which would cause the important data that needs to be posted to be lost.
Does anyone have any ideas about how I could accomplish either of the two solutions I've outlined above?
|
Customize frappe framework html layout
| 31,379,032 | 0 | 0 | 1,663 | 0 |
python,frameworks,erpnext,frappe
|
bench clear-cache will clear the cache. After doing this, refresh and check.
| 0 | 0 | 0 | 1 |
2015-07-13T07:01:00.000
| 3 | 0 | false | 31,377,196 | 0 | 0 | 1 | 3 |
For ERPNext + Frappe I need to change the layout (footer & header) of the front end. I tried to change base.html (frappe/templates/base.html) but nothing happened. Probably this is because the HTML files need to be compiled somehow. Does anyone have info on how to do it?
UPDATE:
No such command "clear-cache".
Commands:
backup
backup-all-sites
config
get-app
init
migrate-3to4
new-app
new-site
patch
prime-wheel-cache
release
restart
set-default-site
set-mariadb-host
set-nginx-port
setup
shell
start
update
|
Customize frappe framework html layout
| 58,808,805 | 0 | 0 | 1,663 | 0 |
python,frameworks,erpnext,frappe
|
It seems you're not in your bench folder.
When you create a new bench with, for example, bench init mybench, it creates a new folder: mybench.
All bench commands must be run from this folder.
Could you try to run bench --help in this folder? You should see the clear-cache command.
| 0 | 0 | 0 | 1 |
2015-07-13T07:01:00.000
| 3 | 0 | false | 31,377,196 | 0 | 0 | 1 | 3 |
For ERPNext + Frappe I need to change the layout (footer & header) of the front end. I tried to change base.html (frappe/templates/base.html) but nothing happened. Probably this is because the HTML files need to be compiled somehow. Does anyone have info on how to do it?
UPDATE:
No such command "clear-cache".
Commands:
backup
backup-all-sites
config
get-app
init
migrate-3to4
new-app
new-site
patch
prime-wheel-cache
release
restart
set-default-site
set-mariadb-host
set-nginx-port
setup
shell
start
update
|
Customize frappe framework html layout
| 68,402,268 | 0 | 0 | 1,663 | 0 |
python,frameworks,erpnext,frappe
|
If anyone stumbles on this: the command needed is bench build. That will compile any assets referenced by the build.json file in the public folder. (Note: you usually have to create build.json yourself.)
| 0 | 0 | 0 | 1 |
2015-07-13T07:01:00.000
| 3 | 0 | false | 31,377,196 | 0 | 0 | 1 | 3 |
For ERPNext + Frappe I need to change the layout (footer & header) of the front end. I tried to change base.html (frappe/templates/base.html) but nothing happened. Probably this is because the HTML files need to be compiled somehow. Does anyone have info on how to do it?
UPDATE:
No such command "clear-cache".
Commands:
backup
backup-all-sites
config
get-app
init
migrate-3to4
new-app
new-site
patch
prime-wheel-cache
release
restart
set-default-site
set-mariadb-host
set-nginx-port
setup
shell
start
update
|
GAE middlewares for modules?
| 31,397,559 | 0 | 0 | 59 | 0 |
google-app-engine,middleware,google-app-engine-python
|
The way I approached such a scenario (in a Python-only project; I don't know about PHP) was to use a custom handler (inheriting webapp2.RequestHandler, which I was already using for session support). In its customized dispatch() method the user info is collected and stored on the handler object itself.
The implementation of the handler exists in only one version-controlled file, which is symlinked (for GAE accessibility) into each module that references the handler. This way I don't have to manage multiple independent copies of the user and session verification code.
| 0 | 1 | 0 | 0 |
2015-07-13T08:08:00.000
| 1 | 0 | false | 31,378,288 | 0 | 0 | 1 | 1 |
Assume that I have a few modules in my GAE project (say A, B, C). They share the users database and sessions.
For example: module A will manage the login/logout actions (through cookies), and modules B and C will handle other actions. FYI, those modules are developed in both PHP and Python.
Now, I do not want to duplicate the user & session verification code in all 3 modules.
Is there any way for me to put a middleware that runs before all 3 modules for each request? Such as X: it will add headers to each request to set the user id and some of the user's information if the user has logged in.
I.e., after I implement the above idea, each request will run through 1 of the below 3 cases:
X, A
X, B
X, C
What do you say?
Thanks
Update 1: more information
By "middleware" I mean request middleware.
If X is a middleware, then it will run before the request is passed to the app (or module); it will only modify the request, for example:
Do some authentication actions
Add some headers:
X-User-Id: for authorized user id
X-User-Scopes: for scopes of authorized user
etc ...
And of course, it is independent to the inside module's language (PHP or Python or Java or ...)
The X middleware should be configured at app.yaml.
|
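A rough sketch of the shared-handler pattern from the answer, for the Python modules (webapp2 with webapp2_extras sessions). The user lookup and the session secret configuration on the WSGIApplication are assumptions and are omitted here:

```python
# shared_handler.py -- symlinked into each module, as the answer describes
import webapp2
from webapp2_extras import sessions

class BaseHandler(webapp2.RequestHandler):
    def dispatch(self):
        # Runs before every get()/post() of handlers inheriting from this class
        self.session_store = sessions.get_store(request=self.request)
        self.current_user_id = self.session_store.get_session().get('user_id')
        try:
            webapp2.RequestHandler.dispatch(self)
        finally:
            self.session_store.save_sessions(self.response)

class EmployeeHandler(BaseHandler):
    def get(self):
        if not self.current_user_id:
            self.abort(401)
        self.response.write('hello user %s' % self.current_user_id)
```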
Wait 5 seconds before download button appear
| 31,387,864 | 0 | 0 | 1,293 | 0 |
javascript,python,django,timer
|
The only secure way would be to put the logic on the server and check the time there. Make an AJAX call to the server: if the elapsed time is under 5 seconds, do not return the HTML; if it is greater, return the HTML to show.
Another option is to have the link point to your server: if the time is less than five seconds it redirects them to a different page, and if it is greater than 5 it redirects them to the correct content.
Either way, it requires you to keep track of the session time on the server and remove it from the client.
| 0 | 0 | 0 | 0 |
2015-07-13T15:44:00.000
| 3 | 1.2 | true | 31,387,762 | 0 | 0 | 1 | 2 |
I know how to do this with JavaScript, but I need a secure way to do it.
Anybody can view the page source, get the link and not wait the 5 seconds.
Is there any solution? I'm working with JavaScript and Django.
Thanks!
|
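One way to sketch the server-side check from the accepted answer, using Django sessions; the view names and file path are invented:

```python
# views.py
import time
from django.http import FileResponse, HttpResponseForbidden
from django.shortcuts import render

def landing(request):
    # Remember when the page was served; the client never sees this value.
    request.session['page_loaded_at'] = time.time()
    return render(request, 'landing.html')

def download(request):
    loaded_at = request.session.get('page_loaded_at', 0)
    if time.time() - loaded_at < 5:
        return HttpResponseForbidden('Please wait a few seconds.')
    return FileResponse(open('/path/to/file.zip', 'rb'))   # placeholder path
```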
Wait 5 seconds before download button appear
| 31,388,190 | 0 | 0 | 1,293 | 0 |
javascript,python,django,timer
|
Use a server-side timeout: whenever there is an (AJAX) request from the client for the download link, with a timestamp, compare the client-sent timestamp with the current time and derive how long the request must be held at the server side to make up ~5 seconds. By comparing timestamps you get a fairly accurate waiting time, since network delays are taken into account automatically.
| 0 | 0 | 0 | 0 |
2015-07-13T15:44:00.000
| 3 | 0 | false | 31,387,762 | 0 | 0 | 1 | 2 |
I know how to do this with JavaScript, but I need a secure way to do it.
Anybody can view the page source, get the link and not wait the 5 seconds.
Is there any solution? I'm working with JavaScript and Django.
Thanks!
|
How can i use the Javascript in adobe Acrobat to run a Python Script?
| 31,418,813 | 0 | 0 | 169 | 0 |
javascript,python-2.7,pdf,acrobat
|
Not possible.
And that is good so, because allowing such things would be extremely unsafe.
| 0 | 0 | 0 | 0 |
2015-07-14T09:53:00.000
| 1 | 0 | false | 31,403,321 | 0 | 0 | 1 | 1 |
I'm a bit new to Adobe Acrobat JavaScript. I was wondering if it is possible to have a click button in the PDF such that, once I click it, it runs JavaScript which in turn runs a Python script automatically? Thanks in advance.
|
How can i correctly pass arguments to classbasedviews testing Django Rest Framework?
| 31,435,440 | -1 | 4 | 3,215 | 0 |
python,django,testing,django-rest-framework
|
Just found the problem!!
I was using APIRequestFactory()
and should have been using the built-in client from Django REST Framework's APITestCase test class.
| 0 | 0 | 0 | 0 |
2015-07-14T12:05:00.000
| 3 | 1.2 | true | 31,406,106 | 0 | 0 | 1 | 1 |
I want to test some views in DRF project.
The problem comes when I try to check views that have arguments in the urls.
urls.py
url(r'^(?P<company_hash>[\d\w]+)/(?P<timestamp>[\.\d]*)/employees/$',
    EmployeeList.as_view(), name='employeelist'),
views.py
class EmployeeList(ListCreateAPIView):
    serializer_class = EmployeeDirectorySerializer

    def inner_company(self):
        company_hash = self.kwargs['company_hash']
        return get_company(company_hash)

    def get_queryset(self):
        return Employee.objects.filter(company=self.inner_company())
test.py
class ApiTests(APITestCase):
    def setUp(self):
        self.factory = APIRequestFactory()
        self.staff = mommy.make('directory.Employee', user__is_staff=True)
        self.employee = mommy.make('directory.Employee')
        self.hash = self.employee.company.company_hash

    def getResponse(self, url, myView, kwargs):
        view = myView.as_view()
        request = self.factory.get(url, kwargs)
        force_authenticate(request, user=user)
        response = view(request)
        return response

    def test_EmployeeList(self):
        kwargs = {'timestamp': 0, 'company_hash': self.hash}
        url = reverse('employeelist', kwargs=kwargs)
        testedView = EmployeeList
        response = self.getResponse(url, testedView, kwargs=kwargs)
        self.assertEqual(response.status_code, 200)
I'm getting this error
company_hash = self.kwargs['company_hash']
KeyError: 'company_hash'
That is, the args aren't being passed to the view.
I've tried passing the args in many different ways and can't find a solution.
Any help is welcome!
|
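A sketch of the test rewritten around the built-in client mentioned in the accepted answer, reusing the names from the question; with self.client the URL kwargs are filled in by normal routing:

```python
from django.core.urlresolvers import reverse   # Django 1.x import path
from rest_framework.test import APITestCase

class ApiTests(APITestCase):
    # setUp as in the question: self.staff, self.employee, self.hash ...

    def test_employee_list(self):
        url = reverse('employeelist',
                      kwargs={'timestamp': 0, 'company_hash': self.hash})
        self.client.force_authenticate(user=self.staff)
        response = self.client.get(url)        # kwargs reach the view via the resolver
        self.assertEqual(response.status_code, 200)
```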
How can I get my Django admin styles to load?
| 31,523,646 | 0 | 0 | 110 | 0 |
python,django,amazon-s3
|
The reason my admin styles weren't loading was because I had DEBUG = True in a different settings file than my main settings file (which had DEBUG = False).
| 0 | 0 | 0 | 0 |
2015-07-14T17:01:00.000
| 1 | 1.2 | true | 31,412,936 | 0 | 0 | 1 | 1 |
I'm using Django 1.8 with static files and also using django-offsite-storage.
When I browse the Django admin in my browser on my local machine it's requesting the CSS file /admin/css/base.css from my S3 bucket but without the hashed version. I want it to use the hashed file name version because that's what get's uploaded to the S3 bucket.
I've tried switching DEBUG = True or False, but neither do the trick.
How can I get it to do this on my local machine?
|
travis setup heroku command on Windows 7 64 bit
| 31,445,471 | 0 | 1 | 456 | 0 |
python,ruby,windows,heroku,travis-ci
|
If you didn't have the Heroku Toolbelt added to the $PATH environment variable during installation, here are some steps to check:
Check whether the Heroku Toolbelt is in the PATH variable. If not, cd to your Heroku Toolbelt installation folder, then click on the address bar and copy it.
Go to the Control Panel, then click System and Advanced System Protection.
Go to Environment Variables, then look for $PATH in the System Variables
After the last program in the variable, put a ; then paste in your Heroku CLI folder and click OK. (This requires cmd to be restarted manually)
Login to Heroku CLI
grab the token key from heroku auth:token
run travis setup heroku: if the setup goes smoothly, you shouldn't get "command not found", and it will prompt you for the Heroku auth key. It will ask whether you want to encrypt the auth key (highly recommended) and verify the information you provided with the Toolbelt and the Travis CLI.
commit changes
you should be able to get your app up and running within your tests.
| 0 | 1 | 0 | 1 |
2015-07-15T04:55:00.000
| 1 | 0 | false | 31,421,793 | 0 | 0 | 1 | 1 |
Hi there I'm trying to deploy my python app using Travis CI but I'm running into problems when I run the "travis setup heroku" command in the cmd prompt.
I'm in my project's root directory, there is an existing ".travis.yml" file in that root directory.
I've also installed Ruby and Travis correctly, because when I run:
"ruby -v" I get "ruby 2.2.2p95 (2015-04-13 revision 50295) [x64-mingw32]"
"travis -v" I get "1.7.7"
When I run "travis setup heroku" I get the message "The system cannot find the path specified", and it then prompts me for a "Heroku API token:"
What's the issue?
|
Admob Ads with Python Subset For Android (PGS4A)
| 31,441,510 | 0 | 0 | 199 | 0 |
java,android,python,android-studio,admob
|
To access the already-implemented Java version you can use Pyjnius. I tried to use it for something else and didn't succeed, though I gave up pretty quickly because it wasn't necessary for my project.
Otherwise, I am afraid, you will have to implement it yourself from scratch.
I have never heard of a finished solution for your problem.
If you have succeeded in using PGU, it wouldn't be so hard.
If not, well, I wish you luck, and please put your solution online for others.
There is an Eclipse plug-in for Python. I think Android Studio does not support PGS4A; I never needed it. The console is the queen.
| 0 | 0 | 0 | 1 |
2015-07-15T21:18:00.000
| 1 | 0 | false | 31,441,307 | 0 | 0 | 1 | 1 |
I'd like to have advertisements in an Android app I've written and built using PGS4A. I've done my research, but there don't seem to be any online resources that explain how to do that just yet. I don't have much knowledge of Java either, which is clearly why I've written the app in Python. Has anyone found a way to achieve this? If not, how difficult would it be to convert the project files into an Android Studio (or even an Eclipse) project? (so one could just implement the ads following the Java AdMob documentation found everywhere)
Thank you in advance.
|
Django ForeignKey form field widget
| 31,518,565 | 0 | 1 | 1,873 | 0 |
python,django,forms
|
If anyone has the same problem, here is the solution (to my problem at least):
I tried to use the clean_<fieldname> method to change the user-entered string into a database id. The method wasn't executing because the validation process stopped earlier, due to the mismatch between the form field and the widget. I redefined the form field as a CharField so that step of the validation passed, and then the clean_<fieldname> method executed without a problem.
| 0 | 0 | 0 | 0 |
2015-07-16T13:45:00.000
| 1 | 1.2 | true | 31,456,035 | 0 | 0 | 1 | 1 |
First of all I have tried to research my problem but have not been able to find what I need. My problem might be related to the design of my project (any help would be appreciated).
The problem I am facing is as follows:
I have a few models
I have a model that would be used specifically to create a ModelForm
In this model I have a ForeignKey field that is represented by default as a select/option widget in the ModelForm (for each <option>, the value attribute is the ForeignKey and the text between the tags is the __str__() of the model the ForeignKey points to; the user sees the __str__() and the value attribute of the option tag is submitted, which is great).
So far so good but I want to replace the widget with an input text field so I can implement it as a search field.
Now when the user submits the form the string entered in the text input field is submitted and of course django doesn't like that since it expects a foreign key
I already can think of a few solutions to the problem and I am sure I can make it work but each of them feels like I would be violating some best practices. So my question is what should I do?
Do I exclude this particular field from the ModelForm and implement it as a text input, then after form submission make a query with its value and store the ForeignKey in the DB?
Do I manipulate the data with JavaScript upon submission so that Django receives correct information
Can I clean this fields data with Django and transform it from string to FK?
Am I going the wrong way with this or there is a Django feature for this type of situation?
|
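A sketch of the approach from the accepted answer (Book/Author are invented stand-ins): declare the field as a CharField so the typed search string passes field validation, then resolve it to an instance in clean_<fieldname>:

```python
# forms.py
from django import forms
from .models import Author, Book   # hypothetical models

class BookForm(forms.ModelForm):
    # Override the ForeignKey field so a plain search string validates
    author = forms.CharField()

    class Meta:
        model = Book
        fields = ['title', 'author']

    def clean_author(self):
        text = self.cleaned_data['author']
        try:
            # The returned value replaces the string and is used as the FK
            return Author.objects.get(name__iexact=text)
        except Author.DoesNotExist:
            raise forms.ValidationError('No author matches %r' % text)
```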
Is there any way to create two tables in the database for one odoo module?
| 31,456,810 | 0 | 0 | 592 | 1 |
python-2.7,openerp,odoo
|
Yes, you can: for every class new_class(... with a unique _name="new.class", a table is created in the database. If you want more than one table, you need to create more than one class in your .py file.
For more reference, look at the account module: in account_invoice.py you have class account_invoice(models.Model): _name = "account.invoice" and class account_invoice_line(models.Model): _name = "account.invoice.line"; each class gets its own table in the database.
I hope this helps!
| 0 | 0 | 0 | 0 |
2015-07-16T14:00:00.000
| 1 | 1.2 | true | 31,456,406 | 0 | 0 | 1 | 1 |
Each module in Odoo has a table in the database.
I'd like to know if I can create two tables in the Odoo database for one module.
|
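A sketch of a single Odoo 8 module defining two models, each of which gets its own table; the model and field names are made up:

```python
# my_module/models.py -- every class with its own _name maps to its own table
from openerp import models, fields

class LibraryBook(models.Model):
    _name = 'library.book'            # table: library_book
    name = fields.Char(string='Title')

class LibraryMember(models.Model):
    _name = 'library.member'          # table: library_member
    name = fields.Char(string='Name')
    book_ids = fields.Many2many('library.book', string='Borrowed books')
```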
Extract a javascript variable with flask
| 31,458,066 | 1 | 0 | 257 | 0 |
javascript,python,flask,jinja2
|
That doesn't make sense: Flask/Jinja can't read JavaScript variables.
However (as you said it is a POST request) you could go three ways:
Pass the counter variable's value in the POST request URL (by dynamically modifying the DOM link or the form action URL), e.g. /path/postaction?counter=4;
If it's a POST request from a form, you could modify the form action (see above) or add a hidden input to the form;
Set a cookie and read it in the next request (I don't like this option).
| 0 | 0 | 0 | 0 |
2015-07-16T14:30:00.000
| 1 | 1.2 | true | 31,457,146 | 0 | 0 | 1 | 1 |
I am working on a little flask project and in my javascript script I have a variable which works as a counter.
When I receive a POST request I would like my python script to extract this counter variable.
I tried to set a jinja2 variable by doing {%set extractor_var = js_counter%} but it seems to be impossible to use a javascript variable inside a jinja2 template.
Can anyone lead me to another solution?
|
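A sketch of the first two options from the accepted answer, on the Flask side; the endpoint and parameter names are invented. The counter is put into the POST request itself and read in the view:

```python
# app.py
from flask import Flask, request

app = Flask(__name__)

@app.route('/submit', methods=['POST'])
def submit():
    # The JS sets the form action to /submit?counter=<value>, or adds
    # <input type="hidden" name="counter"> to the form before submitting.
    counter = request.args.get('counter', type=int)
    if counter is None:
        counter = request.form.get('counter', type=int)
    return 'got counter=%s' % counter
```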
Sentry limit notifications to certain users
| 31,466,945 | 0 | 1 | 162 | 0 |
python,sentry
|
I found it, and in fact found two different ways:
I can make projects public (under project settings) so everyone (with a Sentry account) can access them.
I can give a user access to a project, and that user can opt out of emails by going to their Account (top right) and then Notifications.
| 0 | 0 | 0 | 1 |
2015-07-17T00:48:00.000
| 2 | 0 | false | 31,466,828 | 0 | 0 | 1 | 2 |
Can I give a user access to a sentry project but not send him all the emails?
Sometimes I want to forward the error message to our mobile developers, so they can see the parameters, but they don't need to get all the other reports.
|
Sentry limit notifications to certain users
| 31,466,990 | 0 | 1 | 162 | 0 |
python,sentry
|
The best way to deal with this is to make the organization "open membership". Members can then choose to join or leave teams, and opt-out of project-specific notifications when they want.
| 0 | 0 | 0 | 1 |
2015-07-17T00:48:00.000
| 2 | 0 | false | 31,466,828 | 0 | 0 | 1 | 2 |
Can I give a user access to a sentry project but not send him all the emails?
Sometimes I want to forward the error message to our mobile developers, so they can see the parameters, but they don't need to get all the other reports.
|
Computing an index that accounts for score and date within Google App Engine Datastore
| 31,478,203 | 2 | 0 | 68 | 0 |
python,google-app-engine,google-bigquery,google-cloud-datastore,google-prediction
|
Such a system is often called "frecency", and there's a number of ways to do it. One way is to have votes 'decay' over time; I've implemented this in the past on App Engine by storing a current score and a last-updated; any vote applies an exponential decay to the score based on the last-updated time, before storing both, and a background process runs a few times a day to update the score and decay time of any posts that haven't received votes in a while. Thus, a post's score always tends towards 0 unless it consistently receives upvotes.
Another, even simpler system, is to serial-number posts. Whenever someone upvotes a post, increment its number. Thus, the natural ordering is by creation order, but votes serve to 'reshuffle' things, putting more upvoted posts ahead of newer but less voted posts.
| 0 | 1 | 0 | 0 |
2015-07-17T14:11:00.000
| 1 | 1.2 | true | 31,477,842 | 0 | 0 | 1 | 1 |
I'm working on an Google App Engine (python) based site that allows for user generated content, and voting (like/dislike) on that content.
Our designer has, rather nebulously, spec'd that the front page should be a balance between recent content and popular content, probably with the assumption that these are just creating a score value that weights likes/dislikes vs time-since-creation. Ultimately, the goals are (1) bad content gets filtered out somewhat quickly, (2) content that continues to be popular stays up longer, and (3) new content has a chance at staying long enough to get enough votes to determine if its good or bad.
I can easily compute a score based on likes/dislikes. But incorporating the time factor to produce a single score that can be indexed doesn't seem feasible. I would essentially need to reindex all the content every day to adjust its score, which seems cost prohibitive once we have any sizable amount of content. So, I'm at a loss for potential solutions.
I've also suggested something where where we time box it (all time, daily, weekly), but he says users are unlikely to look at the tabs other than the default view. Also, if I filtered based on the last week, I'd need to sort on time, and then the secondary popularity sort would essentially be meaningless since submissions times would be virtually unique.
Any suggestions on solutions that I might be overlooking?
Would something like Google's Prediction API or BigQuery be able to handle this better?
|
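A sketch of the decaying-score idea from the accepted answer; the half-life and the exact update rule are arbitrary choices, not App Engine APIs:

```python
import math
import time

HALF_LIFE_SECONDS = 2 * 24 * 3600   # a vote loses half its weight every 2 days

def apply_vote(score, last_updated, vote, now=None):
    """Decay the stored score up to 'now', then add the new vote (+1 or -1)."""
    now = now or time.time()
    decay = math.exp(-math.log(2) * (now - last_updated) / HALF_LIFE_SECONDS)
    return score * decay + vote, now   # store both values back on the entity
```

Indexing on the stored score then gives the blended "recent and popular" ordering, with the periodic background pass from the answer handling posts that stop receiving votes.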
Flask: Prevent HTML escaping in the browser's URL bar
| 31,480,666 | 4 | 1 | 217 | 0 |
python,flask
|
It is not Flask that turns * into %2A, it is the browser.
Character * is not legal in an URL and there is nothing you can do about it. Browsers must escape illegal characters in sent requests. A browser might leave * in the address bar (and escape it silently), but you should not expect browsers to do so.
| 0 | 0 | 0 | 0 |
2015-07-17T15:56:00.000
| 1 | 1.2 | true | 31,479,882 | 0 | 0 | 1 | 1 |
I am working on a RESTful API in Flask. It allows wildcards to be used. The problem is that when a URL is entered, such as mysite.com/get/abc*, Flask turns this URL into mysite.com/get/abc%2A, both on the backend and in the browser's URL bar.
This is easy enough to handle on the backend, but how can I prevent the browser's URL bar from containing ugly things like '%2A'?
|
How do I redirect and pass my Google API data after handling it in my Oauth2callback handler on Google App Engine
| 31,496,649 | 1 | 0 | 37 | 0 |
python-2.7,google-app-engine,oauth-2.0
|
I think I found a better way of doing it: I just use the OAuth callback to redirect only, with no data, and then in the redirect handler I access the API data.
| 0 | 1 | 1 | 0 |
2015-07-18T23:34:00.000
| 1 | 0.197375 | false | 31,496,583 | 0 | 0 | 1 | 1 |
My Oauth2Callback handler is able to access the Google API data I want - I want to know the best way to get this data to my other handler so it can use the data I've acquired.
I figure I can add it to the datastore, or also perform redirect with the data. Is there a "best way" of doing this? For a redirect is there a better way than adding it to query string?
|
Django, Pinax, couldn't extract file
| 31,564,132 | 2 | 3 | 150 | 0 |
python,django,pinax,pythonanywhere
|
The .html at the end of /tmp/django_project_template_e1ulrY_download/master.html seems suspect to me. I'm guessing that you got an error html page instead of the archive you requested. Check the contents of that file to see what happened.
| 0 | 0 | 0 | 0 |
2015-07-19T17:55:00.000
| 1 | 0.379949 | false | 31,504,190 | 0 | 0 | 1 | 1 |
I'm following the exact directions of getting started with pinax-project-account. You can see them [here][1]. I just created my virtual environment and installed the requirements. The problem with when I run this command: django-admin.py startproject --template=https://github.com/pinax/pinax-project-account/zipball/master. I get this error:
CommandError: couldn't extract file /tmp/django_project_template_e1ulrY_download/master.html to /tmp/django_project_template_wU3ju6_extract: Path not a recognized archive format: /tmp/django_project_template_e1ulrY_download/master.html
I can get this working on my local machine but I'm using python anywhere and it doesn't seem to like this command?
Any ideas?
|
Django app initialization process
| 31,506,797 | 0 | 0 | 835 | 0 |
python,django,deployment,development-environment
|
Sounds like the quickest (if not most elegant) solution would be to call 'python manage.py runserver' at the end of your script.
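A minimal sketch of that idea (the settings module, the startup helper and its import path are placeholders, not your actual code):
import os
import django
from django.core.management import call_command
from myapp.startup import run_startup_tasks  # hypothetical helper holding your scraping / Graph API calls

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')  # assumed settings module
django.setup()

run_startup_tasks()               # do the one-time startup work first
call_command('runserver', '8000') # then hand over to the dev server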
| 0 | 0 | 0 | 0 |
2015-07-19T22:06:00.000
| 2 | 0 | false | 31,506,425 | 0 | 0 | 1 | 1 |
There is a set of functions that I need to run when my server starts, regardless of the path requested, whether that is "/", "/blog/", or "/blog/post". For development purposes I'd love for this script to run every time I run python manage.py runserver, and for production purposes I would love it to run during deployment. Does anyone know how this can be done?
My script scrapes some data and makes a call to Facebook's Graph API with Python and some of its libraries.
|
How to use _fields option in odoo 8
| 38,759,025 | 1 | 2 | 834 | 0 |
python-2.7,odoo-8,openerp-8
|
self._fields returns a dictionary mapping each field name available on the model to its field object.
self._fields[fieldname] gives you that field object; its repr shows the field type together with the model and field name.
For example, self._fields['price_unit'] in sale.order.line returns
Float: sale.order.line.price_unit
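To list everything at once, a quick introspection sketch (Odoo 8 new-API field objects expose attributes such as type and string):
# inside any model method, e.g. on sale.order.line
for name, field in self._fields.items():
    print(name, field.type, field.string)  # e.g. price_unit, float, Unit Price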
| 0 | 0 | 0 | 0 |
2015-07-20T11:28:00.000
| 2 | 0.099668 | false | 31,515,322 | 0 | 0 | 1 | 1 |
I am new to Odoo.
Does anyone have a tutorial on using the _fields feature in Odoo 8?
In Odoo 8, _columns is deprecated.
A common pattern in OpenERP was to do Model fields introspection using the _columns property. From 8.0, _columns is deprecated in favour of _fields, which contains the list of consolidated fields instantiated using the old or new API.
There is no clear documentation about the _fields options.
Please point me to a good tutorial on this.
|
Django: How to dump the database in 1.8?
| 31,545,842 | 1 | 1 | 309 | 1 |
python,django,django-models
|
You can dump the db directly with mysqldump as allcaps suggested, or run manage.py migrate first and then it should work. It's telling you there are migrations that you have yet to apply to the DB.
| 0 | 0 | 0 | 0 |
2015-07-21T16:49:00.000
| 2 | 1.2 | true | 31,545,025 | 0 | 0 | 1 | 1 |
I used to use manage.py sqlall app to dump the database to SQL statements. However, after upgrading to 1.8, it doesn't work any more.
It says:
CommandError: App 'app' has migrations. Only the sqlmigrate and
sqlflush commands can be used when an app has migrations.
It seems there is not a way to solve this.
I need to dump the database to sql file, so I can use it to clone the whole database else where, how can I accomplish this?
|
Django sqlite database is locked
| 31,547,325 | 9 | 3 | 4,653 | 1 |
python,django,sqlite
|
I have had a lot of these problems with SQLite before. Basically, don't have multiple threads that could potentially write to the db. If that is not acceptable, you should switch to Postgres or something else that is better at concurrency.
Sqlite has a very simple implementation that relies on the file system for locking. Most file systems are not built for low-latency operations like this. This is especially true for network-mounted filesystems and the virtual filesystems used by some VPS solutions (that last one got me BTW).
Additionally, you also have the Django layer on top of all this, adding complexity. You don't know when Django releases connections (although I am pretty sure someone here can give that answer in detail :) ). But again, if you have multiple concurrent writers, you need a database layer that can do concurrency. Period.
I solved this issue by switching to postgres. Django makes this very simple for you, even migrating the data is a no-brainer with very little downtime.
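For reference, the switch is mostly a settings change plus a data dump/load cycle; a sketch of the settings (all values are placeholders):
# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}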
| 0 | 0 | 0 | 0 |
2015-07-21T18:47:00.000
| 2 | 1 | false | 31,547,234 | 0 | 0 | 1 | 1 |
I've been struggling with "sqlite3.OperationalError database is locked" all day....
Searching around for answers to what seems to be a well-known problem, I've found that it is usually explained by the fact that SQLite does not handle multithreading very well: a thread can time out after waiting more than 5 seconds (the default timeout) to write to the db because another thread holds the db lock.
So, having several threads that use the db, one of them writing frequently inside transactions, I began measuring how long the transactions take to complete. I found that no transaction takes more than 300 ms, which makes the above explanation implausible - unless the thread that uses transactions makes ~21 (5000 ms / 300 ms) consecutive transactions while any other thread wanting to write is ignored all that time.
So what other hypothesis could potentially explain this behavior ?
|
PyCharm - how to use a folder that is not in the base directory
| 31,554,899 | 1 | 1 | 83 | 0 |
python,django,directory,pycharm
|
Open File > settings menu and then goto project: foo > Project Structure and press Add Content Root, then select destination directory.
and after folder added in list, right click on the folder and set as source, in last step press OK...
| 0 | 0 | 0 | 0 |
2015-07-22T03:27:00.000
| 1 | 1.2 | true | 31,553,322 | 1 | 0 | 1 | 1 |
I want to use a folder that is not in the base directory of my django project without adding it in to the base directory.
|
Setting query_string for next request / sending search urls around
| 31,573,526 | 0 | 1 | 55 | 0 |
python,pyramid,pylons
|
Search form is a classical example of a form which should use GET. Just use GET and get the correct behaviour for free :) I don't see anything requiring POST in your question.
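A rough sketch of what the GET-based view ends up looking like (the route name, template and run_search helper are assumptions):
from pyramid.view import view_config

@view_config(route_name='search', renderer='templates/search.pt')
def search_view(request):
    query = request.params.get('q', '')                 # read straight from the query string
    results = run_search(query) if query else []        # run_search is a hypothetical helper
    # the query stays in the URL, so the search can be copied, pasted and redone
    return {'query': query, 'results': results}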
| 0 | 0 | 0 | 0 |
2015-07-22T09:31:00.000
| 1 | 1.2 | true | 31,559,238 | 0 | 0 | 1 | 1 |
I have a mini-search form on a Pyramid app webpage, where contents are read and processed upon POST request when user presses a Search button.
I selected POST method of submitting since the web form is otherwise complex and processing them this way plays well with WTForms as well as it seems default and convenient way of handling forms in Pyramid (if request.method == 'POST': ... etc).
But that gets me a problem - I do not have query string (available in request.params) anymore to form an URL that can be copied and pasted elsewhere to redo the search.
request.params is a read-only NestedMultiDict, so I can't add query parameters in there.
Web forms are rendered using Chameleon and in typical way (return {..} for Chameleon template engine to get them and use for rendering HTML).
Is there a way of passing query string explicitly to the next request so that after pressing Search the user gets search query string added to URL? (I do not want to use kludges like HTTPFound redirect to the same view, etc).
|
How to tell scrapy crawler to STOP following more links dynamically?
| 31,590,603 | 0 | 1 | 433 | 0 |
python,scrapy
|
Scrapy's CrawlSpider has an internal _follow_links member variable which is not yet documented (experimental as of now).
Setting self._follow_links = False tells Scrapy to stop following more links, while it still finishes all the Request objects it has already created.
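A rough sketch of how that looks inside a spider callback (the date extraction, cutoff_date and item building are assumptions, not part of Scrapy):
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class NewsSpider(CrawlSpider):
    name = 'news'
    start_urls = ['http://example.com/page/1']
    rules = (Rule(LinkExtractor(allow=r'/page/\d+'), callback='parse_item', follow=True),)

    def parse_item(self, response):
        posted = self.extract_date(response)   # hypothetical helper on the spider
        if posted < self.cutoff_date:          # cutoff_date set elsewhere (assumption)
            self._follow_links = False         # stop scheduling new links...
        yield self.build_item(response)        # ...but already-queued requests still complete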
| 0 | 0 | 1 | 0 |
2015-07-22T14:03:00.000
| 3 | 1.2 | true | 31,565,422 | 0 | 0 | 1 | 1 |
Basically I have a regex rule for following pages
Each page has 50 links
When I hit a link that is too old (based on a pre-defined date-time),
I want to tell Scrapy to stop following more pages, but NOT stop it entirely: it must continue to scrape the links it has already decided to scrape (i.e. complete all the Request objects already created). It just must NOT follow any more links, so the program will eventually grind to a stop once it is done scraping all those links.
Is there any way i can do this inside the spider?
|
For Django Rest Framework, what is the difference in use case for HyperLinkedRelatedField and HyperLinkedIdentityField?
| 31,571,773 | 5 | 15 | 3,157 | 0 |
python,django,django-rest-framework
|
The obvious answer is that HyperLinkedIdentityField is meant to point to the current object only, whereas HyperLinkedRelatedField is meant to point to something that the current object references. I suspect under the hood the two are different only in that the identity field has less work to do in order to find the related model's URL routes (because the related model is the current model), while the related field has to actually figure out the right URLs for some other model.
In other words, HyperLinkedIdentityField is lighter-weight (more efficient), but won't work for models other than the current model.
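A small illustration (the Book/Author models and view names are made up):
from rest_framework import serializers
from myapp.models import Book   # assumed model

class BookSerializer(serializers.HyperlinkedModelSerializer):
    # identity field: a link to *this* Book's own detail endpoint
    url = serializers.HyperlinkedIdentityField(view_name='book-detail')
    # related field: a link to another object this Book references
    author = serializers.HyperlinkedRelatedField(view_name='author-detail', read_only=True)

    class Meta:
        model = Book
        fields = ('url', 'title', 'author')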
| 0 | 0 | 0 | 0 |
2015-07-22T14:51:00.000
| 2 | 0.462117 | false | 31,566,675 | 0 | 0 | 1 | 1 |
I've of course reviewed the docs, but was wondering if anyone could more succinctly explain the difference in use case and application between these fields. Why would one use one field over the other? Would there be a difference between these fields for a OneToOne relationship?
|
Using a Remote Deskdop Connection to run Python, empty html files
| 31,594,070 | 0 | 0 | 45 | 0 |
python,html,virtual-machine,anaconda,userappdatapath
|
The problem was that Internet Explorer on the VM was very old and therefore not running the html code properly. Updated to Firefox and it worked!
| 0 | 0 | 0 | 0 |
2015-07-22T19:41:00.000
| 1 | 1.2 | true | 31,572,540 | 0 | 0 | 1 | 1 |
I am trying to run my python files from a Remote Desktop Connection (virtual machine?). I copied over a few folders I thought would be relevant and ran Anaconda to install python and the add-ons.
My code runs, but the output is html files and in the VM they are empty. I checked the code for the html and it looks like it writes information from my local C:\ drive. For example, this is a snippet from the html: BEGIN C:\Users\jbyrusb\AppData\Local\Continuum\Anaconda\lib\site-packages\bokeh\server\static\css/bokeh.min.css
I tried to copy the AppData folder over to the VM. Still, the html files come up empty.
Does anyone know why/ a better way to move my things onto a VM? This is my first time using one.
|
How can I create an HTML page that depends on form inputs?
| 31,598,300 | 0 | 0 | 72 | 0 |
python,html,forms
|
Flask+Jinja should work well for what you're trying to do. Essentially, your first page would be a form with a number of form elements. When your form is submitted, that data gets passed back to your flask app where you can extract the users selections. Using these selections you can generate/populate the next html page.
Since the user can seemingly select any combination of fields, the template for your second html page should contain all the possible tables and then only show the selected ones using an if...else statement
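A minimal sketch of that flow (field names and the template are invented):
from flask import Flask, request, render_template

app = Flask(__name__)

@app.route('/report', methods=['POST'])
def build_report():
    selected = request.form.getlist('fields')   # the checkboxes the user ticked in the form
    return render_template('report.html', selected=selected)

# report.html then wraps each table in something like:
# {% if 'inventory' in selected %} ... inventory table ... {% endif %}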
| 0 | 0 | 0 | 0 |
2015-07-23T21:12:00.000
| 2 | 1.2 | true | 31,598,119 | 0 | 0 | 1 | 1 |
For starters - I am using a combination of HTML, Python+Flask/jinja
I have an HTML page which contains a basic form. When users input data to that form, it is passed through my Python/flask script and populates a different HTML template with the inputted form values.
What I need to accomplish is creating variations of the final HTML template based on what fields users choose in the beginning form. e.g.
The flow would appear as:
User selects fields to use in HTML form > data is passed through flask app > data is populated in final HTML template, which is designed around the fields selected in the original form.
The final HTML template is essentially a series of tables. Based on which fields the user selects in the form, some tables will be needed and others not. Whether the user selects a field should determine whether or not the table appears in the final HTML templates code or not.
I'm not entirely sure what tools I can use to accomplish this, and whether I will need something to supplement flask/jinja. Thanks for any input.
|
How to find unused code in Python web site?
| 31,603,369 | 1 | 5 | 1,412 | 0 |
python,django,code-coverage
|
On a well tested project, coverage would be ideal but with some untested legacy code I don't think there is a magical tool.
You could write a big test loading all the pages and run coverage to get some indication.
Cowboy style:
If it's not critical code and you're fairly sure it's unused (e.g. it does not handle payments), comment it out, check that the tests pass, deploy, and wait a week or so before removing it for good (or putting it back if you got a notification).
| 0 | 0 | 0 | 1 |
2015-07-24T03:43:00.000
| 3 | 0.066568 | false | 31,601,820 | 0 | 0 | 1 | 1 |
We have been using Django for a long time. Some old code is not being used now. How can I find which code is not being used any more and remove them.
I used coverage.py with unit tests, which works fine and shows which part of code is never used, but the test covered is very low. Is there any way to use it with WSGI server to find which code have never served any web requests?
|
Google Analytics Management API - Insert method - Insufficient permissions HTTP 403
| 31,866,981 | 0 | 2 | 480 | 1 |
api,python-2.7,google-analytics,insert,http-error
|
The problem was that I was using a service account when I should have been using an installed application. I did not need a service account since I had access using my own credentials. That did the trick for me!
| 0 | 1 | 0 | 0 |
2015-07-24T23:46:00.000
| 2 | 0 | false | 31,621,373 | 0 | 0 | 1 | 1 |
I am trying to add users to my Google Analytics account through the API but the code yields this error:
googleapiclient.errors.HttpError: https://www.googleapis.com/analytics/v3/management/accounts/**accountID**/entityUserLinks?alt=json returned "Insufficient Permission">
I have Admin rights to this account - MANAGE USERS. I can add or delete users through the Google Analytics Interface but not through the API. I have also added the service account email to GA as a user. Scope is set to analytics.manage.users
This is the code snippet I am using in my add_user function which has the same code as that provided in the API documentation.
def add_user(service):
    try:
        service.management().accountUserLinks().insert(
            accountId='XXXXX',
            body={
                'permissions': {
                    'local': [
                        'EDIT',
                    ]
                },
                'userRef': {
                    'email': '[email protected]'
                }
            }
        ).execute()
    except TypeError, error:
        # Handle errors in constructing a query.
        print 'There was an error in constructing your query : %s' % error
    return None
Any help will be appreciated. Thank you!!
|
image.url returns local full path in view.py Django 1.6
| 37,597,365 | 0 | 0 | 1,037 | 0 |
python,django,django-models,django-views
|
I had this problem as well.
A common reason for getting a full local path is when you use a function to change the path in the upload_to field. You just have to make sure that function returns a relative path, not a full path.
In my case my function would create the necessary dirs if none existed. I needed to use the full MEDIA_ROOT path for that, but had to make sure it returned a relative path value for upload_to.
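A sketch of what such an upload_to callable can look like (the 'avatars' layout is just an example):
import os

def avatar_upload_to(instance, filename):
    # do any MEDIA_ROOT-based directory creation separately if you need it,
    # but return only the path *relative* to MEDIA_ROOT
    return os.path.join('avatars', str(instance.pk or 'new'), filename)

# models.py:  image = models.ImageField(upload_to=avatar_upload_to)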
| 0 | 0 | 0 | 0 |
2015-07-25T03:20:00.000
| 2 | 0 | false | 31,622,553 | 0 | 0 | 1 | 1 |
Image.url of an ImageField returns a local path in view.py. I've checked MEDIA_ROOT and MEDIA_URL in settings.py, but image.url (and likewise image.path and image.name) always returns the full local path. I can access the file at the correct URL and the file is saved to the correct path.
Please help me!
|
multiple versions of django/python in a single project
| 31,639,931 | 3 | 1 | 67 | 0 |
python,django,heroku,scrapy
|
It's impossible for apps in the same project to be on different Python versions; the server has to run on one or the other. But it would be possible to have two projects, with your models in a shared app that is installed in both models, and the configuration pointing to the same database.
| 0 | 0 | 0 | 0 |
2015-07-26T16:59:00.000
| 1 | 0.53705 | false | 31,639,596 | 0 | 0 | 1 | 1 |
I have been building a project on Ubuntu 15.04 with Python 3.4 and django 1.7. Now I want to use scrapy djangoitem, but that only runs on python 2.7. It's easy enough to have separate virtualenvs to do the developing in, but how can i put these different apps together in a single project, not only on my local machine, but later on heroku?
If it was just content, I could move the scrapy items over once the work was done, but the idea of djangoitem is that it uses the django model. Does that mean the django model has to be on python 2.7 also in order for djangoitem to access it? Even that is not insurmountable if I then port it to python 3, but it isn't very DRY, especially when i have to run scrapy for frequent updates. Is there a more direct solution, such as a way to have one app be 2.7 and another be 3.4 in the same project? Thanks.
|
How can I ensure cron job runs only on one host at any time
| 31,645,469 | -1 | 0 | 416 | 0 |
python,database,cron,crontab,distributed
|
A simple way:
- start the cron job a little before the needed time (for example, two minutes)
- force a time synchronization (using ntp or ntpdate) (optional paranoid mode)
- wait until the expected time, then run the job
| 0 | 1 | 0 | 0 |
2015-07-27T05:11:00.000
| 1 | -0.197375 | false | 31,645,343 | 0 | 0 | 1 | 1 |
I have a django management command run as a cron job and it is set on multiple hosts to run at the same time. What is the best way to ensure that cron job runs on only one host at any time? One approach is to use db locks as the cron job updates a MySQL db but I am sure there are better(django or pythonic) approaches to achieve what I am looking for
|
Can AWS ElasticMapReduce take S3 folders as Input?
| 32,658,324 | 0 | 0 | 71 | 0 |
python-2.7,amazon-web-services,amazon-s3,amazon-emr
|
Yes you can specify a folder whose sub folders contain all the input files. However in your code you need to ensure that your functions look for the sub-folders in the input, and not just take the main folder as input.
| 0 | 0 | 0 | 0 |
2015-07-27T08:54:00.000
| 1 | 0 | false | 31,648,780 | 0 | 0 | 1 | 1 |
I'm currently trying to run a MapReduce job where the inputs are scattered in different folders underneath a catch-all bucket in S3.
My original approach was to create a cluster for each of the input files and write separate outputs for each of them. However, that would require spinning up more than 200+ clusters and I don't think thats the most efficient way.
I was wondering if I could instead of specifying a file as input into EMR, specify a folder whose subfolders contain all of the input files.
Thanks!
|
How to create empty wordpress permalink and redirect it into django website?
| 31,649,614 | 2 | 0 | 138 | 0 |
python,django,wordpress
|
There are many ways to do this. You will have to provide more info about what you are trying to accomplish to get the right advice.
Make a page with a redirect (this is an ugly solution from an SEO and user perspective).
Handle this at the server level.
Load your Django data with an AJAX call.
| 0 | 0 | 0 | 0 |
2015-07-27T09:21:00.000
| 2 | 0.197375 | false | 31,649,314 | 0 | 0 | 1 | 1 |
I need to do something like the following, but I don't even know if it is possible to accomplish and, if so, how.
I wrote a Django application which I would like to 'attach' to my WordPress blog. However, I need a permalink (but no page in the WordPress pages section) which would point to the Django application on the same server. Is that possible?
|
Why is Apache POI able to write a hyperlink more than 255 characters but not XLSXWriter?
| 31,662,734 | 0 | 1 | 599 | 1 |
java,python,excel,apache-poi,xlsxwriter
|
255 characters in a URL is an Excel 2007+ limitation. Try it in Excel.
I think the XLS format allowed longer URLs (so perhaps that is the difference).
Also XlsxWriter doesn't use the HYPERLINK() function internally (although it is available to the user via the standard interface).
| 0 | 0 | 0 | 0 |
2015-07-27T19:19:00.000
| 2 | 0 | false | 31,661,485 | 0 | 0 | 1 | 2 |
I'm trying to embed a bunch of URLs into an Excel file using Python with XLSXWriter's function write_url(), but it gives me the warning of it exceeding the 255 character limit. I think this is happening because it may be using the built-in HYPERLINK Excel function.
However, I found that Apache POI from Java doesn't seem to have that issue. Is it because they directly write it into the cell itself or is there a different reason? Also, is there a workaround in Python that can solve this issue?
|
Why is Apache POI able to write a hyperlink more than 255 characters but not XLSXWriter?
| 36,582,681 | 1 | 1 | 599 | 1 |
java,python,excel,apache-poi,xlsxwriter
|
Obviously the length limitation of a hyperlink address in .xlsx (using Excel 2013) is 2084 characters. Generating a file with a longer address using POI, repairing it with Excel and saving it will yield an address with a length of 2084 characters.
The Excel UI and .xls files seem to have a limit of 255 characters, as already mentioned by other commenters.
| 0 | 0 | 0 | 0 |
2015-07-27T19:19:00.000
| 2 | 0.099668 | false | 31,661,485 | 0 | 0 | 1 | 2 |
I'm trying to embed a bunch of URLs into an Excel file using Python with XLSXWriter's function write_url(), but it gives me the warning of it exceeding the 255 character limit. I think this is happening because it may be using the built-in HYPERLINK Excel function.
However, I found that Apache POI from Java doesn't seem to have that issue. Is it because they directly write it into the cell itself or is there a different reason? Also, is there a workaround in Python that can solve this issue?
|
multidomain configuration for flask application
| 33,986,241 | 0 | 5 | 504 | 0 |
python,flask
|
Use app.run(host='0.0.0.0') if you want flask to accept any host name.
| 0 | 0 | 0 | 0 |
2015-07-27T20:47:00.000
| 2 | 0 | false | 31,663,007 | 0 | 0 | 1 | 2 |
I need to configure a Flask application to handle requests with any host in the HTTP Host header.
If some FQDN is specified in SERVER_NAME, I get a 404 error when a request comes in with any other domain.
How should SERVER_NAME be defined in the configuration?
How can the requested HTTP hostname be routed/blueprint-ed?
|
multidomain configuration for flask application
| 34,395,827 | 0 | 5 | 504 | 0 |
python,flask
|
To allow any domain name just remove 'SERVER_NAME' from application config
| 0 | 0 | 0 | 0 |
2015-07-27T20:47:00.000
| 2 | 0 | false | 31,663,007 | 0 | 0 | 1 | 2 |
I need to configure a Flask application to handle requests with any host in the HTTP Host header.
If some FQDN is specified in SERVER_NAME, I get a 404 error when a request comes in with any other domain.
How should SERVER_NAME be defined in the configuration?
How can the requested HTTP hostname be routed/blueprint-ed?
|
How to return 401 authentication from flask API?
| 31,666,814 | 3 | 4 | 13,044 | 0 |
python-2.7,flask,basic-authentication,www-authenticate
|
This is a common problem when working with REST APIs and browser clients. Unfortunately there is no clean way to prevent the browser from displaying the popup. But there are tricks that you can do:
You can return a non-401 status code. For example, return 403. Technically it is wrong, but if you have control of the client-side API, you can make it work. The browser will only display the login dialog when it gets a 401.
Another maybe a bit cleaner trick is to leave the 401 in the response, but not include the WWW-Authenticate header in your response. This will also stop the login dialog from appearing.
And yet another (that I haven't tried myself, but have seen mentioned elsewhere) is to leave the 401 and the WWW-Authenticate, but change the auth method from Basic to something else that is unknown to the browser (i.e. not Basic and not Digest). For example, make it CustomBasic.
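As an illustration of the second trick, a minimal Flask sketch that returns 401 without the WWW-Authenticate header (the JSON body is an arbitrary choice):
from flask import Flask, jsonify

app = Flask(__name__)

@app.errorhandler(401)
def unauthorized(error):
    # no WWW-Authenticate header is set, so browsers skip the login dialog
    return jsonify(message='unauthorized'), 401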
| 0 | 0 | 1 | 0 |
2015-07-28T02:54:00.000
| 2 | 1.2 | true | 31,666,601 | 0 | 0 | 1 | 1 |
I have developed an API in Flask which uses HTTP Basic Auth to authenticate users. The API works absolutely fine in Fiddler and returns 401 when we pass wrong credentials, but when I use it on the login page I get an extra popup from the browser. I really don't want to see this extra popup asking for credentials (the browser's default behaviour when it receives a 401 with WWW-Authenticate: Basic realm="Authentication Required").
It works fine when deployed locally but not when hosted on a remote server.
How can we return a 401 that will not make the browser display the popup asking for credentials?
|
Access ORM models from different classes in Odoo/OpenERP
| 31,680,404 | 0 | 0 | 714 | 0 |
python,odoo
|
It's pretty basic and simple: any Python class can be imported from its namespace, so import your class from its namespace and instantiate it.
Even the Model class, or any class inherited from Model, can be imported and instantiated like this.
self.pool is just the ORM registry/cache for accessing the framework's persistence layer.
Best regards
| 0 | 0 | 0 | 0 |
2015-07-28T12:05:00.000
| 2 | 0 | false | 31,675,839 | 0 | 0 | 1 | 1 |
I am aware that you can get a reference to an existing model from within another model by using self.pool.get('my_model')
My question is, how can I get a reference to a model from a Python class that does NOT extend 'Model'?
|
What is the best way to use git with Django?
| 31,681,446 | 2 | 0 | 71 | 0 |
python,django,git
|
When doing Django development in Git you'll typically want to exclude *.db files, *.pyc files, your virtualenv directory, and whatever files your IDE and OS may create (eg: DS_store, *.swp, *.swo)
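Typical entries for such a .gitignore (a sketch only; adjust the virtualenv directory name to whatever yours is called):
*.pyc
__pycache__/
*.db
db.sqlite3
/venv/
.idea/
.DS_Store
*.swp
*.swo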
| 0 | 0 | 0 | 0 |
2015-07-28T15:47:00.000
| 1 | 0.379949 | false | 31,681,276 | 0 | 0 | 1 | 1 |
I'm starting with Python and Django development and I'm creating a project that I want to share with git. When I started the project I saw folders like "local", "lib", "bin", and "include". Should I ignore these folders or can I commit them?
Is there a "master" .gitignore for Django files? I found some files on Google but none of them mentioned these folders.
|
Configure Web2Py to use Anaconda Python
| 31,687,769 | 2 | 0 | 712 | 0 |
python-2.7,web2py,anaconda,gensim
|
The Windows binary includes it's own Python interpreter and will therefore not see any packages you have in your local Python installation.
If you already have Python installed, you should instead run web2py from source.
| 0 | 0 | 0 | 0 |
2015-07-28T19:05:00.000
| 1 | 1.2 | true | 31,685,048 | 1 | 0 | 1 | 1 |
I am new to Web2Py and Python stack. I need to use a module in my Web2Py application which uses "gensim" and "nltk" libraries. I tried installing these into my Python 2.7 on a Windows 7 environment but came across several errors due to some issues with "numpy" and "scipy" installations on Windows 7. Then I ended up resolving those errors by uninstalling Python 2.7 and instead installing Anaconda Python which successfully installed the required "gensim" and "nltk" libraries.
So, at this stage I am able to see all these "gensim" and "nltk" libraries resolving properly without any error in "Spyder" and "PyCharm". However, when I run my application in Web2Py, it still complains about "gensim" and gives this error: <type 'exceptions.ImportError'> No module named gensim
My guess is if I can configure Web2Py to use the Anaconda Python then this issue would be resolved.
I need to know if it's possible to configure Web2Py to use Anaconda Python and if it is then how do I do that?
Otherwise, if someone knows of some other way resolve that "gensim" error in Web2Py kindly share your thoughts.
All your help would be highly appreciated.
|
is testing compulsory if it works fine on realtime on browser
| 31,694,536 | 1 | 1 | 33 | 0 |
python,django,testing
|
Whether it’s compulsory depends on organization you work for. If others say it is, then it is. Just check how tests are normally written in the company and follow existing examples.
(There’re a lot of ways Django-based website can be tested, different companies do it differently.)
Why write tests?
Regression testing. You checked that your code is working, does it still work now? You or someone else may change something and break your code at some point. Running test suite makes sure that what was written yesterday still works today; that the bug fixed last week wasn’t accidentally re-introduced; that things don’t regress.
Elegant code structuring. Writing tests for your code forces you to write code in certain way. For example, if you must test a long 140-line function definition, you’ll realize it’s much easier to split it into smaller units and test them separately. Often when a program is easy to test it’s an indicator that it was written well.
Understanding. Writing tests helps you understand what are the requirements for your code. Properly written tests will also help new developers understand what the code does and why. (Sometimes documentation doesn’t cover everything.)
Automated tests can test your code under many different conditions quickly, sometimes it’s not humanly possible to test everything by hand each time new feature is added.
If there’s the culture of writing tests in the organization, it’s important that everyone follows it without exceptions. Otherwise people would start slacking and skipping tests, which would cause regressions and errors later on.
| 0 | 0 | 1 | 1 |
2015-07-29T05:47:00.000
| 1 | 0.197375 | false | 31,692,090 | 0 | 0 | 1 | 1 |
I am working for a company that wants me to test and cover every piece of code I write.
My code works properly in the browser. There is no error, no fault.
Given that my code works properly in the browser and my system is responding properly, do I need to write tests? Is testing compulsory?
|
Run NodeJS server without exposing its source code
| 31,705,928 | -2 | 1 | 3,620 | 0 |
java,python,node.js,source-code-protection
|
Do you know how easy it is to decompile java class files?
Seriously, you pop the jar into IntelliJ IDEA (or almost any other IDE) and it spits out decompiled code that's readable enough to reverse engineer. Compiled code offers no security advantages versus interpreted code.
Rather than trying to "encrypt" or "hide" your NodeJS code, why not secure the server better? You will never outpace people reverse engineering your code, you are much better off defending the box that the chocolates are in than poisoning the chocolates.
| 0 | 0 | 0 | 0 |
2015-07-29T09:46:00.000
| 2 | -0.197375 | false | 31,696,857 | 0 | 0 | 1 | 1 |
For an usual NodeJS instance, we can start it by node server.js. The problem with this is that, in a production server, when a hacker compromises my machine they will be able to view and copy all of my server-side source code. This is a big risk, since the source code contains intellectual property. Is there a way to prevent it from happening?
For example, in Java, code is usually built into jar package or .class files and we only deploy the built file. When a hacker compromises the machine, they can only see the jar or .class file which is only byte code and not understandable.
I have a similar concern on my Python Flask server.
|
Python Script on Google App Engine, which scrapes only updates from a website
| 31,717,519 | 1 | 1 | 65 | 0 |
python,google-app-engine,web-scraping
|
Doesn't the website have an RSS feed or an API or something?
Anyway, you could store the list of scraped news titles (they might not be unique, though), IDs or URLs as entity IDs in the datastore right after you send them in your email. Just before sending the email, you would first check which news IDs already exist in the datastore and simply not include the ones that do.
Or, depending on the structure in which the articles are published and what data is available (do they have an incremental post ID? do they have a date of when an article was posted?), you may simply need to remember the highest value from your previous scraping and only email yourself the articles where that value is higher than the one previously saved.
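A rough datastore sketch of the "remember what was already mailed" idea (the model name and using the URL as the entity id are assumptions):
from google.appengine.ext import ndb

class SeenArticle(ndb.Model):
    pass  # the entity's key id is the article URL itself

def filter_unseen(articles):
    fresh = [a for a in articles if SeenArticle.get_by_id(a['url']) is None]
    ndb.put_multi([SeenArticle(id=a['url']) for a in fresh])
    return fresh  # only these go into the email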
| 0 | 1 | 0 | 0 |
2015-07-30T06:42:00.000
| 1 | 0.197375 | false | 31,716,833 | 0 | 0 | 1 | 1 |
I am hosting a Python script on Google App Engine which uses bs4 and mechanize to scrape the news section of a website; it runs every 2 hours and sends me an email with all the news.
The problem is, I want only the latest news to be sent by mail. As of now it sends me all the news present every time.
I am storing all the news in a list. Is there a way to send only the latest news, which has not been mailed to me yet, rather than the complete list every time?
|
PasswordType not supported in Postgres
| 31,781,831 | 0 | 0 | 84 | 1 |
python,postgresql,sqlalchemy
|
Actually it was a problem with the Alembic migration: in the migration, the column must also be created with PasswordType, not String or any other type.
| 0 | 0 | 0 | 0 |
2015-07-30T20:38:00.000
| 1 | 0 | false | 31,733,583 | 0 | 0 | 1 | 1 |
In SQLAlchemy, when I try to query for user by
request.db.query(models.User.password).filter(models.User.email == email).first()
Of course it works with different DB (SQLite3).
The source of the problem is, that the password is
sqlalchemy.Column(sqlalchemy_utils.types.password.PasswordType(schemes=['pbkdf2_sha512']), nullable=False)
I really don't know how to solve it
I'm using psycopg2
|
Changing FirefoxProfile() preferences more than once using Selenium/Python
| 31,735,197 | 0 | 2 | 667 | 0 |
python-2.7,selenium-webdriver
|
You can define the profile only while initializing the driver. So to use a new path you should call driver.quit() and start a new driver with a new profile.
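A sketch of that pattern, building a fresh profile and driver per target directory (download_map is an assumed {url: directory} mapping built elsewhere):
from selenium import webdriver

def driver_for(download_dir):
    fp = webdriver.FirefoxProfile()
    fp.set_preference("browser.download.folderList", 2)
    fp.set_preference("browser.download.dir", download_dir)
    fp.set_preference("browser.helperApps.neverAsk.saveToDisk", "application/vnd.ms-excel")
    return webdriver.Firefox(firefox_profile=fp)

for url, path in download_map.items():   # download_map = {url: directory} (assumption)
    driver = driver_for(path)
    driver.get(url)
    driver.quit()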
| 0 | 0 | 1 | 0 |
2015-07-30T21:33:00.000
| 2 | 0 | false | 31,734,447 | 0 | 0 | 1 | 1 |
So I am trying to download multiple excel links to different file paths depending on the link using Selenium.
I am able to set up the FirefoxProfile to download all links to a certain single path, but I can't change the path on the fly as I try to download different files into different file paths. Does anyone have a fix for this?
self.fp = webdriver.FirefoxProfile()
self.fp.set_preference("browser.download.folderList", 2)
self.fp.set_preference("browser.download.showWhenStarting", False)
self.fp.set_preference("browser.download.dir", "C:\SOURCE FILES\BACKHAUL")
self.fp.set_preference("browser.helperApps.neverAsk.saveToDisk", "application/vnd.ms-excel")
self.driver = webdriver.Firefox(firefox_profile = self.fp)
This code will set the path I want once. But I want to be able to set it multiple times while running one script.
|
Using Allauth and Redirecting to irrelevent URL
| 31,740,408 | 0 | 0 | 54 | 0 |
python,django,authentication,django-allauth
|
In django admin, update your Site object's domain to your server's ip or your domain name.
| 0 | 0 | 0 | 0 |
2015-07-31T07:07:00.000
| 1 | 0 | false | 31,740,079 | 0 | 0 | 1 | 1 |
I'm using django-allauth, but when I upload my project to the server and click the button to log in via Google or Facebook, I get redirected to http://127.0.0.1:8001/accounts/google/login/callback/?state=*****
instead of http://example.com/accounts/google/login/callback/?state=*****
I am a newbie, so please help me in depth, step by step.
|
Will my database connections have problems?
| 31,741,461 | 0 | 0 | 49 | 0 |
python,sql,django,celery
|
The only time you are going to run into issues using the db with Celery is when you use the database as the backend for Celery, because it will then continuously poll the db for tasks. If you use a normal broker you should not have issues.
| 0 | 1 | 0 | 0 |
2015-07-31T07:09:00.000
| 2 | 0 | false | 31,740,127 | 0 | 0 | 1 | 2 |
In my django project, I am using celery to run a periodic task that will check a URL that responds with a json and updating my database with some elements from that json.
Since requesting from the URL is limited, the total process of updating the whole database with my task will take about 40 minutes and I will run the task every 2 hours.
If I check a view of my django project, which also requests information from the database while the task is asynchronously running in the background, will I run into any problems?
|
Will my database connections have problems?
| 31,740,391 | 0 | 0 | 49 | 0 |
python,sql,django,celery
|
While requesting information from your database you are reading it, and in your Celery task you are writing data into it. Only one write can happen at a time, but you can read as many times as you want, since reads do not take an exclusive lock on the database.
| 0 | 1 | 0 | 0 |
2015-07-31T07:09:00.000
| 2 | 0 | false | 31,740,127 | 0 | 0 | 1 | 2 |
In my django project, I am using celery to run a periodic task that will check a URL that responds with a json and updating my database with some elements from that json.
Since requesting from the URL is limited, the total process of updating the whole database with my task will take about 40 minutes and I will run the task every 2 hours.
If I check a view of my django project, which also requests information from the database while the task is asynchronously running in the background, will I run into any problems?
|
Python will not execute Java program: 'java' is not recognized
| 31,745,847 | 0 | 1 | 1,290 | 0 |
python,python-2.7,command-line,subprocess
|
Give the absolute path to the java executable.
On my system the path is C:\Program Files\Java\jdk1.8.0_45\bin\java.exe
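For example (the jar name is a placeholder; use a raw string or escaped backslashes for the Windows path):
import subprocess
subprocess.call([r"C:\Program Files\Java\jdk1.8.0_45\bin\java.exe", "-jar", "MyProgram.jar"])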
| 0 | 1 | 0 | 0 |
2015-07-31T12:01:00.000
| 2 | 0 | false | 31,745,699 | 0 | 0 | 1 | 2 |
I am trying to get Python to call a Java program using a command that works when I enter it into the command line.
When I have Python try it with subprocess or os.system, it says:
'java' is not recognized as an internal or external command, operable
program or batch file.
From searching, I believe it is because when executing through Python, it will not be able to find java.exe like a normal command would.
|
Python will not execute Java program: 'java' is not recognized
| 61,620,608 | 0 | 1 | 1,290 | 0 |
python,python-2.7,command-line,subprocess
|
You have to set the PATH variable to point to the java location.
import os
os.environ["PATH"] += os.pathsep + os.pathsep.join([java_env])
java_env will be a string containing the directory to java.
(tested on python 3.7)
| 0 | 1 | 0 | 0 |
2015-07-31T12:01:00.000
| 2 | 0 | false | 31,745,699 | 0 | 0 | 1 | 2 |
I am trying to get Python to call a Java program using a command that works when I enter it into the command line.
When I have Python try it with subprocess or os.system, it says:
'java' is not recognized as an internal or external command, operable
program or batch file.
From searching, I believe it is because when executing through Python, it will not be able to find java.exe like a normal command would.
|
What do I need to do to be able to use Panda3D from my Python text editors?
| 32,128,178 | 1 | 0 | 155 | 0 |
python,panda3d
|
This depends on your operating system. Panda3D uses the system's python on OS X and Linux and should "just work".
For Windows Panda3D installs its own copy of Python into Panda3D's install directory (defaults to C:\Panda3D I think), and renames the executable to ppython to prevent name collisions with any other python installs you might have. In your editor you have to change which interpreter it uses to the ppython.exe in the panda3d directory.
| 0 | 0 | 0 | 1 |
2015-07-31T14:26:00.000
| 1 | 0.197375 | false | 31,748,596 | 0 | 0 | 1 | 1 |
I just installed Panda3D, and I can run the example programs by double clicking them, but I can't run them from IDLE or Sublime.
I get errors like ImportError: No module named direct.showbase.ShowBase
I saw people bring this up before and the responses suggested using ppython, but I can't figure out how to run that from Sublime, and I really like the autocomplete function there.
How can I either configure the Python 2.7 version that I already have to run Panda3D programs, or run ppython from Sublime?
|
Cursor Instance Error when connecting to mongo db?
| 31,865,825 | 0 | 0 | 1,948 | 1 |
python,mongodb,flask,pymongo
|
Well, it ended up being an issue with the String specifying the working directory. Once it was resolved I was able to connect to the database.
| 0 | 0 | 0 | 0 |
2015-07-31T21:14:00.000
| 1 | 0 | false | 31,755,276 | 0 | 0 | 1 | 1 |
I have a web application that uses flask and mongodb. I recently downloaded a clone of it from github onto a new Linux machine, then proceeded to run it. It starts and runs without any errors, but when I use a function that needs access to the database, I get this error:
File "/usr/local/lib/python2.7/dist-packages/pymongo/cursor.py", line 533, in __getitem__
raise IndexError("no such item for Cursor instance")
IndexError: no such item for Cursor instance
This isn't happening on any of the other computers running this same application. Does anybody know what's going on?
|
sql alchemy filter: string split and comparison of list elements
| 31,759,707 | 1 | 0 | 1,109 | 1 |
python,sqlalchemy
|
I figured it out. Basically one needs to combine the like operator with or_().
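For the record, a sketch of that approach (the Item model and session are assumed names; note that a plain LIKE on a comma-separated column can also match partial category names):
from sqlalchemy import or_

wanted = ['category1', 'category2']
matches = session.query(Item).filter(
    or_(*[Item.categories.like('%{}%'.format(c)) for c in wanted])
).all()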
carl
| 0 | 0 | 0 | 0 |
2015-08-01T07:05:00.000
| 1 | 0.197375 | false | 31,759,266 | 0 | 0 | 1 | 1 |
I have a string of categories stored in a table. The categories are separated by a ',', so that I can turn the string into a list of strings as
category_string.split(',')
I now want to select all elements of a sql table which have one of the the following categories [catergory1, catagory2].
I have many such comparisons and the list of categories to compare with is not necessarily 2 elements long, so I would need a comparison of elements of two lists. I know that list comparisons are done as
Table.categories.in_(category_list)
in sql-alchemy but I also need to convert a table string element in a list and do the comparison of list elements.
any ideas?
thanks
carl
|
Is it possible to develop the back-end of a native mobile app using the python powered framework Django?
| 31,760,094 | 0 | 1 | 3,574 | 0 |
android,python,ios,django
|
Sure. I've done this for my first app and others then. The backend technology is totally up to you, so feel free to take whatever you like.
The connection between backend and your apps should (but don't have to be) be something JSON-based. Standard REST works fine, Websockets also but have some issues on iOS.
| 0 | 0 | 0 | 0 |
2015-08-01T08:53:00.000
| 2 | 0 | false | 31,760,059 | 0 | 0 | 1 | 1 |
I want to develop a online mobile app. I am thinking about using native languages to develop the front-ends, so Java for Android and Objective-C for iOS. However, for the back-end, can I use something like Django?
I have used django for a while, but the tutorials are really lacking, so can anyone point me to something that will help me understand how to show data handled by Django models on a front-end developed by Java for an android device (that is, by using XML I suppose).
|
How to access django database from another application
| 31,785,953 | 3 | 1 | 2,301 | 0 |
python,django
|
If you are using sqlite3 you don't need to tell any ip or port to your teammate. He just needs the path/name of the sqlite database. You can find the name in your settings.py file in the 'DATABASE' variable.
| 0 | 0 | 0 | 0 |
2015-08-03T11:03:00.000
| 1 | 1.2 | true | 31,785,593 | 0 | 0 | 1 | 1 |
I am maintaining a database in Django, and another application written in Java wants to access that database and add some data to it. I want a common database for the Java application and the Django application, so whenever either needs data, it can query that database directly. How is this possible?
|
Worker role and web role counterpart in GAE
| 31,790,837 | 1 | 1 | 97 | 0 |
python,google-app-engine,azure,web-applications
|
Yes, there is. Look at backend and frontend instances. Your question is too broad to go into more detail; in general the backend type of instance is used for long-running tasks, but you could also do everything in a frontend instance.
| 0 | 1 | 0 | 0 |
2015-08-03T14:34:00.000
| 2 | 0.099668 | false | 31,790,076 | 0 | 0 | 1 | 1 |
I am currently working with MS Azure. There I have a worker role and a web role. In the worker role I start an infinite loop to process some data continuously. The web role performs the interaction with the client. There I use an MVC framework, which on the server side is written in C# and on the client side in JavaScript.
Now I'm interested in GAE engine. I read a lot about the app engine. I want to build an application in Python. But I don't really understand the architecture. Is there a counterpart in the project structure like the worker and web role in Azure?
|
What does app configuration mean?
| 31,796,794 | 1 | 1 | 3,561 | 0 |
python-2.7,google-app-engine,web-applications,configuration,app.yaml
|
To "configure your app," generally speaking, is to specify, via some mechanism, parameters that can be used to direct the behavior of your app at runtime. Additionally, in the case of Google App Engine, these parameters can affect the behavior of the framework and services surrounding your app.
When you specify these parameters, and how you specify them, depends on the app and the framework, and sometimes also on your own philosophy of what needs to be parameterized. Readable data files in formats like YAML are a popular choice, particularly for web applications and services. In this case, the configuration will be read and obeyed when your application is deployed to Google App Engine, or launched locally via GoogleAppEngineLauncher.
Now, this might seem like a lot of bother to you. After all, the easiest way you have to change your app's behavior is to simply write code that implements the behavior you want! When you have configuration via files, it's generally more work to set up: something has to read the configuration file and twiddle the appropriate switches/variables in your application. (In the specific case of app.yaml, this is not something you have to worry about, but Google's engineers certainly do.) So what are some of the advantages of pulling out "configuration" into files like this?
Configuration files like YAML are relatively easy to edit. If you understand what the parameters are, then changing a value is a piece of cake! Doing the same thing in code may not be quite as obvious.
In some cases, the configuration parameters will affect things that happen before your app ever gets run – such as pulling out static content and deploying that to Google App Engine's front-end servers for better performance and lower cost. You couldn't direct that behavior from your app because your app is not running yet – it's still in the process of being deployed when the static content is handled.
Sometimes, you want your application to behave one way in one environment (testing) and another way in another environment (production). Or, you might want your application to behave some reasonably sensible way by default, but allow someone deploying your application to be able to change its behavior if the default isn't to their liking. Configuration files make this easier: to change the behavior, you can simply change the configuration file before you deploy/launch the application.
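For concreteness, a minimal app.yaml of the kind being described (the paths and the WSGI object are placeholders):
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /static
  static_dir: static        # served by App Engine's front-end servers, before your code runs
- url: /.*
  script: main.app          # everything else is handled by your WSGI app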
| 0 | 1 | 0 | 0 |
2015-08-03T16:27:00.000
| 2 | 0.099668 | false | 31,792,302 | 0 | 0 | 1 | 1 |
I am working on Google App Engine (GAE) which has a file called (app.yaml). As I am new to programming, I have been wondering, what does it mean to configure an app?
|
Is the application code visible to others when it is run?
| 31,794,311 | 1 | 0 | 75 | 0 |
python,flask
|
No. The code won't be viewable. Server side code is not accessible unless you give someone access or post it somewhere public.
| 0 | 1 | 0 | 0 |
2015-08-03T18:22:00.000
| 2 | 1.2 | true | 31,794,152 | 0 | 0 | 1 | 1 |
I don't want other people to see my application code. When I host my application, will others be able to see the code that is running?
|
Fresh Install SQLite3 in Django 1.8
| 34,991,644 | 0 | 0 | 544 | 0 |
django,python-3.x,sqlite
|
Just delete the db.sqlite3 file in the project directory
Recently in my django project, I faced similar problems. Initially I created two classes in my models.py, then after a custom migration for populating the database with initial data, I needed to make three classes in models.py, where the third table would need to be populated with data in the second migration. This caused similar problems. I simply deleted the db.sqlite3 file in the project directory, backed up my custom migrations, made necessary changes to my models.py, then ran makemigrations followed by a migrate. Everything went just fine. Hope it helps.
| 0 | 0 | 0 | 0 |
2015-08-04T11:42:00.000
| 1 | 0 | false | 31,808,379 | 0 | 0 | 1 | 1 |
I named an app incorrectly in Django which I have renamed but I'm now getting migration errors for non-existent parent nodes. So I'd like to fresh install. Is there a django native way of doing this or best practice? At this stage I think I'll just start a new app and copy the db over.
|
How to handle thousands of legacy urls in Django, Varnish, Nginx?
| 31,821,657 | 2 | 2 | 107 | 0 |
python,django,redirect,nginx,varnish
|
It all depends on the load actually... if you have a lot of requests going to the old urls than it might be useful to have some caching. But in general I would say that doing it in Django, adding all of the urls to a database model and querying (optionally caching the results in Django or even Varnish) should do the trick.
These things are not impossible to do in Varnish or Nginx but Django will be far easier to link up to a database so that would have my vote.
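A sketch of the middleware option (old-style middleware, Django 1.8 era; the LegacyRedirect model is an assumption):
from django.http import HttpResponsePermanentRedirect
from myapp.models import LegacyRedirect   # assumed model with old_path / new_path columns

class LegacyRedirectMiddleware(object):
    def process_request(self, request):
        try:
            target = LegacyRedirect.objects.get(old_path=request.path)
        except LegacyRedirect.DoesNotExist:
            return None                     # fall through to normal URL resolution
        return HttpResponsePermanentRedirect(target.new_path)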
| 0 | 0 | 0 | 0 |
2015-08-05T00:34:00.000
| 1 | 0.379949 | false | 31,821,553 | 0 | 0 | 1 | 1 |
We are building a Django app to replace a legacy system which used custom URLs for almost every resource. No pattern to the URLs at all. There are about 350,000 custom URLs that we now need to 301 redirect to a correct URL in the new system.
Our new system will use Django, but will also have Varnish and Nginx in front of it, so we could use any of these tools for the solution.
In Django, I think we could either make a very very large custom urls.py file, or maybe make a middleware that does a database lookup against a table with all the redirects.
Or perhaps there's a way to handle this in Varnish or Nginx so the requests never even hit Django.
My question: what's the most performant way to handle thousands of custom URL redirects?
|
Paypal button is not showing in Sale Order and Accounting Invoice odoo?
| 31,846,746 | 1 | 0 | 497 | 0 |
python-2.7,odoo-8
|
You need to grant the "View Online Payment Options" right on the user form. After that, the user will be able to see the payment button on the sale order as well as on the invoice, and also on the website.
| 0 | 0 | 0 | 0 |
2015-08-05T06:35:00.000
| 1 | 1.2 | true | 31,825,032 | 0 | 0 | 1 | 1 |
I installed the payment_paypal module, but the button is still not showing after I confirm the sale order and validate the invoice.
|
Django SECRET_KEY : Copying hashed passwords into different Django project
| 31,828,245 | 3 | 1 | 184 | 0 |
python,django,security,django-models,django-settings
|
No, settings.SECRET_KEY is not used for password hashing
| 0 | 0 | 0 | 0 |
2015-08-05T09:11:00.000
| 1 | 1.2 | true | 31,828,141 | 0 | 0 | 1 | 1 |
I have a Django powered site(Project-1) running with some users registered on it. I am now creating a revamped version of the site in a new separate Django project(Project-2) which I would make live once finished. I would need to populate the User data along with their hashed passwords currently in database of Project-1 into database of Project-2. Would having different SECRET_KEYs for Project-1 and Project-2 be an issue to get the hashed passwords migrated and working in Project-2?
|
Scrapy - Use RabbitMQ only or Celery + RabbitMQ for scraping multiple websites?
| 31,841,508 | 1 | 2 | 1,802 | 0 |
python,web-scraping,scrapy,rabbitmq,celery
|
Yes, using RabbitMQ is very helpful for your use case, since your crawling agent can use a message queue for storing the results while your document processor stores them in both your database back end (in this reply I'll assume MongoDB) and your search engine (I'll assume Elasticsearch).
What you get in this scenario is a very fast and dynamic search engine and a crawler that can be scaled.
As for the Celery + RabbitMQ + Scrapy portion: Celery would be a good way to schedule your Scrapy crawlers and distribute the crawler bots across your infrastructure. Celery just uses RabbitMQ as its back end to consolidate and distribute the jobs between instances. So for your use case, to use both Celery and Scrapy, write the code for your Scrapy bot to use its own RabbitMQ queue for storing the results, then write a document processor to store the results in your persistent database back end. Then set up Celery to schedule the batches of site crawls. Throw in the sched module to maintain a bit of sanity in your crawling schedule.
Also, review the work Google has published on how they avoid over-crawling a site in their algorithm, respect sane robots.txt settings, and your crawler should be good to go.
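A bare-bones sketch of the Celery piece (the broker URL and spider names are placeholders; shelling out avoids running Scrapy's reactor inside the worker):
import subprocess
from celery import Celery

app = Celery('crawls', broker='amqp://guest@localhost//')

@app.task
def crawl_site(spider_name):
    # launch one Scrapy crawl per scheduled site
    subprocess.call(['scrapy', 'crawl', spider_name])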
| 0 | 0 | 1 | 0 |
2015-08-05T14:01:00.000
| 1 | 1.2 | true | 31,834,738 | 0 | 0 | 1 | 1 |
I want to run multiple spiders to crawl many different websites. Websites I want to crawl take different time to be scraped (some take about 24h, others 4h, ...). I have multiple workers (less than the number of websites) to launch scrapy and a queue where I put the websites I want to crawl. Once a worker has finished crawling a website, the website goes back to the queue waiting for a worker to be available to launch scrapy, and so on.
The problem is that small websites will be crawled more often than big ones, and I want all websites to be crawled the same number of times.
I was thinking about using RabbitMQ for queue management and to prioritise some websites.
But when I search for RabbitMQ, it is often used with Celery. What I understood about these tools is that Celery will allow to launch some code depending on a schedule and RabbitMQ will use message and queues to define the execution order.
In my case, I don't know if using only RabbitMQ without Celery will work. Also, is using RabbitMQ helpful for my problem?
Thanks
|
Check for pending Django migrations
| 53,122,187 | 0 | 74 | 57,259 | 0 |
python,django,django-migrations
|
I checked it by looking at the django_migrations table, which stores all applied migrations.
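If you want to do it programmatically rather than reading the table by hand, a sketch using Django's migration executor (internal API, so it may change between versions):
from django.db import connections, DEFAULT_DB_ALIAS
from django.db.migrations.executor import MigrationExecutor

def has_pending_migrations():
    executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
    targets = executor.loader.graph.leaf_nodes()
    return bool(executor.migration_plan(targets))  # non-empty plan means unapplied migrations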
| 0 | 0 | 0 | 0 |
2015-08-05T17:16:00.000
| 10 | 0 | false | 31,838,882 | 0 | 0 | 1 | 1 |
In Django, is there an easy way to check whether all database migrations have been run? I've found manage.py migrate --list, which gives me the information I want, but the format isn't very machine readable.
For context: I have a script that shouldn't start running until the database has been migrated. For various reasons, it would be tricky to send a signal from the process that's running the migrations. So I'd like to have my script periodically check the database to see if all the migrations have run.
|