Columns: Title (string) | A_Id (int64) | Users Score (int64) | Q_Score (int64) | ViewCount (int64) | Database and SQL (0/1) | Tags (string) | Answer (string) | GUI and Desktop Applications (0/1) | System Administration and DevOps (0/1) | Networking and APIs (0/1) | Other (0/1) | CreationDate (string) | AnswerCount (int64) | Score (float64) | is_accepted (bool) | Q_Id (int64) | Python Basics and Environment (0/1) | Data Science and Machine Learning (0/1) | Web Development (0/1) | Available Count (int64) | Question (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Scrapy installed, but won't run from the command line
| 55,285,170 | 16 | 6 | 24,069 | 0 |
python,scrapy
|
I had the same error. Running scrapy in a virtual environment solved it.
Create a virtual env : python3 -m venv env
Activate your env : source env/bin/activate
Install Scrapy with pip : pip install scrapy
Start your crawler : scrapy crawl your_project_name_here
For example, my project name was kitten, so in step 4 I just ran:
scrapy crawl kitten
NOTE: I did this on Mac OS running Python 3+
| 0 | 0 | 0 | 0 |
2016-06-10T21:12:00.000
| 9 | 1 | false | 37,757,233 | 0 | 0 | 1 | 5 |
I'm trying to run a scraping program I wrote in Python using Scrapy on an Ubuntu machine. Scrapy is installed. I can import it in Python no problem, and when I try pip install scrapy I get
Requirement already satisfied (use --upgrade to upgrade): scrapy in /system/linux/lib/python2.7/dist-packages
When I try to run scrapy from the command line, with scrapy crawl ... for example, I get:
The program 'scrapy' is currently not installed.
What's going on here? Are the symbolic links messed up? And any thoughts on how to fix it?
|
Scrapy installed, but won't run from the command line
| 37,914,201 | 0 | 6 | 24,069 | 0 |
python,scrapy
|
I had the same issue. sudo pip install scrapy fixed my problem, although I don't know why I must use sudo.
| 0 | 0 | 0 | 0 |
2016-06-10T21:12:00.000
| 9 | 0 | false | 37,757,233 | 0 | 0 | 1 | 5 |
I'm trying to run a scraping program I wrote in Python using Scrapy on an Ubuntu machine. Scrapy is installed. I can import it in Python no problem, and when I try pip install scrapy I get
Requirement already satisfied (use --upgrade to upgrade): scrapy in /system/linux/lib/python2.7/dist-packages
When I try to run scrapy from the command line, with scrapy crawl ... for example, I get:
The program 'scrapy' is currently not installed.
What's going on here? Are the symbolic links messed up? And any thoughts on how to fix it?
|
How to show version info on index.html page
| 37,800,352 | 5 | 5 | 1,277 | 0 |
python,python-sphinx
|
You can use
:Version: |version|
in your rst
| 0 | 0 | 0 | 0 |
2016-06-13T04:43:00.000
| 2 | 1.2 | true | 37,781,940 | 1 | 0 | 1 | 1 |
I am using sphinx and I would like to show the version of my project from conf.py on my main page for documentation.
|
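For context on where |version| comes from in the answer above: Sphinx substitutes it from values set in conf.py. A minimal sketch (the version numbers below are placeholders, not anything from the question):

```python
# conf.py -- Sphinx substitutes these into |version| and |release|
# in your .rst files; the values here are placeholders
version = "1.0"      # short X.Y version, used by |version|
release = "1.0.2"    # full version string, used by |release|
```

With that in place, `:Version: |version|` (or |version| anywhere in a paragraph) renders the value on the page.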
How to make sudo execute in current python virtual environment?
| 37,787,526 | 3 | 0 | 340 | 0 |
python,django,sudo,port80
|
No, you don't need to do this. You shouldn't be trying to run the development server on port 80; if you're setting up a production environment, use a proper server.
| 0 | 0 | 0 | 0 |
2016-06-13T10:32:00.000
| 4 | 0.148885 | false | 37,787,435 | 1 | 0 | 1 | 1 |
I have a Django website set up and configured in a Python virtual environment (venv) on Ubuntu and all is working fine. Now in order to run my server on port 80 I need to use "sudo", which does not execute in the context of the virtual environment, raising errors (e.g. no module named django ...)
Is there a way to get "sudo" to execute in the context of the Python virtual environment?
|
Error while running MRJOB on AWS
| 38,152,994 | 0 | 0 | 137 | 0 |
python,amazon-web-services,mrjob,bigdata
|
python mr_statistics.py -r emr s3://bimalucmbucket/inputFile.txt --output-dir=s3://bimalucmbucket/output --no-output -c ~/mrjob.conf
| 0 | 0 | 0 | 0 |
2016-06-14T06:20:00.000
| 1 | 1.2 | true | 37,804,327 | 0 | 0 | 1 | 1 |
I put the mrjob.conf file in the /home directory and tried to run the job from the command line, and I am getting this error:
File "/Users/bimalthapa/anaconda/lib/python2.7/site-packages/mrjob-0.4.6- py2.7.egg/mrjob/conf.py", line 283, in conf_object_at_path
with open(conf_path) as f:
IOError: [Errno 2] No such file or directory: 'mrjob.conf'
This is my command:
python mr_statistics.py -c ~/mrjob.conf -r emr s3://bimalucmbucket/inputFile.txt --output-dir=s3://bimalucmbucket/output --no-output
What is the correct way of placing mrjob.conf, and what is the correct command?
|
Recognize user returning to URL from same device
| 37,820,294 | 0 | 0 | 42 | 0 |
python,django,session,web
|
Cookies are your answer. They will work for any URL in the same browser, assuming your user has agreed to use them.
An alternative would be to tag the links with parameters, but that is specific to the link and could be shared with others.
| 0 | 0 | 0 | 0 |
2016-06-14T19:08:00.000
| 1 | 1.2 | true | 37,820,234 | 0 | 0 | 1 | 1 |
I'm new to web programming, so I guess my question would seem very stupid :)
I have simple website on Python/Django. There is some url, which users may open without any authentication.
I need to remember this user somehow and recognize him when he re-opens this url once again (not for a long time - say, for several hours).
By "same user" I mean "user uses same browser on same device".
How can I achieve this? Thanks in advance :)
|
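In Django specifically, the session framework (request.session, backed by a signed session cookie) is the idiomatic way to do what the answer above describes. Stripped of the framework, the core trick is a random identifier stored in a cookie whose value the server can verify it issued. A stdlib sketch - make_token and verify_token are illustrative names, not Django APIs:

```python
import hashlib
import hmac
import secrets

SECRET = b"server-side secret, never shipped to the browser"

def make_token():
    """Random visitor id plus a signature the server can later verify."""
    visitor_id = secrets.token_hex(16)
    sig = hmac.new(SECRET, visitor_id.encode(), hashlib.sha256).hexdigest()
    return visitor_id + "." + sig   # value to store in the cookie

def verify_token(token):
    """Return the visitor id if the cookie value is genuine, else None."""
    visitor_id, _, sig = token.partition(".")
    expected = hmac.new(SECRET, visitor_id.encode(), hashlib.sha256).hexdigest()
    return visitor_id if hmac.compare_digest(sig, expected) else None
```

The "several hours" lifetime from the question is just the cookie's max-age; with Django sessions, request.session.set_expiry(4 * 3600) achieves the same thing.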
Python scripts not working after switch_to.context('webview') when testing on hybrid Android app?
| 37,913,958 | 0 | 0 | 89 | 0 |
python,hybrid-mobile-app,python-appium
|
The question was resolved by updating the Chrome driver to the latest version (51.0.2704.103 m, 64-bit).
| 0 | 0 | 0 | 0 |
2016-06-15T04:13:00.000
| 1 | 1.2 | true | 37,826,018 | 0 | 0 | 1 | 1 |
My Python script is as below:
it goes wrong at the red arrow:
|
Adding h5 files in a zip to use with PySpark
| 37,876,437 | 0 | 0 | 84 | 0 |
python,pyspark,caffe
|
Found that you can add the additional files to all the workers by using the --files argument in spark-submit.
| 0 | 0 | 0 | 0 |
2016-06-15T10:33:00.000
| 1 | 1.2 | true | 37,832,937 | 0 | 1 | 1 | 1 |
I am using PySpark 1.6.1 for my Spark application. I have additional modules which I am loading using the --py-files argument. I also have an h5 file which I need to access from one of the modules for initializing the ApolloNet.
Is there any way I could access those files from the modules if I put them in the same archive? I tried this approach but it was throwing an error because the files are not present on every worker. I can think of copying the file to each of the workers, but I want to know if there are better ways to do it.
|
Odoo python fileopendialog
| 37,839,268 | 1 | 1 | 108 | 0 |
python,openerp,fileopendialog
|
You can define binary fields in Odoo, like other fields. Look into the ir.attachment model definition and its view definitions to get a good hint on how to do it for such fields.
| 0 | 0 | 0 | 0 |
2016-06-15T12:53:00.000
| 2 | 0.099668 | false | 37,836,077 | 0 | 0 | 1 | 1 |
does anybody know how to open a file dialog in Odoo? I've added a button on a custom view; now I would like to browse for a file on THE CLIENT when this button is clicked.
Any ideas?
Thanks!
|
Load data on startup
| 37,864,661 | 1 | 1 | 112 | 0 |
python,django
|
It seems that what I am looking for doesn't exist. Django trusts the user to deal with migrations and such, and doesn't check the database on load. So there is no place in the system where you can load some data on system start and be sure that you can actually load it. What I ended up doing is loading the data in ready(), but doing a sanity check first by calling MyModel.objects.exists() in a try/except block and returning if there was an exception. This is not ideal, but I haven't found any other way.
| 0 | 0 | 0 | 0 |
2016-06-15T20:25:00.000
| 1 | 1.2 | true | 37,845,130 | 0 | 0 | 1 | 1 |
I have a file with a bunch of data common between several projects. The data needs to be loaded into the Django database. The file doesn't change that much, so loading it once on server start is sufficient. Since the file is shared between multiple projects, I do not have full control over the format, so I cannot convert this into a fixture or something.
I tried loading it in ready(), but then I run into a problem when creating a new database or migrating an existing database, since apparently ready() is called before migrations are complete and I get errors from using models that do not have underlying tables. I tried to set it in class_prepared signal handler, but the loading process uses more than one model, so I cannot really be sure all required model classes are prepared. Also it seems that ready() is not called when running tests, so unit tests fail because the data is missing. What is the right place to do something like this?
|
Get a queryset from a queryset
| 37,845,897 | 0 | 1 | 71 | 0 |
python,django,python-3.x
|
I assume that your models look something like this:

class Contest(Model):
    ... something ...

class Picture(Model):
    user = ForeignKey(User)
    contest = ForeignKey(Contest)
    ... something ...

So, Picture.objects.filter(user=user) gives you pictures by a particular user (you don't have to specify _id; filters operate on model objects just fine). And to get contests with pictures by a particular user you can do

pics_by_user = Picture.objects.filter(user=user)
contests_by_user = Contest.objects.filter(id__in=pics_by_user.values_list('contest', flat=True))

There might be an easier way, though - the reverse relation lets you write Contest.objects.filter(picture__user=user).distinct() in one step.
| 0 | 0 | 0 | 0 |
2016-06-15T20:42:00.000
| 2 | 0 | false | 37,845,389 | 0 | 0 | 1 | 1 |
I have a queryset from Picture.objects.filter(user_ID=user). The Picture model has "contest_ID" as a foreign key.
I'm looking to get a queryset of Contests which have Pictures, so from the queryset I already have, how do I pull a list of Contest objects?
|
Odoo website, Creating a signup page for external users
| 37,852,231 | 1 | 0 | 1,173 | 0 |
python-2.7,openerp,odoo-9
|
User signup is a standard feature provided by Odoo, and it seems that you already found it.
The database selector shows because you have several PostgreSQL databases.
The easiest way is to set a filter that limits it to the one you want:
start the server with the option --dbfilter=^MYDB$, where MYDB is the database name.
User data is stored in both res.users and res.partner: user-specific data, such as login and password, is stored in res.users. Other data, such as the name, is stored in a related res.partner record.
| 0 | 0 | 0 | 0 |
2016-06-16T04:55:00.000
| 2 | 0.099668 | false | 37,850,154 | 0 | 0 | 1 | 2 |
How can I create a signup page on an Odoo website? The auth_signup module seems to do the job (according to its description), but I don't know how to utilize it.
The signup page shouldn't have a database selector.
Where should I store the user data (including password): res.users or res.partner?
|
Odoo website, Creating a signup page for external users
| 37,852,264 | 2 | 0 | 1,173 | 0 |
python-2.7,openerp,odoo-9
|
You can turn off DB listing with some params in the odoo.cfg conf:
db_name = mydb
list_db = False
dbfilter = mydb
auth_signup takes care of the registration; you don't need to do anything. A res.users record will be created, as well as a partner related to it.
The password is stored on the user.
| 0 | 0 | 0 | 0 |
2016-06-16T04:55:00.000
| 2 | 1.2 | true | 37,850,154 | 0 | 0 | 1 | 2 |
How can I create a signup page on an Odoo website? The auth_signup module seems to do the job (according to its description), but I don't know how to utilize it.
The signup page shouldn't have a database selector.
Where should I store the user data (including password): res.users or res.partner?
|
django-python3-ldap authentication
| 37,865,795 | 0 | 1 | 1,417 | 0 |
django,python-3.x,ldap,django-authentication
|
From the documentation:
When a user attempts to authenticate, a connection is made to the LDAP
server, and the application attempts to bind using the provided
username and password. If the bind attempt is successful, the user
details are loaded from the LDAP server and saved in a local Django
User model. The local model is only created once, and the details will
be kept updated with the LDAP record details on every login.
It authenticates by binding each time, and updates the information from LDAP (as you have it configured) each time. The Django user won't be removed from Django's user table if removed from LDAP; if you set up multiple auth backends so that the default Django auth is also used, the user should be able to log in (perhaps after a password reset) even when removed from LDAP. If you look in your auth_user table you will notice that users using Django auth have their passwords hashed with pbkdf2_sha256, and the LDAP users' passwords do not.
| 0 | 0 | 0 | 0 |
2016-06-16T14:28:00.000
| 1 | 1.2 | true | 37,862,112 | 0 | 0 | 1 | 1 |
I am using django-python3-ldap for LDAP authentication in Django. This works completely fine but whenever an LDAP user is (successfully) authenticated the user details are stored in the local Django database (auth_user table).
My question now is: when the same (LDAP) user tries to authenticate next time, will the user be authenticated by LDAP or by the default Django authentication (since the user details are now stored in the local Django database)?
If the user is authenticated using the local Django database, can the user still get access even after being removed from the LDAP server? This is a real concern for me.
If this is the case, is there a way so that the LDAP user details are removed from the database (auth_user table) every time the user logs out and created every time the user logs in? Any help in the right direction is highly appreciated. Thank you for your valuable inputs.
|
Are "dummy URLs" required to make function calls to Flask from the front-end?
| 37,865,699 | 0 | 0 | 381 | 0 |
python,flask
|
As others mentioned, you can secure the endpoint so that a user has to provide credentials to issue a successful request to that endpoint.
In addition, your endpoint should be using proper HTTP semantics if its creating / updating data. i.e. POST to create a drink, PUT to update a drink. This will also protect you from someone just putting the URL into a browser since that is a GET request.
TL;DR
Secure the endpoint (if possible)
Add checks that the proper request body is provided
Use proper HTTP semantics
| 0 | 0 | 0 | 0 |
2016-06-16T16:49:00.000
| 1 | 0 | false | 37,865,055 | 0 | 0 | 1 | 1 |
I am making my first web app with Flask wherein a database of drinks is displayed on the front-end based on selected ingredients. Then the user selects a drink and an in-page pop-up appears with some drink info and a button "make", when the user hits "make" it calls some python code on the back end (Flask) to control GPIO on my raspberry pi to control some pumps.
Does this "make" need to call some route (e.g. /make/<drink>) in order to call the Python function on the back end? I don't like the idea that any user could just enter the URL (example.com/make/<drink>, where <drink> is something in the database) to force the machine to make the drink even if the proper ingredients are not loaded. Even if I did checking on the back end to ensure the selected ingredients were loaded, I want the user to have to use the interface instead of just entering URLs.
Is there a way so that the make button calls the python code without using a "dummy URL" for routing the button to the server?
|
attributeerror 'module' object has no attribute 'python_2_unicode_compatible'
| 37,871,359 | 0 | 0 | 700 | 0 |
django,python-2.7
|
Never mind, I solved this problem by creating a python27_env virtual environment and pip-installing all required modules, and then it worked.
I'm guessing it's due to something getting messed up in my desktop setup for Python 2.7.
Thanks guys.
| 0 | 0 | 0 | 0 |
2016-06-16T16:50:00.000
| 1 | 1.2 | true | 37,865,079 | 0 | 0 | 1 | 1 |
I was trying to start my Django server, but constantly getting the above error.
Django version is 1.5 (due to my project's backward compatibility issues, we cannot upgrade it to a newer version)
Python version is 2.7.7
I've searched online and found that usually this is due to the Django version, and once switched to 1.5 it'll be fine, but for me it's still there.
Any help please?
|
Faster Google App Engine Managed VM deploys (Python compat env)?
| 37,896,863 | 0 | 1 | 36 | 0 |
google-app-engine,docker,google-app-engine-python
|
Well, in general, 10 minute deployment isn't that bad. I use AWS Elastic Beanstalk and it's about the same for a full deployment of a production environment. However, this is totally unacceptable for your everyday development.
Since you use Docker, I really don't understand why not spin up the same container on your local machine and test it locally before releasing to staging?
If that is not an option for some reason, my second bet would be updating the code directly inside the container. I've used that trick a lot. As Python is a dynamic language, all you need is a fresh copy of your repo, so you can ssh into your container and check out the code. That said, the feedback loop will be reduced to the time of committing and checking out the code. Additionally, if you set up some hooks on commit, you don't even need to check out the code manually.
All in all, this is just my two cents and it would be nice to hear more opinions on that really important issue.
| 0 | 1 | 0 | 0 |
2016-06-18T08:20:00.000
| 1 | 0 | false | 37,894,857 | 0 | 0 | 1 | 1 |
We're using Google App Engine (GAE) with Managed VMs for a Python compat environment, and deployments take too much time. I haven't done strict calculations, but I'm sure each deployment takes over 10 mins.
What can we do to accelerate this? Is this more a GAE or a Docker issue? I haven't tried deploying Docker on other platforms, so I'm not sure what standard/acceptable deployment times are.
Having to wait so much to test an app in the staging servers damages our productivity quite a bit. Any help is appreciated. :)
|
How does one store descriptive dates like the "The last day in Feb", "Fourth Saturday in April"?
| 38,021,560 | 1 | 1 | 75 | 0 |
python,django,date,datetime
|
If you have only a handful of reoccurring descriptive dates, the easiest thing to do would be to create a dictionary that can translate them to the explicit dates you want whenever they pop up in your data.
If you have arbitrary descriptive dates or a large number of them, however, it seems that, as was discussed in the comments, NLP is the way to go.
| 0 | 0 | 0 | 0 |
2016-06-18T21:26:00.000
| 2 | 0.099668 | false | 37,901,786 | 1 | 0 | 1 | 1 |
I have some data that has descriptive dates (e.g., Monday before Thanksgiving, Last day in February, 4th Saturday in April) as part of describing start and end times. Some of the dates are explicit (e.g., October 31st). I want to store the descriptive and the explicit values so for any year I can then calculate when the exact dates are. I did some searching and came up short.
This feels like a common thing, and someone must have solved it.
I'm also curious if these kinds of descriptive dates have a proper name.
As in the tags, my app uses Python + Django.
Thanks!
|
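If you do store the rules in structured form as the answer suggests, evaluating the examples from the question needs only the standard library. A sketch, assuming rules are kept as (ordinal, weekday, month) triples plus special cases like "last day of month" (the function names are my own):

```python
import calendar
from datetime import date, timedelta

def nth_weekday(year, month, weekday, n):
    """n-th given weekday of a month; n=-1 means the last one.
    weekday follows the calendar module: MONDAY=0 ... SUNDAY=6."""
    days = [d for d in calendar.Calendar().itermonthdates(year, month)
            if d.month == month and d.weekday() == weekday]
    return days[n - 1] if n > 0 else days[n]

def last_day_of_month(year, month):
    """'Last day in February' and friends."""
    return date(year, month, calendar.monthrange(year, month)[1])

def monday_before_thanksgiving(year):
    """US Thanksgiving is the 4th Thursday of November;
    the Monday before it is simply three days earlier."""
    return nth_weekday(year, 11, calendar.THURSDAY, 4) - timedelta(days=3)
```

For any stored year you can then expand every descriptive rule into an explicit date.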
Heroku SQLAlchemy database does not exist
| 46,558,758 | 1 | 1 | 2,039 | 1 |
python,postgresql,heroku,sqlalchemy,heroku-postgres
|
Old question, but the answer seems to be that database_exists and create_database have special case code for when the engine URL starts with postgresql, but if the URL starts with just postgres, these functions will fail. However, SQLAlchemy in general works fine with both variants.
So the solution is to make sure the database URL starts with postgresql:// and not postgres://.
| 0 | 0 | 0 | 0 |
2016-06-19T17:44:00.000
| 3 | 1.2 | true | 37,910,066 | 0 | 0 | 1 | 2 |
I'm running a python 3.5 worker on heroku.
self.engine = create_engine(os.environ.get("DATABASE_URL"))
My code works on local, passes Travis CI, but gets an error on heroku - OperationalError: (psycopg2.OperationalError) FATAL: database "easnjeezqhcycd" does not exist.
easnjeezqhcycd is my user, not the database name. As I'm not using Flask's SQLAlchemy, I haven't found a single person dealing with the same problem.
I tried destroying my addon database and created standalone postgres db on heroku - same error.
What's different about Heroku's URL that SQLAlchemy doesn't accept it? Is there a way to establish a connection using psycopg2 and pass it to SQLAlchemy?
|
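Given the answer above, a small normalization step before create_engine is a common workaround for Heroku's legacy postgres:// scheme (the helper name below is my own, and the sqlalchemy-utils database_exists/create_database functions are what trip over the old scheme):

```python
def normalize_db_url(url):
    """Rewrite the legacy postgres:// scheme to postgresql://."""
    if url.startswith("postgres://"):
        url = "postgresql://" + url[len("postgres://"):]
    return url

# e.g.:
# engine = create_engine(normalize_db_url(os.environ["DATABASE_URL"]))
```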
Heroku SQLAlchemy database does not exist
| 62,351,512 | 0 | 1 | 2,039 | 1 |
python,postgresql,heroku,sqlalchemy,heroku-postgres
|
So I was getting the same error, and after checking several times I found that I was leaving a trailing space in my DATABASE_URL, like DATABASE_URL="url<space>".
After removing the space my code runs perfectly fine.
| 0 | 0 | 0 | 0 |
2016-06-19T17:44:00.000
| 3 | 0 | false | 37,910,066 | 0 | 0 | 1 | 2 |
I'm running a python 3.5 worker on heroku.
self.engine = create_engine(os.environ.get("DATABASE_URL"))
My code works on local, passes Travis CI, but gets an error on heroku - OperationalError: (psycopg2.OperationalError) FATAL: database "easnjeezqhcycd" does not exist.
easnjeezqhcycd is my user, not the database name. As I'm not using Flask's SQLAlchemy, I haven't found a single person dealing with the same problem.
I tried destroying my addon database and created standalone postgres db on heroku - same error.
What's different about Heroku's URL that SQLAlchemy doesn't accept it? Is there a way to establish a connection using psycopg2 and pass it to SQLAlchemy?
|
double include url schema in django
| 37,910,649 | 2 | 0 | 144 | 0 |
python,django,python-3.x,django-views,django-urls
|
Yes, that is perfectly fine.
Django will look for a matching url in the first one, and if it doesn't find it, it will move on to the next one.
| 0 | 0 | 0 | 0 |
2016-06-19T18:42:00.000
| 1 | 0.379949 | false | 37,910,615 | 0 | 0 | 1 | 1 |
Is it acceptable to use two includes for the same base url routing schema?
e.g. - I have allauth installed which uses r'^accounts/', include('allauth.urls')
and I want to extend this further with my own app, which extends the allauth urls even further.
An example of this would be accounts/profile or some other extension of the base accounts/ url.
Is it fine to do the following?
r'^accounts/', include('myapp.urls')
In additon to:
r'^accounts/', include('allauth.urls')
As far as I can tell both will just be included with the base url routing schema and it will just look for the allauth urls first?
|
Soft delete django database objects
| 37,922,348 | 2 | 6 | 5,266 | 0 |
python,django,web,django-rest-framework
|
Just override the delete() method of model A and check the relation before deleting. If it isn't empty, move the object to another table/DB.
| 0 | 0 | 0 | 0 |
2016-06-20T11:59:00.000
| 4 | 0.099668 | false | 37,922,029 | 0 | 0 | 1 | 1 |
Suppose that I have Django models such as A -> B -> C -> D in the default database.
C is a foreign key in D, similarly B in C and C in A.
On the deletion of an object of A, the default behaviour of Django is that all the objects related to A directly or indirectly will get deleted automatically (on delete cascade). Thus B, C, D would get automatically deleted.
I want to implement deletion in a way such that on deletion of an object of A it would get moved to another database named 'del'.Also along with it all other related objects of B,C,D will also get moved.
Is there an easy way of implementing this in one go?
|
Building a link shortener in Django
| 37,922,848 | 1 | 0 | 819 | 0 |
python,django,web,url-redirection
|
You could just redirect to the target URL via HttpResponseRedirect.
| 0 | 0 | 0 | 0 |
2016-06-20T12:29:00.000
| 3 | 0.066568 | false | 37,922,631 | 0 | 0 | 1 | 1 |
I am using Django 1.9 to build a link shortener. I have created a simple HTML page where the user can enter the long URL. I have also coded the methods for shortening this URL. The data is getting stored in the database and I am able to display the shortened URL to the user.
I want to know what I have to do next. What happens when a user visits the shorter URL? Should I use redirects or something else? I am totally clueless about this topic.
|
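As for generating the short codes themselves, one common convention (an assumption here, not something stated in the question) is to base-62 encode the database row id; the redirect view then decodes the code, looks up the long URL, and returns an HttpResponseRedirect. A stdlib sketch of the encoding:

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode(n):
    """Encode a non-negative integer (e.g. a row id) as a base-62 string."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, r = divmod(n, 62)
        out.append(ALPHABET[r])
    return "".join(reversed(out))

def decode(s):
    """Inverse of encode: base-62 string back to the integer id."""
    n = 0
    for ch in s:
        n = n * 62 + ALPHABET.index(ch)
    return n
```

The short URL route captures the code, and the view does roughly: look up the row via decode(code), then redirect (typically a 301) to the stored long URL.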
Running Scrapy on Dokku using a Digital Ocean server
| 37,969,257 | 1 | 0 | 430 | 0 |
python,heroku,scrapy,digital-ocean,dokku
|
I 'fixed' this issue by not using a Digital Ocean server. The website that I am trying to crawl, which is craigslist.org, just did not respond well to a DO server. It takes a long time to respond to a request. Other websites like Google or Amazon work just fine with DO.
My scraper works just fine on craigslist when using a VPS from another provider.
| 0 | 0 | 0 | 0 |
2016-06-20T14:48:00.000
| 1 | 0.197375 | false | 37,925,504 | 0 | 0 | 1 | 1 |
Not sure how to describe this but I am running a Scrapy spider on a Digital Ocean server ($5 server), the Scrapy project is deployed as a Dokku app.
However, it runs very slowly compared to the speed on my local computer and on a Heroku free tier dyno. On Dokku it crawls at a speed of 30 pages per minute, while locally and on Heroku the speed is 200+ pages per minute.
I do not know how to debug, analyze or where to start in order to fix the problem. Any help, clues or tips on how to solve this?
|
How to serve previously uploaded video files in Django
| 37,942,478 | 0 | 0 | 329 | 0 |
python,django,video
|
Is there a way to load video files via Django but then serve them using a different server?

You should define what you mean by "different server", but I assume you mean a different project that is not written in Django.
Since video files land in the file system (if you design it so), you can access them however you want if the other project is running on the same server; otherwise you would need some file sync between the servers. If you want to distinguish which video file belongs to which object in the DB, I would insert the object name into the file path.
If I did not fully answer your question, let me know below.
| 0 | 0 | 0 | 0 |
2016-06-21T10:42:00.000
| 1 | 0 | false | 37,942,244 | 0 | 0 | 1 | 1 |
I'm developing a Django site which allows users to upload PDF, image and video files. Django is able to serve the PDF and image files comfortably for my purposes but cannot cope with the video downloads. Is there a way to load video files via Django but then serve them using a different server?
|
Fatal error C1083: Cannot open include file: 'openssl/opensslv.h'
| 38,144,434 | 7 | 20 | 26,978 | 0 |
python,openssl,cryptography,scrapy
|
Copy the "openssl" folder from C:\OpenSSL-Win32\include\ to C:\Python27\include\
and copy all libs from C:\OpenSSL-Win32\lib to C:\Python27\libs\
| 0 | 1 | 0 | 0 |
2016-06-21T17:52:00.000
| 2 | 1 | false | 37,951,303 | 0 | 0 | 1 | 1 |
I'm trying to install Scrapy, but got this error during installation: build\temp.win-amd64-2.7\Release_openssl.c(429) : fatal error C1083: Cannot open include file: 'openssl/opensslv.h': No such file or directory
I've checked that the file "opensslv.h" is in here "C:\OpenSSL-Win64\include\openssl". And I've also included this "C:\OpenSSL-Win64\include" in the Path, system variables.
Stuck on this for hours, can someone please help out? Thanks.
The same issue was found for the "cryptography-1.5.2" package
|
When to make a django app, rather than just a model
| 37,961,101 | 0 | 0 | 40 | 0 |
python,django,django-models
|
In your case, I think it is better to put your two models in one app.
| 0 | 0 | 0 | 0 |
2016-06-21T21:51:00.000
| 2 | 0 | false | 37,955,199 | 0 | 0 | 1 | 1 |
Recently I've been making a few test projects in Django and while I've found the structure to be better than that of other Web Frameworks, I am a little confused on the concept of different 'apps'.
Here is a test case example:
Suppose I have a simple CRUD application where users post a picture and a title, with a small description, but I want other users to have the ability to create a review of this picture.
Seeing as both the "Post" and "Review" models in this case require CRUD functionality, would I just have two models in the same app, and associate them with one another? Or have two separate apps with different urls.py and views.py files?
I have a hunch I've been doing it wrong and it should be just two models, if this is the case how would I go about writing the urls and views for two models in the same app?
Thanks and any input is appreciated!
|
Dynamodb max value
| 37,960,297 | 1 | 11 | 9,688 | 0 |
python,amazon-dynamodb,boto3
|
There is no cheap way to achieve this in DynamoDB. There is no built-in function to determine the max value of an attribute without retrieving all items and calculating it programmatically.
| 0 | 0 | 1 | 0 |
2016-06-22T05:47:00.000
| 3 | 0.066568 | false | 37,959,515 | 0 | 0 | 1 | 1 |
I'm using DynamoDB. I have a simple Employee table with fields like id, name, salary, doj, etc. What is the equivalent of select max(salary) from employee in DynamoDB?
|
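So the usual approach is to paginate a Scan (or a narrower Query) and track the max client-side. The aggregation itself is plain Python; the commented boto3 wiring is a sketch of how it would typically be fed (with the boto3 resource layer, Table.scan() returns plain Python items):

```python
def max_attribute(pages, attr):
    """Max value of `attr` across all items in a sequence of result pages.
    Each page is a dict shaped like a DynamoDB Scan response."""
    best = None
    for page in pages:
        for item in page.get("Items", []):
            value = item.get(attr)
            if value is not None and (best is None or value > best):
                best = value
    return best

# Sketch of the boto3 side (assumption, shown for context only):
# paginator = boto3.client("dynamodb").get_paginator("scan")
# max_attribute(paginator.paginate(TableName="Employee"), "salary")
```

For frequent max queries it is cheaper to maintain the running max yourself (e.g. in a separate item or a GSI sorted by salary) than to scan the whole table each time.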
Odoo, prevent redirecting after web login
| 37,964,958 | 2 | 1 | 2,805 | 0 |
python,odoo,odoo-9
|
If you want to set this for all website users, you need to make them portal users. Also, under Users -> Preferences you can set Home Action to Website.
UPDATE
For users signing up, you need to create a template user account and check the portal options for that user. Next, go to Settings -> General Settings; under Portal Access, find "Template user for new users created through signup" and choose your template user.
| 0 | 0 | 0 | 0 |
2016-06-22T07:45:00.000
| 2 | 1.2 | true | 37,961,668 | 0 | 0 | 1 | 1 |
After I login to odoo from localhost:8069/web/login I get redirected to Odoo backend, from where I need to click Website to come back to Home Page.
How can I prevent this? I need to stay inside the home page after login.
EDIT:
@moskiSRB 's answer solves the problem for simple login.
But after signup there is an auto-login which still leads to the backend.
|
Using CodeDeploy ValidateService Hook with Python Application
| 37,980,335 | 2 | 1 | 1,608 | 0 |
python,amazon-web-services,amazon-ec2,aws-code-deploy,aws-codepipeline
|
I would loop in the ValidateService hook, checking for the condition you expect, OR just sleep for 60 seconds, assuming that is the normal initialization time.
The ValidateService hook should do just that: make sure the service is fully running before continuing/finalizing the deployment. That depends on your app, of course, but consider a loop that pulls a specially designed page, e.g. http://localhost/service-ready. In that URL, test and confirm anything and everything appropriate for your service. Return a -Pending- string if the service is not yet validated; return -OK- when everything is 100%.
Perhaps loop that 10-20 times with a 10-second sleep, exit when it returns -OK-, and throw an error if the service never validates.
| 0 | 1 | 0 | 0 |
2016-06-22T12:19:00.000
| 1 | 0.379949 | false | 37,967,838 | 0 | 0 | 1 | 1 |
I have a heavy app hosted on AWS.
I use CodeDeploy & CodePipeline (updating from GitHub) to update the servers when a new release is ready (currently running 6 EC2 instances in the production environment).
I've set up CodeDeploy to operate one-by-one and also defined 300-second connection draining on the load balancer.
Still, my application is heavy (it loads large dictionary pickle files from disk into memory): the process of firing up takes ~60 seconds. In those 60 seconds CodeDeploy marks the deployment to an instance as completed, causing it to join the load balancer again as a healthy instance - this might cause errors for users trying to reach the application.
I thought about using the ValidateService hook, but I'm not sure how to in my case.
Any ideas on how to wait for a full load and readiness of the application before proceeding to the next instance?
This is my current AppSpec.yml
version: 0.0
os: linux
files:
  - source: /deployment
    destination: /deployment
  - source: /webserver/src
    destination: /vagrant/webserver/src
permissions:
  - object: /deployment
    pattern: "**"
    owner: root
    mode: 644
    type:
      - directory
  - object: /webserver/src
    owner: root
    mode: 644
    except: [/webserver/src/dictionaries]
    type:
      - directory
hooks:
  ApplicationStop:
    - location: /deployment/aws_application_stop.sh
  BeforeInstall:
    - location: /deployment/aws_before_install.sh
  AfterInstall:
    - location: /deployment/aws_after_install.sh
  ApplicationStart:
    - location: /deployment/aws_application_start.sh
|
Java calling python function with tensorflow graph
| 37,997,580 | 0 | 1 | 1,916 | 0 |
java,python-2.7,tensorflow
|
I've had the same problem, Java+Python+TensorFlow. I've ended up setting up a simple http server. If that's too slow for you, you can shave off some overhead by employing sockets directly.
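A minimal sketch of that Python side using only the standard library (written for Python 3 — on the question's Python 2.7 the module is BaseHTTPServer; get_value here is a stand-in for the real TensorFlow session call):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_value(x):
    # stand-in for the real TensorFlow graph evaluation
    return x * 2.0

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # read the JSON body sent by the Java client
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length).decode("utf-8"))
        body = json.dumps({"value": get_value(payload["input"])}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet

def serve(port=8888):
    HTTPServer(("127.0.0.1", port), Handler).serve_forever()
```

The Java side then just issues an ordinary HTTP POST with a JSON body and parses the response.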
| 0 | 0 | 0 | 0 |
2016-06-23T12:48:00.000
| 3 | 0 | false | 37,992,129 | 0 | 1 | 1 | 1 |
So I have a neural network in tensorflow (python2.7) and I need to retrieve its output using Java. I have a simple python function getValue(input) which starts the session and retrieves the value. I am open to any suggestions. I believe Jython wont work cause tensorflow is not in the library. I need the call to be as fast as possible. JNI exists for Java calling C so can I convert with cython and compile then use JNI? Is there a way to pass the information in RAM or some other way I haven't thought of?
|
How to use pyramid cookie to authenticate user in tornado web framework?
| 38,032,565 | 1 | 2 | 83 | 0 |
python,session,cookies,tornado,pyramid
|
The two locations are separate origins in HTTP terms; by default, they do not share cookies.
Before trying to figure out how to pass cookies around I'd try to set up a front end web server like Nginx that would proxy requests between two different backend servers. Both applications could get their own path, served from www.abcd.com.
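A sketch of that front-end proxy as an nginx server block — the ports and the /tornado/ path are hypothetical, adjust them to wherever the Pyramid and Tornado apps actually listen:

```nginx
server {
    listen 80;
    server_name www.abcd.com;

    # Pyramid app on its default port
    location / {
        proxy_pass http://127.0.0.1:6543;
        proxy_set_header Host $host;
    }

    # Tornado app, served from the same origin so cookies are shared
    location /tornado/ {
        proxy_pass http://127.0.0.1:9000/;
        proxy_set_header Host $host;
    }
}
```

Because both apps are now behind the same host and port, a session cookie set by Pyramid is sent to the Tornado paths as well, and Tornado can validate it.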
| 0 | 1 | 0 | 1 |
2016-06-23T12:52:00.000
| 1 | 0.197375 | false | 37,992,209 | 0 | 0 | 1 | 1 |
In a server, I have a pyramid app running and I plan to have another tornado app running in parallel in the same machine. Let's say the pyramid app is available in www.abcd.com, then the tornado app will be available in www.abcd.com:9000.
I want only the authenticated users in the pyramid webapp to be able to access the tornado webapp.
My guess is somehow using cookie set by the pyramid app in tornado.
Is it possible? What is the best way to do that?
|
Apache web server 32bit on 64bit computer
| 38,040,277 | 0 | 0 | 203 | 1 |
python,django,apache,32bit-64bit,32-bit
|
I would assume so. You should definitely go for a 64-bit version of Apache to make use of all the memory available.
| 0 | 0 | 0 | 0 |
2016-06-26T15:45:00.000
| 1 | 0 | false | 38,040,240 | 0 | 0 | 1 | 1 |
Simple question - if I run a 32-bit Apache on a 64-bit OS with a lot of memory (32GB RAM), does this mean all the extra memory will go to waste, since 32-bit Apache can't use more than 3GB of RAM?
|
unexpected Chinese output from eclipse console
| 38,047,976 | 1 | 1 | 39 | 0 |
python,eclipse,utf-8
|
Edit -> Set Encoding -> UTF-16 screwed up my text again. Another ctrl-z and Edit -> Set Encoding -> ASCII fixed it.
| 0 | 0 | 0 | 1 |
2016-06-27T07:14:00.000
| 1 | 1.2 | true | 38,047,915 | 0 | 0 | 1 | 1 |
I attempted to change the character encoding to UTF-16 and it changed all of my text in Eclipse's text editor to Chinese. A ctrl-z saved my work, but now the console is stuck in Chinese.
When running an arbitrary python script, the script terminates immediately and gives the following message: "†䙩汥•䌺屄敶屗..." (The string goes on for much longer, but stackoverflow detects it as spam)
What does this mean? I've tried resetting things to default but to no avail.
|
PyCharm is not able find app.yaml when pushing to GAE
| 38,060,684 | 0 | 0 | 98 | 0 |
python,google-app-engine
|
The parameter to the appcfg update command is the yaml file, or the directory containing the yaml file.
| 0 | 1 | 0 | 0 |
2016-06-27T16:53:00.000
| 1 | 0 | false | 38,059,310 | 0 | 0 | 1 | 1 |
I wrote a simple pymongo code insert a few values in MongoDB instance on GAE and My app got deployed properly from Pycharm but,
I am getting the same error while running following command
appcfg.py -A login-services-1354 -V v1 update . on my cloud shell
The following is the error I got Usage: appcfg.py [options] update | [file, ...] appcfg.py: error: Directory '/home/seshanthnadal' does not contain configuration file app.yaml
Any help would be appreciated!
|
Handling time consuming requests in Flask-UWSGI app
| 38,080,033 | 1 | 1 | 750 | 0 |
python,nginx,flask,uwsgi
|
As an option you can do the following:
Separate the heavy logic from the function that is called upon @route and move it into a separate place (a file, another function, etc.).
Introduce Celery to run those pieces of heavy logic (they will be processed by a worker, separately from the @route-decorated functions). A quick way of doing this is using Redis as a message broker.
Schedule the time-consuming functions from your @route-decorated functions via Celery (it is possible to pass parameters as well).
This way the HTTP requests won't be blocked for the complete function's execution time.
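Celery specifics aside, the shape of the pattern — hand the heavy call to a background worker and return to the client immediately — can be sketched with the standard library (heavy_update is a stand-in for the real data-changing logic; in production the executor would be a Celery worker behind Redis):

```python
from concurrent.futures import ThreadPoolExecutor

# stand-in for the Celery worker pool
executor = ThreadPoolExecutor(max_workers=1)

def heavy_update(data):
    # stand-in for the long-running, data-changing logic
    return sum(data)

def handle_request(data):
    """What the @route-decorated function would do: schedule and return at once."""
    future = executor.submit(heavy_update, data)  # Celery: heavy_update.delay(data)
    return {"status": "scheduled"}, future
```

The HTTP response goes out right away; the client can poll a status endpoint (or be notified) once the future/task completes.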
| 0 | 0 | 0 | 0 |
2016-06-28T09:56:00.000
| 1 | 0.197375 | false | 38,072,956 | 0 | 0 | 1 | 1 |
Am running an app with Flask , UWSGI and Nginx. My UWSGI is set to spawn out 4 parallel processes to handle multiple requests at the same time. Now I have one request that takes lot of time and that changes important data concerning the application. So, when one UWSGI process is processing that request and say all others are also busy, the fifth request would have to wait. The problem here is I cannot change this request to run in an offline mode as it changes important data and the user cannot simply remain unknown about it. What is the best way to handle this situation ?
|
how to set default ip as 0.0.0.0 for a django project to debug in visual studio 2015?
| 38,126,584 | 6 | 2 | 5,943 | 0 |
python,django,visual-studio-2015,django-rest-framework,django-cors-headers
|
After so many struggle I found one solution, want to share it with you. Hope you will like it.
open <your python location>\Lib\site-packages\django\core\management\commands\runserver.py and find one code where it deal with self.addr.
if not self.addr:
self.addr = '::1' if self.use_ipv6 else '127.0.0.1'
It sets default address to 127.0.0.1 change it to '0.0.0.0'. Now, if you run your server with only command ./manage.py runserver It will run on 0.0.0.0, even from visual studio.
Good luck.
| 0 | 0 | 0 | 0 |
2016-06-29T09:33:00.000
| 3 | 1 | false | 38,095,689 | 0 | 0 | 1 | 1 |
I am developing a django rest framework application using visual studio 2015, python 2.7, django 1.9. I have enabled CORS. I can access it from other origin when I run it through command prompt as python manage.py runserver 0.0.0.0:8086. But, in visual studio auto debug, it runs on 127.0.0.0. I want to configure visual studio to run the server on specified ip (ie. 0.0.0.0). So, that debugging will be easy.
I have tried with setting default port and address from site-packages\django\core\management\commands\runserver.py.
and also able to set default port in visual studio property of the project. But, unable to set the default ip.
Can any one help me to configure the ip 0.0.0.0 as default not the default one (127.0.0.1) in visual studio.
Thanks in advance.
|
Graceful exit server when using Django's autoreloader
| 38,171,865 | 1 | 0 | 150 | 0 |
python,django
|
Try using the atexit module to catch the termination. It should work for anything that acts like SIGINT or SIGTERM; SIGKILL cannot be intercepted (but it should not be sent by any auto-restart script without sending SIGTERM first).
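A minimal sketch of the idea — register the cleanup with atexit, and translate SIGTERM into a normal exit so the handler also fires when the process is terminated rather than interrupted:

```python
import atexit
import signal
import sys

def cleanup():
    # run the server's teardown functions here
    print("cleanup ran")

atexit.register(cleanup)

# atexit handlers do not run on an unhandled SIGTERM,
# so convert it into a regular SystemExit first
signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))
```

In the custom runserver command, the register call would go right before the server loop starts.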
| 0 | 0 | 0 | 0 |
2016-07-02T13:15:00.000
| 1 | 1.2 | true | 38,160,577 | 0 | 0 | 1 | 1 |
I am using a custom Django runserver command that is supposed to run a bunch of cleanup functions upon termination. This works fine as long as I don't use the autoreloader: my server catches the KeyboardInterrupt exception properly and exits gracefully.
However, if I use Django's autoreloader, the reloader seems to simply kill the server thread without properly terminating it (as far as I can tell, it doesn't have any means to do this).
This seems inherently unsafe, so I can't really believe that there's not a better way of handling this.
Can I somehow use the autoreloader functionality without having my server thread be killed uncleanly?
|
Python in Knime: Downloading files and dynamically pressing them into workflow
| 38,161,395 | 1 | 1 | 1,032 | 0 |
python-2.7,file-io,knime
|
There are multiple options to make this work:
Convert the files in-memory to Binary Object cells using Python; later you can use them in KNIME. (I am not sure this is supported, but as I remember it was demoed in one of the last KNIME gatherings.)
Save the files to a temporary folder (Create Temp Dir) using Python and connect the Python node via a flow variable connection to a file reader node in KNIME (which should work in a loop: List Files, check the Iterate List of Files metanode).
Maybe there is already S3 remote file handling support in KNIME, so you can do the downloading and unzipping within KNIME. (Not that I know of, but it would be nice.)
I would go with option 2, but I am not so familiar with Python, so for you option 1 is probably best. (In case option 3 is supported, that is the best in my opinion.)
| 0 | 0 | 1 | 0 |
2016-07-02T13:17:00.000
| 2 | 0.099668 | false | 38,160,597 | 0 | 0 | 1 | 1 |
I'm using Knime 3.1.2 on OSX and Linux for OPENMS analysis (Mass Spectrometry).
Currently, it uses static filename.mzML files manually put in a directory. It usually has more than one file pressed in at a time ('Input FileS' module not 'Input File' module) using a ZipLoopStart.
I want these files to be downloaded dynamically and then pressed into the workflow...but I'm not sure the best way to do that.
Currently, I have a Python script that downloads .gz files (from AWS S3) and then unzips them. I already have variations that can unzip the files into memory using StringIO (and maybe pass them into the workflow from there as data??).
It can also download them to a directory...which maybe can them be used as the source? But I don't know how to tell the ZipLoop to wait and check the directory after the python script is run.
I also could have the python script run as a separate entity (outside of knime) and then, once the directory is populated, call knime...HOWEVER there will always be a different number of files (maybe 1, maybe three)...and I don't know how to make the 'Input Files' knime node to handle an unknown number of input files.
I hope this makes sense.
Thanks!
|
Selenium not freeing up memory even after calling close/quit
| 53,867,150 | 0 | 11 | 10,312 | 0 |
python,selenium,firefox,selenium-webdriver,selenium-chromedriver
|
I have experienced a similar issue, and destroying the driver myself (i.e. setting the driver to None) prevented those memory leaks for me.
| 0 | 0 | 1 | 0 |
2016-07-02T21:29:00.000
| 4 | 0 | false | 38,164,635 | 0 | 0 | 1 | 2 |
So I've been working on scraper that goes on 10k+pages and scrapes data from it.
The issue is that over time, memory consumption raises drastically. So to overcome this - instead of closing driver instance only at the end of scrape - the scraper is updated so that it closes the instance after every page is loaded and data extracted.
But ram memory still gets populated for some reason.
I tried using PhantomJS but it doesn't load data properly for some reason.
I also tried with the initial version of the scraper to limit cache in Firefox to 100mb, but that also did not work.
Note: I run tests with both chromedriver and firefox, and unfortunately I can't use libraries such as requests, mechanize, etc... instead of selenium.
Any help is appreciated since I've been trying to figure this out for a week now. Thanks.
|
Selenium not freeing up memory even after calling close/quit
| 38,164,741 | 2 | 11 | 10,312 | 0 |
python,selenium,firefox,selenium-webdriver,selenium-chromedriver
|
Are you trying to say that your drivers are what's filling up your memory? How are you closing them? If you're extracting your data, do you still have references to some collection that's storing them in memory?
You mentioned that you were already running out of memory when you closed the driver instance at the end of scraping, which makes it seem like you're keeping extra references.
| 0 | 0 | 1 | 0 |
2016-07-02T21:29:00.000
| 4 | 0.099668 | false | 38,164,635 | 0 | 0 | 1 | 2 |
So I've been working on scraper that goes on 10k+pages and scrapes data from it.
The issue is that over time, memory consumption raises drastically. So to overcome this - instead of closing driver instance only at the end of scrape - the scraper is updated so that it closes the instance after every page is loaded and data extracted.
But ram memory still gets populated for some reason.
I tried using PhantomJS but it doesn't load data properly for some reason.
I also tried with the initial version of the scraper to limit cache in Firefox to 100mb, but that also did not work.
Note: I run tests with both chromedriver and firefox, and unfortunately I can't use libraries such as requests, mechanize, etc... instead of selenium.
Any help is appreciated since I've been trying to figure this out for a week now. Thanks.
|
Can't log in to admin site in Django
| 53,273,173 | 0 | 10 | 8,739 | 0 |
python,django
|
I think you should try these steps:
Create a new admin user:
python manage.py createsuperuser
Log in to the admin site with the new account.
Reset the password for your original account and remember it.
Log in to the admin site with your original account.
| 0 | 0 | 0 | 0 |
2016-07-03T20:49:00.000
| 6 | 0 | false | 38,174,216 | 0 | 0 | 1 | 4 |
I was working through the polls tutorial and everything was fine until I tried to log in to the admin site; it just said
"Please enter the correct username and password for a staff account. Note that both fields may be case-sensitive."
So I Googled the issue and tried everything I could. Here are all the problems I investigated:
Database not synced: I synced it and nothing changed.
No django_session table: I checked; it's there.
Problematic settings: The only change I made to the settings was the addition of'polls.apps.PollsConfig', to INSTALLED_APPS.
User not configured correctly: is_staff, is_superuser, and is_active are all True.
Old sessions: I checked the django_session table and it's empty.
Oversized message cookie: I checked and I don't even have one.
Created superuser while running web server: I did this, but after stopping the web server, creating a new user, and restarting the web server, and trying to log in with the new user, it still didn't work.
Missing or wrong URL pattern: Currently I have url(r"^admin/", admin.site.urls) in mysite/urls.py.
Entering wrong username: The username I created was "admin" and that's the same one I'm typing in.
Wrong server command: I'm using python manage.py runserver.
Something wrong with database: I tried deleting the database and then reapplying the migrations, but nothing changed.
Is there anything I haven't tried yet, or am I missing something?
|
Can't log in to admin site in Django
| 68,123,164 | 0 | 10 | 8,739 | 0 |
python,django
|
My problem was the settings module: I use different databases for local/local_proxy and production, so I had to create the superuser with the right one.
DJANGO_SETTINGS_MODULE=serverless_django.settings.local_proxy python manage.py createsuperuser
worked for me
| 0 | 0 | 0 | 0 |
2016-07-03T20:49:00.000
| 6 | 0 | false | 38,174,216 | 0 | 0 | 1 | 4 |
I was working through the polls tutorial and everything was fine until I tried to log in to the admin site; it just said
"Please enter the correct username and password for a staff account. Note that both fields may be case-sensitive."
So I Googled the issue and tried everything I could. Here are all the problems I investigated:
Database not synced: I synced it and nothing changed.
No django_session table: I checked; it's there.
Problematic settings: The only change I made to the settings was the addition of'polls.apps.PollsConfig', to INSTALLED_APPS.
User not configured correctly: is_staff, is_superuser, and is_active are all True.
Old sessions: I checked the django_session table and it's empty.
Oversized message cookie: I checked and I don't even have one.
Created superuser while running web server: I did this, but after stopping the web server, creating a new user, and restarting the web server, and trying to log in with the new user, it still didn't work.
Missing or wrong URL pattern: Currently I have url(r"^admin/", admin.site.urls) in mysite/urls.py.
Entering wrong username: The username I created was "admin" and that's the same one I'm typing in.
Wrong server command: I'm using python manage.py runserver.
Something wrong with database: I tried deleting the database and then reapplying the migrations, but nothing changed.
Is there anything I haven't tried yet, or am I missing something?
|
Can't log in to admin site in Django
| 68,131,885 | 1 | 10 | 8,739 | 0 |
python,django
|
Check your is_active model field. You may have set its default value to False, hence the reason it might not let you log in.
If it's like this -- is_active = models.BooleanField(default=False) -- change it to True, or inspect the database and change the value in is_active for the created superuser to 1.
| 0 | 0 | 0 | 0 |
2016-07-03T20:49:00.000
| 6 | 0.033321 | false | 38,174,216 | 0 | 0 | 1 | 4 |
I was working through the polls tutorial and everything was fine until I tried to log in to the admin site; it just said
"Please enter the correct username and password for a staff account. Note that both fields may be case-sensitive."
So I Googled the issue and tried everything I could. Here are all the problems I investigated:
Database not synced: I synced it and nothing changed.
No django_session table: I checked; it's there.
Problematic settings: The only change I made to the settings was the addition of'polls.apps.PollsConfig', to INSTALLED_APPS.
User not configured correctly: is_staff, is_superuser, and is_active are all True.
Old sessions: I checked the django_session table and it's empty.
Oversized message cookie: I checked and I don't even have one.
Created superuser while running web server: I did this, but after stopping the web server, creating a new user, and restarting the web server, and trying to log in with the new user, it still didn't work.
Missing or wrong URL pattern: Currently I have url(r"^admin/", admin.site.urls) in mysite/urls.py.
Entering wrong username: The username I created was "admin" and that's the same one I'm typing in.
Wrong server command: I'm using python manage.py runserver.
Something wrong with database: I tried deleting the database and then reapplying the migrations, but nothing changed.
Is there anything I haven't tried yet, or am I missing something?
|
Can't log in to admin site in Django
| 68,274,827 | 1 | 10 | 8,739 | 0 |
python,django
|
This could also happen if the database being used is the default sqlite3 database and the settings.py has the DATABASES property referring to a db.sqlite3 file that is NOT in the same directory as manage.py is.
| 0 | 0 | 0 | 0 |
2016-07-03T20:49:00.000
| 6 | 0.033321 | false | 38,174,216 | 0 | 0 | 1 | 4 |
I was working through the polls tutorial and everything was fine until I tried to log in to the admin site; it just said
"Please enter the correct username and password for a staff account. Note that both fields may be case-sensitive."
So I Googled the issue and tried everything I could. Here are all the problems I investigated:
Database not synced: I synced it and nothing changed.
No django_session table: I checked; it's there.
Problematic settings: The only change I made to the settings was the addition of'polls.apps.PollsConfig', to INSTALLED_APPS.
User not configured correctly: is_staff, is_superuser, and is_active are all True.
Old sessions: I checked the django_session table and it's empty.
Oversized message cookie: I checked and I don't even have one.
Created superuser while running web server: I did this, but after stopping the web server, creating a new user, and restarting the web server, and trying to log in with the new user, it still didn't work.
Missing or wrong URL pattern: Currently I have url(r"^admin/", admin.site.urls) in mysite/urls.py.
Entering wrong username: The username I created was "admin" and that's the same one I'm typing in.
Wrong server command: I'm using python manage.py runserver.
Something wrong with database: I tried deleting the database and then reapplying the migrations, but nothing changed.
Is there anything I haven't tried yet, or am I missing something?
|
Create .msg file with task without having outlook installed
| 38,212,655 | 1 | 1 | 2,600 | 0 |
python,django,outlook,msg
|
Why not create an EML file? It is MIME, so there are hundreds of libraries out there. Outlook will be able to open an EML file just fine.
In your particular case, create a MIME file with the vTodo MIME part as the primary MIME part.
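A sketch of such an EML file using Python 3's standard email package — the VTODO body below is a minimal hand-written iCalendar fragment, and the UID/SUMMARY/DUE values are placeholders to adapt to your task model:

```python
from email.message import EmailMessage

# minimal iCalendar payload containing a single VTODO (task) component
VTODO = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//task export//EN
BEGIN:VTODO
UID:task-123@example.com
SUMMARY:Review the quarterly report
DUE:20160801T120000Z
END:VTODO
END:VCALENDAR
"""

def build_task_eml(subject, sender, recipient):
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = recipient
    # make the calendar part the primary MIME part (text/calendar)
    msg.set_content(VTODO, subtype="calendar", params={"method": "REQUEST"})
    return msg.as_string()
```

Writing the returned string to a .eml file gives Outlook something it can open without Outlook being involved in the generation at all.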
| 0 | 1 | 0 | 0 |
2016-07-04T08:52:00.000
| 2 | 0.099668 | false | 38,180,458 | 0 | 0 | 1 | 1 |
Is there any chance to create a outlook .msg file without having outlook installed.
We use a django backend and need to create a msg file containing a task for importing in outlook. Since we use unix-based servers we dont have any chance to get outlook installed (except wine etc..)
Is there a component to generate such .msg files in any programming language without having outlook installed?
|
ImportError: No module named win32service
| 38,198,005 | 25 | 5 | 32,335 | 0 |
python,openerp,odoo-8
|
You need to install pywin32.
Either use pip install pywin32 or download from GitHub https://github.com/mhammond/pywin32/releases
| 0 | 1 | 0 | 0 |
2016-07-05T07:43:00.000
| 2 | 1.2 | true | 38,197,879 | 0 | 0 | 1 | 1 |
I am using odoo8 with python 2.7.9 (64 bit) on eclipse IDE. Python software got corrupted so I had to reinstall it.Now I am facing this new problem ImportError: No module named win32service
|
Most efficient solution for sharing code between two django projects
| 38,208,333 | 1 | 0 | 1,082 | 0 |
python,django,git,django-rest-framework
|
I would go with a Python package; after all, this is what packages are for.
| 0 | 0 | 0 | 0 |
2016-07-05T16:01:00.000
| 2 | 0.099668 | false | 38,207,914 | 0 | 0 | 1 | 1 |
I have to find a solution for sharing code between two big Django projects. The main things to share are models and serializers and template tags. I've came up with 3 different solutions and I need you to find pro and cons to be able to make a choice.
I'll list you the solutions I found:
git submodules
Create a repository where to store my *.py files and include them as a django app such as 'common_deps'
Even if this is the purpose of git submodules there are a bit hard to use and its easy to fall into traps.
python package
Create a python package to store my *.py files.
It seems to be the best option to me event if that means that I'll need to change my requirements.txt file on my projects on each new release.
Simple git repository
Create a new repository to store my *.py files and include them as a django app such as 'common_deps'. Then add it to my PYTHON_PATH
I need some advices, I haven't chosen yet. I'm just telling myself that git submodules seems to be a bas idea.
Tell me guys.
|
why am I not being redirected in the browser after Mailchimp API oauth2 initial request is sent?
| 38,216,891 | 0 | 0 | 356 | 0 |
python,django,redirect,oauth-2.0,mailchimp
|
Do your EMAIL_HOST, user, password and port match your Mailchimp credentials? Second, check the Mailchimp API log for the request status; you may get some insight from there.
| 0 | 0 | 0 | 0 |
2016-07-06T02:48:00.000
| 1 | 0 | false | 38,215,657 | 0 | 0 | 1 | 1 |
I am trying to set up Oauth2 with the Mailchimp API. So far things seem to be working correctly except that after having the user login at Mailchimp, the browser doesn't redirect back to my redirect_uri. It just stays on the Mailchimp login page.
For the code:
I redirect the user to the authorize url/mailchimp login:
authorize_uri = 'https://login.mailchimp.com/oauth2/authorize? response_type=code&client_id=%s&client_secret=%s&redirect_uri=%s' % (settings.MAILCHIMP_CLIENT_ID, settings.MAILCHIMP_CLIENT_SECRET, redirect_uri)
my redirect_uri is redirect_uri = 'http://127.0.0.1:8000/mailchimp/connect'
So the authorize_url redirects to the login page, and I login with credentials that absolutely work to login the regular non-oauth way. Also I see the 302 redirect with the code I need in my logs, but the browser seems to just refresh the Mailchimp login page and the view(I'm using django) for processing the GET request below is never triggered.
[06/Jul/2016 02:31:43] "GET /mailchimp/connect?code=36ad22daa3d0f8b3804f7e340e5d50f1 HTTP/1.1" 302 0
I have no idea what I'm doing wrong...
|
Pip uninstall Scrapy with all its dependencies
| 38,218,276 | -1 | 0 | 5,640 | 0 |
python,python-3.x,pip
|
pip uninstall currently doesn't support removing the dependencies. You can manually go to the folder where scrapy is installed and delete it. For example: /usr/local/lib/python2.7/dist-packages/scrapy.
For example if it is at '/PATH/TO/SCRAPY', run this command on the terminal:
sudo rm -rf /PATH/TO/SCRAPY
| 0 | 0 | 0 | 0 |
2016-07-06T07:09:00.000
| 3 | -0.066568 | false | 38,218,132 | 1 | 0 | 1 | 1 |
I had installed Scrapy with pip install scrapy. It also installed all its required packages: Installing collected packages: zope.interface, Twisted, six, cssselect, w3lib, parsel, pycparser, cffi, pyasn1, idna, cryptography, pyOpenSSL, attrs, pyasn1-modules, service-identity, queuelib, PyDispatcher, scrapy. So, is it possible to uninstall Scrapy and all its required packages with a terminal command?
|
How to add a production config when deploying a docker container?
| 38,221,268 | 0 | 0 | 38 | 0 |
python,docker
|
You should copy your production-ready config file into the docker container as part of your image-building process (COPY directive in your dockerfile), and then proceed with the same deployment steps you would normally use.
| 0 | 1 | 0 | 0 |
2016-07-06T09:25:00.000
| 1 | 0 | false | 38,220,530 | 0 | 0 | 1 | 1 |
I used to deploy Python web applications on AWS EC2 instances with Ansible.
In my development environment, I use config from a module local_config.py, but in the deployment process, I use an Ansible task to replace this file with a production-ready config.
How do I do something similar when building a Docker image?
|
If a django migration is migrated to db, what is the best practice if the migration is deleted at a later date?
| 38,225,266 | 0 | 0 | 37 | 0 |
python,sql,django,migration
|
The first approach I'd try would be to check out the last good commit and recreate the model changes in question so the migration could be regenerated and checked in.
And while it's good to have a contingency plan for things like this, if it's a real concern I'd suggest evaluating your deployment process to make this issue less likely.
| 0 | 0 | 0 | 0 |
2016-07-06T10:59:00.000
| 1 | 0 | false | 38,222,375 | 0 | 0 | 1 | 1 |
While this shouldn't happen its not impossible.
So what to do in the event that a migration has been run into a database and the migration file has then been deleted and is not recoverable?
this assumes that the database cannot just be dropped.
|
Securely transfer a banch of files from one Heroku app to another
| 38,232,752 | 1 | 0 | 46 | 0 |
python,ruby-on-rails,heroku,transfer
|
I would suggest writing a secured JSON or XML API to transfer the data from app to app. Once the data is received, I would then generate the .csv or .html files from the received data. It keeps things clean and easy to modify for future revisions, because now you'll have an API to interact with.
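Generating the .csv on the receiving side is then only a few lines with the standard library (the field names here are hypothetical — they would be whatever keys the JSON payload carries):

```python
import csv
import io

def rows_to_csv(rows):
    """rows: list of dicts, as decoded from the JSON payload."""
    if not rows:
        return ""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=sorted(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

The same decoded payload can feed an HTML template for the .html variants.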
| 0 | 0 | 0 | 0 |
2016-07-06T17:31:00.000
| 1 | 0.197375 | false | 38,230,178 | 0 | 0 | 1 | 1 |
I need to set up a Heroku app (Python) which would perform scheduled tasks that include fetching a set of data files (.csv and .html) from another Heroku app (RoR) and returning a result back to that app.
Also, it should be restricted so that only my app is able to connect to the RoR app, because it deals with sensitive information. There would be from 20 to 100 files each time, so I want them to be compressed somehow to transfer them quickly (to avoid bothering the server for too long).
I'm interested in possible ways to accomplish it. The first thought is to send HTTP GET request to the ROR app and fetch the necessary files yet it generally not secured at all. Would SCP work in some way in this situation or you have any other ideas?
Thanks in advance!
|
How to save spaCy model onto cache?
| 41,644,953 | 1 | 1 | 1,333 | 0 |
python,caching,spacy
|
First of all, if you only do NER, you can install the parser without vectors.
This is possible by giving the parser argument to the download command:
python -m spacy.en.download parser
This will prevent the 700MB+ GloVe vectors from being downloaded, slimming the memory needed for a single run.
Then, well, it depends on the application/usage you make of the library.
If you call it often, it is better to assign spacy.load('en') to a module/class variable loaded at the beginning of your stack.
This will slow down your boot time a bit, but spaCy will be ready (in memory) to be called.
(If the boot time is a big problem, you can do lazy loading.)
| 0 | 0 | 0 | 0 |
2016-07-08T09:35:00.000
| 1 | 0.197375 | false | 38,263,384 | 0 | 0 | 1 | 1 |
I'm using spaCy with Python for Named Entity Recognition, but the script requires the model to be loaded on every run and takes about 1.6GB memory to load it.
But 1.6GB is not dispensable for every run.
How do I load it into the cache or temporary memory so as to enable the script to run faster?
|
Django forms in ReactJs
| 38,281,765 | 2 | 4 | 1,649 | 0 |
python,reactjs,django-forms
|
The {{ form }} statement belongs to the Django template language. Django templates are responsible for rendering HTML, and so is React, so you don't have to mix the two together.
What you probably want to do is use the Django form validation mechanism server-side and let React render the form client-side. In your Django view, simply return a JSON object that you can use in your React code to initialize your form component.
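A framework-agnostic sketch of that JSON handoff — the field names are hypothetical, and in a real Django view this dict would come from form.initial or a model instance and be returned via JsonResponse:

```python
import json

def form_initial_payload(user):
    """Build the JSON a React form component would be initialized with.

    user: a dict of known values; missing fields default to empty strings,
    which covers the "load the page with an empty form" case.
    """
    data = {
        "name": user.get("name", ""),
        "email": user.get("email", ""),
        "date": user.get("date", ""),
    }
    return json.dumps({"initial": data})
```

The React component fetches this payload on mount and uses it as the form's initial state; on submit it POSTs back to the Django view, which runs the normal form validation.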
| 0 | 0 | 0 | 0 |
2016-07-09T10:30:00.000
| 1 | 1.2 | true | 38,280,859 | 0 | 0 | 1 | 1 |
Is there any way I can use Django forms inside a ReactJS script, like include {{ form }} in the JSX file?
I have a view which displays a from and it is rendered using React. When I load this page from one page the data in these fields should be empty, but when I hit this view from another page I want date to be prefilled in this form. I know how to do this using Django forms and form views, but I am clueless where to bring in React.
|
Dynamic css import with Jinja2
| 38,301,898 | 2 | 2 | 1,671 | 0 |
python,html,django,dynamic,jinja2
|
I found a solution that works out pretty well.
I use
<link rel="stylesheet" href="{% block css %}{% endblock %}"> in the template
and then {% block css %}{% static 'home/css/file.css' %}{% endblock %} in each page.
| 0 | 0 | 0 | 0 |
2016-07-10T11:29:00.000
| 2 | 1.2 | true | 38,291,388 | 0 | 0 | 1 | 1 |
I am trying to make my stylesheets dynamic with django (jinja2) and I want to do something like this:
<link rel="stylesheet" href="{% static 'home/css/{{ block css }}{{ endblock }}.css' %}">
Apparently, I can't use Jinja in Jinja :), and I don't know how to make this work another way.
|
How can I simulate onclick event in python?
| 38,298,895 | 7 | 5 | 13,226 | 0 |
javascript,python,selenium,web-scraping
|
Ideally you don't even need to click buttons in cases like this.
All you need is to see which web service the form sends its request to when the submit button is clicked.
For that, open the developer tools in your browser, go to the Network tab and select 'Preserve log'. Now submit the form manually and look for the first XHR GET/POST request sent. It will be a POST request 90% of the time.
When you select that request, its request parameters will show the values you entered while submitting the form. Bingo!!
Now all you need to do is mimic this request with the relevant request headers and parameters in your Python code using requests. And Wooshh!!
Hope it helps..
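As a sketch of that last step with only the standard library (the answer mentions requests, which works the same way; the URL and field names below are hypothetical):

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_form_request(url, form_fields):
    """Re-create the form POST the browser sent: same field names,
    same encoding, and the standard form content type."""
    body = urlencode(form_fields).encode("utf-8")
    return Request(url, data=body,
                   headers={"Content-Type": "application/x-www-form-urlencoded"})

req = build_form_request("http://somewebsite.example/search",
                         {"username": "alice", "password": "secret"})
print(req.get_method(), req.data.decode())
# urllib.request.urlopen(req) would actually send it (network call).
```

Passing data to Request automatically makes it a POST, matching what the Network tab showed.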
| 0 | 0 | 1 | 0 |
2016-07-11T02:47:00.000
| 2 | 1 | false | 38,298,459 | 0 | 0 | 1 | 1 |
I am working on a small project where I have to submit a form to a website.
The website is, however, using onclick event to submit the form (using javascript).
How can the onclick event be simulated in python?
Which modules can be used? I have heard about selenium and mechanize modules. But, which module can be used or in case of both, which one is better?
I am new to web scraping and automation.So,it would be very helpful.
Thanks in advance.
|
Sublime Text: How do you exit the multiple row layout
| 38,330,853 | 2 | 0 | 98 | 0 |
python,sublimetext2,sublimetext3,sublimetext,text-editor
|
In the menu bar: View > Layout > Single
Or from the keyboard (on Windows): Alt + Shift + 1
To find your default shortcuts, go to Preferences > Key Bindings - Default and search for "set_layout".
| 0 | 0 | 0 | 0 |
2016-07-12T13:47:00.000
| 2 | 0.197375 | false | 38,330,752 | 0 | 0 | 1 | 2 |
I was wondering how you exit the multiple row layout on Sublime Text. I switched to a 3 row layout when editing one of my Django projects, but how do I exit from it (remove the extra rows I have added).
Thanks,
Henry
|
Sublime Text: How do you exit the multiple row layout
| 38,330,833 | 2 | 0 | 98 | 0 |
python,sublimetext2,sublimetext3,sublimetext,text-editor
|
Use the View -> Layout menu. If you choose View -> Layout -> Single, the other rows will be removed. Shortcut keys depend on the OS.
| 0 | 0 | 0 | 0 |
2016-07-12T13:47:00.000
| 2 | 1.2 | true | 38,330,752 | 0 | 0 | 1 | 2 |
I was wondering how you exit the multiple row layout on Sublime Text. I switched to a 3 row layout when editing one of my Django projects, but how do I exit from it (remove the extra rows I have added).
Thanks,
Henry
|
Django Embed Admin Form
| 38,363,690 | 2 | 0 | 674 | 0 |
python,django,django-forms,django-admin
|
I fixed the issue by using an iframe to embed the page itself. I used the ?_popup=1 argument so that the navbar and other parts of the admin site wouldn't show up.
| 0 | 0 | 0 | 0 |
2016-07-13T01:36:00.000
| 1 | 1.2 | true | 38,341,325 | 0 | 0 | 1 | 1 |
I am trying to embed the exact form that appears in the Django admin when I edit a model in a different page on my website. My plan is to have an Edit button that, when clicked, displays a modal with the edit page inside of it.
The issue with using a ModelForm is that this particular model has two generic foreign keys. The admin handles this perfectly, providing the ability to add, edit, or remove these objects. If I could embed the admin page (with its HTML or perhaps my own), that would be all I need.
Thanks!
|
required field difference in python file and xml file
| 38,368,546 | 0 | 0 | 227 | 0 |
python,openerp
|
The difference is that when you set a field's required argument to True in the Python (.py) file, it creates a NOT NULL constraint directly on the database. This means that no matter what happens (provided data didn't already exist in the table), you can never insert a row into that table without that field containing a value. If you try to do so directly from psql or through Odoo's XML-RPC or JSON-RPC API, you'll get an SQL NOT NULL error, something like this:
ERROR: null value in column "xxx" violates not-null constraint
On the other hand, if you set a field to be required only on the view (XML), then no constraint is set on the database. This means the only restriction is the view: you can bypass it and write to the database directly, or if you're building an external web service you can use Odoo's ORM methods to write to the database directly.
If you really want to make sure a column is not null and is required, then it's better to set that in the Python code itself instead of the view.
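The database-level effect described in the answer can be demonstrated with any SQL backend; here is a sketch using SQLite (Odoo's PostgreSQL behaves the same way for NOT NULL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# required=True in the .py file translates into a NOT NULL column:
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

try:
    # Any client -- psql, XML-RPC, JSON-RPC -- hits the same constraint.
    conn.execute("INSERT INTO employee (name) VALUES (NULL)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)

# required="1" set only in the XML view adds no such constraint,
# so a direct INSERT of NULL into that column would succeed.
```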
| 0 | 0 | 1 | 0 |
2016-07-14T06:44:00.000
| 1 | 0 | false | 38,367,206 | 0 | 0 | 1 | 1 |
What is the difference between giving required field in python file and xml file in openerp?
In xml file :field name="employee_id" required="1"
In python file: 'employee_id' : fields.char('Employee Name',required=True),
|
Appengine Search API: Exclude one document field from search
| 38,381,484 | 0 | 0 | 34 | 0 |
python,google-app-engine,full-text-search
|
If you just say "no", you'll search all fields in the document. However, if you prefix your term with a field name, like "field2:no", you will only search the values of that field.
| 0 | 0 | 0 | 0 |
2016-07-14T11:48:00.000
| 1 | 0 | false | 38,373,407 | 0 | 0 | 1 | 1 |
In the search document I have two fields which have value as Yes or No.
field1 - have value as Yes or No
field2 - have value as Yes or No
From function foo() I want to search for documents which have the value "no", but the search should not look in field1.
How to achieve this?
|
Does AWS Lambda allows to upload binaries separately to avoid re-upload
| 38,442,640 | 0 | 0 | 345 | 0 |
python,amazon-web-services,aws-lambda,continuous-deployment
|
No, there is no way to accomplish this. Your Lambda function is always provisioned as a whole from the latest zipped package you provide (or S3 bucket/key if you choose that method).
| 0 | 0 | 0 | 1 |
2016-07-15T16:33:00.000
| 1 | 0 | false | 38,401,090 | 0 | 0 | 1 | 1 |
I am new to AWS Lambda, and I have a PhantomJS application to run there.
There is a Python script of 5 KB and the PhantomJS binary, which brings the whole uploadable zip to 32 MB.
And I have to upload this bundle all the time. Is there any way of pushing the PhantomJS binary to the AWS Lambda /bin folder separately?
|
xml + xslfo to PDF python
| 38,413,882 | 1 | 1 | 1,430 | 0 |
xml,python-2.7,xslt,pdf-generation,xsl-fo
|
XSL-FO requires a formatting engine to create print output like PDF from XSL-FO input. A freely available one is Apache FOP. There are several other commercial products as well. I know of no XSL-FO engines written in Python, though some have Python interfaces.
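A sketch of driving such an engine from Python by shelling out to Apache FOP's command line (this assumes the fop launcher is on your PATH; the file names are placeholders):

```python
import subprocess

def fop_command(xml_path, xsl_path, pdf_path):
    """Build the Apache FOP invocation that applies an XSL-FO
    stylesheet to an XML document and writes a PDF."""
    return ["fop", "-xml", xml_path, "-xsl", xsl_path, "-pdf", pdf_path]

cmd = fop_command("data.xml", "stylesheet.xsl", "out.pdf")
print(" ".join(cmd))
# subprocess.run(cmd, check=True) would perform the actual conversion.
```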
| 0 | 0 | 1 | 0 |
2016-07-16T14:45:00.000
| 2 | 0.099668 | false | 38,412,298 | 0 | 0 | 1 | 1 |
Is there a simple way to get a PDF from a xml with an xsl-fo?
I would like to do it in python.
I know how to produce HTML from an XML and XSL, but I haven't found a code example to get a PDF.
Thanks
|
Using a fresh initial migration to squash migrations
| 38,413,258 | 3 | 1 | 459 | 0 |
python,django,migration,squash
|
If you don't have any important data in your test or production databases, you can use a fresh initial migration and it will be an appropriate solution.
I've used this trick a lot of times and it works for me.
A few thoughts:
sometimes, you first need to create migrations for one of your local applications and then for all the others;
to be sure that all will be OK, you can commit your migrations and back up your DB before you run ./migrate with an empty DB.
NOTE: to speed up your tests you can try to run in-memory tests and/or run tests with SQLite if possible.
| 0 | 0 | 0 | 0 |
2016-07-16T16:19:00.000
| 1 | 1.2 | true | 38,413,099 | 0 | 0 | 1 | 1 |
I have an app with 35 migrations which take a while to run (for instance before tests), so I would like to squash them.
The squashmigrations command reduces the operations from 99 to 88, but it is still far from optimal. This is probably due to the fact that I have multiple RunPython operations preventing Django from optimizing other operations. All these RunPython operations are useless in the squashed migration because the database is empty. In Django 1.10 the elidable parameter will allow to skip them in this case, but still, a lot of clutter remains.
What I had in mind for the squashed migration was closer to the initial migrations Django generates, hence my question:
Is it advisable to use a fresh initial migration as a squashed version of a long list of migrations? How would you do that?
|
How do I control a python script through a web interface?
| 38,418,381 | 0 | 0 | 229 | 0 |
javascript,python,html,raspberry-pi2
|
I can suggest a way to handle that situation, but I'm not sure how well it will suit your scenario.
Since you are trying to use a WiFi network, I think it would be better if you use an SQL database to store the commands the vehicle needs to follow, issued sequentially from the web interface. Make the vehicle read the database to check whether there are new commands to be executed and, if there are, execute them sequentially.
That way you can divide the work into two parts and handle the project easily: handling user input via the web interface to control the vehicle, then making the vehicle read the requests and execute them.
Hope this helps you in some way. Cheers!
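A minimal sketch of that command-queue idea with SQLite (the table layout and command names are made up for illustration): the web interface inserts rows, and the vehicle polls for unprocessed rows and executes them in order.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE commands (id INTEGER PRIMARY KEY, cmd TEXT, done INTEGER DEFAULT 0)")

def enqueue(cmd):
    """Called by the web interface when the user presses a control."""
    conn.execute("INSERT INTO commands (cmd) VALUES (?)", (cmd,))
    conn.commit()

def poll():
    """Called in a loop on the vehicle: fetch pending commands in
    order and mark them as done."""
    rows = conn.execute(
        "SELECT id, cmd FROM commands WHERE done = 0 ORDER BY id").fetchall()
    for row_id, _ in rows:
        conn.execute("UPDATE commands SET done = 1 WHERE id = ?", (row_id,))
    conn.commit()
    return [cmd for _, cmd in rows]

enqueue("forward")
enqueue("left")
print(poll())  # ['forward', 'left']
print(poll())  # []
```

On the real setup the two sides would share one database over the network instead of an in-memory one.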
| 0 | 0 | 1 | 1 |
2016-07-17T05:18:00.000
| 1 | 0 | false | 38,418,140 | 0 | 0 | 1 | 1 |
For a college project I'm tasked with getting a Raspberry Pi to control an RC car over WiFi; the best way to do this would be through a web interface for the sake of accessibility (one of the key requirements for the module). However I keep hitting walls: I can make a Python script control the car, but doing this through a web interface has proven to be difficult, to say the least.
I'm using an Adafruit PWM Pi Hat to control the servo and ESC within the RC car, and it only has Python libraries as far as I'm aware, so it has to be within Python. If there is some method of passing variables from JavaScript to Python that may work, but in a live environment I don't know how reliable it would be.
Any help on the matter would prove most valuable, thanks in advance.
|
Web-scraping advice/suggestions
| 38,419,736 | 1 | 0 | 167 | 0 |
php,python,web-scraping
|
Ethics
Using a bot to get at the content of sites can be beneficial to you and the site you're scraping. You can use the data to refer to content of the site, like search engines do. Sometimes you might want to provide a service to user that the original website doesn't offer.
However, sometimes scraping is used for nefarious purposes. Stealing content, using the computer resources of others, or worse.
It is not clear what your intention is. Helping you might be unethical. I'm not saying it is, but it could be. I don't understand 'AucT' saying it is bad practice and then giving an answer. What is that all about?
Two notes:
Search results take more resources to generate than most other webpages. They are especially vulnerable to denial-of-service attacks.
I run several sites, and I have noticed that a large amount of traffic is caused by bots. It is literally costing me money. Some sites have more traffic from bots than from people. It is getting out of hand, and I had to invest quite a bit of time to get the problem under control. Bots that don't respect bandwidth limits are blocked by me, permanently. I do, of course, allow friendly bots.
| 0 | 0 | 1 | 0 |
2016-07-17T08:57:00.000
| 3 | 0.066568 | false | 38,419,528 | 0 | 0 | 1 | 1 |
This is my first attempt at scraping. There is a website with a search function that I would like to use.
When I do a search, the search details aren't shown in the website url. When I inspect the element and look at the Network tab, the request url stays the same (method:post), but when I looked at the bottom, in the Form Data section, I clicked view source and there were my search details in url form.
My question is:
If the request url = http://somewebsite.com/search
and the form data source = startDate=09.07.2016&endDate=10.07.2016
How can I connect the two to pull data for scraping? I'm new to scraping, so if I'm going about this wrong, please tell me.
Thanks!
|
Python/Flask : psutil date ranges
| 38,521,678 | 0 | 0 | 172 | 0 |
python,flask,psutil
|
For future reference, I found a way to do this, using Elasticsearch and psutil.
I indexed the psutil values into Elasticsearch, then used the date-range and date-histogram aggregations.
Thanks!
| 0 | 0 | 0 | 0 |
2016-07-18T05:38:00.000
| 1 | 0 | false | 38,429,271 | 1 | 0 | 1 | 1 |
I'm currently writing a web application using Flask in Python that reports Linux/Unix performance metrics (CPU, disk usage, memory usage). I have already integrated the Python library psutil.
My question is how I can get the values of each metric over date ranges, for example the last 3 hours of CPU, disk usage and memory usage.
Sorry for the question I'm a beginner in programming.
|
How to keep request.referer value from redirect after a failed login
| 38,453,860 | 4 | 3 | 611 | 0 |
python,session,login,form-submit,pyramid
|
I recommend passing a parameter like login/?next=pageA.html
If the login fails, you can then forward your next parameter to /login again, even though the referrer now points to /login.
Then, when the user successfully logs in, you can redirect them to pageA.html, which is held in your next parameter.
You will indeed need to check that your next parameter is a valid one, as someone could copy-paste or try to tamper with it.
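A sketch of that validity check with the standard library (framework-agnostic; in Pyramid the parameter would come from request.params): accept only same-site relative paths so the parameter cannot redirect users to an external site.

```python
from urllib.parse import urlparse

def safe_next(url, default="/"):
    """Return url only if it is a same-site relative path;
    otherwise fall back to a safe default."""
    if not url:
        return default
    parts = urlparse(url)
    # A scheme or netloc means an absolute or protocol-relative URL.
    if parts.scheme or parts.netloc or not url.startswith("/"):
        return default
    return url

print(safe_next("/pageA.html"))          # /pageA.html
print(safe_next("http://evil.example"))  # /
print(safe_next("//evil.example"))       # /
```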
| 0 | 0 | 0 | 0 |
2016-07-19T08:52:00.000
| 1 | 0.664037 | false | 38,453,730 | 0 | 0 | 1 | 1 |
I'm adding authentication to an existing Pyramid project. The simplest form that I'm currently trying (to be expanded later) is for all pages to raise HTTPForbidden. The exception view is /login, which will ask for login details and, on success, return HTTPFound with request.referer as the location.
So far so good, this does what I want, which is bringing users back to the page they were trying to access when the login page interrupted them. Let's call this Page A.
The login page is a simple HTML form with a submit button.
However if the user mistypes username or password, I want to return to the login page with an error message saying "Wrong password" or similar. When that happens, request.referer is now the login page instead of Page A.
How do I 'store' Page A (or rather its URL) so that, when the user eventually succeeds in logging in, they find themselves back on Page A? Is the session used for things like this, and are there non-session ways of implementing it? I don't (yet) have a session for this simple page, and am trying to avoid adding different components in one pass.
|
How can I incorporate Python scripts into my website? Do I need a framework?
| 38,468,623 | 4 | 2 | 184 | 0 |
javascript,python
|
If you are using only JavaScript and don't feel like a framework is the solution, you'd be better off rewriting your Python script in JavaScript. These two languages have a lot in common and most of the logic is transferable. Calling Python from JavaScript would most likely not work that well. Again, unless you share your Python script (which is encouraged on SO, because a text-only question does not quite fit here), all answers are opinion based.
| 0 | 0 | 0 | 0 |
2016-07-19T21:07:00.000
| 3 | 0.26052 | false | 38,468,540 | 0 | 0 | 1 | 2 |
I'm wondering if there's a good way for me to incorporate Python scripts into my current website.
I have a personal website & my own server that I have been working with for a while. So far it's just been html / css / javascript. I have made a Python script in the past that uses another website's API to retrieve something that I would like to display on my website. I've only used Python from the terminal to take input and spit out results. Is there a way for me to run a Python script from javascript through Ajax to get some content back?
I don't really want to use a framework like Django or Flask because I feel as though those are mostly for entire projects. I only want to use one Python script on one page for my website. Is this even something I should do? Any advice would be great.
|
How can I incorporate Python scripts into my website? Do I need a framework?
| 38,468,783 | 2 | 2 | 184 | 0 |
javascript,python
|
I completely agree with you about Django, but I think you can give Flask a chance; it is really light and can be used for many purposes. Anyway, if you want to call a Python script you need a way to call it. I think you need a "listener" for the script, for example a service or a web service (for this reason I think Flask can be a really easy solution).
Be careful about calling the script: a web service can be reachable from the frontend, but this cannot be done from a "standard" script.
My suggestion is to take a look at Flask; it is lighter than you think.
| 0 | 0 | 0 | 0 |
2016-07-19T21:07:00.000
| 3 | 0.132549 | false | 38,468,540 | 0 | 0 | 1 | 2 |
I'm wondering if there's a good way for me to incorporate Python scripts into my current website.
I have a personal website & my own server that I have been working with for a while. So far it's just been html / css / javascript. I have made a Python script in the past that uses another website's API to retrieve something that I would like to display on my website. I've only used Python from the terminal to take input and spit out results. Is there a way for me to run a Python script from javascript through Ajax to get some content back?
I don't really want to use a framework like Django or Flask because I feel as though those are mostly for entire projects. I only want to use one Python script on one page for my website. Is this even something I should do? Any advice would be great.
|
How is Flask-Login's request_loader related to user_loader?
| 65,905,459 | 1 | 11 | 7,541 | 0 |
python,flask,flask-login
|
I need to make this clear.
This is the reason why you should use request_loader with flask_login.
There will be a lot of @login_required from flask_login used in your api to guard the request access.
You need to make a request to pass the check of auth.
And there will be a lot of current_user imported from flask_login;
your app needs to use it to let the request act as the identity of the current_user.
There are two ways to achieve the above with flask_login.
Using user_loader makes the request to be OK for @login_required.
It is often used for UI logins from browser.
It will store session cookies to the browser and use them to auth later.
So you need to login only once and the session will keep for a time.
Using request_loader will also be OK with @login_required.
But it is often used with api_key or basic auth.
For example used by other apps to interact with your flask app.
There will be no session cookies,
so you need to provide the auth info every time you send request.
With both user_loader and request_loader,
now you got 2 ways of auth for the same api,
protected by @login_required,
and with current_user usable,
which is really smart.
| 0 | 0 | 0 | 0 |
2016-07-20T13:43:00.000
| 3 | 0.066568 | false | 38,483,026 | 0 | 0 | 1 | 1 |
I apologize in advance for asking a rather cryptic question. However, I did not understand it despite going through a lot of material. It would be great if you could shed some light on this.
What is the purpose of a request_loader in flask-login? How does it interact with the user_loader decorator?
If I am using a token based authentication system (I am planning on sending the token to my angularJS front end, storing the token there and sending that token in the authorization-token header), will I need a request_loader or will a user_loader (where I check the auth header and see if the user exists) suffice?
|
how can i restore the backup.py plugin data of errbot running in a docker container
| 38,557,886 | 1 | 2 | 75 | 0 |
python,docker,errbot
|
If you run Errbot in a container, I think the best approach is to run it with a real database for persistence (Redis, for example).
Then you can simply run backup.py from anywhere (including your dev machine).
Even better, you can just back up your Redis directly.
| 0 | 1 | 0 | 0 |
2016-07-20T19:12:00.000
| 1 | 1.2 | true | 38,488,977 | 0 | 0 | 1 | 1 |
I'm running Errbot in a Docker container. We did the !backup and we have the backup.py, but when I start the Docker container it just runs /app/venv/bin/run.sh,
so I cannot pass -r /srv/backup.py to have all my data restored.
any ideas?
all the data is safe since the /srv is a mounted volume
|
Pyspark command in terminal launches Jupyter notebook
| 38,510,399 | 11 | 2 | 3,753 | 0 |
python,pyspark,jupyter,jupyter-notebook
|
The PYSPARK_DRIVER_PYTHON variable is set to start ipython/jupyter automatically (probably as intended.) Run unset PYSPARK_DRIVER_PYTHON and then try pyspark again.
If you wish this to be the default, you'll probably need to modify your profile scripts.
| 0 | 1 | 0 | 0 |
2016-07-20T21:13:00.000
| 1 | 1.2 | true | 38,490,946 | 1 | 0 | 1 | 1 |
I have run into an issue with spark-submit: it throws an error that it is not a Jupyter command, i.e. pyspark launches a web UI instead of the pyspark shell.
Background info:
Installed Scala , Spark using brew on MAC
Installed Conda Python 3.5
Spark commands work on Jupyter Notebook
'pyspark' on terminal launches notebook instead of shell
Any help is much appreciated.
|
Getting Django for Python 3 Started for Mac django-admin not working
| 38,493,996 | 10 | 6 | 6,001 | 0 |
python,django,python-3.x,django-admin
|
Activate virtualenv and install Django there (with python -m pip install django). Try python -m django startproject mysite. You can use python -m django instead of django-admin since Django 1.9.
| 0 | 0 | 0 | 0 |
2016-07-21T00:44:00.000
| 3 | 1.2 | true | 38,493,057 | 0 | 0 | 1 | 1 |
I have been trying to set up Django for Python 3 for for 2 days now. I have installed python 3.5.2 on my Mac Mini. I have also have pip3 installed succesfully. I have installed Django using pip3 install Django. The problem is that when I try to start my project by typing django-admin startproject mysite, I get the error -bash: django-admin: command not found. If you need any more info, just let me know, I am also new to Mac so I may be missing something simple. How do I get django-admin working? I have tried pretty much everything I could find on the web.
|
How Do I Authenticate OneNote Without Opening Browser?
| 38,515,902 | 0 | 2 | 505 | 0 |
python,authentication,terminal,onenote,onenote-api
|
If this is always with the same account, you can make the "browser opening and password typing" a one-time setup process. Once you've authenticated, you have the "access token" and the "refresh token". You can keep using the access token for ~1 hr. Once it expires, you can use the "refresh token" to exchange it for a new "access token" without any user interaction. You should always keep the refresh token so you can get new access tokens later.
This is how "background" apps like "IFTTT" keep access to your account for a longer period of time.
Answer to your updated question:
The initial setup has to be through UI in a browser. If you want to automate this, you'll have to write some UI automation.
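A sketch of the token-handling side (refresh_fn stands in for the real HTTP call to the token endpoint, which depends on your app registration): keep the refresh token, reuse the short-lived access token until it is about to expire, and only then refresh, all without a browser.

```python
import time

class TokenCache:
    """Cache the short-lived access token and call the refresh
    function again only shortly before it expires."""
    def __init__(self, refresh_fn, margin=60):
        self.refresh_fn = refresh_fn  # returns (access_token, lifetime_seconds)
        self.margin = margin
        self.token = None
        self.expires_at = 0.0

    def get(self):
        if self.token is None or time.time() >= self.expires_at - self.margin:
            self.token, lifetime = self.refresh_fn()
            self.expires_at = time.time() + lifetime
        return self.token

# Example with a stand-in refresh function:
cache = TokenCache(lambda: ("fresh-access-token", 3600))
print(cache.get())  # fresh-access-token
```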
| 0 | 0 | 1 | 0 |
2016-07-21T23:01:00.000
| 2 | 0 | false | 38,515,700 | 0 | 0 | 1 | 1 |
I want to create a python script that will allow me to upload files to OneNote via command line. I have it working perfectly and it authenticates fine. However, everytime it goes to authenticate, it has to open a browser window. (This is because authentication tokens only last an hour with OneNote, and it has to use a refresh token to get a new one.) While I don't have to interact with the browser window at all, the fact that it needs to open one is problematic because the program has to run exclusively in a terminal environment. (E.g. the OneNote authentication code tries to open a browser, but it can't because there isn't a browser to open).
How can I get around this problem? Please assume it's not possible to change the environment setup.
UPDATE:
You have to get a code in order to generate an access token. This is the part that launches the browser. It is only required the first time though, for that initial token. Afterwards, refresh token requests don't need the code. (I was calling it for both, which was the issue).
That solves the problem of the browser opening each time I run my program. However, it still leaves the problem of the browser having to open that initial time. I can't do that in a terminal environment. Is there a way around that?
E.g. Can I save the code and call it later to get the access token (how long until it expires)? Will the code work for any user, or will it only work for me?
|
Class Based View to get user authentication in Django
| 38,517,140 | 0 | 0 | 523 | 0 |
python,django,class,authentication,request
|
I don't know what the context of your class-based view is... but you can use the LoginRequiredMixin to require login before your view runs:
class ServerDeleteView(LoginRequiredMixin, DeleteView):
    model = Server
    success_url = reverse_lazy('ui:dashboard')
| 0 | 0 | 0 | 0 |
2016-07-22T01:59:00.000
| 2 | 0 | false | 38,517,032 | 0 | 0 | 1 | 1 |
Ok, have a class based view that passes a query_set into my AssignedToMe class. The point of this class based view is to see if a user is logged in and if they are, they can go to a page and it will display all of records that are assigned to their ID. Currently, it is working how I want it to but only if a user is logged in. If a user is not logged in, I get the following error 'AnonymousUser' object is not iterable.
I want it to redirect the user to the login page if there is no user logged in. Thank you in advance. Please look at the screenshot
|
OpenERP - Odoo - How to have the percentage of quote that become a sale orders
| 38,689,879 | 0 | 0 | 118 | 0 |
python,openerp,openerp-7
|
Yes, it is possible by using the status bar.
In order for you to compute the percentage of sales orders, you should determine the quota for each sales order.
| 0 | 0 | 0 | 0 |
2016-07-22T08:09:00.000
| 1 | 0 | false | 38,521,380 | 0 | 0 | 1 | 1 |
Is it possible to easily get the percentage of sales orders vs. quotes per user?
The objective is to know the percentage of quotes that become a sales order, per user.
I have not a clue how I can do it.
I am using OpenERP 7
|
AWS IOT with DynamoDB logging service issue
| 38,537,832 | 0 | 1 | 66 | 0 |
python,amazon-web-services,amazon-dynamodb,iot
|
No, it does not. I have done a similar setup and it is working fine. Are you sure that your IoT device does not go into some kind of sleep mode after a while?
| 0 | 1 | 0 | 1 |
2016-07-22T13:21:00.000
| 1 | 0 | false | 38,527,573 | 0 | 0 | 1 | 1 |
We have implemented a simple DynamoDB database that is updated by a remote IoT device that does not have a user (i.e. root) constantly logged in to the device. We have experienced issues in logging data, as the database is not updated if a user (i.e. root) is not logged into the device (we log in via an ssh session). We are confident that the process is running in the background, as we are using a Linux service that runs on bootup to execute a script. We have verified that the script runs on bootup and successfully pushes data to DynamoDB upon user login (via ssh). We have also tried to disassociate a screen session to allow the device to publish data to DynamoDB, but this did not seem to fix the issue. Has anyone else experienced this issue? Does Amazon AWS require a user (i.e. root) to be logged in to the device at all times in order for data to be published to AWS?
|
Anyone know how to set group volume in soco (python)?
| 38,955,622 | 1 | 1 | 442 | 0 |
python,sonos
|
You can easily iterate over the group and change all the volumes; for example, to increase the volume on all speakers by 5:
for each_speaker in my_zone.group:
    each_speaker.volume += 5
(assuming my_zone is your speaker object)
| 0 | 0 | 0 | 0 |
2016-07-23T10:04:00.000
| 1 | 1.2 | true | 38,540,517 | 0 | 0 | 1 | 1 |
I am trying to set group volume in soco (python) for my Sonos speakers. It is straightforward to set individual speaker volume but I have not found any way to set volume on group level (without iterating through each speaker setting the volume individually). Any idea to do this?
|
Is there a way to block django views from serving multiple requests concurrently?
| 38,556,568 | 2 | 1 | 179 | 0 |
python,django
|
You need some kind of mutex. Since your operations involve the filesystem already, perhaps you could use a file as a mutex. For instance, at the start of the operation, check if a specific file exists in a specific place; if it does, return an error, but if not, create it and proceed, deleting it at the end of the operation (making sure to also delete it in the case of any error).
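A sketch of such a file-based mutex with only the standard library (the lock path is arbitrary): os.O_CREAT | os.O_EXCL makes the creation atomic, so two concurrent requests cannot both acquire the lock.

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def file_mutex(path):
    """Atomically create a lock file; fail fast if it already exists,
    and always remove it afterwards, even on errors."""
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        raise RuntimeError("operation already in progress")
    try:
        yield
    finally:
        os.close(fd)
        os.remove(path)

# In the Django view: wrap the filesystem-import logic in the mutex
# and return an error response when RuntimeError is raised.
lock_path = os.path.join(tempfile.gettempdir(), "demo-import.lock")
with file_mutex(lock_path):
    pass  # ... parse files from the filesystem and write to the DB ...
```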
| 0 | 0 | 0 | 0 |
2016-07-24T20:26:00.000
| 2 | 0.197375 | false | 38,556,461 | 0 | 0 | 1 | 1 |
I have a django app, where I am using one of the views to fetch data from local filesystem and parse it and add it to my database. Now the thing is, I want to restrict this view from serving multiple requests concurrently, I want them to be served sequentially instead. Or just block the new request when one request is already being served. Is there a way to achieve it?
|
flask serving byte-order-mark 
| 38,558,411 | 2 | 0 | 142 | 0 |
python-3.x,utf-8,flask
|
The (BOM) string is most likely included in your template file. Open/save it in an editor which doesn't write a BOM into UTF-8 files, for example Notepad++.
| 0 | 0 | 0 | 0 |
2016-07-25T00:59:00.000
| 1 | 0.379949 | false | 38,558,368 | 0 | 0 | 1 | 1 |
I am trying to use Flask and for some reason it is rendering with a byte-order mark that's a quirk of something using UTF8 (the mark is  in particular for people googling the same issue).
I do not know how to get rid of it or if it is a source of some of my problems. I am using Flask on Windows 10.
I wish I knew how to reproduce this issue.
|
Django library Unresolved Import LiClipse
| 38,581,034 | 0 | 0 | 66 | 0 |
python,django,liclipse
|
Instead of adding the django package as an external library, add the folder containing django. For example, if the folder hierarchy is something like /site-package/django, then add site-package as the external library, not django.
| 0 | 0 | 0 | 0 |
2016-07-25T06:42:00.000
| 1 | 0 | false | 38,561,207 | 0 | 0 | 1 | 1 |
I am creating my first Django project from docs.djangoproject.com. After completing tutorial 4, I tried to import my project in LiClipse. But LiClipse is showing error of Unresolved Import however my projects works perfectly fine.
I have added django in external library.
Please help me with this issue.
LiClipse shows error only with django libraries and not with any other python library
|
war/ear file deployment using jython/python scrpting from remote location in webspehere
| 38,612,101 | 0 | 0 | 1,027 | 0 |
java,python,maven,deployment,ant
|
Everyone suggests setting the classpath to wasanttask.jar or com.ibm.websphere.v61_6.1.100.ws_runtime.jar and getting the details that way,
but no jar with either of those names is available in WAS 8.5.
| 0 | 0 | 0 | 0 |
2016-07-25T06:59:00.000
| 2 | 0 | false | 38,561,479 | 0 | 0 | 1 | 1 |
I'm new to jython and python scripts.
My new requirement is to deploy a WAR file from a Windows client to a Windows server, using scripts.
I have done this using Ant and completed it in a local environment. For remote deployment I have done some R&D but didn't find a solution.
That's why I moved to Jython scripting, and local deployment works there too.
But remote deployment is not working.
Can you please share any ideas on how to deploy the WAR file from my environment to a remote location?
|
Advice on structuring a growing Django project (Models & API)
| 39,084,030 | 0 | 0 | 429 | 0 |
python,django,django-rest-framework
|
I'm usually a huge proponent for DRF. It's simple to implement an easy use case, yet INCREDIBLY powerful for more complex uses.
However, if you are not using Django models for all your data, I think JsonResponse might be easier. Running queries and manual manipulation (especially if it is only a single endpoint) might be the way to go.
Sorry for not weighing in on the other part of the question.
| 0 | 0 | 0 | 0 |
2016-07-25T14:31:00.000
| 3 | 0 | false | 38,570,535 | 0 | 0 | 1 | 2 |
I’m currently working on improving a Django project that is used internally at my company. The project is growing quickly so I’m trying to make some design choices now before it’s unmanageable to refactor. Right now the project has a two really important models the rest of the data in the database that supports the each application in the project is added into the database through various separate ETL processes. Because of this the majority of data used in the application is queried in each view via SQLAlchemy using a straight up multiline string and passing the data through to the view via the context param when rendering rather than using the Django ORM.
Would there be a distinct advantage in building models for all the tables that are populated via ETL processes so I can start using the Django ORM vs using SQLAlchemy and query strings?
I think it also makes sense to start building an API rather than passing a gigantic amount of information through to a single view via the context param, but I’m unsure of how to structure the API. I’ve read that some people create an entirely separate app named API and make all the views in it return a JsonResponse. I’ve also read others do this same view based API but they simply include an api.py file in each application in their Django project. Others use the Django REST API Framework, which seems simple but is slightly more complicated than just returning JsonResponse via a view. There is really only one place where a users interaction does anything but GET data from the database and that portion of the project uses Django REST Framework to perform CRUD operations. That being said:
Which of these API structures is the most typical, and what do I gain/lose by implementing JsonResponse views as an API vs using the Django REST Framework?
Thank you in advance for any resources or advice anyone has regarding these questions. Please let me know if I can add any additional context.
|
Advice on structuring a growing Django project (Models & API)
| 38,571,117 | 2 | 0 | 429 | 0 |
python,django,django-rest-framework
|
Would there be a distinct advantage in building models for all the
tables that are populated via ETL processes so I can start using the
Django ORM vs using SQLAlchemy and query strings?
Yes: a centralized, consistent way of accessing the data, and of course one less dependency in the project.
Which of these API structures is the most typical, and what do I
gain/lose by implementing JsonResponse views as an API vs using the
Django REST Framework?
In general terms, JSON is used for data, and REST for APIs. You mentioned that Django-REST is already in use, so if there's any tangible benefit from having a REST API, I'd go with it
| 0 | 0 | 0 | 0 |
2016-07-25T14:31:00.000
| 3 | 0.132549 | false | 38,570,535 | 0 | 0 | 1 | 2 |
I’m currently working on improving a Django project that is used internally at my company. The project is growing quickly so I’m trying to make some design choices now before it’s unmanageable to refactor. Right now the project has two really important models; the rest of the data in the database that supports each application in the project is added through various separate ETL processes. Because of this, the majority of data used in the application is queried in each view via SQLAlchemy using a straight-up multiline string, and the data is passed through to the view via the context param when rendering, rather than using the Django ORM.
Would there be a distinct advantage in building models for all the tables that are populated via ETL processes so I can start using the Django ORM vs using SQLAlchemy and query strings?
I think it also makes sense to start building an API rather than passing a gigantic amount of information through to a single view via the context param, but I’m unsure of how to structure the API. I’ve read that some people create an entirely separate app named API and make all the views in it return a JsonResponse. I’ve also read others do this same view based API but they simply include an api.py file in each application in their Django project. Others use the Django REST API Framework, which seems simple but is slightly more complicated than just returning JsonResponse via a view. There is really only one place where a users interaction does anything but GET data from the database and that portion of the project uses Django REST Framework to perform CRUD operations. That being said:
Which of these API structures is the most typical, and what do I gain/lose by implementing JsonResponse views as an API vs using the Django REST Framework?
Thank you in advance for any resources or advice anyone has regarding these questions. Please let me know if I can add any additional context.
|
How to import a class from a python file in django?
| 38,576,546 | 3 | 4 | 9,514 | 0 |
django,python-3.x
|
You can simply put the mypythonfile.py in the same directory of your views.py file. And from mypythonfile import mystuff in your views.py
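As a minimal sketch of why this works (the class name MyClass and its greet method are invented placeholders, not from the question), a same-directory import just loads the neighbouring file as a module; the snippet below simulates that by writing a throwaway mypythonfile.py and loading it explicitly:

```python
import importlib.util
import os
import tempfile

# Simulate a helper module sitting next to views.py by writing one to disk.
tmpdir = tempfile.mkdtemp()
module_path = os.path.join(tmpdir, "mypythonfile.py")
with open(module_path, "w") as f:
    f.write("class MyClass:\n    def greet(self):\n        return 'hello'\n")

# Load the module from its file path, which is effectively what a plain
# "from mypythonfile import MyClass" does for a file in the same directory.
spec = importlib.util.spec_from_file_location("mypythonfile", module_path)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)

print(mod.MyClass().greet())  # hello
```

In the actual project you would skip the explicit loading and just write the import, since Django runs with the app directory on the path.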
| 0 | 0 | 0 | 0 |
2016-07-25T19:57:00.000
| 2 | 1.2 | true | 38,576,350 | 1 | 0 | 1 | 1 |
I'm currently using Django and putting Python code in my views.py to run it on my web pages. There is an excerpt of code that requires a certain class from a Python file, sort of like a package, but it is just a Python file I was given in order to execute a certain piece of code. How would I be able to reference the class from my Python file in the Django views.py file? I have tried putting the Python file in my site-packages folder in my Anaconda3 folder, and have tried just using from [name of python file] import [class name] in the views.py file, but it does not seem to recognize that the file exists in the site-packages folder. I also tried putting the Python file in the Django personal folder and using from personal import [name of file], but that doesn't work either.
|
redirect in loadhook in web.py
| 38,581,743 | 0 | 0 | 96 | 0 |
python,http,web.py
|
I think the redirection is stuck in a loop. Maybe your var isn't being saved in the session after calling web.seeother.
| 0 | 0 | 0 | 0 |
2016-07-26T04:00:00.000
| 1 | 0 | false | 38,580,816 | 0 | 0 | 1 | 1 |
I'm making a web.py app. I'm using an unloadhook function to check, for each call, whether a certain var is in the session.
I need to redirect (to index) if it's not there. However, Firefox gives me the message that the redirect will never terminate when I call web.seeother in the unloadhook function. I can correctly detect both cases in the unloadhook and handle the case with the var in the session, but not the second.
def xCheck():
    if 'x' in session:
        print >> sys.stderr, "x in"
        print >> sys.stderr, str(dict(session))
        return
    else:
        print >> sys.stderr, "x out"
        return web.seeother('/')

app.add_processor(web.unloadhook(sessionCheck))
|
Storing entries in a very large database
| 38,587,539 | 1 | 1 | 66 | 1 |
python,django,database,postgresql,saas
|
Store one at a time until you absolutely cannot anymore, then design something else around your specific problem.
SQL is a declarative language, meaning "give me all records matching X" doesn't tell the db server how to do this. Consequently, you have a lot of ways to help the db server do this quickly even when you have hundreds of millions of records. Additionally RDBMSs are optimized for this problem over a lot of years of experience so to a certain point, you will not beat a system like PostgreSQL.
So as they say, premature optimization is the root of all evil.
So let's look at two ways PostgreSQL might go through a table to give you the results.
The first is a sequential scan, where it iterates over a series of pages, scans each page for the values and returns the records to you. This works better than any other method for very small tables. It is slow on large tables. Complexity is O(n) where n is the size of the table, for any number of records.
So a second approach might be an index scan. Here PostgreSQL traverses a series of pages in a b-tree index to find the records. Complexity is O(log(n)) to find each record.
Internally PostgreSQL stores the rows in batches with fixed sizes, as pages. It already solves this problem for you. If you try to do the same, then you have batches of records inside batches of records, which is usually a recipe for bad things.
| 0 | 0 | 0 | 0 |
2016-07-26T09:13:00.000
| 1 | 1.2 | true | 38,585,719 | 0 | 0 | 1 | 1 |
I am writing a Django application that will have entries entered by users of the site. Now suppose that everything goes well, and I get the expected number of visitors (unlikely, but I'm planning for the future). This would result in hundreds of millions of entries in a single PostgreSQL database.
As iterating through such a large number of entries and checking their values is not a good idea, I am considering ways of grouping entries together.
Is grouping entries in to sets of (let's say) 100 a better idea for storing this many entries? Or is there a better way that I could optimize this?
|
Celery: How to get the task completed time from AsyncResult
| 38,587,766 | 1 | 1 | 696 | 0 |
python,celery
|
You can get it from the _cache attribute of the AsyncResult after you have accessed res.result.
For example:
res._cache['date_done']
| 0 | 1 | 0 | 0 |
2016-07-26T09:59:00.000
| 1 | 1.2 | true | 38,586,767 | 0 | 0 | 1 | 1 |
I need to trace the status of tasks. I can get the 'state' and 'info' attributes from the AsyncResult object; however, it looks like there's no way to get the 'date_done'. I use MySQL as the result backend, so I can find the date_done column in the taskmeta table, but how can I get the task's done date directly from the AsyncResult object? Thanks.
|
Amazon SWF to schedule task
| 38,601,627 | 1 | 0 | 218 | 0 |
python,amazon-web-services,boto,amazon-swf
|
There is no delay option when scheduling an activity. The solution is to schedule a timer with a delay based on the activity execution count and, when the timer fires, schedule an activity execution.
| 0 | 0 | 1 | 0 |
2016-07-26T10:57:00.000
| 1 | 0.197375 | false | 38,588,000 | 0 | 0 | 1 | 1 |
I am using python boto library to implement SWF.
We are simulating a workflow where we want to execute same task 10 times in a workflow. After the 10th time, the workflow will be marked complete.
The problem is, we want to specify an interval for execution which varies based on the execution count. For example: 5 minutes for 1st execution, 10 minutes for 2nd execution, and so on.
How do I schedule a task by specifying time to execute?
|
Export from python 3.5 to csv
| 38,590,656 | 0 | 0 | 104 | 0 |
python,python-3.x,csv,beautifulsoup,export
|
Here are a few hints:
When you have a string that you want to split on a given character, use str.split to get a list, from which you can get the first value using lst[0].
Then, take a look at the csv module to do your export.
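A minimal sketch combining both hints (the titles come from the question; the href values here are invented placeholders):

```python
import csv
import io

# Scraped values, as in the question: parallel lists, one entry per project.
project_titles = [
    "Formula Pi - Self-driving robot racing with the Raspberry Pi",
    "The Superbook: Turn your smartphone into a laptop for $99",
]
project_href = ["http://example.com/a", "http://example.com/b"]  # placeholders

buf = io.StringIO()  # stands in for open("projects.csv", "w", newline="")
writer = csv.writer(buf)
writer.writerow(["project_titles", "project_href"])  # headline row (A1, B1)
for row in zip(project_titles, project_href):        # one project per row
    writer.writerow(row)

print(buf.getvalue())
```

Extending this to all six variables is just a matter of adding them to the header row and to the zip call.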
| 0 | 0 | 0 | 0 |
2016-07-26T12:50:00.000
| 1 | 0 | false | 38,590,391 | 0 | 0 | 1 | 1 |
I scraped six different values with Python 3.5 using beautifulSoup. Now I have the following six variables with values:
project_titles
project_href
project_desc
project_per
project_mon
project_loc
The data for e.g. "project_titles" looks loke this:
['Formula Pi - Self-driving robot racing with the Raspberry Pi', 'The Superbook: Turn your smartphone into a laptop for $99'] --> separated by a comma.
Now I want to export this data to a csv.
The Headlines should be in A1 (project_titles), B1 (project_href) and so on.
And in A2 I need the first value of "project_titles". In B2 the first value of "project_href".
I think I need a loop for this, but I didn't get it. Please help me...
|
Reading From Google Sheets Periodically
| 38,600,670 | 1 | 0 | 164 | 0 |
python,google-spreadsheet-api
|
If you want to do this by manipulating only your Python program, you would have to keep it running all day. This would waste CPU resources.
It's best to use cron to have your Unix system run a command for you every 2 hours. In this case, it'd be your Python program.
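If cron isn't available, a rough pure-Python alternative is the standard library's sched module; the sketch below shortens the interval so it finishes instantly, and read_sheet is only a placeholder for the actual Sheets API call:

```python
import sched
import time

calls = []

def read_sheet():
    # Placeholder for the Google Sheets API read from the question.
    calls.append(time.time())

INTERVAL = 0.01  # would be 2 * 60 * 60 seconds for "every 2 hours"
scheduler = sched.scheduler(time.time, time.sleep)

def run_periodically(runs_left):
    read_sheet()
    if runs_left > 1:
        # Re-schedule ourselves; this is the job cron would otherwise do.
        scheduler.enter(INTERVAL, 1, run_periodically, (runs_left - 1,))

scheduler.enter(0, 1, run_periodically, (3,))  # capped at 3 runs for the demo
scheduler.run()
print(len(calls))  # 3
```

Keep in mind this still means the process runs all day, which is exactly the CPU-waste trade-off mentioned above; cron avoids that.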
| 0 | 0 | 0 | 0 |
2016-07-26T17:56:00.000
| 1 | 1.2 | true | 38,596,793 | 0 | 0 | 1 | 1 |
I'm trying to read from a Google Sheet, say, every 2 hours. I have looked at both the API for Google Sheets and Google Apps Script.
I'm using Python/Flask, and what I'm specifically confused about is how to add the time trigger. I can use the Google Sheets API to read from the actual file, but I'm unsure how to run this process every x hours. From my understanding, it seems like Google Apps Script is for adding triggers to Docs, Sheets, etc., which is not really what I want to do.
I'm pretty sure I'm looking in the wrong area for this x-hour read. Should I be looking into using the sched module or Advanced Python Scheduler? Any advice on how to proceed would be much appreciated.
|
How can I run Deep Dream on Android
| 39,757,493 | 0 | 0 | 108 | 0 |
java,android,python
|
Short answer is no.
Google Deep Dream is an IPython notebook with dependencies on Caffe, which itself has several dependencies.
There is, however, no reason why someone couldn't develop a similar tool for Android. There is an app called Dreamscope for producing these kinds of images that is available for Android, but I would presume they do all of their image computation in the cloud.
| 0 | 0 | 0 | 0 |
2016-07-26T22:38:00.000
| 1 | 0 | false | 38,600,982 | 0 | 0 | 1 | 1 |
How can I run Google Deep Dream on Android? Can I execute the Python script or do I need to port it to Java for performance reasons?
|
Suggestions to make website fast by breaking a request in two parts
| 38,612,965 | 2 | 0 | 51 | 0 |
python,performance,model-view-controller,pyramid
|
Why don't you use an Ajax function: post the data to the server, and when processing on the server is done, display the result on the HTML page.
| 0 | 0 | 0 | 0 |
2016-07-27T12:21:00.000
| 1 | 0.379949 | false | 38,612,836 | 0 | 0 | 1 | 1 |
I am trying to speed up my website. At the moment, the controller fetches data from the database, does calculations on the data, and displays it in the view.
What I plan to do is have the controller/action fetch half the data and display it in the view, then come back to a different controller/action to do the calculations on the data and display it on screen.
But what I want to know is: once I fetch data and display it on screen, how do I go back to the controller automatically (without any click by the user) to do calculations on the same data?
|
How can I upload a picture to my django app on heroku and get it to be displayed?
| 38,627,826 | 0 | 0 | 58 | 0 |
python,django,database,heroku,web-applications
|
I've been running my Django app on Heroku for about 6 months now, and I've never experienced the db getting reset whenever I update/deploy/push to Heroku.
Note: I'm using Heroku Postgres for the db.
| 0 | 0 | 0 | 0 |
2016-07-27T17:03:00.000
| 1 | 0 | false | 38,619,166 | 0 | 0 | 1 | 1 |
I have a django application on heroku where some data is added in the admin settings. It is linked to my github. One of the things you add is a picture. It doesn't show up on the site after its uploaded. What could be the cause and solution?
|
How are Django channels different than celery?
| 42,395,545 | 4 | 32 | 11,029 | 0 |
python,django,celery,channels
|
Django Channels gives Django the ability to handle more than just plain HTTP requests, including WebSockets and HTTP/2. Think of it as two-way duplex communication that happens asynchronously.
No browser refreshing. Multiple clients can send and receive data via WebSockets, and Django Channels orchestrates this intercommunication, for example a group chat with multiple clients connected at the same time. You can achieve background processing of long-running code similar to Celery to a certain extent, but the applications of Channels and Celery are different.
Celery is an asynchronous task queue/job queue based on distributed message passing, with scheduling as well. In layman's terms: I want to fire and run a task in the background, or I want a periodic task that fires and runs in the background on a set interval. You can also fire a task synchronously: fire, wait until it completes, and continue.
So the key difference is in the use cases they serve and the objectives of the frameworks.
| 0 | 0 | 0 | 0 |
2016-07-27T18:50:00.000
| 6 | 0.132549 | false | 38,620,969 | 0 | 0 | 1 | 2 |
Recently I came to know about Django channels.
Can somebody tell me the difference between channels and celery, as well as where to use celery and channels.
|
How are Django channels different than celery?
| 50,080,329 | 3 | 32 | 11,029 | 0 |
python,django,celery,channels
|
Other answers greatly explained the difference, but in fact Channels and Celery can both do asynchronous pooled tasks.
Channels and Celery both use a backend for messages and worker daemon(s), so the same kind of thing could be implemented with both.
But keep in mind that Celery is primarily made for, and can handle, most issues of task pooling (retries, result backends, etc.), whereas Channels is absolutely not made for that.
| 0 | 0 | 0 | 0 |
2016-07-27T18:50:00.000
| 6 | 0.099668 | false | 38,620,969 | 0 | 0 | 1 | 2 |
Recently I came to know about Django channels.
Can somebody tell me the difference between channels and celery, as well as where to use celery and channels.
|
Maven build with Java: How to execute script located in resources?
| 38,622,730 | 1 | 1 | 1,406 | 0 |
java,python,shell,maven
|
The Maven path for all the artifacts is not the same as what gets generated when you run or export the project. You can check this by exporting the project as a Jar/War/Ear file and viewing it via WinRAR or any other tool.
The resources should be in the jar parallel to the com directory if it's a jar project, but you can double-check it.
| 0 | 1 | 0 | 0 |
2016-07-27T20:22:00.000
| 1 | 0.197375 | false | 38,622,523 | 0 | 0 | 1 | 1 |
I am building my Java project with Maven and I have a script file that ends up in the target/classes/resources folder. While I can access the file itself via this.getClass.getResource("/lookUpScript.py").getPath(), I cannot execute a shell command with "." + this.getClass.getResource("/lookUpScript.py").getPath(); this ultimately ends up being ./lookUpScript.py. To execute the shell command I am using a method that is part of my company's code that I can get to work fine with any command not involving a file. Is there a standard way of accessing files located in the resources area of a Maven build that may fix this?
|
sending dynamic html email containing javascript via a python script
| 38,650,801 | 2 | 0 | 516 | 0 |
javascript,python,email,bokeh,smtplib
|
Sorry, but you'll not be able to send an email with JavaScript embedded. That is a security risk. If you're lucky, the email provider will strip it before rendering; if you're unlucky, you'll be sent directly to spam and the provider will distrust your domain.
You're better off sending an email with a link to the chart.
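A hedged sketch of that alternative: build an HTML email that links to the hosted chart instead of embedding JavaScript (the addresses and URL below are placeholders):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Your Bokeh report"
msg["From"] = "reports@example.com"   # placeholder
msg["To"] = "recipient@example.com"   # placeholder

# Plain-text fallback plus an HTML part with a link to the hosted plot.
msg.set_content("View the interactive report: http://example.com/report.html")
msg.add_alternative(
    '<p>View the <a href="http://example.com/report.html">'
    "interactive report</a>.</p>",
    subtype="html",
)

# Sending would then be e.g.: smtplib.SMTP("localhost").send_message(msg)
print(msg.get_content_type())  # multipart/alternative
```

The HTML page Bokeh generates can be hosted anywhere a browser can reach; the email only carries the link.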
| 0 | 0 | 1 | 1 |
2016-07-29T04:43:00.000
| 1 | 0.379949 | false | 38,650,665 | 0 | 0 | 1 | 1 |
In the first place, I could not come up with the correct search terms for this.
Secondly, I couldn't quite make it work with the standard smtplib or email packages in Python.
The question is: I have a normal HTML page (basically it contains a script that is generated from the bokeh package in Python, and all it does is generate an HTML page whose JavaScript renders a nice zoomable plot when viewed in a browser).
My aim is to send that report (the HTML, basically) to recipients in a mail.
|