Dataset columns (dtype and observed value/length range):

| Column | dtype | Range |
| --- | --- | --- |
| Title | stringlengths | 11 – 150 |
| A_Id | int64 | 518 – 72.5M |
| Users Score | int64 | -42 – 283 |
| Q_Score | int64 | 0 – 1.39k |
| ViewCount | int64 | 17 – 1.71M |
| Database and SQL | int64 | 0 – 1 |
| Tags | stringlengths | 6 – 105 |
| Answer | stringlengths | 14 – 4.78k |
| GUI and Desktop Applications | int64 | 0 – 1 |
| System Administration and DevOps | int64 | 0 – 1 |
| Networking and APIs | int64 | 0 – 1 |
| Other | int64 | 0 – 1 |
| CreationDate | stringlengths | 23 – 23 |
| AnswerCount | int64 | 1 – 55 |
| Score | float64 | -1 – 1.2 |
| is_accepted | bool | 2 classes |
| Q_Id | int64 | 469 – 42.4M |
| Python Basics and Environment | int64 | 0 – 1 |
| Data Science and Machine Learning | int64 | 0 – 1 |
| Web Development | int64 | 1 – 1 |
| Available Count | int64 | 1 – 15 |
| Question | stringlengths | 17 – 21k |
Using PIP in an Azure WebApp
| 41,843,617 | 2 | 9 | 7,236 | 0 |
python,django,azure,pip,azure-web-app-service
|
You won't be able to upgrade the system pip of your Django web app, because you will not have access to system files.
Instead, you can upgrade the pip of your virtualenv, which you can do by adding a line to the deploy.cmd file before the command that installs requirements.txt:
env\scripts\python -m pip install --upgrade pip
Remember not to upgrade pip with pip itself (env\scripts\pip), or it will uninstall the global pip.
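For context, a minimal sketch of how that section of a Kudu-generated deploy.cmd might look (the surrounding labels vary between generated files, so treat the layout as an assumption):
:: upgrade the virtualenv's pip first, then install the dependencies
env\scripts\python -m pip install --upgrade pip
env\scripts\pip install -r requirements.txt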
| 0 | 0 | 0 | 0 |
2016-01-09T10:31:00.000
| 5 | 0.07983 | false | 34,692,370 | 1 | 0 | 1 | 2 |
I'm pretty new to Azure and I'm trying to get a Django web app up and running. I uploaded the files using FTP, but Azure doesn't run my requirements.txt.
So I searched for a bit and found out that you can install the requirements.txt with pip.
Back in Azure, pip doesn't seem to work: neither in the console, the Kudu CMD, nor the Kudu PowerShell. Python does work.
When I try to install pip via Python, it first says that an older version is already installed; when Python then tries to upgrade pip, it doesn't have access to the folder it needs to edit.
I was wondering how I could use pip in Azure.
(If you know a separate way to install the requirements.txt, please tell me, because that is how I originally got to this point.)
|
Using PIP in an Azure WebApp
| 38,240,151 | 2 | 9 | 7,236 | 0 |
python,django,azure,pip,azure-web-app-service
|
Have you tried upgrading pip with easy_install? The following worked for me in the Azure Kudu console:
python -m easy_install --upgrade --user pip
| 0 | 0 | 0 | 0 |
2016-01-09T10:31:00.000
| 5 | 0.07983 | false | 34,692,370 | 1 | 0 | 1 | 2 |
I'm pretty new to Azure and I'm trying to get a Django web app up and running. I uploaded the files using FTP, but Azure doesn't run my requirements.txt.
So I searched for a bit and found out that you can install the requirements.txt with pip.
Back in Azure, pip doesn't seem to work: neither in the console, the Kudu CMD, nor the Kudu PowerShell. Python does work.
When I try to install pip via Python, it first says that an older version is already installed; when Python then tries to upgrade pip, it doesn't have access to the folder it needs to edit.
I was wondering how I could use pip in Azure.
(If you know a separate way to install the requirements.txt, please tell me, because that is how I originally got to this point.)
|
Location of virtualenv in production
| 34,701,093 | 4 | 3 | 1,778 | 0 |
python,django,virtualenv,virtualenvwrapper
|
Basically, you can put the virtual environment in any place that suits you and that can be read by the user running the Python process. For security reasons, you should consider creating the virtualenv as another user, so the process has no write access to it.
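A minimal sketch of that setup, assuming a hypothetical path /opt/venvs/mysite and a deploy user named james:
sudo mkdir -p /opt/venvs && sudo chown james:james /opt/venvs
virtualenv /opt/venvs/mysite          # created as james, who owns it
sudo chmod -R o+rX /opt/venvs/mysite  # www-data gets read/execute only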
| 0 | 0 | 0 | 0 |
2016-01-10T00:52:00.000
| 3 | 0.26052 | false | 34,700,802 | 0 | 0 | 1 | 1 |
I'm in the process of deploying a Django site, and I'm using a virtualenv to keep my Python installation tidy. I'm trying to figure out where the virtual environment should be located in a production server environment. It seems like this should be super straightforward, but it's giving me a monster headache. Any help would be greatly appreciated!
My plan was to use virtualenvwrapper to make my virtual environment. By default, this stores the virtualenv in ~/.virtualenvs, which in this case is /home/james/virtualenvs/. This is fine in development, when I'm on my local machine and running everything under the user james. However, I don't believe that the user james is going to be running the code in the virtualenv on the production server; rather, I believe it's going to be www-data. Is www-data supposed to reach across to james to access the virtualenv, or is there a way to install the virtualenv into www-data? It seems like there should be a standardized way of configuring virtualenvs in production, but I can't seem to find anything.
Thank you in advance for any and all help!
|
how to write an Android app in Java which needs to use a Python library?
| 34,710,122 | 0 | 0 | 49 | 0 |
java,android,python
|
Instead of running it as one app, what about running the Python script as a separate process from the original app? I believe it would be possible, as Android is in fact a Unix-based OS. Perhaps other readers can give their input on whether this idea would work.
| 1 | 0 | 0 | 1 |
2016-01-10T19:50:00.000
| 1 | 0 | false | 34,710,059 | 0 | 0 | 1 | 1 |
I want to develop an app to track people's WhatsApp last-seen and other info, and found out that there are APIs out there to deal with it, but they are written in Python and are normally run on Linux, I think.
I have Java and Android knowledge but no Python, and I wonder if there's a way to develop most of the app in Java and get the info I want via calls to these Python APIs, but without having to install a Python interpreter or similar on the device, so the final user just has to download and run the Android app as he would any other.
I want to know if it would be very hard for someone inexperienced like me (this is the 2nd and final year of my developer degree), as it's what I have in mind for the final project. Thanks in advance.
|
who creates the media folder in django and how to change permission rights?
| 36,338,260 | 0 | 1 | 343 | 0 |
python,django,nginx,permissions,file-permissions
|
Couldn't find out exactly who creates it; however, the permissions depend on the user (root or non-root) running the process.
This means that if you run the commands (for example python manage.py runserver) with sudo or as root, the folder gets root ownership, which a non-root user can't change.
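A minimal fix sketch, assuming the app should run entirely as the debian user:
# hand the media folder (and everything in it) back to the app user
sudo chown -R debian:debian media/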
| 0 | 0 | 0 | 0 |
2016-01-11T19:07:00.000
| 1 | 1.2 | true | 34,729,149 | 0 | 0 | 1 | 1 |
I set up django using nginx and gunicorn. I am looking at the permission in my project folder and I see that the permission for the media folder is set to root (all others are set to debian):
-rw-r--r-- 1 root root 55K Dec 2 13:33 media
I am executing all app relevant commands like makemigrations, migrate, collectstatic, from debian, therefore everything else is debian.
But the media folder doesn't exist when I start my app; it will be created once I upload stuff.
But who creates it, and how do I change the permissions to debian?
|
How to host a flask web app on my own pc?
| 34,775,584 | 0 | 0 | 2,566 | 0 |
python,web,flask,host
|
Enable port forwarding on your router, start Flask on the 0.0.0.0 address of your computer, and set the forwarded port to the one Flask listens on. This will allow both your LAN and requests to your ISP-provided address to be directed to your laptop.
To clarify: in my experience, machines on the LAN can reach it without port forwarding.
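A minimal sketch of the Flask side (the port is an arbitrary choice):
from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    # 0.0.0.0 binds to all interfaces so other machines can connect
    app.run(host='0.0.0.0', port=5000)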
| 0 | 0 | 0 | 0 |
2016-01-12T06:30:00.000
| 1 | 0 | false | 34,736,964 | 0 | 0 | 1 | 1 |
I'm developing an app using the Flask framework in Python. I want to host it on my PC so a few people can visit it, similar to WAMP's "Put Online" feature but for Flask; I don't want to deploy it to the cloud just yet. How can I do it?
|
How to sync variable between servers?
| 34,737,908 | 2 | 0 | 125 | 0 |
python,django
|
Premature optimisation is the root of all evil... That being said, what you want is a cache, not an async queue. Django has a good built-in cache framework; you just have to choose your backend (Redis comes to mind, but there are other options).
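A minimal sketch of that setup, assuming the django-redis backend (any shared backend works the same way; the model name is hypothetical):
# settings.py
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/1',
    }
}

# anywhere in the app: all three servers read the same shared copy
from django.core.cache import cache

items = cache.get('my_list')
if items is None:
    items = list(MyModel.objects.all())
    cache.set('my_list', items, timeout=None)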
| 0 | 0 | 0 | 0 |
2016-01-12T07:10:00.000
| 2 | 1.2 | true | 34,737,505 | 0 | 0 | 1 | 1 |
I plan to use 3 servers (there will be an haproxy dispatching to the 3 servers, but I'm not covering that now) to do load balancing,
and I face a problem:
I create an object whose function queries the database for a list when Django starts
(the list seldom changes but is used very frequently, so I initialise it up front).
If the data changes, a message is pushed to RabbitMQ, and the 3 servers have RabbitMQ clients to receive it.
But the problem is that the RabbitMQ listener's process is not the same as Django's.
How can it notify the Django process?
My current solution is to call an API (on localhost) when the RabbitMQ client receives the change (so guests can still visit the website and I can update the list).
But that requires binding to 0.0.0.0, and I'm not sure it's a good idea.
What is a better way to sync between the 3 servers?
|
List of ObjectIDs for an Algolia Index
| 34,751,577 | 4 | 1 | 359 | 0 |
python,indexing,algolia
|
Browse is the right way to go.
The good thing is that you can pass arguments when performing a browse_all, and one of them can be attributesToRetrieve: [] so that no attributes are retrieved; you'll therefore only get the objectID of each record.
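A minimal sketch with the Algolia Python client of that era (treat the exact method and parameter spelling as per that client's docs):
# iterate over every record while fetching only objectIDs
for hit in index.browse_all({'attributesToRetrieve': []}):
    print(hit['objectID'])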
| 0 | 0 | 0 | 0 |
2016-01-12T18:19:00.000
| 1 | 1.2 | true | 34,751,064 | 0 | 0 | 1 | 1 |
Is there a way to retrieve all objectIDs from an Algolia Index?
I know there is [*Index Name*].browse_all(), which the docs say can retrieve 1000 objects at a time, but it retrieves the entire object rather than just the objectIDs.
I can work with pagination but would rather not, and I do not want to pull the entire objects because our indexes are not small.
|
How to invoke python scripts in node.js app on Bluemix?
| 34,790,983 | 1 | 1 | 358 | 0 |
python,node.js,ibm-cloud
|
I finally fixed this by adding an entry to the dependencies in the project's package.json, which causes npm install to be run for the linked GitHub repo. It is fairly straightforward, but I found no explanation of it in the Bluemix resources.
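A sketch of what such an entry might look like (the package name and repo path are hypothetical):
{
  "dependencies": {
    "my-helper": "someuser/some-repo"
  }
}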
| 0 | 0 | 0 | 1 |
2016-01-13T10:01:00.000
| 2 | 0.099668 | false | 34,763,600 | 0 | 0 | 1 | 1 |
I'd like to run text-processing Python scripts after submitting the searchForms of my node.js application.
I know how the scripts can be called with child_process and spawn within JS, but what should I set up on the app (probably some package.json entries?) so that it will be able to run Python after deploying to Bluemix?
Thanks for any help!
|
custom Plone Dexterity factory to create subcontent
| 34,772,414 | 6 | 1 | 124 | 0 |
python,plone,dexterity
|
A different approach can simply be to add an event handler for IObjectAddedEvent and add your subcontent there using the common APIs.
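A rough sketch of that wiring, with hypothetical dotted names for your type's interface, the handler module, and the AT subtype:
<!-- configure.zcml -->
<subscriber
    for="my.package.interfaces.IMyType
         zope.lifecycleevent.interfaces.IObjectAddedEvent"
    handler=".handlers.add_subcontent"
    />

# handlers.py
def add_subcontent(obj, event):
    # obj is already properly acquisition-wrapped at this point,
    # so portal tools like portal_types resolve normally
    obj.invokeFactory('MyATSubtype', id='subitem')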
| 0 | 0 | 0 | 0 |
2016-01-13T14:26:00.000
| 2 | 1 | false | 34,769,208 | 0 | 0 | 1 | 1 |
I thought it would be possible to create a custom Dexterity factory that calls the default factory and then adds some subcontent (in my case Archetypes-based) to the created 'parent' Dexterity content.
I have no problem creating and registering the custom factory.
However, regardless of what method I use (to create the AT subcontent), the subcontent creation fails when attempted from within the custom factory.
I've tried everything from plone.api to invokeFactory to direct instantiation of the AT content class.
In most cases, the traceback shows the underlying Plone/CMF code trying to get the portal_types tool using getToolByName and failing; similarly, when instantiating the AT class directly, manage_afterAdd then tries to access the reference_catalog, which fails.
Is there any way to make this work?
|
How to config Django using pymysql as driver?
| 53,195,032 | 1 | 20 | 18,120 | 1 |
python,mysql,django,pymysql
|
The short answer is no, they are not the same.
The engine, in a Django context, refers to the RDBMS technology; the driver is the library developed to facilitate communication with that technology when it is up and running. Telling Django which engine to use tells it how to translate ORM operations for that backend: the developer sees no change in ORM code, but Django knows how to convert those actions into a language the technology understands. The driver then takes those actions (e.g. selects, updates, deletes) and sends them over to a running instance to carry them out.
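The answer above explains the distinction; as a practical note (not from that answer), PyMySQL ships a documented shim that lets the stock django.db.backends.mysql engine use it as the driver. A minimal sketch, placed somewhere that runs before Django loads (e.g. manage.py or the project's __init__.py):
import pymysql

# registers PyMySQL under the name MySQLdb, so the stock
# MySQL engine picks it up as its driver
pymysql.install_as_MySQLdb()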
| 0 | 0 | 0 | 0 |
2016-01-13T21:50:00.000
| 2 | 0.099668 | false | 34,777,755 | 0 | 0 | 1 | 1 |
I'm new to Django, and it took me a whole afternoon to configure the MySQL engine. I am very confused about the database engine and the database driver: is the engine also the driver? All the tutorials say that the ENGINE should be 'django.db.backends.mysql', but how does the ENGINE decide which driver is used to connect to MySQL?
Sadly, I can't install MySQLdb or mysqlclient, but PyMySQL and the official MySQL connector 2.1.3 are installed. How can I set the driver to PyMySQL or the MySQL connector?
Many thanks!
OS: OS X El Capitan
Python: 3.5
Django: 1.9
This question is not yet solved:
Is the ENGINE also the DRIVER?
|
How can I have Enterprise and Public version of Django application sharing some code?
| 34,781,374 | 0 | 0 | 98 | 0 |
python,django,git
|
Probably the best solution is to identify exactly which code is shared between the two projects and turn that into a reusable app.
Each installation can then install that Django app and keep its own site-specific code as well.
| 0 | 1 | 0 | 0 |
2016-01-14T02:48:00.000
| 3 | 0 | false | 34,780,851 | 0 | 0 | 1 | 2 |
I'm building a webapp using Django which needs to have two different versions: an Enterprise version and a standard public version. Up until now, I've been only developing the Enterprise version and am now looking for the best way to separate the two versions in the simplest way while avoiding duplication of code as much as possible. The main difference between the two versions will be that they need different URLs and different Views. I intend to differentiate based on subdomain using a multi-tenant architecture, where the www.example.com is the public version, and company1.example.com hits the enterprise version.
I've come up with a couple potential solutions, but I'm not happy with any of them.
Separate Git repositories and entirely separate projects, with all common code duplicated. This much duplication of code is bound to be error prone where things will get out of sync and is expected to be ridden with copy-paste mistakes. This is a last-resort solution.
Separate Git repositories, with common code shared via Git Submodules (a single common 'base' repository containing base models and shared views). I've read horror stories about git submodules, though, so I'm wary of this solution.
Single Git repository containing multiple 'project' folders (public/enterprise) each with their own base urls.py, settings.py, wsgi.py, etc...) and multiple manage.py files to choose which "Project" to run. I'm afraid that this solution would become an utter mess because it wouldn't be possible to have the public and enterprise versions use different versions of the common library if one needs an update before the other.
Separate Git repositories, with all shared code developed as 'Re-usable apps' and installed into the python path. This would be a somewhat clean solution, but would be difficult to work with any time changes needed to be made to the common modules.
Single project where all features are managed via conditional logic in the views. This would be most prone to bugs and confusion of all, and I'd prefer to avoid this solution.
Does anyone have any experience with this type of solution or could anyone help me find the best solution to this problem?
|
How can I have Enterprise and Public version of Django application sharing some code?
| 34,781,480 | 1 | 0 | 98 | 0 |
python,django,git
|
What about "a single Git repository, with all shared code developed as 'Re-usable apps'"? That is configure the options enabled with the INSTALLED_APPS setting.
First you need to decide on your release process. If you intend on releasing both versions simultaneously, using the one git repository makes sense.
An overriding concern might be if you have different distribution requirements for the code, e.g. if you want the code in the public version to be publicly available and the enterprise version to be private. Then you might have to use two git repositories.
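A minimal sketch of how that INSTALLED_APPS split might look (module and app names are hypothetical):
# settings/base.py -- shared by both versions
INSTALLED_APPS = [
    'django.contrib.contenttypes',
    'django.contrib.auth',
    'common_app',
]

# settings/public.py
from .base import *
INSTALLED_APPS += ['public_app']
ROOT_URLCONF = 'myproject.urls_public'

# settings/enterprise.py
from .base import *
INSTALLED_APPS += ['enterprise_app']
ROOT_URLCONF = 'myproject.urls_enterprise'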
| 0 | 1 | 0 | 0 |
2016-01-14T02:48:00.000
| 3 | 0.066568 | false | 34,780,851 | 0 | 0 | 1 | 2 |
I'm building a webapp using Django which needs to have two different versions: an Enterprise version and a standard public version. Up until now, I've been only developing the Enterprise version and am now looking for the best way to separate the two versions in the simplest way while avoiding duplication of code as much as possible. The main difference between the two versions will be that they need different URLs and different Views. I intend to differentiate based on subdomain using a multi-tenant architecture, where the www.example.com is the public version, and company1.example.com hits the enterprise version.
I've come up with a couple potential solutions, but I'm not happy with any of them.
Separate Git repositories and entirely separate projects, with all common code duplicated. This much duplication of code is bound to be error prone where things will get out of sync and is expected to be ridden with copy-paste mistakes. This is a last-resort solution.
Separate Git repositories, with common code shared via Git Submodules (a single common 'base' repository containing base models and shared views). I've read horror stories about git submodules, though, so I'm wary of this solution.
Single Git repository containing multiple 'project' folders (public/enterprise) each with their own base urls.py, settings.py, wsgi.py, etc...) and multiple manage.py files to choose which "Project" to run. I'm afraid that this solution would become an utter mess because it wouldn't be possible to have the public and enterprise versions use different versions of the common library if one needs an update before the other.
Separate Git repositories, with all shared code developed as 'Re-usable apps' and installed into the python path. This would be a somewhat clean solution, but would be difficult to work with any time changes needed to be made to the common modules.
Single project where all features are managed via conditional logic in the views. This would be most prone to bugs and confusion of all, and I'd prefer to avoid this solution.
Does anyone have any experience with this type of solution or could anyone help me find the best solution to this problem?
|
Create scheduled job and run the periodically
| 34,786,153 | 0 | 0 | 72 | 0 |
python,flask
|
I have used the Windows Task Scheduler to schedule a .bat file. The .bat file contained a short command to run the Python script.
This way the script is not idling in the background when you are not using it.
As for storing data between runs, I would save it to a file.
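A sketch of such a .bat file, with a hypothetical project path and script name:
:: run_job.bat -- invoked by Task Scheduler on whatever period you configure
cd /d C:\path\to\project
python send_notifications.py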
| 0 | 0 | 0 | 0 |
2016-01-14T09:06:00.000
| 1 | 0 | false | 34,785,420 | 0 | 0 | 1 | 1 |
I have a Flask web service application with some daily, weekly and monthly events. I want to store these events and calculate their start times; for example, take an order with a count of two and a weekly period:
the first payment is today and the other one is next week.
I want to store the repeated times and then, for each of them, send a notification at its start time, periodically.
What is the best solution?
|
AWS Lambda function firing twice
| 41,511,055 | 6 | 10 | 5,712 | 0 |
python,amazon-web-services,amazon-s3,aws-lambda
|
I am also facing the same issue: in my case, on every PUT event in the S3 bucket a Lambda should trigger once, but it triggers twice with the same aws_request_id and aws_lambda_arn.
To fix it, keep track of the aws_request_id (this id will be unique for each Lambda event) somewhere, and add a check in the handler: if the same aws_request_id already exists, do nothing; otherwise process as usual.
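A minimal sketch of that check using a DynamoDB conditional write (the table name and attribute are assumptions, as is the send_email helper standing in for the original processing):
import boto3
from botocore.exceptions import ClientError

table = boto3.resource('dynamodb').Table('processed_requests')

def handler(event, context):
    try:
        # the conditional put succeeds only the first time this id is seen
        table.put_item(
            Item={'request_id': context.aws_request_id},
            ConditionExpression='attribute_not_exists(request_id)',
        )
    except ClientError as err:
        if err.response['Error']['Code'] == 'ConditionalCheckFailedException':
            return  # duplicate delivery: skip
        raise
    send_email(event)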
| 0 | 0 | 1 | 1 |
2016-01-14T09:28:00.000
| 2 | 1 | false | 34,785,863 | 0 | 0 | 1 | 2 |
I'm using an AWS Lambda function (written in Python) to send an email whenever an object is uploaded into a preset S3 bucket. The object is uploaded via the AWS PHP SDK into the S3 bucket using a multipart upload. Whenever I test my code (within the Lambda code editor page) it seems to work fine and I only get a single email.
But when the object is uploaded via the PHP SDK, the Lambda function runs twice and sends two emails, both with different message IDs. I've tried different email addresses, but each address receives exactly two duplicate emails.
Can anyone guide me on where I could be going wrong? I'm using the boto3 library that is imported with the sample Python code to send the email.
|
AWS Lambda function firing twice
| 34,795,499 | 13 | 10 | 5,712 | 0 |
python,amazon-web-services,amazon-s3,aws-lambda
|
Yes, we have this as well, and it's not linked to the email: it's linked to S3 firing multiple events for a single upload. Like a lot of messaging systems, Amazon does not guarantee "once only delivery" of event notifications from S3, so your Lambda function will need to handle this itself.
Not the greatest, but doable:
keep some form of cache with details of the previous few requests, so you can see whether you've already processed a particular event message or not.
| 0 | 0 | 1 | 1 |
2016-01-14T09:28:00.000
| 2 | 1.2 | true | 34,785,863 | 0 | 0 | 1 | 2 |
I'm using an AWS Lambda function (written in Python) to send an email whenever an object is uploaded into a preset S3 bucket. The object is uploaded via the AWS PHP SDK into the S3 bucket using a multipart upload. Whenever I test my code (within the Lambda code editor page) it seems to work fine and I only get a single email.
But when the object is uploaded via the PHP SDK, the Lambda function runs twice and sends two emails, both with different message IDs. I've tried different email addresses, but each address receives exactly two duplicate emails.
Can anyone guide me on where I could be going wrong? I'm using the boto3 library that is imported with the sample Python code to send the email.
|
Load a select list when selecting another select
| 34,786,896 | 0 | 0 | 44 | 1 |
python,flask,flask-admin
|
You have to follow these steps:
Javascript
Bind an onchange event to your Department select.
When the select changes, get the selected value.
Send that value to the server through an AJAX request.
Flask
Implement a view that reads the value and loads the associated Subdepartments (a sketch follows below).
Send a JSON response with your Subdepartments back to the browser.
Javascript
In your AJAX request, implement a success function. By default, this function's first parameter is the data received from the server. Loop over it and append the entries to the desired select.
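A minimal sketch of the Flask side, assuming SQLAlchemy models named Department/SubDepartment and a hypothetical route name:
from flask import jsonify, request

@app.route('/subdepartments')
def subdepartments():
    dept_id = request.args.get('department_id', type=int)
    subs = SubDepartment.query.filter_by(department_id=dept_id).all()
    # the AJAX success callback loops over this list to fill the second select
    return jsonify(subdepartments=[{'id': s.id, 'name': s.name} for s in subs])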
| 0 | 0 | 0 | 0 |
2016-01-14T10:05:00.000
| 1 | 0 | false | 34,786,665 | 0 | 0 | 1 | 1 |
I have a Flask-admin application and I have a class with a "Department" and a "Subdepartment" fields.
In the create form, I want that when a Department is selected, the Subdepartment select automatically loads all the corresponding subdepartments.
In the database, I have a "department" table and a "sub_department" table with a foreign key "department_id".
Any clues on how I could achieve that?
Thanks in advance.
|
How can I activate (display) a view using Revit API?
| 34,803,157 | 5 | 4 | 2,994 | 0 |
revit-api,revit,revitpythonshell,revit-2015
|
I think the most preferred way is the UIDocument.RequestViewChange() method. The tricky part is that unless you've designed your application to be modeless with external events or Idling, the switch may not actually happen until later, when control returns to Revit from your add-in.
(There's also setting the UIDocument.ActiveView property - not positive whether this has different constraints.)
The other way I have done it historically is through the UIDocument.ShowElements() command. The trick here is that you don't control the exact view - but if you can figure out elements that appear only in that view, you can generally make it happen (even if you have to run a separate query to get a bunch of elements that are only in the given floor plan view).
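A minimal RevitPythonShell-style sketch of the first approach (it assumes target_view_id already holds the ElementId of the floor plan view):
uidoc = __revit__.ActiveUIDocument
doc = uidoc.Document

view = doc.GetElement(target_view_id)
# queues the switch; it may only take effect once control returns to Revit
uidoc.RequestViewChange(view)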
Good Luck!
| 0 | 0 | 0 | 0 |
2016-01-14T11:13:00.000
| 2 | 0.462117 | false | 34,788,159 | 0 | 0 | 1 | 1 |
I am trying to activate a view using the Revit API. What I want to do exactly is prompt the user to select some walls, but when the user is asked to do that, he can't switch views to select more walls (everything is greyed out at that point).
The view I want to activate (by which I mean: I want this view actually shown on screen) already exists, and I can access its Id.
I have seen threads about creating, browsing and filtering views, but nothing on activating one... It's a Floor Plan view.
So far I can access its associated ViewPlan object and associated parameters (name, Id, ...).
Is it possible to do?
Thanks a lot !
Arnaud.
|
Is it possible to remove the get parameters from the referer in the header without meta-refresh?
| 34,791,352 | 0 | 1 | 141 | 0 |
javascript,python,redirect,http-referer
|
Thanks to NetHawk: it can be done using history.replaceState() or history.pushState().
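For example, a one-line sketch run in the browser before sending the user onward:
// replace the current history entry with the bare path, dropping ?secretinfo=...
history.replaceState(null, '', window.location.pathname);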
| 0 | 0 | 1 | 0 |
2016-01-14T13:08:00.000
| 1 | 1.2 | true | 34,790,474 | 0 | 0 | 1 | 1 |
Our flow is this: we get a request to www.oursite.com?secretinfo=banana,
then we have people do some stuff on that page and we send them to another site. Is it possible to remove the "secretinfo=banana" part from the referer in the header info?
We do this now by redirecting to another page without these parameters, which does another redirect via a meta-refresh to the other party. As you can imagine, this is not very good for the user experience.
Doing it directly would be great, but even doing it with a 302 or 303 redirect would be better; those, however, don't change the referer.
We are using Python 3 with Flask, or it can be done with JavaScript.
|
Using HTTrack to mirror a single page
| 41,248,744 | 1 | 2 | 1,960 | 0 |
python,http,command-line,wget,httrack
|
This is an old post, so you might have figured it out by now; I just came across it while looking for another answer about using Python and HTTrack. I was having the same issue you were having, and passing the argument -r2 made it download the images.
My arguments basically look like this:
cmd = [httrack, myURL, '-%v', '-r2', '-F', "Mozilla/5.0 (Windows NT 6.1; Win64; x64)", '-O', saveLocation]
| 0 | 0 | 1 | 0 |
2016-01-14T17:33:00.000
| 3 | 0.066568 | false | 34,796,053 | 0 | 0 | 1 | 2 |
I've been attempting to use HTTrack to mirror a single page (downloading the HTML plus prerequisites: style sheets, images, etc.), similar to the question [mirror single page with httrack][1]. However, the accepted answer there doesn't work for me, as I'm using Windows (where wget "exists" but is actually a wrapper for Invoke-WebRequest and doesn't function the same way at all).
HTTrack really wants to either (a) download the entire website I point it at, or (b) download only the page I point it to, leaving all images still living on the web. Is there a way to make HTTrack download only enough to view a single page properly offline - the equivalent of wget -p?
|
Using HTTrack to mirror a single page
| 59,127,033 | -1 | 2 | 1,960 | 0 |
python,http,command-line,wget,httrack
|
Saving the page with your browser should download the page and all its prerequisites.
| 0 | 0 | 1 | 0 |
2016-01-14T17:33:00.000
| 3 | -0.066568 | false | 34,796,053 | 0 | 0 | 1 | 2 |
I've been attempting to use HTTrack to mirror a single page (downloading the HTML plus prerequisites: style sheets, images, etc.), similar to the question [mirror single page with httrack][1]. However, the accepted answer there doesn't work for me, as I'm using Windows (where wget "exists" but is actually a wrapper for Invoke-WebRequest and doesn't function the same way at all).
HTTrack really wants to either (a) download the entire website I point it at, or (b) download only the page I point it to, leaving all images still living on the web. Is there a way to make HTTrack download only enough to view a single page properly offline - the equivalent of wget -p?
|
How to do Odoo Leave Type Filter(user has only display 2 leave type admin need to display all)?
| 34,806,653 | 0 | 0 | 218 | 0 |
python-2.7,openerp,odoo-8,odoo-9
|
You can easily achieve this by creating a field on the 'hr.holidays.status' model, e.g. a flag for whether the leave type is visible to non-managers.
Then override the onchange of holiday_status_id and return a domain according to the logged-in user, checking whether or not they are a manager.
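A rough Odoo 8 sketch of that onchange (the manager_only field and the group xml id are assumptions):
from openerp import api, models

class HrHolidays(models.Model):
    _inherit = 'hr.holidays'

    @api.onchange('holiday_status_id')
    def _onchange_holiday_status_id(self):
        if self.env.user.has_group('base.group_hr_manager'):
            return {}  # managers see every leave type
        # everyone else only sees types not flagged as manager-only
        return {'domain': {'holiday_status_id': [('manager_only', '=', False)]}}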
| 0 | 0 | 0 | 0 |
2016-01-15T04:52:00.000
| 1 | 0 | false | 34,804,604 | 0 | 0 | 1 | 1 |
A user (employee) should see only 2 leave types, while the admin (HR officer) should see all leave types.
|
Unclear issue after installing django
| 34,805,744 | 0 | 0 | 100 | 0 |
python,django,python-3.x,installation,installation-path
|
For you and possible future users asking a similar question:
your pip command runs under the 2.7 Python interpreter. You are using the 3.4 version, so instead of pip you have to use the pip3.4 command.
Why? Packages installed for Python 2.7 are not shared with the 3.x installations.
In your case, Django is installed only for the 2.7 version, so when you run the python3.4 command, Django is not there ("no module named django").
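Concretely, one of these should install Django for the right interpreter (the py launcher line assumes a standard Windows install of Python 3.4):
pip3.4 install django
py -3.4 -m pip install django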
| 0 | 0 | 0 | 0 |
2016-01-15T04:57:00.000
| 1 | 0 | false | 34,804,656 | 1 | 0 | 1 | 1 |
I tried to install Django after installing Python (version 3.4.0). The problem began when I tried to run the simple command "pip install django" via cmd: it did nothing (a new line appears and nothing is written). I forced the installation using the command "python -m pip install django". Although it declared the installation successful, when I run, for example, the command "django-admin --version" it likewise does nothing, and when I run the command "python -m django-admin --version", it says: "python.exe: no module named django-admin".
In general, every command associated with pip or django does not work, such as:
pip help, pip X or django X.
P.S. I added the paths to 'Path' in both the User Variables and System Variables:
C:\Python34; C:\Python34\Scripts
|
How i can send automatic email when i confirm a form request on Odoo 8?
| 34,806,721 | 0 | 0 | 720 | 0 |
python,openerp,odoo-8
|
Go to Settings -> Technical -> Email -> Outgoing Mail Servers.
Set the SMTP server, SMTP port and the other credentials,
e.g.:
SMTP Server: smtp.gmail.com
SMTP port: 587
Connection security: TLS (STARTTLS)
Once done, test that the connection is set up properly by clicking the Test Connection button.
You can then send the mail by calling send_mail().
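A rough Odoo 8 sketch of calling send_mail() from the confirm method (the template xml id is an assumption; sender and receiver would be set on that email template):
@api.multi
def action_confirm(self):
    template = self.env.ref('my_module.request_confirmed_template')
    for record in self:
        # force_send bypasses the mail queue and sends immediately
        template.send_mail(record.id, force_send=True)
    return True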
| 0 | 0 | 0 | 1 |
2016-01-15T07:16:00.000
| 1 | 0 | false | 34,806,022 | 0 | 0 | 1 | 1 |
I'm a new Odoo developer and I need to send an automatic email when I confirm a form request, where I can input the sender and receiver manually.
Does anyone have a sample or tutorial, or can anyone help me? I don't know the steps or the mail server configuration, because I use localhost. Thank you.
|
App Engine social platform - Content interactions modeling strategy
| 34,808,818 | 1 | 0 | 19 | 0 |
python,google-app-engine,social-networking
|
I'm guessing you have two entities in your model, User and Content, and your queries seem to aggregate over multiple Content objects.
What about keeping these aggregated values on the User object? This way, you don't need to run any queries, but rather only look up the data stored in the User object.
At some point, though, you might consider not using the datastore but looking at SQL storage instead. It has a higher constant cost, but I'm guessing that at some point (more content/users) it might be worth considering, both in terms of cost and performance.
| 0 | 1 | 0 | 0 |
2016-01-15T10:01:00.000
| 1 | 0.197375 | false | 34,808,553 | 0 | 0 | 1 | 1 |
I have a Python server running on Google App Engine that implements a social network. I am trying to find the best way (best = fast and cheap) to implement interactions on items.
Just like any other social network I have the stream items ("Content") and users can "like" these items.
As for queries, I want to be able to:
Get the list of users who liked the content
Get a total count of the likers.
Get an intersection of the likers with any other users list.
My current implementation includes:
1. An IntegerProperty on the content item which holds the total likers count
2. An InteractionModel - an ndb model with a key id equal to the content id (fast fetch) and a JsonProperty that holds the likers' usernames
Each time a user likes a piece of content, I need to update the counter and the list of users. This requires me to run and pay for 4 datastore operations (2 reads, 2 writes).
On top of that, items with lots of likers result in an InteractionModel with a huge JSON that takes time to serialize and deserialize when reading/writing (still faster than a RepeatedProperty).
None of the updated fields are indexed (built-in index) or included in a combined index (index.yaml)
Looking for a more efficient and cost effective way to implement the same requirements.
|
Do I need rabbitmq bindings for direct exchange?
| 41,491,616 | 3 | 1 | 943 | 0 |
python,rabbitmq,rmq
|
If you are using the default exchange for direct routing (exchange = ''), then you don't have to declare any bindings. By default, all queues are bound to the default exchange. As long as the routing key exactly matches a queue name (and the queue exists), the default exchange will deliver the message to that queue.
| 0 | 0 | 0 | 1 |
2016-01-15T18:07:00.000
| 3 | 0.197375 | false | 34,817,150 | 0 | 0 | 1 | 3 |
I have a rabbit mq server running, with one direct exchange which all my messages go through. The messages are routed to individual non-permanent queues (they may last a couple hours). I just started reading about queue bindings to exchanges and am a bit confused as to if I actually need to bind my queues to the exchange or not. I'm using pika basic_publish and consume functions so maybe this is implied? Not really sure just wanna understand a bit more.
Thanks
|
Do I need rabbitmq bindings for direct exchange?
| 34,817,271 | 1 | 1 | 943 | 0 |
python,rabbitmq,rmq
|
Always. In fact, even though queues are strictly a consumer-side entity, they should be declared & bound to the direct exchange by the producer(s) at the time they create the exchange.
| 0 | 0 | 0 | 1 |
2016-01-15T18:07:00.000
| 3 | 1.2 | true | 34,817,150 | 0 | 0 | 1 | 3 |
I have a rabbit mq server running, with one direct exchange which all my messages go through. The messages are routed to individual non-permanent queues (they may last a couple hours). I just started reading about queue bindings to exchanges and am a bit confused as to if I actually need to bind my queues to the exchange or not. I'm using pika basic_publish and consume functions so maybe this is implied? Not really sure just wanna understand a bit more.
Thanks
|
Do I need rabbitmq bindings for direct exchange?
| 34,846,505 | 1 | 1 | 943 | 0 |
python,rabbitmq,rmq
|
You have to bind a queue to an exchange with some binding key, or messages will be discarded.
This is how any AMQP broker works: the publisher publishes a message to an exchange with some key, and the broker (RabbitMQ) routes the message from the exchange to those queue(s) that are bound to the exchange with the given key.
However, it's not mandatory to declare and bind the queue in the publisher.
You can do it in the subscriber instead, but make sure you run your subscriber before starting your publisher.
If you think your messages are getting routed to a queue without bindings, then you are missing something.
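A minimal pika sketch of the declare/bind/publish flow (names are placeholders; note that the exchange_declare keyword is type= in pika releases before 1.0 and exchange_type= afterwards):
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()

ch.exchange_declare(exchange='jobs', exchange_type='direct')
ch.queue_declare(queue='job_q')
# without this binding, messages published to 'jobs' with this key are dropped
ch.queue_bind(queue='job_q', exchange='jobs', routing_key='job')

ch.basic_publish(exchange='jobs', routing_key='job', body='hello')
conn.close()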
| 0 | 0 | 0 | 1 |
2016-01-15T18:07:00.000
| 3 | 0.066568 | false | 34,817,150 | 0 | 0 | 1 | 3 |
I have a rabbit mq server running, with one direct exchange which all my messages go through. The messages are routed to individual non-permanent queues (they may last a couple hours). I just started reading about queue bindings to exchanges and am a bit confused as to if I actually need to bind my queues to the exchange or not. I'm using pika basic_publish and consume functions so maybe this is implied? Not really sure just wanna understand a bit more.
Thanks
|
Best way to store emails from a landing page on google app engine?
| 34,824,272 | 1 | 0 | 97 | 0 |
python,google-app-engine,google-cloud-datastore,app-engine-ndb
|
Create an email entity, and use the email address as the entity's key.
This immediately prevents duplicates.
Fetching all of the email addresses can be very efficient, as you only need a keys-only query by kind, and you can use map_async to process the emails.
In addition, you could use these entities to store the progress of an email send, and maybe provide an audit trail.
To increase speed at mailing time, you could periodically build cached lists of the emails, stored either in the datastore or in blob storage.
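A minimal ndb sketch of the address-as-key idea (the kind name is arbitrary):
from google.appengine.ext import ndb

class Subscriber(ndb.Model):
    created = ndb.DateTimeProperty(auto_now_add=True)

def add_email(address):
    # using the normalised address as the key name makes duplicates impossible
    Subscriber.get_or_insert(address.strip().lower())

def all_emails():
    # keys-only query is cheap, and each key id *is* the address
    return [k.id() for k in Subscriber.query().fetch(keys_only=True)]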
| 0 | 1 | 0 | 0 |
2016-01-15T22:31:00.000
| 1 | 0.197375 | false | 34,820,966 | 0 | 0 | 1 | 1 |
I have a landing page set up and have a html text box (with error checking for valid emails) put together with a submit button. I am currently using NDB to store different entities.
What I'm looking for is the best way to store just the email that a person enters. So likely hundreds or thousands of emails will be entered, there shouldn't be duplicates, and eventually we will want to use all of those emails to send a large news update to everyone who entered in their emails.
What is the best way to store this email data with these contraints:
Fast duplicate checking
Quick callback for sending emails en masse
|
Using a class field in another fields
| 34,822,770 | 1 | 0 | 51 | 0 |
python,class,django-models,field
|
I think what you want to do is change the view that the user sees. What you have above is the underlying DB model, which is the wrong place for this sort of feature.
In addition (assuming this is a web application), you will probably need to do it in JavaScript, so you can change the set of allowed names as soon as the user changes the nationality field.
| 0 | 0 | 0 | 0 |
2016-01-16T01:20:00.000
| 1 | 1.2 | true | 34,822,500 | 0 | 0 | 1 | 1 |
I have a field in a class that depends on another field in the same class but I have a problem to code:
class myclass(models.Model):
    nation = [('sp', 'spain'), ('fr', 'france')]
    nationality = models.CharField(max_length=2, choices=nation)
    first_name = models.CharField(max_length=2, choices=name)
I want to set name = [('ro', 'rodrigo'), ('ra', 'raquel')] if nation = spain, and name = [('lu', 'luis'), ('ch', 'chantal')] if nation = france.
How can I do that? Thanks!
|
Strange error during initial database migration of a Django site
| 34,837,989 | 1 | 1 | 463 | 1 |
python,django,python-3.x,django-forms,pythonanywhere
|
As it says in my comment above, it turns out that the problem with the database resulted from running an upgrade of Django from 1.8 to 1.9; I had forgotten about this. After rolling my website back to Django 1.8, the database migrations ran correctly.
The reason I could not access the website turned out to be that I had to edit the wsgi.py file, but I was editing the wrong copy. The nginx localhost web server I was using keeps it in a different folder location than PythonAnywhere's implementation does. I uploaded the file from my localhost copy and edited it according to the instructions in PythonAnywhere's help system, without realizing it was not being read by PythonAnywhere's server. What I really needed to do was edit the correct file by accessing it through the Web tab on their control panel. Once I edited this file, the website front end began to work as expected.
| 0 | 0 | 0 | 0 |
2016-01-17T07:12:00.000
| 2 | 1.2 | true | 34,836,049 | 0 | 0 | 1 | 1 |
I have been working on a localhost copy of my Django website for a little while now, but finally decided it was time to upload it to PythonAnywhere. The site works perfectly on my localhost, but I am getting strange errors when I do the initial migrations for the new site. For example, I get this:
mysql.connector.errors.DatabaseError: 1264: Out of range value for
column 'applied' at row 1
'applied' is not a field in my model, so this error has to be generated by Django making tables for its own use. I have just checked in the MySQL manager on my localhost, and the field 'applied' appears to belong to the django_migrations table.
Why is Django mishandling setting up tables for its own use? I have dropped and remade the database a number of times, but the errors persist. If anyone has any idea what would cause this I would appreciate your advice very much.
My website front end is still showing the Hello World page, and the Admin link comes up with a "page does not exist" error. At this stage I am going to assume this is related to the database errors.
EDIT: Additional information about why I cannot access the front end of the site:
It turns out that when importing a pre-built site into PythonAnywhere, I have to edit my wsgi.py file to point to the application. The trouble now is that I don't know exactly what to put there. When I follow the standard instructions in the PythonAnywhere help files, nothing seems to change. Their website also seems to be very short on detailed error messages to help sort it out. Is there perhaps a way to turn off their standard Hello World placeholder pages and see server error messages instead?
|
Adding new apps to django
| 34,861,485 | 0 | 2 | 405 | 0 |
python,django
|
Nothing. As long as your apps live in different folders, they are completely independent apps to Django. Just make sure they are both loaded in your settings.INSTALLED_APPS.
* Catch #1: if you have identical template tag files, rename them so they become polls_tags.py and polls2_tags.py.
* Catch #2: don't forget to rename your templates, so that templates/polls/index.html becomes templates/polls2/index.html.
| 0 | 0 | 0 | 0 |
2016-01-18T18:17:00.000
| 2 | 0 | false | 34,861,431 | 0 | 0 | 1 | 1 |
I have followed the guidelines for starting to learn Django, but I have a question: if I want to add a new app next to the polls app they walk through, called polls2, can I just copy and paste the polls folder? (This is, for example, if I want to make an identical app with the same functionality.) Is there anything else special I need to do, other than making admin.py load polls2 along with polls?
|
failed in "sudo pip"
| 41,135,807 | 2 | 13 | 12,302 | 0 |
python,permissions,pip,sudo
|
If you have 2 versions of pip, for example /usr/lib/pip and /usr/local/lib/pip belonging to Python 2.6 and 2.7, you can delete /usr/lib/pip and make a link pip => /usr/local/lib/pip.
You can see that the pip commands called with and without "sudo" are different; making them consistent can fix it.
| 0 | 1 | 0 | 0 |
2016-01-19T08:38:00.000
| 6 | 0.066568 | false | 34,871,994 | 0 | 0 | 1 | 5 |
Please help me.
server : aws ec2
os : amazon linux
python version : 2.7.10
$ pip --version
pip 7.1.2 from /usr/local/lib/python2.7/site-packages (python 2.7)
It's OK.
But...
$ sudo pip --version
Traceback (most recent call last):
File "/usr/bin/pip", line 5, in
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3020, in
working_set = WorkingSet._build_master()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 616, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 629, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 807, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: pip==6.1.1
|
failed in "sudo pip"
| 47,222,853 | 0 | 13 | 12,302 | 0 |
python,permissions,pip,sudo
|
Assuming two pip versions are present, at /usr/bin/pip and /usr/local/bin/pip, where the first is used by the sudo user and the second by the normal user:
as the sudo user, you can run the command below so that the higher version of pip is used for the installation.
/usr/local/bin/pip install jupyter
| 0 | 1 | 0 | 0 |
2016-01-19T08:38:00.000
| 6 | 0 | false | 34,871,994 | 0 | 0 | 1 | 5 |
Please help me.
server : aws ec2
os : amazon linux
python version : 2.7.10
$ pip --version
pip 7.1.2 from /usr/local/lib/python2.7/site-packages (python 2.7)
It's OK.
But...
$ sudo pip --version
Traceback (most recent call last):
File "/usr/bin/pip", line 5, in
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3020, in
working_set = WorkingSet._build_master()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 616, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 629, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 807, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: pip==6.1.1
|
failed in "sudo pip"
| 34,874,730 | 0 | 13 | 12,302 | 0 |
python,permissions,pip,sudo
|
As you can see, with sudo you run another pip script.
With sudo: /usr/bin/pip, which is the older version;
without sudo: /usr/local/lib/python2.7/site-packages/pip, which is the latest version.
The error you encountered is sometimes caused by using different package managers; a common way to solve it is the one already proposed by @Ali:
sudo easy_install --upgrade pip
| 0 | 1 | 0 | 0 |
2016-01-19T08:38:00.000
| 6 | 0 | false | 34,871,994 | 0 | 0 | 1 | 5 |
Please help me.
server : aws ec2
os : amazon linux
python version : 2.7.10
$ pip --version
pip 7.1.2 from /usr/local/lib/python2.7/site-packages (python 2.7)
It's OK.
But...
$ sudo pip --version
Traceback (most recent call last):
File "/usr/bin/pip", line 5, in
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3020, in
working_set = WorkingSet._build_master()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 616, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 629, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 807, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: pip==6.1.1
|
failed in "sudo pip"
| 34,872,132 | 17 | 13 | 12,302 | 0 |
python,permissions,pip,sudo
|
Try this:
sudo easy_install --upgrade pip
By executing this you are upgrading the version of pip that the sudo user is using.
| 0 | 1 | 0 | 0 |
2016-01-19T08:38:00.000
| 6 | 1 | false | 34,871,994 | 0 | 0 | 1 | 5 |
Please help me.
server : aws ec2
os : amazon linux
python version : 2.7.10
$ pip --version
pip 7.1.2 from /usr/local/lib/python2.7/site-packages (python 2.7)
It's OK.
But...
$ sudo pip --version
Traceback (most recent call last):
File "/usr/bin/pip", line 5, in
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3020, in
working_set = WorkingSet._build_master()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 616, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 629, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 807, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: pip==6.1.1
|
failed in "sudo pip"
| 39,518,909 | 24 | 13 | 12,302 | 0 |
python,permissions,pip,sudo
|
I had the same problem.
sudo which pip
sudo vim /usr/bin/pip
Modify any pip==6.1.1 to pip==8.1.2, or whichever version you just upgraded to.
That worked for me.
| 0 | 1 | 0 | 0 |
2016-01-19T08:38:00.000
| 6 | 1 | false | 34,871,994 | 0 | 0 | 1 | 5 |
Please help me.
server : aws ec2
os : amazon linux
python version : 2.7.10
$ pip --version
pip 7.1.2 from /usr/local/lib/python2.7/site-packages (python 2.7)
It's OK.
But...
$ sudo pip --version
Traceback (most recent call last):
File "/usr/bin/pip", line 5, in
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3020, in
working_set = WorkingSet._build_master()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 616, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 629, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 807, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: pip==6.1.1
|
can python 2.7 be removed completely now?
| 34,872,786 | 2 | 0 | 57 | 0 |
python,django,ubuntu
|
A lot of applications still require Python 2.7 and are not yet compatible with Python 3, so it really depends on what you do on the server (only running Django?).
One solution is to use virtualenv, so that you do not depend on which Python version is installed on your server and you fully control all the packages.
Search for django + virtualenv; you will find a lot of tutorials.
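A minimal sketch of that approach (paths are arbitrary):
virtualenv -p python3.5 ~/venvs/mysite   # isolated Python 3.5 environment
source ~/venvs/mysite/bin/activate
pip install django                       # installs into the venv, not system-wide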
| 0 | 0 | 0 | 0 |
2016-01-19T09:09:00.000
| 2 | 0.197375 | false | 34,872,610 | 1 | 0 | 1 | 1 |
I am on Ubuntu 15.10. I notice that I have many Python versions installed. Is it safe now to remove 2.7 completely? And how do I make 3.5 the default one? I ask because I think the 2.7 installation messes up my Django installation: Django gets installed into the share directory.
|
How many instances of app Gunicorn creates
| 34,875,434 | 10 | 7 | 2,571 | 0 |
python,flask,parallel-processing,gunicorn
|
It will create 4 Gunicorn workers to handle the one Flask app. If you spin up 4 instances of the Flask app (with Docker, for example), you will need to run Gunicorn 4 times, and to handle all those Flask instances you will need an Nginx server in front of them acting as a load balancer.
For example, if one user is running a registration routine that takes a lot of time due to multiple queries to the database, you still have other workers free to handle further requests.
I get your point, but Flask's bundled Werkzeug server is a development server, not meant for production; Gunicorn plays the production WSGI-server role, so you get more reliability. In other words, Gunicorn is just a wrapper around your Flask object: it handles the requests and lets Flask do its thing.
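A small sketch of the command in question and its consequence for an in-process cache (the module name is hypothetical):
# four separate worker processes, each importing myapp and therefore
# each holding its own private copy of any module-level dict cache
gunicorn -w 4 myapp:app
So a plain dictionary cache is duplicated per worker; a shared store such as Redis or memcached avoids that.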
| 0 | 0 | 0 | 0 |
2016-01-19T09:55:00.000
| 1 | 1.2 | true | 34,873,578 | 0 | 0 | 1 | 1 |
I'm new to this, and I misunderstand how Gunicorn + Flask works.
When I run Gunicorn with 4 workers, does it create 4 instances of my Flask app, or does it create 4 processes that handle web requests from Nginx with one instance of the Flask app?
If I implement a simple in-memory cache (a dictionary, for example) in my app, will Gunicorn create more than one instance of the app, and therefore more than one instance of the cache?
|
Autoreload webpage when source changed
| 34,882,612 | 0 | 0 | 72 | 0 |
python,django
|
No. The dev server is just a simple server that accepts a request, passes it to the Django app and returns the response from the app. It is something different from what you find in some JavaScript libraries or frameworks, where data are held in the browser and you only hot-reload the source code while the library regenerates the page using the same data.
| 0 | 0 | 0 | 0 |
2016-01-19T11:18:00.000
| 1 | 0 | false | 34,875,393 | 0 | 0 | 1 | 1 |
I wonder whether there is some optional configuration for the dev server to auto-refresh the page when files change. I know that the Django dev server auto-reloads the project when changes appear, but what I am looking for is refreshing the webpage, like in Meteor, for example. I googled a little and found some apps and plugins for Firefox and Chrome.
Django is designed for web development, so I suspect such a feature should be in the core of the dev server. Is it?
|
Django Tastypie atomic operation
| 34,888,130 | 0 | 1 | 268 | 0 |
python,django,tastypie
|
I'd need to see:
The code making the API calls.
Any changes you've made to the resources.
What sort of stack this is deployed on.
If you happen to be using green threads, multiple workers, and/or multiple servers, it's possible the 2 requests are actually being processed concurrently or out of order.
I strongly recommend changing your code to not perform concurrent actions on a remote resource: wait until one request is done before starting the next.
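Separately from the points above, a database-level constraint makes duplicate creation impossible regardless of request ordering; a minimal sketch (the field name follows the question):
from django.db import models

class BasicModel(models.Model):
    # unique=True makes the second concurrent insert fail at the database
    # instead of silently creating an identical row
    name = models.CharField(max_length=100, unique=True)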
| 0 | 0 | 0 | 0 |
2016-01-19T13:06:00.000
| 2 | 0 | false | 34,877,607 | 0 | 0 | 1 | 1 |
I am using Django 1.7.3 as my framework and Tastypie 0.11.1 as rest api library.
I have a basic model with a name field and an API for creating instances of this model.
My problem is with critical sections (race conditions) when trying to create the model.
I have tried transaction.atomic and set ATOMIC_REQUESTS = True at the DB level, and yet when I send two requests as a race I receive two identical rows.
Is there a way to ensure that the Tastypie save function will be atomic, or any way to ensure that the requests will be atomic?
|
how many docker containers should a java web app w/ database have?
| 34,886,874 | 1 | 3 | 1,322 | 0 |
python,tomcat,docker,application-server
|
You can use Docker Machine to create a Docker development environment on Mac or Windows. This is really good for trial and error, and there is no need for an Ubuntu VM.
A Docker container does one thing only, so your application would consist of multiple containers, one for each component, and you've already clearly identified them. The workflow might look like this:
Create a Dockerfile for the Tomcat container, nginx, postgres, and tornado
Deploy the application to Tomcat in the Dockerfile or by mapping volumes
Create an image for each of the containers
Optionally push these images to Docker Hub
If you plan to deploy these containers on multiple hosts, create an overlay network
Use Docker Compose to start these containers together (a sketch follows below); it will use the network created previously. Alternatively, you can use --x-networking with Docker Compose to create the network.
2016-01-19T19:01:00.000
| 2 | 0.099668 | false | 34,884,896 | 0 | 0 | 1 | 1 |
I'm trying to "dockerize" my java web application and finally run the docker image on EC2.
My application is a WAR file and connects to a database. There is also a python script which the application calls via REST. The python side uses the tornado webserver
Question 1:
Should I have the following Docker containers?
Container for Application Server (Tomcat 7)
Container for HTTP Server (nginx or httpd)
Container for postgres db
Container for python script (this will have tornado web server and my python script).
Question 2:
What is the best way to build dockerfile? I will have to do trial and error for what commands need to be put into the dockerfile for each container. Should I have an ubuntu VM on which I do trial and error and once I nail down which commands I need then put them into the dockerfile for that container?
|
Serving large files in AWS
| 34,899,601 | 0 | 0 | 191 | 1 |
python,mysql,amazon-web-services,nas
|
You can also use MongoDB, which provides several APIs for this; alternatively, you can store the files in an S3 bucket using multipart upload.
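On the S3 route, a minimal boto3 sketch (bucket and key names are placeholders); upload_file transparently switches to multipart upload for large objects:
import boto3

s3 = boto3.client('s3')
# handles the multipart chunking and retries internally
s3.upload_file('/data/big_file.bin', 'my-bucket', 'big_file.bin')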
| 0 | 1 | 0 | 0 |
2016-01-20T09:09:00.000
| 1 | 0 | false | 34,895,738 | 0 | 0 | 1 | 1 |
As part of a big system, I'm trying to implement a service that (among other tasks) will serve large files (up to 300 MB) to other servers (running in Amazon).
This file service needs to have more than one machine up and running at any time, and there are also multiple clients.
The service is written in Python, using the Tornado web server.
The first approach was using MySQL, but I figured I was going to have hell saving such big BLOBs, because of memory consumption.
I tried to look at Amazon's EFS, but it's not available in our region.
I heard about SoftNAS and am currently looking into it.
Any other good alternatives I should be checking?
|
robot framework with appium ( not able identify elements )
| 34,927,269 | -2 | 1 | 825 | 0 |
python,appium,robotframework
|
Switching to the (webview) context resolved this issue.
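A rough Robot Framework sketch of that switch using AppiumLibrary keywords (the webview name depends on your app's package, so treat it as a placeholder):
${contexts}=    Get Contexts
Log    ${contexts}
Switch To Context    WEBVIEW_com.example.app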
| 0 | 0 | 1 | 0 |
2016-01-20T14:57:00.000
| 1 | -0.379949 | false | 34,903,359 | 0 | 0 | 1 | 1 |
I am trying to automate a native Android app using Robot Framework + Appium with AppiumLibrary, and I was able to successfully open the application. From there my struggle begins: I'm not able to find any element on the screen through UI Automator Viewer, since the app I am testing runs in a web-view context and shows up as a single frame (no elements in it are identified). I spoke to the dev team and they gave me some static HTML pages where I could see some element ids for the app. I used those ids, but whenever I run the test it throws an error that the element doesn't match. The same app works with a Java + Appium TestNG framework. The only difference I can see between the two is that with the Java + Appium framework, the complete HTML source is returned when calling the page-source method on the Android driver object, whereas Robot returns the XML displayed in UI Automator Viewer (this XML doesn't contain the HTML source with the element ids, and Robot searches for the ids in this XML, hence it fails). I am totally confused and stuck here. Can someone help me with this issue?
|
Django PostgreSQL : migrating database to a different directory
| 34,922,249 | 0 | 1 | 62 | 1 |
python,django,postgresql,amazon-ec2
|
Ok, thanks for your answers. I used:
find . -name "postgresql.conf" to find the configuration file, which was located in the "/etc/postgresql/9.3/main" folder. There is also pg_lsclusters if you want to show the data directory.
Then I edited that file to point at the new path, restarted Postgres, and imported my old DB.
| 0 | 0 | 0 | 0 |
2016-01-20T16:46:00.000
| 1 | 0 | false | 34,905,744 | 0 | 0 | 1 | 1 |
I have a Django website running on an Amazon EC2 instance. I want to add an EBS volume. In order to do that, I need to change the location of my PGDATA directory, if I understand correctly. The new PGDATA path should be something like /vol/mydir/blabla.
I absolutely need to keep the data safe (some kind of dump could be useful).
Do you have any clues on how I can do that ? I can't seem to find anything relevant on the internet.
Thanks
|
Django: allow user to add fields to model
| 51,751,466 | 1 | 3 | 3,132 | 0 |
python,django,model
|
I would suggest storing JSON as a string in the database; that way it can be as extensible as you want and the field list can grow very long.
Edit:
If you are using other database backends you can use django-jsonfield. If you are using Postgres, it has native JSON field support for enhanced querying, etc.
Edit 2:
Using the Django MongoDB connector can also help.
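As a minimal sketch of the JSON idea on Postgres, using the native JSONField that Django 1.9 ships (model and field names are made up):

from django.contrib.postgres.fields import JSONField
from django.db import models

class Contact(models.Model):
    name = models.CharField(max_length=100)
    # holds the user-defined attributes, e.g. {"fax": "555-0100"}
    extra_fields = JSONField(default=dict)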
| 0 | 0 | 0 | 0 |
2016-01-20T17:47:00.000
| 3 | 0.066568 | false | 34,907,014 | 0 | 0 | 1 | 1 |
I am just starting with Django and want to create a model for an application.
I find Djangos feature to
- automatically define validations and html widget types for forms according to the field type defined in the model and
- define a choice set for the field right in the model
very usefull and I want to make best use of it. Also, I want to make best use of the admin interface.
However, what if I want to allow the user of the application to add fields to the model? For example, consider a simple address book. I want the user to be able to define additional attributes for all of his contacts in the admin settings, i.e. add a fax number field, so that a fax number can be added to all contacts.
From a relational DB perspective, I would have a table with attributes (PK: atr_ID, atr_name, atr_type) and an N:N relation between attributes and contacts with foreign keys from attributes and contacts - i.e. it would result in 3 tables in the DB. Right?
But that way I cannot define the field types directly in the Django model. Now what is best practice here? How can I make use of Django's functionality AND allow the user to add additional/custom fields via the admin interface?
Thank you! :)
Best
Teconomix
|
Celery Retry not working on AWS Beanstalk running Docker ver.1.6.2(Multi container)
| 34,942,922 | 0 | 0 | 83 | 0 |
python,django,amazon-web-services,celery
|
Noob mistake:
Turns out that I had another environment with similar code consuming from the same RabbitMQ server. It seems this other environment was picking up the retries.
| 0 | 1 | 0 | 0 |
2016-01-20T22:11:00.000
| 1 | 1.2 | true | 34,911,638 | 0 | 0 | 1 | 1 |
I am trying to implement retries in one of my Celery tasks; they work fine in my local development environment, but the retries don't execute when deployed to AWS Beanstalk.
|
Trying to view html on yikyak.com, get "browser out of date" page
| 34,937,195 | 0 | 0 | 62 | 0 |
python,google-chrome,selenium
|
I figured it out. I was being dumb. I saved off the HTML as a file and opened that file with Chrome, and it displayed the normal page. I just didn't see that it was a normal page when looking at it directly. Thanks all 15 people for your time.
| 0 | 0 | 1 | 0 |
2016-01-21T03:48:00.000
| 1 | 0 | false | 34,915,058 | 0 | 0 | 1 | 1 |
Question: yikyak.com returns some sort of "browser not supported" landing page when I try to view source code in chrome (even for the page I'm logged in on) or when I write it out to the Python terminal. Why is this and what can I do to get around it?
Edit for clarification: I'm using the Chrome webdriver. I can navigate around the Yik Yak website by clicking on it just fine. But whenever I try to see what HTML is on the page, I get the HTML of a "browser not supported" page.
Background: I'm trying to access yikyak.com with selenium for python to download yaks and do fun things with them. I know fairly little about web programming.
Thanks!
Secondary, less important question: If you're already here, are there particularly great free resources for a super-quick intro to the certification knowledge I need to store logins and stuff like that to use my logged in account? That would be awesome.
|
Secure Azure API rest calls in javascript
| 35,052,618 | 0 | 1 | 421 | 0 |
javascript,python,azure,azure-api-management
|
It seems that we cannot hide the subscription ID in the JS code, because the JS code has to send the HTTP request with this key, and anyone can capture the subscription ID with Fiddler.
An alternative approach is to send this HTTP request from the server side: the client calls a server-side method via Ajax, the server calls the Azure API with the subscription key kept in server-side configuration, and returns the result. Using this method, others cannot see the subscription ID in the JS code.
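As a rough sketch of that server-side approach with Flask (the route, upstream URL, and key variable are hypothetical; Ocp-Apim-Subscription-Key is the header API Management expects):

import requests
from flask import Flask, jsonify

app = Flask(__name__)
SUBSCRIPTION_KEY = 'read-from-server-side-config'  # never shipped to the browser

@app.route('/api/proxy/items')
def proxy_items():
    # the key is attached here, on the server, so the JS never sees it
    resp = requests.get('https://myapi.azure-api.net/items',
                        headers={'Ocp-Apim-Subscription-Key': SUBSCRIPTION_KEY})
    return jsonify(resp.json())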
| 0 | 0 | 0 | 0 |
2016-01-21T09:35:00.000
| 3 | 0 | false | 34,919,957 | 0 | 0 | 1 | 2 |
We are using Azure API Management, which maps to a Python Flask API. We are making JavaScript Ajax calls to the Azure APIs and are currently placing the subscription key directly in the query parameters of those calls.
Now anyone who has access to this key (by opening developer tools or viewing the source) can access the APIs as well.
Is there a way to hide the subscription key in Ajax calls?
|
Secure Azure API rest calls in javascript
| 35,021,640 | 1 | 1 | 421 | 0 |
javascript,python,azure,azure-api-management
|
You can use a JSON Web Token (JWT) in the request; it carries a signature and an expiration time.
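A minimal sketch with the PyJWT library (the secret and claims are placeholders):

import datetime
import jwt

SECRET = 'shared-server-secret'
# issue a token that expires in 5 minutes
token = jwt.encode({'sub': 'user-id',
                    'exp': datetime.datetime.utcnow() + datetime.timedelta(minutes=5)},
                   SECRET, algorithm='HS256')
# verification raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on bad tokens
claims = jwt.decode(token, SECRET, algorithms=['HS256'])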
| 0 | 0 | 0 | 0 |
2016-01-21T09:35:00.000
| 3 | 0.066568 | false | 34,919,957 | 0 | 0 | 1 | 2 |
We are using Azure API Management, which maps to a Python Flask API. We are making JavaScript Ajax calls to the Azure APIs and are currently placing the subscription key directly in the query parameters of those calls.
Now anyone who has access to this key (by opening developer tools or viewing the source) can access the APIs as well.
Is there a way to hide the subscription key in Ajax calls?
|
Should a Java program and python program that are related co-exist in same git repo?
| 34,925,646 | 2 | 1 | 666 | 0 |
java,python,git,github,packaging
|
Since you'll generally be compiling and distributing them separately, I'd suggest separate repos. They are, in that case, separate projects.
One artifact per project keeps things nice and simple: one build command per output. Having multiple projects in one repo means a complex directory structure, lots of build tool customisation (in the case of, say, Maven) and possibly complex build commands.
It does mean, however, that any communication changes will need to be made in two projects, but as client and server are in different languages you'd need to do that anyway.
| 0 | 0 | 0 | 0 |
2016-01-21T13:45:00.000
| 1 | 1.2 | true | 34,925,511 | 1 | 0 | 1 | 1 |
I'm developing two programs for a project, one client-side and one server-side, where the client program is in python and the server is in java.
My question is are there guidelines (e.g. by github, subversion, etc.) stating that these two should or should not co-exist in the same git repo?
|
Django, global variables and tokens
| 34,926,130 | 1 | 0 | 382 | 0 |
python,django,asynchronous,global-variables
|
You can store your token in the Django cache; it will be faster than database or disk storage in most cases.
Another approach is to use Redis.
You can also calculate your token:
save some shared secret token in the settings of both servers
calculate the token based on the current timestamp rounded to 10 seconds, for example using:
import hashlib
token = hashlib.sha1(secret_token.encode('utf-8'))
token.update(str(rounded_timestamp).encode('utf-8'))
token = token.hexdigest()
if the token generated on the remote server when POSTing the request matches the token generated on the local server when receiving the response, the request is valid and can be processed.
| 0 | 0 | 0 | 0 |
2016-01-21T14:04:00.000
| 2 | 1.2 | true | 34,925,917 | 0 | 0 | 1 | 2 |
I'm using Django to develop a website. On the server side, I need to transfer some data that must be processed on a second server (on a different machine). I then need a way to retrieve the processed data. I figured that the simplest would be to send a POST request back to the Django server, which would then be handled by a view dedicated to that job.
But I would like to add some minimum security to this process: when I transfer the data to the other machine, I want to attach a randomly generated token to it. When I get the processed data back, I expect to also get back the same token; otherwise the request is ignored.
My problem is the following: How do I store the generated token on the Django server?
I could use a global variable, but I had the impression, browsing here and there on the web, that global variables should not be used for safety reasons (not that I really understand why).
I could store the token on disk/database, but it seems to be an unjustified waste of performance (even if in practice it would probably not change much).
Is there a third solution, or a canonical way to do such a thing using Django?
|
Django, global variables and tokens
| 34,926,433 | 1 | 0 | 382 | 0 |
python,django,asynchronous,global-variables
|
The simple obvious solution would be to store the token in your database. Other possible solutions are Redis or something similar. Finally, you can have a look at distributed async task queues like Celery...
| 0 | 0 | 0 | 0 |
2016-01-21T14:04:00.000
| 2 | 0.099668 | false | 34,925,917 | 0 | 0 | 1 | 2 |
I'm using Django to develop a website. On the server side, I need to transfer some data that must be processed on a second server (on a different machine). I then need a way to retrieve the processed data. I figured that the simplest would be to send a POST request back to the Django server, which would then be handled by a view dedicated to that job.
But I would like to add some minimum security to this process: when I transfer the data to the other machine, I want to attach a randomly generated token to it. When I get the processed data back, I expect to also get back the same token; otherwise the request is ignored.
My problem is the following: How do I store the generated token on the Django server?
I could use a global variable, but I had the impression, browsing here and there on the web, that global variables should not be used for safety reasons (not that I really understand why).
I could store the token on disk/database, but it seems to be an unjustified waste of performance (even if in practice it would probably not change much).
Is there a third solution, or a canonical way to do such a thing using Django?
|
Django: I have to login to admin site before the application
| 34,926,127 | 2 | 0 | 42 | 0 |
python,django,django-admin
|
This might happen due to the following combination of circumstances:
The view you are accessing requires authentication (check for the @login_required decorator on the view)
Therefore, when you access anonymously it is trying to redirect you to the login page (check your LOGIN_URL setting in settings.py)
Then, when your browser tries to reach this login page, it is not found (404)
So remove @login_required if it isn't really necessary, or make sure your login redirect is well configured and pointing to a url that actually provides a login page.
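For illustration, a minimal sketch of the pieces involved (the view name and URL are hypothetical):

# settings.py
LOGIN_URL = '/accounts/login/'  # where @login_required sends anonymous users

# views.py
from django.contrib.auth.decorators import login_required
from django.shortcuts import render

@login_required
def dashboard(request):
    return render(request, 'dashboard.html')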
| 0 | 0 | 0 | 0 |
2016-01-21T14:06:00.000
| 1 | 0.379949 | false | 34,925,948 | 0 | 0 | 1 | 1 |
I have a weird problem with my Django application and I have no idea where to look to fix it. Whenever I run my application, before I can log in and do stuff on it I have to log in to the admin site, or else it throws a "Page not found (404)" error when I try to log in to the application as a normal (non-admin) user.
Any ideas on what may be causing this and how I can fix it?
|
Set degree of parallelism for a single operation in Python
| 37,719,013 | 1 | 1 | 370 | 0 |
python,apache-flink
|
For users who are unaware, Apache Flink added this feature a couple of months back.
Here is the short doc from Flink:
The default parallelism can be overwritten for an entire job by calling setParallelism(int parallelism) on the ExecutionEnvironment or by passing -p to the Flink command-line frontend. It can be overwritten for single transformations by calling setParallelism(int parallelism) on an operator.
| 0 | 0 | 0 | 0 |
2016-01-21T20:38:00.000
| 2 | 0.099668 | false | 34,933,833 | 0 | 0 | 1 | 1 |
I execute my program with a dop > 1 but I do not want multiple output files. In Java, myDataSet.writeAsText(outputFilePath, WriteMode.OVERWRITE).setParallelism(1); works as expected.
But when I try the same in Python it does not work. This is my code: myDataSet.write_text(output_file, write_mode=WriteMode.OVERWRITE).set_degree_of_parallelism(1)
Is there a possibility to achieve this behaviour in Python?
|
python version for robot framework selenium2library (Windows10)
| 34,997,071 | 0 | 0 | 2,741 | 0 |
python-2.7,python-3.x,selenium-webdriver,robotframework
|
With python 2.7.9 you can only install robotframework 2.9
With python 3.X you can install robotframework 3.x+ but as Bryan Oakley said, Selenium2Library is not yet supported ;)
| 0 | 0 | 1 | 0 |
2016-01-21T23:02:00.000
| 2 | 0 | false | 34,936,039 | 0 | 0 | 1 | 1 |
Env: Windows 10 Pro
I installed python 2.7.9 and using pip installed robotframework and robotframework-selenium2library and it all worked fine with no errors.
Then I was doing some research and found that unless there is a reason for me to use 2.x versions of Python, I should stick with 3.x versions. Since 3.4 support already exists for Selenium2Library (read somewhere), I decided to switch to it.
I uninstalled python 2.7.9 and installed python 3.4 version. When I installed robotframerwork, I am getting the following:
C:\Users\username>pip install robotframework
Downloading/unpacking RobotFramework
Running setup.py (path:C:\Users\username\AppData\Local\Temp\pip_build_username\RobotFramework\setup.py) egg_info for package RobotFramework
no previously-included directories found matching 'src\robot\htmldata\testdata'
Installing collected packages: RobotFramework
Running setup.py install for RobotFramework
File "C:\Python34\Lib\site-packages\robot\running\timeouts\ironpython.py", line 57
raise self._error[0], self._error[1], self._error[2]
^
SyntaxError: invalid syntax
File "C:\Python34\Lib\site-packages\robot\running\timeouts\jython.py", line 56
raise self._error[0], self._error[1], self._error[2]
^
SyntaxError: invalid syntax
no previously-included directories found matching 'src\robot\htmldata\testdata'
replacing interpreter in robot.bat and rebot.bat.
Successfully installed RobotFramework
Cleaning up...
When I did pip list I do see robotframework is installed.
C:\Users\username>pip list
pip (1.5.4)
robotframework (3.0)
setuptools (2.1)
Should I be concerned and stick to Python 2.7.9?
|
How to display the webdriver window only in one specific condition using selenium (python)?
| 34,969,465 | 0 | 0 | 33 | 0 |
python,selenium,selenium-webdriver
|
No, it's not possible.
The only way is to use several browsers.
As an example:
run PhantomJS (a headless browser)
do the needed actions
run Firefox and perform the login
copy the cookies after login
paste the cookies into PhantomJS
close Firefox
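In code, the cookie hand-off could look roughly like this (URLs are placeholders; both drivers must be on the same domain before cookies can be copied over):

from selenium import webdriver

headless = webdriver.PhantomJS()
visible = webdriver.Firefox()

visible.get('https://example.com/login')
# ... the user logs in manually in the visible window ...

headless.get('https://example.com')  # must visit the domain before adding cookies
for cookie in visible.get_cookies():
    headless.add_cookie(cookie)
visible.quit()
headless.get('https://example.com/account')  # now authenticated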
| 0 | 0 | 1 | 0 |
2016-01-22T16:07:00.000
| 1 | 0 | false | 34,950,994 | 0 | 0 | 1 | 1 |
How do I display the screen of my webdriver only in a specific case using Selenium?
I only want to display the window to the user when the title of the page is, for example, "XXX"; the user then types something in the window, the window closes again, and the robot continues doing what it should do in the background.
Is it possible?
Thanks,
|
Disable or restrict /o/applications (django rest framework, oauth2)
| 35,040,802 | 1 | 5 | 873 | 0 |
python,django,rest,oauth
|
Solution found!
In fact, the reason why /o/applications was accessible is that I had a super-admin session open.
Everything is great, then :)
| 0 | 0 | 0 | 0 |
2016-01-23T02:42:00.000
| 2 | 1.2 | true | 34,959,031 | 0 | 0 | 1 | 1 |
I am currently writing a REST API using Django REST framework, with OAuth2 for authentication (using django-oauth-toolkit). I'm very happy with both of them; they do exactly what I want.
However, I have one concern. I'm moving my app to production, and realized there might be a problem with the /o/applications/ view, which is accessible to everyone!
I was surprised not to find anything about it in the docs, nor when googling it. Did I miss something?
Some ideas were to either make a custom view requiring super-user authentication (but this would be weird, as it would mix different kinds of authentication, wouldn't it?), or route /o/applications/ to a dummy 401 or 403 view.
But these sound quite hacky to me... Isn't there any official "best" solution for this? I'd be very surprised if I'm the first one running into this issue; I must have missed something...
Thanks in advance!
|
How to decrease to split data put in task queue, Google app engine with Python
| 34,976,107 | 0 | 0 | 96 | 0 |
python,google-app-engine,task-queue
|
Check the size of the payload (arguments) you are sending to the task queue.
If it's more than a few KB in size, you need to store it in the datastore and send the key of the object holding the data to the task queue
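A minimal sketch of that indirection (the Payload model and worker URL are made up):

from google.appengine.api import taskqueue
from google.appengine.ext import ndb

class Payload(ndb.Model):
    data = ndb.TextProperty()

key = Payload(data=big_string).put()  # big_string is the oversized payload
# pass only the small urlsafe key, not the data itself
taskqueue.add(url='/worker', params={'payload_key': key.urlsafe()})

# inside the worker handler (webapp2):
payload = ndb.Key(urlsafe=self.request.get('payload_key')).get()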
| 0 | 1 | 0 | 0 |
2016-01-24T12:52:00.000
| 2 | 0 | false | 34,976,025 | 0 | 0 | 1 | 2 |
I encountered the error "RequestTooLargeError: The request to API call datastore_v3.Put() was too large.".
After looking through the code, it happens at the place where the task queue is used.
So how can I split a large queue task into several smaller ones?
|
How to decrease to split data put in task queue, Google app engine with Python
| 34,977,778 | 0 | 0 | 96 | 0 |
python,google-app-engine,task-queue
|
The maximum size of a task is 100KB. That's a lot of data. It's hard to give specific advice without looking at your code, but I would mention this:
If you pass a collection to be processed in a task in a loop, than the obvious solution is to split the entire collection into smaller chunks, e.g. instead of passing 1000 entities to one task, pass 100 entities to 10 tasks.
If you pass a collection to a task that cannot be split into chunks (e.g. you need to calculate totals, averages, etc.), then don't pass this collection, but query/retrieve it in the task itself. Every task is saved back to the datastore, so you don't win much by passing the collection to the task - it has to be retrieved from the datastore anyway.
If you pass a very large object to a task, pass only data that the task actually needs. For example, if your task sends an email message, you may want to pass Email, Name, and Message, instead of passing the entire User entity which may include a lot of other properties.
Again, 100KB is a lot of data. If you are not using a loop to process many entities in your task, the problem with the task queue may indicate that there is a bigger problem with your data model in general if you have to push around so much data every time. You may want to consider splitting huge entities into several smaller entities.
| 0 | 1 | 0 | 0 |
2016-01-24T12:52:00.000
| 2 | 1.2 | true | 34,976,025 | 0 | 0 | 1 | 2 |
I encountered the error "RequestTooLargeError: The request to API call datastore_v3.Put() was too large.".
After looking through the code, it happens at the place where the task queue is used.
So how can I split a large queue task into several smaller ones?
|
PyCharm & VirtualEnvs - How To Remove Legacy
| 34,993,725 | 9 | 3 | 16,475 | 0 |
python,django,pycharm
|
You can clean out old PyCharm interpreters that are no longer associated with a project via Settings -> Project Interpreter, click on the gear in the top right, then click "More". This gives you a listing where you can get rid of old virtualenvs that PyCharm thinks are still around. This will prevent the "(1)", "(2)" part.
You don't want to make the virtualenv into the content root. Your project's code is the content root.
As a suggestion:
Clear out all the registered virtual envs
Make a virtualenv, outside of PyCharm
Create a new project using PyCharm's Django template
You should then have a working example.
| 0 | 0 | 0 | 0 |
2016-01-24T17:42:00.000
| 3 | 1.2 | true | 34,979,145 | 1 | 0 | 1 | 3 |
How do I remove all traces of legacy projects from PyCharm?
Background: I upgraded from PyCharm Community Edition to PyCharm Pro Edition today.
Reason was so I could work on Django projects, and in particular, a fledgling legacy project called 'deals'.
I deleted the legacy project folders.
I then opened the Pro Edition and went through the steps of creating a Django project called 'deals' with a python3.4 interpreter in a virtualenv.
It didn't work, I got an error message saying something about a missing file, and in the PyCharm project explorer, all I could see was
deals
.ideas
So I deleted it (ie. deleted the folders in both ~/.virtualenvs/deals and ~/Projects/deals).
I tried again, although this time I got an interpreter with a number suffix, ie. python3.4 (1).
I continued, and got the same empty file structure.
I deleted both folders again, and went and cleaned out the interpreters in Settings > Project Interpreters.
I then tried again, getting 'new' interpreters,until I finally had python3.4 (5)
Plus, along the way I also invalidated the caches and restarted.
(ie. File > Invalidate Caches/Restart)
Then to prove if it works at all, I tried a brand new name 'junk'.
This time it worked fine, and I could see the Django folders in the PyCharm explorer. Great.
But I really want to work on a project called 'deals'.
So I deleted all the 'deal's folders again, and tried to create a deals Django project again.
Same result.
After googling, I went to the Settings > Project Structure > + Content Root, and pointed it to the folder at ~/.virtual/deals.
Ok, so now I could see the files in the virtual env, but there's no Django files, and plus, the project folder was separate to the virtualenv folder, eg
deals
deals (~/project/deals) <- separate
deals (~/.virtualenvs/deals) <- separate
deals
init.py
settings.py
urls.py
wsgi.py
manage.py
Really stuck now.
Any advice on how to get this working please?
Eg. how do I
(i) get it back to 'cleanskin' so that I can start up a Django project and get the proper folders in the project space.
(ii) get it working with virtualenv, and ensure that the interpreter doesn't have a number suffix, such as python3.4(6)
Many thanks in advance,
Chris
|
PyCharm & VirtualEnvs - How To Remove Legacy
| 60,949,461 | 0 | 3 | 16,475 | 0 |
python,django,pycharm
|
In addition to the answer above, which removed the venv from the PyCharm list, I also had to go into my ~/venvs directory and delete the associated folder in there.
That did the trick.
| 0 | 0 | 0 | 0 |
2016-01-24T17:42:00.000
| 3 | 0 | false | 34,979,145 | 1 | 0 | 1 | 3 |
How do I remove all traces of legacy projects from PyCharm?
Background: I upgraded from PyCharm Community Edition to PyCharm Pro Edition today.
Reason was so I could work on Django projects, and in particular, a fledgling legacy project called 'deals'.
I deleted the legacy project folders.
I then opened the Pro Edition and went through the steps of creating a Django project called 'deals' with a python3.4 interpreter in a virtualenv.
It didn't work, I got an error message saying something about a missing file, and in the PyCharm project explorer, all I could see was
deals
.ideas
So I deleted it (ie. deleted the folders in both ~/.virtualenvs/deals and ~/Projects/deals).
I tried again, although this time I got an interpreter with a number suffix, ie. python3.4 (1).
I continued, and got the same empty file structure.
I deleted both folders again, and went and cleaned out the interpreters in Settings > Project Interpreters.
I then tried again, getting 'new' interpreters,until I finally had python3.4 (5)
Plus, along the way I also invalidated the caches and restarted.
(ie. File > Invalidate Caches/Restart)
Then to prove if it works at all, I tried a brand new name 'junk'.
This time it worked fine, and I could see the Django folders in the PyCharm explorer. Great.
But I really want to work on a project called 'deals'.
So I deleted all the 'deal's folders again, and tried to create a deals Django project again.
Same result.
After googling, I went to the Settings > Project Structure > + Content Root, and pointed it to the folder at ~/.virtual/deals.
Ok, so now I could see the files in the virtual env, but there's no Django files, and plus, the project folder was separate to the virtualenv folder, eg
deals
deals (~/project/deals) <- separate
deals (~/.virtualenvs/deals) <- separate
deals
init.py
settings.py
urls.py
wsgi.py
manage.py
Really stuck now.
Any advice on how to get this working please?
Eg. how do I
(i) get it back to 'cleanskin' so that I can start up a Django project and get the proper folders in the project space.
(ii) get it working with virtualenv, and ensure that the interpreter doesn't have a number suffix, such as python3.4(6)
Many thanks in advance,
Chris
|
PyCharm & VirtualEnvs - How To Remove Legacy
| 63,129,392 | 0 | 3 | 16,475 | 0 |
python,django,pycharm
|
When a virtualenv is enabled, there will be a 'V' symbol active in the bottom part of PyCharm, on the same line as Terminal and TODO. When you click on the 'V', the first entry will show as enabled with a tick mark. Just click on it again and it will be disabled. As simple as that.
| 0 | 0 | 0 | 0 |
2016-01-24T17:42:00.000
| 3 | 0 | false | 34,979,145 | 1 | 0 | 1 | 3 |
How do I remove all traces of legacy projects from PyCharm?
Background: I upgraded from PyCharm Community Edition to PyCharm Pro Edition today.
Reason was so I could work on Django projects, and in particular, a fledgling legacy project called 'deals'.
I deleted the legacy project folders.
I then opened the Pro Edition and went through the steps of creating a Django project called 'deals' with a python3.4 interpreter in a virtualenv.
It didn't work, I got an error message saying something about a missing file, and in the PyCharm project explorer, all I could see was
deals
.ideas
So I deleted it (ie. deleted the folders in both ~/.virtualenvs/deals and ~/Projects/deals).
I tried again, although this time I got an interpreter with a number suffix, ie. python3.4 (1).
I continued, and got the same empty file structure.
I deleted both folders again, and went and cleaned out the interpreters in Settings > Project Interpreters.
I then tried again, getting 'new' interpreters,until I finally had python3.4 (5)
Plus, along the way I also invalidated the caches and restarted.
(ie. File > Invalidate Caches/Restart)
Then to prove if it works at all, I tried a brand new name 'junk'.
This time it worked fine, and I could see the Django folders in the PyCharm explorer. Great.
But I really want to work on a project called 'deals'.
So I deleted all the 'deal's folders again, and tried to create a deals Django project again.
Same result.
After googling, I went to the Settings > Project Structure > + Content Root, and pointed it to the folder at ~/.virtual/deals.
Ok, so now I could see the files in the virtual env, but there's no Django files, and plus, the project folder was separate to the virtualenv folder, eg
deals
deals (~/project/deals) <- separate
deals (~/.virtualenvs/deals) <- separate
deals
init.py
settings.py
urls.py
wsgi.py
manage.py
Really stuck now.
Any advice on how to get this working please?
Eg. how do I
(i) get it back to 'cleanskin' so that I can start up a Django project and get the proper folders in the project space.
(ii) get it working with virtualenv, and ensure that the interpreter doesn't have a number suffix, such as python3.4(6)
Many thanks in advance,
Chris
|
How to fix a Deprecation Warning in Django 1.9
| 35,009,997 | 3 | 4 | 2,418 | 0 |
python,django,rest,django-views,django-rest-framework
|
You don't have to "fix" deprecation warnings as they are, well, only warnings, and things still work. However, if you decide to upgrade, they might break your app. So it's usually a good idea to rewrite the parts with warnings to the new interfaces hinted at in those warnings, if they are in your code. If they are in some side library you use, you might want to wait and see whether the library creator updates the library in the next release.
Regarding your particular warnings: unless you decide to upgrade to Django 1.10, your code should work fine.
| 0 | 0 | 0 | 0 |
2016-01-25T20:48:00.000
| 2 | 1.2 | true | 35,002,061 | 0 | 0 | 1 | 1 |
I am a new user of the Django Framework. I am currently building a REST API with the django_rest_framework. When starting my server I am getting deprecation warnings that I have no idea how to fix.
RemovedInDjango110Warning: 'get_all_related_objects is an unofficial API that has been deprecated. You may be able to replace it with 'get_fields()'
for relation in opts.get_all_related_objects()
The above is the first of these. Does anyone know how to fix this issue. All I have in my API at the minute is standard rest calls using the built in ModelViewSet and I have also overwritten the default authentication & user system with my own so I have no idea why I'm getting these warnings as I have been using Django 1.9 from the start.
I also got this:
RemovedInDjango110Warning: render() must be called with a dict, not a RequestContext
From my initial research this is related to templates. I am not using any templates so I don't know why this is coming up.
Can anyone help me to fix these issues?
|
Connection drop with IBM Watson Server
| 35,411,963 | 2 | 1 | 223 | 0 |
python,tcp,twisted,ibm-watson
|
That's our fault. We experienced an issue with WebSocket connections being dropped when the service was under heavy load.
| 0 | 0 | 0 | 0 |
2016-01-26T08:36:00.000
| 1 | 0.379949 | false | 35,009,726 | 0 | 0 | 1 | 1 |
I have been using IBM watson speech to text over websockets and since recently there are connection drops in the middle of process or handshake issues.
This is the error log and it can't process audio files after 1-2 minutes of handshake:
_connectionLost: [Failure instance: Traceback (failure with no frames): : Connection was closed cleanly.
('WebSocket connection closed: connection was closed uncleanly (peer dropped the TCP connection without previous WebSocket closing handshake)', 'code: ', 1006, 'clean: ', False)
Can somebody help me understand what exactly is going wrong? I am currently running the process through a virtual machine, but the problem persists even with a local machine implementation. Is there a problem with the Watson server?
|
Catching API response for insufficient funds from Stripe
| 35,018,652 | 2 | 1 | 1,120 | 0 |
python,stripe-payments
|
That would trigger a declined charge, which is a card_error. It can be simulated with this test card number: 4000000000000002 (the charge will be declined with a card_declined code).
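A minimal sketch of catching it with the stripe Python library (the amount and token are placeholders):

import stripe

try:
    stripe.Charge.create(amount=2000, currency='usd', source='tok_visa')
except stripe.error.CardError as e:
    err = e.json_body['error']
    if err.get('code') == 'card_declined':
        # insufficient funds, stolen card, etc. all surface as a decline
        print('Declined:', err.get('message'))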
| 0 | 0 | 1 | 0 |
2016-01-26T15:07:00.000
| 1 | 1.2 | true | 35,017,039 | 0 | 0 | 1 | 1 |
If I create a Charge object via the Stripe API and the card is valid, but the charge is declined, what error does this cause? It doesn't look to be possible to simulate this error in the test sandbox and I'd like to be able to catch it (and mock it in tests), but the documentation isn't clear on this point.
|
Multiple tuples in unique_together
| 35,024,190 | 24 | 23 | 5,860 | 1 |
python,django,django-models
|
Each tuple results in a discrete UNIQUE clause being added to the CREATE TABLE query. As such, each tuple is independent and an insert will fail if any data integrity constraint is violated.
| 0 | 0 | 0 | 0 |
2016-01-26T21:10:00.000
| 1 | 1.2 | true | 35,024,007 | 0 | 0 | 1 | 1 |
When I am defining a model and using unique_together in the Meta, I can define more than one tuple. Are these going to be ORed or ANDed? That is, let's say I have a model where
class MyModel(models.Model):
druggie = ForeignKey('druggie', null=True)
drunk = ForeignKey('drunk', null=True)
quarts = IntegerField(null=True)
ounces = IntegerField(null=True)
class Meta:
unique_together = (('drunk', 'quarts'),
('druggie', 'ounces'))
either both druggie and ounces are unique or both drunk and quarts are unique, but not both.
|
Do i need Virtual environment for lxml and beautiful soup in linux?
| 35,028,988 | 0 | 0 | 104 | 0 |
python-2.7,beautifulsoup,lxml
|
That's a matter of personal preference, however in most cases the benefits of installing libraries in a virtual environment far outweigh the costs.
Setting up virtualenv (and perhaps virtualenvwrapper), creating an environment for your project, and activating it will take 2-10 minutes (depending on your familiarity with the system) before you can start work on your project itself, but it may save you a lot of hassle further down the line. I would recommend that you do so.
| 0 | 0 | 1 | 0 |
2016-01-27T04:29:00.000
| 2 | 0 | false | 35,028,910 | 1 | 0 | 1 | 2 |
I am doing a data scraping project in Python. For that I need to use Beautiful Soup and lxml. Should I install them globally or in a virtual environment?
|
Do i need Virtual environment for lxml and beautiful soup in linux?
| 35,029,148 | 3 | 0 | 104 | 0 |
python-2.7,beautifulsoup,lxml
|
Well, using or not using a virtual environment is up to you, but it is always best practice to use virtualenv and virtualenvwrapper, so that if something unusual happens with your project and its dependencies it won't hamper the Python residing at the system level.
It might happen that in the future you need to work with a different version of lxml or BeautifulSoup; if you do not use a virtual environment, you will have to upgrade or downgrade the libraries, and then your older project will not run because you have upgraded or downgraded everything in the system-level Python. Therefore it is wise to start using best practices as early as possible, to save time and effort.
| 0 | 0 | 1 | 0 |
2016-01-27T04:29:00.000
| 2 | 1.2 | true | 35,028,910 | 1 | 0 | 1 | 2 |
I am doing a data scraping project in Python. For that I need to use Beautiful Soup and lxml. Should I install them globally or in a virtual environment?
|
Using web2py for a user frontend crud
| 35,039,883 | 0 | 0 | 670 | 1 |
python,mysql,frontend,crud,web2py
|
This double/redundant way of talking to my DB strikes me as odd and web2py does not support python3.
Any abstraction you want to use to communicate with your database (whether it be the web2py DAL, the Django ORM, SQLAlchemy, etc.) will have to have some knowledge of the database schema in order to construct queries.
Even if you programmatically generated all the SQL statements yourself without use of an ORM/DAL, your code would still have to have some knowledge of the database structure (i.e., somewhere you have to specify names of tables and fields, etc.).
For existing databases, we aim to automate this process via introspection of the database schema, which is the purpose of the extract_mysql_models.py script. If that script isn't working, you should report an issue on Github and/or open a thread on the web2py Google Group.
Also, note that when creating a new database, web2py helps you avoid redundant specification of the schema by handling migrations (including table creation) for you -- so you specify the schema only in web2py, and the DAL will automatically create the tables in the database (of course, this is optional).
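For an existing database, a rough DAL sketch might look like this (connection string, table, and fields are placeholders); migrate=False tells web2py not to touch the existing schema:

db = DAL('mysql://user:password@localhost/consulting', migrate_enabled=False)
db.define_table('consultation',
    Field('client', 'string'),
    Field('consultant', 'string'),
    Field('hours', 'double'),
    migrate=False)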
| 0 | 0 | 0 | 0 |
2016-01-27T13:21:00.000
| 1 | 0 | false | 35,038,543 | 0 | 0 | 1 | 1 |
I was asked to port a Access database to MySQL and
provide a simple web frontend for the users.
The DB consists of 8-10 tables and stores data about
clients consulting (client, consultant,topic, hours, ...).
I need to provide a webinterface for our consultants to use,
where they insert all this information during a session into a predefined mask/form.
My initial thought was to port the Access-DB to MySQL, which I have done
and then use the web2py framework to build a user interface with login,
inserting data, browse/scroll through the cases and pulling reports.
web2py with user management, a few sample views & controllers, and the MySQL DB are running. I added the DB to the DAL in web2py, but then I noticed that with web2py it is mandatory to define every table again in web2py for it to be able to communicate with the SQL server.
While struggling to successfully run the extract_mysql_models.py script
to export the structure of the already existing SQL DB for use in web2py
concerns about web2py are accumulating.
This double/redundant way of talking to my DB strikes me as odd and
web2py does not support python3.
Is web2py the correct way to fulfill my task or is there better way?
Thank you very much for listening/helping out.
|
Download and serve image or store link to image? Scale + Security
| 35,049,943 | 0 | 0 | 53 | 0 |
python,django,performance,security
|
I think the best solution for this is to download the first image with BeautifulSoup (as you are currently doing) and then upload it to a CDN (like AWS S3, Google Cloud Storage, etc.) and save only the link to that image in your model. So the next time you view that link, you will just serve the image from your CDN.
This solution is very secure and can scale up!
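A rough sketch of that download-then-upload flow (bucket, key, and model field names are made up):

import boto3
import requests

resp = requests.get(image_url, timeout=10)  # image_url scraped via BeautifulSoup
s3 = boto3.client('s3')
s3.put_object(Bucket='my-thumbnails', Key=thumb_key,
              Body=resp.content, ContentType=resp.headers.get('Content-Type'))
# store only the resulting URL on the model
link.thumbnail_url = 'https://my-thumbnails.s3.amazonaws.com/%s' % thumb_key
link.save()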
| 0 | 0 | 0 | 0 |
2016-01-27T17:55:00.000
| 1 | 0 | false | 35,044,665 | 0 | 0 | 1 | 1 |
I am building a webapp in Django that allows users to post links. When they post a link, I want to display a thumbnail image for the link. Right now, I simply download the first image on the linked page (using BeautifulSoup), store it in my Django model, and then serve it with the model.
I am wondering whether this is the best solution, from both a scale and a security perspective. Would a better solution be to simply store a link to the original image on the original website, and then have the user's browser request that image from the linked website directly?
Would the second solution be faster and safer than downloading all the images onto my server? I am also worried about whether downloading and serving thousands of images will scale, as well as how to protect the app from images on malicious sites.
|
Cannot install Python Package with docker-compose
| 35,066,625 | 12 | 6 | 9,159 | 0 |
python,docker,docker-compose
|
It looks like you ran the pip install in a one-off container. That means your package isn't going to be installed in subsequent containers created with docker-compose up or docker-compose run. You need to install your dependencies in the image, usually by adding the pip install command to your Dockerfile. That way, all containers created from that image will have the dependencies available.
| 0 | 1 | 0 | 0 |
2016-01-28T16:06:00.000
| 1 | 1.2 | true | 35,066,307 | 0 | 0 | 1 | 1 |
I am running a Django project with docker. Now I want to install a Python package inside the Docker container and run the following command:
docker-compose django run pip install django-extra-views
Now when I do docker-compose up, I get an error ImportError: No module named 'extra_views'. docker-compose django run pip freeze doesn't show the above package either.
Am I missing something?
|
Is it ideal to replace django forms with django rest framework serializers in HTML terms
| 35,081,076 | 0 | 0 | 117 | 0 |
python,django,django-rest-framework
|
I don't know if it's "better", but it can help to keep things DRY.
I haven't done it yet, but it's something I'm considering for my next projects.
| 0 | 0 | 0 | 0 |
2016-01-29T09:31:00.000
| 1 | 0 | false | 35,081,055 | 0 | 0 | 1 | 1 |
I am trying to build a RESTful web app, and since the serializers are very similar to Django forms and can be rendered in HTML, I was wondering whether it is better to use the serializers rather than the Django forms.
|
How to get a list of Stomp queues or/and topics (their names) as a client?
| 35,082,799 | 2 | 1 | 1,119 | 0 |
java,python,ruby,activemq,stomp
|
Well, the simple answer is that you can't. That's not part of the Stomp protocol.
The complex answer, as always, is "it depends". It's entirely possible that whatever is providing your stomp service will have something that you can use. (In RabbitMQ, for example, you can log in to the web interface and look at the current queue names).
However, the whole point of Stomp (and to a certain extent of all messaging) is that there aren't really "destinations", just queues which can be read by one or more clients. And the queues are transient; you might find the information goes stale pretty quickly...
| 0 | 0 | 0 | 0 |
2016-01-29T10:20:00.000
| 1 | 1.2 | true | 35,082,015 | 0 | 0 | 1 | 1 |
In Stomp, how can I browse all the available queues and/or topics? Is it possible at all?
The key here is to get the result and the language is not important, it can be either python, ruby or java because as I've found out it's easier to do this particular task using them because of the existing libraries. Python seems to have only one most popular library, though.
|
Performing a blocking request in django view
| 35,083,287 | 2 | 5 | 2,469 | 0 |
python,django,celery
|
The usual solution here is to offload the task to celery, and return a "please wait" response in your view. If you want, you can then use an Ajax call to periodically hit a view that will report whether the response is ready, and redirect when it is.
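A minimal sketch of that pattern (task, view, and URL names are made up):

# tasks.py
import requests
from celery import shared_task

@shared_task
def fetch_remote(url):
    # the slow network IO runs in a worker, not in the request cycle
    return requests.get(url).text

# views.py
from celery.result import AsyncResult
from django.http import JsonResponse
from .tasks import fetch_remote

def start(request):
    task = fetch_remote.delay('https://slow.example.com/data')
    return JsonResponse({'task_id': task.id})

def poll(request, task_id):
    result = AsyncResult(task_id)
    if result.ready():
        return JsonResponse({'done': True, 'data': result.get()})
    return JsonResponse({'done': False})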
| 0 | 1 | 0 | 0 |
2016-01-29T11:15:00.000
| 3 | 0.132549 | false | 35,083,133 | 0 | 0 | 1 | 1 |
In one of the views in my django application, I need to perform a relatively lengthy network IO operation. The problem is other requests must wait for this request to be completed even though they have nothing to do with it.
I did some research and stumbled upon Celery but as I understand, it is used to perform background tasks independent of the request. (so I can not use the result of the task for the response to the request)
Is there a way to process views asynchronously in django so while the network request is pending other requests can be processed?
Edit: What I forgot to mention is that my application is a web service using django rest framework. So the result of a view is a json response not a page that I can later modify using AJAX.
|
Django Pre-defined groups
| 35,087,045 | 0 | 0 | 94 | 0 |
python,django
|
Yes, just create your desired groups in the admin panel, add the permissions to each group, then assign your users to the defined groups.
| 0 | 0 | 0 | 0 |
2016-01-29T14:17:00.000
| 2 | 0 | false | 35,086,705 | 0 | 0 | 1 | 2 |
Is there any way to create a group model with permissions already established? I'm trying to create a system with at least 4 pre-defined user types, and each user type will have some permissions.
|
Django Pre-defined groups
| 35,088,494 | 0 | 0 | 94 | 0 |
python,django
|
You can add the group- and permission-creating commands to a data migration, using the RunPython operation.
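A minimal sketch of such a data migration (app label, group name, and permission codenames are hypothetical):

from django.db import migrations

def create_groups(apps, schema_editor):
    Group = apps.get_model('auth', 'Group')
    Permission = apps.get_model('auth', 'Permission')
    editors, _ = Group.objects.get_or_create(name='editors')
    perms = Permission.objects.filter(codename__in=['add_article', 'change_article'])
    editors.permissions.add(*perms)

class Migration(migrations.Migration):
    dependencies = [('myapp', '0001_initial')]
    operations = [migrations.RunPython(create_groups)]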
| 0 | 0 | 0 | 0 |
2016-01-29T14:17:00.000
| 2 | 0 | false | 35,086,705 | 0 | 0 | 1 | 2 |
Is there any way to create a group model with permissions already established? I'm trying to create a system with at least 4 pre-defined user types, and each user type will have some permissions.
|
Google Cloud Debugger for Python App Engine module says "Deployment revision unknown"
| 35,107,768 | 1 | 0 | 147 | 0 |
google-app-engine,google-app-engine-python,google-cloud-debugger
|
It looks like you did everything correctly.
The "Failed to update the snapshot" error shows up when there is some problem on the Cloud Debugger backend. Please contact the Cloud Debugger team through [email protected] or submit feedback report in Google Developer Console.
| 0 | 1 | 0 | 0 |
2016-01-29T17:46:00.000
| 1 | 1.2 | true | 35,090,793 | 0 | 0 | 1 | 1 |
I'm trying to get the Google Cloud Debugger to work on my Python App Engine module. I've followed the instructions and:
Connected to my Bitbucket hosted repository.
Generated the source-context.json and source-contexts.json using gcloud preview app gen-repo-info-file
Uploaded using appcfg.py update
However when I try to set a snapshot using the console, there is message saying:
The selected debug target does not have source revision information. The source shown here may not match the deployed source.
And when I try to set the snapshot point, I get the error:
Failed to update the snapshot
|
Can't see objects created in Django-Mezzanine Admin site
| 35,800,848 | 0 | 0 | 86 | 0 |
python,django,mezzanine
|
So I ended up figuring out what the problem was. The templates in my project had a content block called 'main', mimicking the native template files. I needed to give the content block a new name across the board, because they were somehow overriding the Mezzanine templates.
| 0 | 0 | 0 | 0 |
2016-01-29T19:17:00.000
| 1 | 1.2 | true | 35,092,345 | 0 | 0 | 1 | 1 |
I just upgraded a website from Django 1.7/Mezzanine 3 to Django 1.8/Mezzanine 3. After doing so, I discovered that the admin site showed none of the previously created objects from my apps, even though they exist in the database and on the live site.
When I inspect the object in my browser, it doesn't seem like the database is being searched at all. This affects all of my apps, plus the User app native to Django. It does not affect the pages app, comments app, or blog post app native to Django.
I've tried deleting migration files, restarting the server, deleting and recreating the database, and dropping affected tables to recreate them.
There are no error messages; the page just looks like no one has created any objects yet. When you create a new object and save it, you still can't see it, even though the new object is live and in the database.
|
How to prevent a user from directly accessing my html page
| 35,094,515 | -2 | 0 | 1,098 | 0 |
javascript,jquery,python,html
|
Make sure to check the "referer" header in Python, and validate that the address is your login page.
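A minimal CGI-style sketch of that check, with no third-party modules (URLs and file names are placeholders):

import os

referer = os.environ.get('HTTP_REFERER', '')
# note: the Referer header is client-supplied and easily spoofed,
# so treat this as a weak check, not real authentication
if referer.startswith('https://example.com/login'):
    print('Content-Type: text/html\n')
    print(open('form.html').read())
else:
    print('Status: 403 Forbidden\n')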
| 0 | 0 | 1 | 0 |
2016-01-29T21:29:00.000
| 5 | -0.07983 | false | 35,094,371 | 0 | 0 | 1 | 1 |
I have a login page that sends a request to a Python script to authenticate the user. If the user is not authenticated, it redirects to the login page; if the user is authenticated, it redirects to an HTML form. The problem is that you can access the HTML form directly by typing the URL. How can I make sure, with my Python script, that the user came from the login form, without using modules, because I can't install anything on my server? I want it to be strictly Python; I can't use PHP. Is it possible? Can I use other methods to accomplish the task?
|
Tastypie. How to add time of execution to responses?
| 35,167,054 | 1 | 1 | 107 | 0 |
python,django,tastypie
|
That'll only work for list endpoints though. My advice is to use a middleware to add X- headers; it's a cleaner, more generalized solution.
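A minimal sketch of such a middleware (old-style, pre-Django 1.10; the header name is made up):

import time

class TimingMiddleware(object):
    def process_request(self, request):
        request._start_time = time.time()

    def process_response(self, request, response):
        if hasattr(request, '_start_time'):
            elapsed_ms = int((time.time() - request._start_time) * 1000)
            response['X-Execution-Time-Ms'] = str(elapsed_ms)
        return response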
| 0 | 0 | 0 | 0 |
2016-01-30T19:03:00.000
| 3 | 0.066568 | false | 35,105,825 | 0 | 0 | 1 | 1 |
I want to measure the execution time for some queries and add this data to responses, like: {"meta": {"execution_time_in_ms": 500 ...}}. I know how to add fields to Tastypie's responses, but I have no idea how to measure time in it: where should I initialize the timer and where should I stop it? Any ideas?
|
Django - Storing a users API session token securely
| 35,111,076 | 2 | 2 | 1,030 | 0 |
python,django,api,security,token
|
A session cookie is not a bad solution. You could also mark the cookie as "secure" to make sure that it can only be sent over HTTPS. It is far better than using e.g. localStorage.
| 0 | 0 | 0 | 0 |
2016-01-31T02:44:00.000
| 1 | 0.379949 | false | 35,109,757 | 0 | 0 | 1 | 1 |
I have an application which issues a simple request with basic auth which returns a session token. I then want to use that token for subsequent calls to that same application interface.
My question is, is it OK to store this token in the session/cookie of the logged in user, or should I approach this a different way? I want to ensure 100% user security at all times.
|
Having issues adding a new web page in Django
| 36,265,905 | 0 | 0 | 117 | 0 |
python,html,django
|
I figured out the issue; all I had to do was restart the web server after making the changes. Thanks anyway guys!
| 0 | 0 | 0 | 0 |
2016-01-31T11:05:00.000
| 1 | 0 | false | 35,113,104 | 0 | 0 | 1 | 1 |
I am new here. I have googled for help regarding my issue to no avail. I am new to Python and the Django framework.
Issue I am facing:
I have a website built with the Django framework. In this website, I have a drop-down menu and I want to add a new page there. This new webpage only contains text and there is no user interaction needed.
1) I have created the new webpage, and put it in the "webapps/template" folder.
2) I updated the urls.py file with the new webpage's url in the "webapps" folder.
3) I have updated the base.html with the new webpage's url in the "webapps/template" folder.
I have worked with 3 files in total: new webpage, urls.py and base.html.
When I upload the files, the site breaks. What am I missing here?
Do I need to update another URL file somewhere? Please advise?
|
Fail to scrapyd-deploy
| 48,967,865 | 0 | 1 | 1,047 | 0 |
python,scrapy,scrapyd
|
Facing the same issue, the solution was hastened by reviewing scrapyd's error log. The logs are possibly located in the folder /tmp/scrapydeploy-{six random letters}/. Check out stderr. Mine contained a permissions error: IOError: [Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/binary_agilo-1.3.15-py2.7.egg/EGG-INFO/entry_points.txt'. This happens to be a package that was installed system-wide last week, thus leading to scrapyd-deploy failing to execute. Removing the package fixes the issue. (Instead, the binary_agilo package is installed in a virtualenv.)
| 0 | 1 | 0 | 0 |
2016-02-01T07:05:00.000
| 2 | 0 | false | 35,124,720 | 0 | 0 | 1 | 1 |
Traceback (most recent call last):
File "/usr/local/bin/scrapyd-deploy", line 273, in
main()
File "/usr/local/bin/scrapyd-deploy", line 95, in main
egg, tmpdir = _build_egg()
File "/usr/local/bin/scrapyd-deploy", line 240, in _build_egg
retry_on_eintr(check_call, [sys.executable, 'setup.py', 'clean', '-a', 'bdist_egg', '-d', d], stdout=o, stderr=e)
File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/python.py", line 276, in retry_on_eintr
return function(*args, **kw)
File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python', 'setup.py', 'clean', '-a', 'bdist_egg', '-d', '/tmp/scrapydeploy-sV4Ws2']' returned non-zero exit status 1
|
Difference between model fields(in django) and serializer fields(in django rest framework)
| 37,944,880 | 3 | 9 | 5,561 | 0 |
python,django,django-rest-framework
|
Both of them refer to the same thing, with a slight difference.
Model fields are used at the database level, i.e. when creating the schema; they are visible only to the developer.
Serializer fields are used when exposing the API to the client, so they are visible to the client as well.
| 0 | 0 | 0 | 0 |
2016-02-01T11:44:00.000
| 3 | 0.197375 | false | 35,129,697 | 0 | 0 | 1 | 3 |
If we can validate values using conventional model fields, why does Django REST Framework contain its own serializer fields? I know that serializer fields are used to handle converting between primitive values and internal datatypes. Apart from this, is there anything different between them?
|
Difference between model fields(in django) and serializer fields(in django rest framework)
| 35,133,156 | 12 | 9 | 5,561 | 0 |
python,django,django-rest-framework
|
Well there is a ModelSerializer that can automatically provide the serializer fields based on your model fields (given the duality you described). A ModelSerializer allows you to select which models fields are going to appear as fields in the serializer, thus allowing you to show/hide some fields.
A field in a model, is conventionally tied to a data store (say a column in a database).
A DRF Serializer can exist without a Django model too, as it serves to communicate between the API and the client, and its fields can be in many forms that are independent from the model and the backing database, e.g. ReadOnlyField, SerializerMethodField etc
| 0 | 0 | 0 | 0 |
2016-02-01T11:44:00.000
| 3 | 1.2 | true | 35,129,697 | 0 | 0 | 1 | 3 |
If we can validate values using conventional model fields, why does Django REST Framework contain its own serializer fields? I know that serializer fields are used to handle converting between primitive values and internal datatypes. Apart from this, is there anything different between them?
|
Difference between model fields(in django) and serializer fields(in django rest framework)
| 35,133,230 | 6 | 9 | 5,561 | 0 |
python,django,django-rest-framework
|
Model fields are what you keep in your database.
(it answers how you want your data organized)
Serializer fields are what you expose to your clients.
(it answers how you want your data represented)
For models.ForeignKey(User) of your model,
you can represent it in your serializer as an int field, a UserSerializer (which you will define), or an HTTP link that points to the API endpoint for the user.
You can represent the user by username; it's up to you how you want to represent it.
With DRF,
You can hide model fields, mark it as read-only/write-only.
You can also add a field that is not mappable to a model field.
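For illustration, a minimal sketch showing field hiding, read-only marking, and a non-model field (the serializer shape is made up):

from django.contrib.auth.models import User
from rest_framework import serializers

class UserSerializer(serializers.ModelSerializer):
    full_name = serializers.SerializerMethodField()  # not backed by any model field

    class Meta:
        model = User
        fields = ('id', 'username', 'full_name')  # everything else stays hidden
        read_only_fields = ('id',)

    def get_full_name(self, obj):
        return '%s %s' % (obj.first_name, obj.last_name)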
| 0 | 0 | 0 | 0 |
2016-02-01T11:44:00.000
| 3 | 1 | false | 35,129,697 | 0 | 0 | 1 | 3 |
If we can validate values using conventional model fields, why does Django REST Framework contain its own serializer fields? I know that serializer fields are used to handle converting between primitive values and internal datatypes. Apart from this, is there anything different between them?
|
Update strategy Python application + Ember frontend on BeagleBone
| 35,147,597 | 0 | 1 | 141 | 0 |
python,deployment,updates,beagleboneblack,yocto
|
A natural strategy would be to make use of the package manager also used for the rest of the system. The various package managers of Linux distributions are not closed systems. You can create your own package repository containing just your application/scripts and add it as a package source on your target. Your "updater" would work on top of that.
This is also a route you can go when using yocto.
| 0 | 1 | 0 | 1 |
2016-02-01T17:02:00.000
| 1 | 0 | false | 35,136,140 | 0 | 0 | 1 | 1 |
For the moment I've created an Python web application running on uwsgi with a frontend created in EmberJS. There is also a small python script running that is controlling I/O and serial ports connected to the beaglebone black.
The system is running on Debian; packages are managed and installed via Ansible, and the applications are also updated via some Ansible scripts. In other words, updates are currently done manually by launching the Ansible scripts over SSH.
I'm now searching for a strategy/method to update my Python applications in an easy way that can also be done by our clients (e.g. via a web interface). A good example is a router firmware update. I'm wondering how I can use a similar strategy for my Python applications.
I checked Yocto, with which I can build my own Linux, but I don't see how to include my applications in those builds, and I don't want to build a complete image for hotfixes.
Does anyone have a similar project and would like to share some useful information on upgrade strategies/methods?
|
Web Socket between Javascript and Python
| 35,147,809 | 0 | 1 | 93 | 0 |
javascript,python,sockets
|
Maybe you need to deploy the website on your own server, because listening for client connections is a server-side task; blogger.com hosts the server, and the JavaScript section they provide to you is just for static pages.
If blogger.com provided an API, it would have some function like:
app.on("connection", function() {
    /* send data to your python program */
});
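For the Python side, a minimal Tornado sketch (the path and port are arbitrary):

import tornado.ioloop
import tornado.web
import tornado.websocket

class VisitHandler(tornado.websocket.WebSocketHandler):
    def check_origin(self, origin):
        return True  # allow the blog's origin; tighten this in practice

    def open(self):
        print('visitor connected from', self.request.remote_ip)

    def on_message(self, message):
        print('received:', message)  # e.g. the datetime sent by the blog's JS

app = tornado.web.Application([(r'/visit', VisitHandler)])
app.listen(8888)
tornado.ioloop.IOLoop.current().start()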
| 0 | 0 | 1 | 0 |
2016-02-02T06:35:00.000
| 1 | 0 | false | 35,146,632 | 0 | 0 | 1 | 1 |
I have a blog on blogger.com, and they have a section where you can put HTML/JavaScript code in. I'm a total beginner at JavaScript/HTML but I'm somewhat adept at Python. I want to open a listening socket in Python (on my computer) so that every time a guest looks at my blog, the JavaScript sends my Python socket some data, like the IP or datetime for example. I looked around on the internet and ended up with the tornado module for my listening socket, but I have a hard time figuring out the JavaScript code.
Basically it involves no servers.
|
Appium Android UI testing - how to verify the style attribute of an element?
| 35,277,434 | 1 | 1 | 1,771 | 0 |
android,python,appium,python-appium
|
Update: as it turns out, this cannot be done with the Appium webdriver in a native context.
For those of you who are wondering, this is the answer I received from the Appium support group:
This cannot be done by Appium, as the underlying UiAutomator framework does not allow it.
In the app's native context this cannot be done.
In the app's webview context it works just like plain Selenium, because a webview is nothing but a chromeless browser session inside an app:
print searchBtn.value_of_css_property("background-color")
Summary
for an element inside a NATIVE context ==>> NO
for an element inside a WEBVIEW context ==>> YES
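For the webview case, a sketch of what that looks like with the Python client (the context lookup and the element id are assumptions; driver is an already-created Appium webdriver session):

webview = next(c for c in driver.contexts if 'WEBVIEW' in c)
driver.switch_to.context(webview)              # leave the native context

elem = driver.find_element_by_id('searchBtn')  # hypothetical element id
print(elem.value_of_css_property('background-color'))
print(elem.value_of_css_property('color'))
print(elem.value_of_css_property('font-size'))

driver.switch_to.context('NATIVE_APP')         # switch back when done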
Hope this helps.
| 1 | 0 | 0 | 0 |
2016-02-02T21:48:00.000
| 1 | 0.197375 | false | 35,164,413 | 0 | 0 | 1 | 1 |
I would like to verify the style of an element, e.g. the color of the text shown in a TextView (whether it is black or blue), such as textColor or textSize. This information is not listed in uiautomatorviewer.
I can get the text using elem.get_attribute("text"), as the text value is shown in the Node Detail. Is there a way to check the style attributes? (I can do this fairly easily with straight Selenium.)
|
Extracting common functionality in Django management commands
| 35,493,697 | 1 | 1 | 56 | 0 |
python,django
|
I went with app_name/management/helpers.py. No issues.
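For reference, the resulting layout and import (the command and helper names are placeholders):

app_name/
    management/
        __init__.py
        helpers.py            # build_api_payload lives here
        commands/
            __init__.py
            sync_things.py

# app_name/management/commands/sync_things.py
from django.core.management.base import BaseCommand
from ..helpers import build_api_payload

class Command(BaseCommand):
    help = "Push data to the third-party API"

    def handle(self, *args, **options):
        payload = build_api_payload()
        self.stdout.write("payload built: %r" % (payload,))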
| 0 | 0 | 0 | 0 |
2016-02-03T16:48:00.000
| 1 | 1.2 | true | 35,182,926 | 0 | 0 | 1 | 1 |
What is the generally accepted way of isolating functionality that is shared between multiple management commands in a given app? For example, I have some payload-building code that is used across multiple management commands that access a third-party API. Is the proper location app_name/management/helpers.py, which would then be imported in a management command with from ..helpers import build_api_payload?
I don't want to put it at the root of the app (we typically use app_name/helpers.py for shared functionality), since it pulls in dev dependencies that wouldn't exist in production, and is never really used outside the management command anyways.
|
Streaming live data in HTML5 graphs and tables
| 35,204,805 | 2 | 0 | 1,747 | 0 |
python,html,flask,bokeh,flask-socketio
|
You really are asking two questions in one. Really, you have two problems here. First, you need a mechanism to periodically give the client access to updated data for your tables and charts. Second, you need the client to incorporate those updates into the page.
For the first problem, you have basically two options. The most traditional one is to send Ajax requests (i.e. requests that run in the background of the page) to the server on a regular interval. The alternative is to enhance your server with WebSocket, then the client can establish a permanent connection and whenever the server has new data it can push it to the client. Which option to use largely depends on your needs. If the frequency of updates is not too high, I would probably use background HTTP requests and not worry about adding Socket.IO to the mix, which has its own challenges. On the other side, if you need sort of a live, constantly updating page, then maybe WebSocket is a good idea.
Once the client has new data, you have to deal with the second problem. The way you deal with that is specific to the tables and charts that you are using. You basically need to write Javascript code that passes these new values that were received from the server into these components, so that the page is updated. Unfortunately there is no automatic way to cause an update. You can obviously throw the current page away and rebuild it from scratch with the new data, but that is not going to look nice, so you should probably find what kind of Javascript APIs these components expose to receive updates.
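As a minimal sketch of the background-request option (the endpoint name and the random data are stand-ins for your real source):

import random
import time
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/data')
def data():
    # Replace this with a real query against your database or sensors.
    return jsonify(timestamp=time.time(), value=random.random())

# On the page, something along the lines of:
#   setInterval(function () {
#       fetch('/data').then(r => r.json()).then(updateTableAndChart);
#   }, 2000);
# where updateTableAndChart feeds the new values into your table cells
# and into Bokeh's data source.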
I hope this helps!
| 0 | 0 | 0 | 0 |
2016-02-03T21:42:00.000
| 2 | 1.2 | true | 35,188,305 | 0 | 0 | 1 | 1 |
I have developed a Python web application using the Flask microframework. I have some interactive plots generated by Bokeh and some HTML5 tables. My question is: how can I update my table and graph data on the fly?
Should I use the threading class, set a timer, and then re-run my code every couple of seconds to feed updated data entries to the table and graphs?
I also investigated Flask-SocketIO, but all I found is for sending and receiving messages. Is there a way to use Flask-SocketIO for this purpose?
I also worked a little bit with the Bokeh server. Should I go in that direction? Does it mean I need to run two servers: my Flask web server and the Bokeh server?
I am new to this kind of work. I'd appreciate it if you could explain in detail what I need to do.
|
Change Spreadsheet Settings from gspread
| 36,905,525 | 0 | 0 | 110 | 0 |
python,gspread
|
I don't believe there is a way to change those settings through the API.
However, you can use Python's datetime module to convert the time to the time zone you want.
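A sketch of that conversion (the cell format and both zone names are assumptions):

from datetime import datetime
import pytz

raw = '2/4/2016 8:39:00'                   # a timestamp cell pulled via gspread
form_tz = pytz.timezone('US/Pacific')      # the zone the form reports in (assumed)
local_tz = pytz.timezone('Europe/Madrid')  # the zone you actually want (assumed)

naive = datetime.strptime(raw, '%m/%d/%Y %H:%M:%S')
local = form_tz.localize(naive).astimezone(local_tz)
print(local.strftime('%Y-%m-%d %H:%M:%S'))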
| 0 | 0 | 0 | 0 |
2016-02-04T08:39:00.000
| 1 | 0 | false | 35,196,150 | 0 | 0 | 1 | 1 |
Is there any way to change spreadsheet settings from the API?
Is there any other way I can do this from Python?
I'm using gspread to pull the results of a Google Form into Python.
I want to change the time zone of the results to fit my needs, since my local time and the form's time don't match.
Thank you in advance
|
pip freeze > requirements.pip loses some info on github installed packages
| 35,254,517 | 1 | 1 | 48 | 0 |
python,pip
|
This was a bug in pip:
https://github.com/pypa/pip/pull/3258
it's now fixed
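For reference, a fixed pip writes the editable install back out as a full, reinstallable VCS line, e.g.:

-e git+git://github.com/pcompassion/django.js.git@bd0f7b56d8ab2ae77795797fd10812d0b76883dc#egg=django.js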
I wonder why people downvoted the question...
| 0 | 0 | 0 | 0 |
2016-02-05T04:06:00.000
| 1 | 1.2 | true | 35,216,112 | 1 | 0 | 1 | 1 |
I installed one of the forks I made using
-e git://github.com/pcompassion/django.js.git@bd0f7b56d8ab2ae77795797fd10812d0b76883dc#egg=django.js-fork
Then I created a requirements.pip using pip freeze > requirements.pip.
It shows
django.js==0.8.2.dev0
which is not usable in production.
Why is this happening and how can I prevent it?
|
Do I need to do something in models.py if I'm creating a contact form?
| 35,242,633 | 2 | 2 | 109 | 0 |
python,django,django-models,django-forms
|
models.py is just a convention. You are not required to put your models in any specific module, you could put everything in one file if you wanted to.
If your contact form doesn't store anything in your database, you don't need any models either. You could do everything with just a form, then email the information entered elsewhere, or write it to disk by other means.
Even if you did want to put the information into a database, you could still do that without creating a model. However, creating a model makes this task far easier and more convenient, because Django can then generate a form from it, do validation, provide helpful feedback to your users when they make a mistake, handle transactions, etc.
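A minimal model-less sketch of that (the field names, template name, and recipient address are placeholders):

from django import forms
from django.core.mail import send_mail
from django.shortcuts import render

class ContactForm(forms.Form):
    name = forms.CharField(max_length=100)
    email = forms.EmailField()
    message = forms.CharField(widget=forms.Textarea)

def contact(request):
    form = ContactForm(request.POST or None)
    if request.method == 'POST' and form.is_valid():
        # Nothing is stored in the database; the input is mailed out.
        send_mail(
            subject='Contact from %s' % form.cleaned_data['name'],
            message=form.cleaned_data['message'],
            from_email=form.cleaned_data['email'],
            recipient_list=['you@example.com'],
        )
    return render(request, 'contact.html', {'form': form})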
| 0 | 0 | 0 | 0 |
2016-02-06T15:15:00.000
| 1 | 1.2 | true | 35,242,589 | 0 | 0 | 1 | 1 |
I'm creating a contact form in Django. I've read some tutorials; some of them use models.py and some of them skip the models part. What is the role of models.py in creating a contact form?
|
Why do chat applications have to be asynchronous?
| 35,250,150 | 2 | 3 | 1,314 | 0 |
python,django,chat,tornado
|
You certainly can develop a synchronous chat app; you don't necessarily need to use an asynchronous framework. But it all comes down to what you want your app to do: how many people will use the app? Will there be multiple users and multiple chats going on at the same time?
| 0 | 1 | 0 | 0 |
2016-02-07T04:29:00.000
| 3 | 0.132549 | false | 35,249,741 | 0 | 0 | 1 | 1 |
I need to implement a chat application for my web service (which is written in Django + the REST API framework). After doing some Google searching, I found that the Django chat applications that are available are all deprecated and no longer supported. And all the DIY (do-it-yourself) solutions I found use the Tornado or Twisted frameworks.
So, my question is: is it OK to make a Django-only, synchronous chat application? And do I need to use an asynchronous framework? I have very little experience in backend programming, so I want to keep everything as simple as possible.
|
Is it possible to change the Django csrf token name and token header
| 35,256,740 | 0 | 2 | 2,189 | 0 |
python,django,csrf,django-csrf
|
For the header name and cookie name, you can change them using CSRF_COOKIE_NAME and CSRF_HEADER_NAME. Unfortunately, you can't change the POST field name that easily; you would have to modify CsrfViewMiddleware for that. But if you're using Angular, you can use only headers and completely omit the POST field.
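A sketch of both sides, using the names from the question (note that CSRF_HEADER_NAME requires Django 1.9+ and expects the normalized HTTP_ form of the header):

# settings.py
CSRF_COOKIE_NAME = 'sometokenName'
CSRF_HEADER_NAME = 'HTTP_X_SOMENAME'  # 'X-SOMENAME' on the wire

# and on the Angular side, roughly:
#   $httpProvider.defaults.xsrfCookieName = 'sometokenName';
#   $httpProvider.defaults.xsrfHeaderName = 'X-SOMENAME';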
| 0 | 0 | 0 | 0 |
2016-02-07T17:20:00.000
| 1 | 1.2 | true | 35,256,569 | 0 | 0 | 1 | 1 |
I am building an app using Angular and Django.
By default, Django uses X-CSRFToken as the CSRF header and csrftoken as the token name.
I want to rename the header to something like X-SOMENAME and the token to sometokenName.
I know that with Angular we can change the default names with $http.defaults.
Is it possible to change the token name in Django so that the generated token is named sometokenName and the header Django looks for is X-SOMENAME?
Thank you.
|
SQLAlchemy, Alembic and new instances
| 35,275,008 | 1 | 4 | 591 | 1 |
python,sqlalchemy,flask-sqlalchemy,alembic
|
If you know the state of the database, you can just stamp the revision you were at when you created the instance.
set up the instance
run create_all
alembic heads (to determine the latest version available in the scripts dir)
alembic stamp <revision>
Here is the doc from the command line:
stamp 'stamp' the revision table with the given revision;
don't run any migrations.
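A sketch of that bootstrap flow done programmatically (the module path and the alembic.ini location are assumptions):

from alembic import command
from alembic.config import Config
from myapp.database import Base, engine  # hypothetical module

Base.metadata.create_all(engine)         # build the current schema directly
alembic_cfg = Config('alembic.ini')
command.stamp(alembic_cfg, 'head')       # mark the instance as fully migrated

From then on, the new instance upgrades with the same alembic upgrade head as the older ones.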
| 0 | 0 | 0 | 0 |
2016-02-07T23:32:00.000
| 1 | 1.2 | true | 35,260,536 | 0 | 0 | 1 | 1 |
In a platform using Flask, SQLAlchemy, and Alembic, we constantly need to create new separate instances with their own set of resources, including a database.
When creating a new instance, SQLAlchemy's create_all gives us a database with all the updates up to the point when the instance is created, but this means that this new instance does not have the migrations history that older instances have. It doesn't have an Alembic revisions table pointing to the latest migration.
So when the time comes to update both older instances (with migrations histories) and a newer instance without a migrations history, we have to either give the newer instance a custom set of revisions (ignoring migrations older than the database itself) or create a fake migrations history for it and use a global set of migrations. For the couple of times that this has happened, we have done the latter.
Is making a root migration that sets up the entire database as it was before the first migration and then running all migrations instead of create_all a better option for bootstrapping the database of new instances?
I'm concerned for the scalability of this as migrations increase in number.
Is there perhaps another option altogether?
|
ipython notebook requires javascript - on firefox web browser
| 35,270,287 | 1 | 1 | 3,643 | 0 |
javascript,firefox,ipython-notebook,jupyter-notebook
|
In the address bar, type "about:config" (with no quotes), and press Enter.
Click "I'll be careful, I promise".
In the search bar, search for "javascript.enabled" (with no quotes).
Right click the result named "javascript.enabled" and click "Toggle". JavaScript is now enabled.
To re-disable JavaScript, repeat these steps.
| 0 | 0 | 0 | 0 |
2016-02-08T09:40:00.000
| 2 | 0.099668 | false | 35,266,360 | 1 | 0 | 1 | 2 |
I'm trying to run jupyter notebook from a terminal on an Xfce Ubuntu machine.
I typed in the command:
jupyter notebook --browser=firefox
The Firefox browser opens, but it is empty and shows the following error:
"IPython Notebook requires JavaScript. Please enable it to proceed."
I searched the web for how to enable JavaScript in IPython Notebook but didn't find an answer. I would greatly appreciate any help. Thanks!
|
ipython notebook requires javascript - on firefox web browser
| 35,266,458 | 1 | 1 | 3,643 | 0 |
javascript,firefox,ipython-notebook,jupyter-notebook
|
JavaScript has to be enabled in the Firefox browser; it is currently turned off. To turn it on, do this:
To enable JavaScript for Mozilla Firefox: click the Tools drop-down menu and select Options. Check the boxes next to Block pop-up windows, Load images automatically, and Enable JavaScript. Refresh your browser by right-clicking anywhere on the page and selecting Reload, or by using the Reload button in the toolbar.
| 0 | 0 | 0 | 0 |
2016-02-08T09:40:00.000
| 2 | 0.099668 | false | 35,266,360 | 1 | 0 | 1 | 2 |
I'm trying to run jupyter notebook from a terminal on an Xfce Ubuntu machine.
I typed in the command:
jupyter notebook --browser=firefox
The Firefox browser opens, but it is empty and shows the following error:
"IPython Notebook requires JavaScript. Please enable it to proceed."
I searched the web for how to enable JavaScript in IPython Notebook but didn't find an answer. I would greatly appreciate any help. Thanks!
|