Title stringlengths 11 to 150 | A_Id int64 518 to 72.5M | Users Score int64 -42 to 283 | Q_Score int64 0 to 1.39k | ViewCount int64 17 to 1.71M | Database and SQL int64 0 to 1 | Tags stringlengths 6 to 105 | Answer stringlengths 14 to 4.78k | GUI and Desktop Applications int64 0 to 1 | System Administration and DevOps int64 0 to 1 | Networking and APIs int64 0 to 1 | Other int64 0 to 1 | CreationDate stringlengths 23 to 23 | AnswerCount int64 1 to 55 | Score float64 -1 to 1.2 | is_accepted bool 2 classes | Q_Id int64 469 to 42.4M | Python Basics and Environment int64 0 to 1 | Data Science and Machine Learning int64 0 to 1 | Web Development int64 1 to 1 | Available Count int64 1 to 15 | Question stringlengths 17 to 21k |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
OpenERP. How to make multiple invoices on a sale order when it contains products from different companies?
| 22,589,593 | 0 | 0 | 632 | 0 |
python,openerp,invoice,erp
|
You have to override the create method on the invoice model.
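The override the answer suggests boils down to grouping the sale order's lines by the company of each product before creating invoices. A minimal, framework-free sketch of that grouping (the dict-based lines and company names are illustrative stand-ins, not OpenERP's API):

```python
from collections import defaultdict

def group_lines_by_company(order_lines):
    """Group sale-order lines by the company of each product,
    so that one invoice can be created per company."""
    groups = defaultdict(list)
    for line in order_lines:
        groups[line["company"]].append(line)
    return dict(groups)

lines = [
    {"product": "Product A", "company": "sub company 1"},
    {"product": "Product B", "company": "sub company 2"},
]
# Two companies in one order -> two invoice groups.
invoices = group_lines_by_company(lines)
```

An overridden create would iterate over these groups and create one invoice per key.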
| 0 | 0 | 0 | 0 |
2014-03-15T04:57:00.000
| 1 | 0 | false | 22,419,733 | 0 | 0 | 1 | 1 |
How can I make multiple invoices from a sale order when it contains products from different companies? My configuration is: the main company has sub company 1 and sub company 2, and these companies have many products. I want multiple invoices to be created from a sale order if it contains products from different companies. For example, a sale order contains: Product A from sub company 1, and Product B from sub company 2. This sale order should produce two invoices while remaining one sale order; it is like grouping the order lines by the product's company.
I use OpenERP 7 (March 2014).
|
How many objects should I retrieve from server and How many can be stored in NSCache?
| 22,482,214 | 0 | 1 | 54 | 0 |
python,iphone,nscache,memory-optimization
|
Maybe you should use NSData to hold the data you retrieve from your service instead of NSCache.
NSCache is for temporary objects, whereas NSData is used to move data between applications (from your service to your app).
Description of NSCache by Apple:
An NSCache object is a collection-like container, or cache, that stores key-value pairs, similar to the NSDictionary class. Developers often incorporate caches to temporarily store objects with transient data that are expensive to create. Reusing these objects can provide performance benefits, because their values do not have to be recalculated. However, the objects are not critical to the application and can be discarded if memory is tight. If discarded, their values will have to be recomputed again when needed.
Description of NSData by Apple:
NSData and its mutable subclass NSMutableData provide data objects, object-oriented wrappers for byte buffers. Data objects let simple allocated buffers (that is, data with no embedded pointers) take on the behavior of Foundation objects.
NSData creates static data objects, and NSMutableData creates dynamic data objects. NSData and NSMutableData are typically used for data storage and are also useful in Distributed Objects applications, where data contained in data objects can be copied or moved between applications.
| 0 | 0 | 0 | 0 |
2014-03-15T15:19:00.000
| 1 | 0 | false | 22,425,631 | 0 | 0 | 1 | 1 |
My service returns up to 500 objects at a time, and I've noticed that my iPhone application crashes when the amount of data goes over 60 objects. To work around this issue I'm running a query that brings back only the top 40 results, but that is slower than just returning the entire data set.
What are the best practices, and how can I retrieve more objects?
What is the maximum amount of memory allocated to an application on the iPhone, and is there a way to extend it?
How many objects should I retrieve from server
How many can be stored in NSCache?
|
How to include libraries through the python local server
| 22,430,282 | 1 | 1 | 197 | 0 |
python,d3.js,local
|
The SimpleHTTPServer module will only serve things that are within the directory you're telling it to serve and folders beneath that directory, for security reasons. (Otherwise a visitor could ask it for e.g. ../../../../etc/passwd or similar.)
If you want to serve scripts and other assets, you'll need to put them in a subfolder of the directory you're running SimpleHTTPServer in.
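For context on why the ../ request 404s: the stdlib server's path translation simply drops '..' components, so no request can escape the served root. A simplified sketch of that behavior (not the stdlib's exact code):

```python
import os

def translate_path(path, root):
    """Simplified version of SimpleHTTPServer's path translation:
    '.' and '..' components are dropped, so a request can never
    escape the served root directory."""
    words = [w for w in path.split('/')
             if w and w not in (os.curdir, os.pardir)]
    return os.path.join(root, *words)

# A request for ../libs/d3.v3.min.js is resolved *inside* the root,
# which is why a file one directory up comes back as a 404.
resolved = translate_path("../libs/d3.v3.min.js", "/srv/www")
```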
| 0 | 0 | 1 | 0 |
2014-03-15T15:48:00.000
| 1 | 0.197375 | false | 22,426,005 | 0 | 0 | 1 | 1 |
I'm currently running an html and jsp file locally and hosting it by running this command through the terminal: python -m SimpleHTTPServer 8888 &.
This has been going smoothly, but I recently ran into an issue where I have to include library files (d3, jQuery, ajax, etc.).
I've included the following tag in my HTML file: <script src="../libs/d3.v3.min.js">
but noticed that it was producing a 404 error. I've tried to remedy it by changing the script tag to <script src="http://d3js.org/d3.v3.min.js">.
But I actually feel that that doesn't get to the root of the problem. Why am I unable to include the files I have in my lib folder?
Edited the question's wording, thanks for the heads up, Amber: the lib folder is located one directory up from the HTML file.
|
Best practice for setting up Flask+uWSGI+nginx
| 22,432,964 | 0 | 7 | 3,061 | 0 |
python,nginx,flask,uwsgi
|
What you are asking for is not "best practices" but "conventions". No, there are no conventions in the project about paths, privileges and so on. Each sysadmin (or developer) has their own needs and tastes, so if you are satisfied with your current setup... well, "mission accomplished". There are no uWSGI gods to make happy :) Obviously distro-supplied packages must have their conventions, but again these differ from distro to distro.
| 0 | 0 | 0 | 0 |
2014-03-16T04:05:00.000
| 3 | 0 | false | 22,432,826 | 0 | 0 | 1 | 1 |
I'm trying to set up my first web server using the combination of Flask, uWSGI, and nginx. I've had some success getting the Flask & uWSGI components running. I've also gotten many tips from various blogs on how to set this up. However, there is no consistency and the articles suggest many different ways of setting this up especially where folder structures, nginx configurations and users/permissions are concerned (I've tried some of these suggestions and many do work, but I am not sure which is best). So is there one basic "best practice" way of setting up this stack?
|
How to play a note as long as I hit the key (Fluidsynth)?
| 22,449,199 | 1 | 0 | 317 | 0 |
python,midi,synthesizer,fluidsynth
|
In MIDI if you send a note on message it stays on until you send a note off. Maybe you are sending a note on every time you check the state of the button? If so, you shouldn't, send the note on/note off only when the button state changes.
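The edge detection this answer describes can be sketched without any hardware: GPIO reads are replaced by plain booleans and the fluidsynth calls by strings (both are stand-ins, not the real APIs):

```python
def button_events(samples):
    """Turn a stream of raw button samples into note-on/note-off events:
    an event is emitted only when the state *changes*, never while the
    button is simply held down, so the note sustains in between."""
    events, previous = [], False
    for pressed in samples:
        if pressed and not previous:
            events.append("noteon")   # would be fs.noteon(...)
        elif not pressed and previous:
            events.append("noteoff")  # would be fs.noteoff(...)
        previous = pressed
    return events

# Held for three polls, then released: one noteon and one noteoff.
events = button_events([True, True, True, False])
```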
| 0 | 0 | 0 | 1 |
2014-03-16T10:34:00.000
| 1 | 1.2 | true | 22,435,753 | 0 | 0 | 1 | 1 |
I'm working on a Raspberry Pi project at the moment and I'm searching for a way to play a note for as long as a button (connected via GPIO) is pressed.
I use pyFluidsynth and got it working, but it's not holding a note for as long as I press the button; it retriggers the note really fast, with gaps just long enough to hear.
Is there any control I don't know about? I'm just using noteon and noteoff; is there maybe something like "notehold"?
Thanks!
|
django.fcgi or virtualenv : no module named operator
| 22,452,380 | 2 | 0 | 2,136 | 0 |
python,django
|
Since the operator module is part of the standard library, it looks like you have a corrupt Python installation in your virtualenv. The best thing to do would be to simply delete and recreate your virtualenv.
| 0 | 0 | 0 | 0 |
2014-03-17T10:45:00.000
| 1 | 1.2 | true | 22,452,284 | 0 | 0 | 1 | 1 |
So these are my website informations:
framework : Django
hosting : alwaysdata
python : 2.7
virtualenv is used
The problem :
I get the non-explicit 500 error: Internal Server Error
I don't have any error log.
But:
I found a lead toward solving this issue. Indeed, when I run django.fcgi manually, I get this traceback:
Traceback (most recent call last):
File "public/django.fcgi", line 14, in
from django.core.servers.fastcgi import runfastcgi
File "/home/usr/.virtualenvs/thevirtualenv/lib/python2.7/site-packages/django/core/servers/fastcgi.py", line 17, in
from django.utils import importlib
File "/home/usr/.virtualenvs/thevirtualenv/lib/python2.7/site-packages/django/utils/importlib.py", line 4, in
from django.utils import six
File "/home/usr/.virtualenvs/thevirtualenv/lib/python2.7/site-packages/django/utils/six.py", line 23, in
import operator
ImportError: No module named operator
Manipulation(s) that could have caused this issue:
This issue appeared about three weeks ago and I let it rest too long, so now I can't remember what I did to bring it about, but I think it was a sloppy virtualenv creation or edit, something like that.
Thanks for bearing with my English.
Does anyone have an idea about my case?
Attempts to solve this issue:
I just tried to recreate my virtualenv and got this error message :
Traceback (most recent call last):
File "/home/usr/python/python27/bin/virtualenv", line 5, in
from pkg_resources import load_entry_point
zipimport.ZipImportError: can't decompress data; zlib not available
|
Django background monitor service
| 22,575,690 | 0 | 0 | 128 | 0 |
python,django,monitor
|
Well, there are many ways to do this. One would be a simple daemon (card observer) script that reads the card data every second or so and puts it in memcached, a database, or a file. Then you simply read that value in the view.
Once you get this working, you may want to take another route, like running an observer thread from Django, etc.
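A minimal sketch of the daemon-plus-file variant suggested above; the file path, JSON layout, and uid format are all illustrative choices, not anything Django or the reader hardware prescribes:

```python
import json
import os
import tempfile

# Where the observer daemon publishes the last-seen card uid.
CARD_FILE = os.path.join(tempfile.gettempdir(), "rfid_card.json")

def observer_tick(uid):
    """One iteration of the daemon loop: write the current card uid
    (or None when no card is inserted) where the web process can read it."""
    with open(CARD_FILE, "w") as f:
        json.dump({"uid": uid}, f)

def current_uid():
    """What a Django view would call: the last value the daemon wrote."""
    try:
        with open(CARD_FILE) as f:
            return json.load(f)["uid"]
    except (IOError, ValueError, KeyError):
        return None

observer_tick("04:A3:7F:11")  # hypothetical uid
```

A real daemon would loop with a short sleep and call the reader's monitor function instead of taking the uid as an argument.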
| 0 | 0 | 0 | 0 |
2014-03-17T20:17:00.000
| 1 | 1.2 | true | 22,464,447 | 0 | 0 | 1 | 1 |
I have a Django-based website and an RFID reader used by the Django site. The reader has a monitor and a function which gives back the uid of the inserted card; when no card is inserted, it gives back None.
I'd like the monitor's code to run while the Django site is running, and I'd like to call the monitor's function from my views to use the uid of the RFID card. How can I do this?
Thanks!
|
Need Django Package for S3
| 22,474,074 | 0 | 0 | 67 | 0 |
python,django,amazon-s3
|
django-storages works quite well and many other Django products rely on it. It does provide other storage services besides S3 but, of course, you don't need to use any of the others.
It does need to know your AWS access key & secret key, but you don't need to actually put those values in your settings.py; typically, you'll put them in environment variables and read them in settings.py, like:
AWS_S3_ACCESS_KEY_ID = os.environ['YOUR_AWS_ACCESS_KEY_ID']
AWS_S3_SECRET_ACCESS_KEY = os.environ['YOUR_AWS_SECRET_ACCESS_KEY']
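A slightly more defensive variant of the same pattern; env_setting is a hypothetical helper, not part of django-storages, and the DEMO_ variable with its dummy value exists only so the sketch is self-contained:

```python
import os

def env_setting(name):
    """Hypothetical settings.py helper: fail fast with a clear message
    when a required credential is missing from the environment."""
    try:
        return os.environ[name]
    except KeyError:
        raise RuntimeError("Set the %s environment variable" % name)

# Dummy value so the example runs; in production the variable is set
# by your process manager or shell, never hard-coded.
os.environ.setdefault("DEMO_AWS_ACCESS_KEY_ID", "dummy-for-demo")
AWS_S3_ACCESS_KEY_ID = env_setting("DEMO_AWS_ACCESS_KEY_ID")
```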
| 0 | 0 | 0 | 0 |
2014-03-18T07:56:00.000
| 1 | 0 | false | 22,473,370 | 0 | 0 | 1 | 1 |
I am looking for a simple but effective S3-based Django package through which subscribers of the website can use storage services directly without any hassle. I am a beginner with Django, so I'm really looking for something simple to use. Please recommend something that matches this requirement exactly; the resources I've found previously cover all storage services and get too complicated for me to understand or apply. I need something that stores files directly to S3, excluding the web server layer. I also don't want to save the Access Key/Secret Key in my global settings file, settings.py. Please help.
|
Making a field "blank=False" in production
| 22,474,781 | 3 | 0 | 104 | 0 |
django,python-2.7,django-models,django-forms
|
No. blank is enforced solely at the application level.
| 0 | 0 | 0 | 0 |
2014-03-18T09:07:00.000
| 2 | 1.2 | true | 22,474,657 | 0 | 0 | 1 | 1 |
I have deployed my Django website but just now realized that I didn't make one of the fields compulsory. For the field it is currently,
blank=True, null=True
Now if I go ahead and change it to
blank=False
will there be any effect on the database and already existing data in it?
|
Django calling REST API from models or views?
| 22,479,708 | 6 | 10 | 4,474 | 0 |
python,django,rest,django-models,django-views
|
I think it is an opinion where to call web services. I would say don't pollute your models because it means you probably need instances of those models to call these web services. That might not make any sense. Your other choice there is to make things @classmethod on the models, which is not very clean design I would argue.
Calling from the view is probably more natural if accessing the view itself is what triggers the web service call. Is it? You said that you need to keep things in sync, which points to a possible need for background processing. At that point, you can still use views if your background processes issue http requests, but that's often not the best design. If anything, you would probably want your own REST API for this, which necessitates separating the code from your average web site view.
My opinion is these calls should be placed in modules and classes specifically encapsulated for your remote calls and processing. This makes things flexible (background jobs, signals, etc.) and it is also easier to unit test. You can trigger calling this code in the views or elsewhere, but the logic itself should be separate from both the views and the models to decouple things nicely.
You should imagine that this logic should exist on its own if there was no Django around it, then build other pieces that connect that logic to Django (ex: syncing the models). In other words, keep things atomic.
Yes, same reasons as above, especially flexibility. Is there any reason not to?
Yes, simply create the equivalent of an interface. Have each class map to the interface. If the fields are the same and you are lazy, in python you can just dump the fields you need as dicts to the constructor (using **kwargs) and be done with it, or rename the keys using some convention you can process. I usually build some sort of simple data mapper class for this and process the django or rest models in a list comprehension, but no need if things match up as I mentioned.
Another related option to the above is you can dump things into a common structure in a cache such as Redis or Memcache. It might be wise to atomically update this info if you are concerned with "freshness." But in general you should have a single source of authority that can tell you what is actually fresh. In sync situations, I think it's better to pick one or the other to keep things predictable and clear though.
One last thing that might influence your design is that by definition, keeping things in sync is a difficult process. Syncs tend to be very prone to failure, so you should have some sort of durable mechanism such as a task queue or job system for retries. Always assume when calling a remote REST API that calls can fail for crazy reasons such as network hiccups. Also keep in mind transactions and transactional behavior when syncing. Since these are important, it points again to the fact that if you put all this logic in a view directly, you will probably run into trouble reusing it in the background without abstracting things a bit anyway.
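The retry point in that last paragraph can be sketched as a small wrapper; in production you would more likely reach for a task queue such as Celery, so treat this as an illustration of the idea, not a recommendation. The flaky function and its failure count are fabricated for the demo:

```python
import time

def call_with_retries(func, attempts=3, base_delay=0.01):
    """Call a flaky remote operation, retrying with exponential backoff.
    Re-raises the last error once all attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return func()
        except IOError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Simulated remote call that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("network hiccup")
    return "synced"
```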
| 0 | 0 | 0 | 0 |
2014-03-18T12:15:00.000
| 2 | 1.2 | true | 22,479,095 | 0 | 0 | 1 | 2 |
I have to call external REST APIs from Django. The external data source schemas resemble my Django models. I'm supposed to keep the remote data and local ones in sync (maybe not relevant for the question)
Questions:
What is the most logical place from where to call external web services: from a model method or from a view?
Should I put the code that calls the remote API in external modules that will then be called by the views?
Is it possible to conditionally select the data source? Meaning presenting the data from the REST API or the local models depending on their "freshness"?
Thanks
EDIT: for the people willing to close this question: I've broken down the question in three simple questions from the beginning and I've received good answers so far, thanks.
|
Django calling REST API from models or views?
| 22,479,360 | 3 | 10 | 4,474 | 0 |
python,django,rest,django-models,django-views
|
What is the most logical place from where to call external web services: from a model method or from a view?
Ideally your models should only talk to database and have no clue what's happening with your business logic.
Should I put the code that calls the remote API in external modules that will then be called by the views?
If you need to access them from multiple modules, then yes, placing them in a module makes sense. That way you can reuse them efficiently.
Is it possible to conditionally select the data source? Meaning presenting the data from the REST API or the local models depending on their "freshness"?
Of course it's possible. You can just implement how you fetch your data on request. But the more efficient way might just be avoiding that logic and just sync your local data with remote data and show the local data on the views.
| 0 | 0 | 0 | 0 |
2014-03-18T12:15:00.000
| 2 | 0.291313 | false | 22,479,095 | 0 | 0 | 1 | 2 |
I have to call external REST APIs from Django. The external data source schemas resemble my Django models. I'm supposed to keep the remote data and local ones in sync (maybe not relevant for the question)
Questions:
What is the most logical place from where to call external web services: from a model method or from a view?
Should I put the code that calls the remote API in external modules that will then be called by the views?
Is it possible to conditionally select the data source? Meaning presenting the data from the REST API or the local models depending on their "freshness"?
Thanks
EDIT: for the people willing to close this question: I've broken down the question in three simple questions from the beginning and I've received good answers so far, thanks.
|
Python Django: Load Autofield to MySql Table using loaddata
| 22,500,375 | 1 | 0 | 344 | 0 |
python,mysql,django
|
Found the solution - I had to use dumpdata app.model --natural
| 0 | 0 | 0 | 0 |
2014-03-18T15:03:00.000
| 1 | 0.197375 | false | 22,483,205 | 0 | 0 | 1 | 1 |
I have a model with two DateField fields in it, which I dumped to JSON using dumpdata. Now I want to load those fixtures (I am using South) into my MySQL database, which leads to the following error:
CommandError: The database backend does not accept 0 as a value for AutoField.
Does anybody know this problem and the solution to it?
My Database is MySql (version 5.6.12) and I'm using Django 1.5.1. I used Sqlite before and want to change to MySQL.
|
AWS EC2 not running web server on default port
| 22,513,826 | 2 | 3 | 647 | 0 |
python,django,amazon-web-services,amazon-ec2
|
sudo python manage.py runserver 0.0.0.0:80
did the trick.
| 0 | 0 | 0 | 0 |
2014-03-18T20:24:00.000
| 1 | 1.2 | true | 22,490,257 | 0 | 0 | 1 | 1 |
I am new to AWS setup:
Here are the steps I followed to set up a Django web server (but it's not running on the public IP):
created AWS instance
installed Django 1.6.2
created sample app
added security group (Inbound Requests) of running instance with HTTP - TCP - 80 - 0.0.0.0/0
I tried the following ways to run the server:
python manage.py runserver 0.0.0.0:8000
python manage.py runserver ec2-XX-XXX-XXX-XX.us-west-2.compute.amazonaws.com:8000
python manage.py runserver
but the server is not accessible from the public DNS name given by EC2.
NOTE: running a micro instance with Ubuntu 12.04 (LTS) and virtualenv.
What is missing in the above steps?
Thanks.
|
Server-side Python code running continuously per session
| 22,520,376 | 1 | 0 | 183 | 0 |
python,django,session,flask,server-side
|
Celery is a great solution, but it can be overkill for many setups. If you just need tasks to run periodically (once an hour, once a day, etc.) then consider just using cron.
There's a lot less setup and it can get you quite far.
| 0 | 0 | 0 | 0 |
2014-03-19T03:42:00.000
| 2 | 0.099668 | false | 22,495,767 | 0 | 0 | 1 | 1 |
I have searched the forums for my question, but I'm either searching for the thing under the wrong name or the question is hard, which I really doubt.
I am developing a web app which would have a web interface written in one of the MVC frameworks like Django or even Flask, allow users to log in, identify a user's session, allow some settings to be made, and also run some Python process (a script which is basically a separate file) on the server on a per-session basis, using the settings made by the user. This process is quite long, can take even days to perform, and shouldn't affect the execution and performance of the MVC part of the app. Another issue is that this process should be run per user, so the basic usage model of such an app would be:
1. the user enters the site.
2. the user makes some settings which are mirrored to database.
3. the user pushes the launch button which executes some python script just for this user with the settings he has made.
4. the user is able to monitor some parameters of the script running based on some messages that the script itself generates.
I do understand that my question is related to the architecture of the app itself, and I'm quite new to Python and haven't had any experience developing such a complex application, but I'm also quite eager to learn about it. I understand the bricks from which my app should be built (like Django or Flask and the server-side script itself) but I know very little about how these elements should be glued together to create a seamless environment. Please direct me to some articles related to this topic, recommend some similar threads, or just give a clear high-level explanation of how such separate Python processes could be triggered, run, and monitored on a per-user basis from the controller part of MVC.
|
Should I deploy only the .pyc files on server if I worry about code security?
| 22,497,827 | 11 | 6 | 6,259 | 0 |
python,django,cloud,wsgi
|
Deploying .pyc files will not always work. If using Apache/mod_wsgi for example, at least the WSGI script file still needs to be straight Python code.
Some web frameworks also may require the original source code files to be available. Using .pyc files also does little to obscure any sensitive information that may be in templates used by a web framework.
In general, using .pyc files is a very weak defence and tools are available to reverse engineer them to extract information from them.
So technically your application may run, but it would not be regarded as very secure way of protecting your source code.
You are better off using a hosting service you trust. This generally means paying for reputable hosting rather than just using the cheapest one you can find.
| 0 | 0 | 0 | 1 |
2014-03-19T03:54:00.000
| 4 | 1.2 | true | 22,495,894 | 0 | 0 | 1 | 3 |
I want to deploy a Django application to a cloud computing environment, but I am worried about source code security. Can I deploy only the compiled .pyc files there? According to official python doc, pyc files are 'moderately hard to reverse engineer'.
What are the pros and cons of taking this approach? Is this a standard practice?
I am not using AWS, let me just say that I am in a country where cloud computing can not be trusted at all...
|
Should I deploy only the .pyc files on server if I worry about code security?
| 22,496,250 | 1 | 6 | 6,259 | 0 |
python,django,cloud,wsgi
|
Generally, deploying PYC files will work fine.
The pros: as you said, it helps a bit with protecting source code.
The cons, here are the points I found:
1). PYC files only work with the same Python version. E.g., if "a.pyc" was compiled by Python 2.6, "b.pyc" by 2.7, and b.pyc does "import a", it won't work; similarly, "python2.6 b.pyc" won't work either. So do remember to use the same Python version to generate all PYC files as the version on your cloud server.
2). If you want to SSH to the cloud server for some live debugging, PYC files cannot help you.
3). Deployment requires extra work.
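The version caveat in point 1) implies that every .pyc you deploy should be produced by the same interpreter that will run on the server. A small sketch using the stdlib's py_compile (the module name and path are illustrative):

```python
import os
import py_compile
import tempfile

# Compile a trivial module with the *current* interpreter; the resulting
# .pyc is tied to this interpreter's bytecode version, which is why mixing
# versions at deploy time breaks imports.
src = os.path.join(tempfile.gettempdir(), "pyc_demo.py")
with open(src, "w") as f:
    f.write("ANSWER = 42\n")

pyc = src + "c"
py_compile.compile(src, cfile=pyc)
```

In practice you would run compileall over the whole project with the target interpreter rather than compiling files one by one.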
| 0 | 0 | 0 | 1 |
2014-03-19T03:54:00.000
| 4 | 0.049958 | false | 22,495,894 | 0 | 0 | 1 | 3 |
I want to deploy a Django application to a cloud computing environment, but I am worried about source code security. Can I deploy only the compiled .pyc files there? According to official python doc, pyc files are 'moderately hard to reverse engineer'.
What are the pros and cons of taking this approach? Is this a standard practice?
I am not using AWS, let me just say that I am in a country where cloud computing can not be trusted at all...
|
Should I deploy only the .pyc files on server if I worry about code security?
| 22,495,975 | 1 | 6 | 6,259 | 0 |
python,django,cloud,wsgi
|
Yes, just deploying the compiled files is fine. Another point to consider is the other aspects of your application, for example whether current bugs let malicious users know what technology stack you are using, or what type of error messages are displayed when (if) your application crashes. These seem like some of the other aspects to me; I'm sure there are more.
| 0 | 0 | 0 | 1 |
2014-03-19T03:54:00.000
| 4 | 0.049958 | false | 22,495,894 | 0 | 0 | 1 | 3 |
I want to deploy a Django application to a cloud computing environment, but I am worried about source code security. Can I deploy only the compiled .pyc files there? According to official python doc, pyc files are 'moderately hard to reverse engineer'.
What are the pros and cons of taking this approach? Is this a standard practice?
I am not using AWS, let me just say that I am in a country where cloud computing can not be trusted at all...
|
How to cache data to be used in multiple ways at a single URL
| 22,516,105 | 0 | 0 | 64 | 0 |
python,django,performance,caching
|
I think your idea to store the prepped data in a file is a good one. I might name the file something like this:
/tmp/prepped-data-{{session_id}}.json
You could then just have a function in each view called get_prepped_data(session_id) that either computes it or reads it from the file. You could also delete old files when that function is called.
Another option would be to store the data directly in the user's session so it is cleaned up when their session goes away. The feasibility of this approach depends a bit on how much data needs to be stored.
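A sketch of the get_prepped_data idea with a freshness window; the 30-second max_age and file-naming scheme mirror the suggestion above but are otherwise arbitrary, and the expensive computation is simulated by a counter:

```python
import json
import os
import tempfile
import time

def get_prepped_data(session_id, compute, max_age=30):
    """Return prepped data for this session, recomputing only when the
    cached file is missing or older than max_age seconds."""
    path = os.path.join(tempfile.gettempdir(),
                        "prepped-data-%s.json" % session_id)
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < max_age:
        with open(path) as f:
            return json.load(f)
    data = compute()
    with open(path, "w") as f:
        json.dump(data, f)
    return data

# Stand-in for the 3-second database/Pandas step.
calls = {"n": 0}
def expensive():
    calls["n"] += 1
    return {"rows": [1, 2, 3]}
```

Each of the three views would call get_prepped_data with the same session id, so only the first hit pays the computation cost.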
| 0 | 0 | 0 | 0 |
2014-03-19T16:01:00.000
| 1 | 0 | false | 22,511,568 | 0 | 0 | 1 | 1 |
Let's say I have a page I'd like to render which will present some (expensive to compute) data in a few ways. For example, I want to hit my database and get some large-size pile of data. Then I want to group that data and otherwise manipulate it in Python (for example, using Pandas). Say the result of this manipulation is some Pandas DataFrame that I'll call prepped_data. And say everything up to this point takes 3 seconds. (I.e. it takes a while...)
Then I want to summarize that data at a single URL (/summary): I'd like to show a bar graph, a pie chart and also an HTML table. Each of these elements depends on a subset of prepped_data.
One way I could handle this is to make 3 separate views hooked up to 3 separate URL's. I could make pie_chart_view which would make a dynamically generated pie chart available at /piechart.svg. I could make bar_graph_view which would make a dynamically generated bar graph available at /bargraph.svg. And I could make summary_view which would finish by rendering a template. That template would make use of context variables generated by summary_view itself to make my HTML table. And it would also include the graphs by linking to their URL's from within the template. In this structure, all 3 view functions would need to independently calculate prepped_data. That seems less-than-ideal.
As an alternative, I could turn on some kind of caching. Maybe I could make a view called raw_data_view which would make the data itself available at /raw_data.json. I could set this to cache itself (using whatever Django caching backend) for a short amount of time (30 seconds?). Then each of the other views could hit this URL to get their data and that way I could avoid doing the expensive calculations 3 times. This seems a bit dicey as well, though, because there's some real judgement involved in setting the cache time.
One other route could involve creating both graphs within summary_view and embedding the graphics directly within the rendered HTML (which is possible with .svg). But I'm not a huge fan of that since you wind up with bulky HTML files and graphics that are hard for users to take with them. More generally, I don't want to commit to doing all my graphics in that format.
Is there a generally accepted architecture for handling this sort of thing?
Edit: How I'm making the graphs:
One comment asked how I'm making the graphs. Broadly speaking, I'm doing it in matplotlib. So once I have a Figure I like generated by the code, I can save it to an svg easily.
|
Can you display python web code in Joomla?
| 22,783,076 | 0 | 5 | 5,932 | 0 |
php,python,joomla,cherrypy,joomla3.0
|
Before considering Python:
What are you wanting to customize? (Perhaps some clever JavaScript or a Joomla extension already exists.)
Is the Joomla way not a better solution for your problem, given the fact that you're using Joomla? (Change the template, or the view templates of the modules and component in particular.)
In other words: do you understand Joomla well enough to know that you need something else? See below.
If Python is still the way to go:
Does your hosting support Python?
Should you reconsider your choice of CMS?
I like your choice of CherryPy.
| 0 | 0 | 0 | 1 |
2014-03-19T16:32:00.000
| 5 | 0 | false | 22,512,321 | 0 | 0 | 1 | 1 |
I'm building a Joomla 3 web site but I have the need to customize quite a few pages. I know I can use PHP with Joomla, but is it also possible to use Python with it? Specifically, I'm looking to use CherryPy to write some custom pieces of code but I want them to be displayed in native Joomla pages (not just iFrames). Is this possible?
|
Bind to LDAP after SSO?
| 22,533,335 | 1 | 1 | 153 | 0 |
php,python,ldap,single-sign-on,saml
|
Rather than using the user's credentials to bind to LDAP, get an application account at LDAP that has read permissions for the attributes you need on the users within the directory. Then, when you get the username via SSO, you just query LDAP using your application's ID.
Make sure you make your application ID's password super strong - 64 chars with a yearly change should be good. Better yet, do certificate-based authn.
| 0 | 0 | 1 | 0 |
2014-03-19T20:39:00.000
| 1 | 0.197375 | false | 22,517,604 | 0 | 0 | 1 | 1 |
I have a web application with an LDAP backend, to read and modify some LDAP attributes.
The web application uses SSO (single sign-on) to authenticate users.
How can I bind to LDAP if I only get a username as an attribute from SSO, without asking for the password again, since that would make SSO useless?
I use SimpleSAMLphp as the identity provider, and a Python-driven web application for LDAP management.
|
Local server giving wrong files. Is it possible I'm running 2 python servers?
| 22,522,021 | 1 | 2 | 1,036 | 0 |
python,localhost,simplehttpserver,localserver
|
Only one process can listen on a given port; you cannot have two SimpleHTTPServer processes listening on the same port. It is possible, however, to leave an old server process running and then overlook the new process failing to start, or an error message about a port conflict.
To debug this, use netstat (lsof on OS X, since BSD netstat is lame) to find the process listening on the port, and then 'ps -fww' to list data about that process. You can also look at /proc/$pid (Linux) to get a process's current working directory. lsof can also help track down files the process has open on Linux or BSD/OS X if you're unsure which files it's serving.
Hope it helps!
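A quick way to check whether something is already listening on a port, without reaching for netstat or lsof, is to try binding it yourself. This is only a sketch for confirming the symptom; it won't tell you *which* process holds the port:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Try to bind the port; failure means some process (perhaps a
    forgotten SimpleHTTPServer) is already listening there."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return False
    except socket.error:
        return True
    finally:
        s.close()
```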
| 0 | 0 | 0 | 0 |
2014-03-20T01:54:00.000
| 2 | 1.2 | true | 22,521,912 | 0 | 0 | 1 | 2 |
I'm in the directory /backbone/ which has a main.js file within scripts. I run python -m SimpleHTTPServer from the backbone directory and display it in the browser and the console reads the error $ is not defined and references a completely different main.js file from something I was working on days ago with a local python server.
I am new to this and have no idea what's going on. Would love some suggestions if you have time.
|
Local server giving wrong files. Is it possible I'm running 2 python servers?
| 67,453,910 | 0 | 2 | 1,036 | 0 |
python,localhost,simplehttpserver,localserver
|
I recently had this problem and it was due to the old page being stored in the browser cache. Accessing the port from a different browser worked for me (or you can clear your cache).
| 0 | 0 | 0 | 0 |
2014-03-20T01:54:00.000
| 2 | 0 | false | 22,521,912 | 0 | 0 | 1 | 2 |
I'm in the directory /backbone/ which has a main.js file within scripts. I run python -m SimpleHTTPServer from the backbone directory and display it in the browser and the console reads the error $ is not defined and references a completely different main.js file from something I was working on days ago with a local python server.
I am new to this and have no idea what's going on. Would love some suggestions if you have time.
|
Shared lock for Python objects
| 22,524,103 | 1 | 0 | 1,254 | 0 |
python,multithreading,locking
|
Since the updates are so infrequent, you're better off just making a copy of the object, updating the copy, and then updating the global variable to point to the new object. Simple assignments in Python are atomic, so you don't need any locks at all.
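A sketch of that copy-and-swap pattern with made-up field names: the worker builds a fresh object and rebinds the global in one step, so readers always see either the old or the new object, never a mix.

```python
import copy

state = {"price": 100, "updated": 0}      # the global object web threads read

def fetch_price():
    """Stand-in for the slow external web-service call."""
    return 101

def refresh_once():
    """Worker side: build a new object privately, then swap it in."""
    global state
    new_state = copy.deepcopy(state)      # never mutate the live object
    new_state["price"] = fetch_price()
    new_state["updated"] += 1
    state = new_state                     # atomic rebind under the GIL

def handle_request():
    """Web side: take one reference and read only through it."""
    snapshot = state                      # stays consistent even if a swap
    return dict(snapshot)                 # happens mid-request
```

In the real app, `refresh_once` would run inside the worker thread's 15-to-60-minute loop, and `handle_request` would be the Flask view returning the JSON dump.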
| 0 | 0 | 0 | 0 |
2014-03-20T03:19:00.000
| 1 | 1.2 | true | 22,522,802 | 1 | 0 | 1 | 1 |
I'm developing a tiny web application with Flask/Gunicorn on Heroku. Since I'm just prototyping, I have a single web process (dyno) with a worker thread started by the same process. The web application just returns a JSON dump of a global object, which is periodically updated by the worker thread monitoring an external web service. The global object is updated every 15 to 60 minutes. My plan was to use an exclusive lock in the worker thread when an update to the global object is needed, and a shared lock in the web threads so multiple requests can be satisfied concurrently. Unfortunately, it looks like Python doesn't have shared locks, only exclusive locks. How can I ensure consistency in the web threads, i.e., how can I be sure that the update to the global object is atomic while allowing multiple read-only accesses to the object?
|
Different Postgres users for syncdb/migrations and general database access in Django
| 22,527,486 | 1 | 3 | 78 | 1 |
python,django,postgresql
|
From ./manage.py help syncdb:
--database=DATABASE Nominates a database to synchronize. Defaults to the
"default" database.
You can add another database definition in your DATABASES configuration, and run ./manage.py syncdb --database=name_of_database_definition. You might want to create a small wrapper script for running that command, so that you don't have to type out the --database=... parameter by hand every time.
South also supports that option, so you can use it to specify the database for your migrations as well.
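Concretely, the settings might look like this; the `migrations` alias, usernames and passwords are made up, the point is two connections to the same database with different privileges:

```python
# settings.py -- two connections to the same PostgreSQL database: one for
# normal traffic, one (with a more privileged user) for syncdb/migrations.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "myapp",
        "USER": "myapp_web",     # only SELECT/INSERT/UPDATE/DELETE grants
        "PASSWORD": "secret1",
        "HOST": "localhost",
        "PORT": "5432",
    },
    "migrations": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "myapp",
        "USER": "myapp_ddl",     # table owner; may CREATE/ALTER tables
        "PASSWORD": "secret2",
        "HOST": "localhost",
        "PORT": "5432",
    },
}
```

With that in place, `./manage.py syncdb --database=migrations` (and South's `./manage.py migrate --database=migrations`) run DDL as the privileged user, while normal requests keep using `default`.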
| 0 | 0 | 0 | 0 |
2014-03-20T04:28:00.000
| 1 | 0.197375 | false | 22,523,519 | 0 | 0 | 1 | 1 |
I'm using Django 1.6 with PostgreSQL and I want to use a two different Postgres users - one for creating the initial tables (syncdb) and performing migrations, and one for general access to the database in my application. Is there a way of doing this?
|
Realtime forms in Django
| 22,533,787 | 0 | 0 | 81 | 0 |
jquery,python,django
|
I think ajax should do the trick for you
| 0 | 0 | 0 | 0 |
2014-03-20T12:12:00.000
| 1 | 0 | false | 22,532,644 | 0 | 0 | 1 | 1 |
I have a website and I want to have a form on it that multiple people can view. As the form gets updated by any of the individuals looking at it, everyone else should also see the updates without refreshing the page. Basically it will be a row from a table displayed as a form; each part of the form will be filled in periodically and will all start off empty.
I was thinking jquery might be able to do this but I do not fully understand everything about jquery yet.
Any thoughts or ideas on the best way to do this? I am currently just learning django as I go.
|
Exchange data between Python and PHP
| 22,542,914 | 1 | 4 | 419 | 0 |
php,python
|
Write a Python script that takes a path in sys.argv or the audio data via sys.stdin and writes metadata to sys.stdout. Call it from PHP using exec.
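A minimal sketch of such a bridge script. The metadata extraction below is a stand-in (a real daemon would read ID3 tags with a library such as mutagen), but the argv-in / JSON-on-stdout contract is the point:

```python
#!/usr/bin/env python
# meta_bridge.py -- invoked from PHP, prints one JSON object on stdout
import json
import os
import sys

def extract_metadata(path):
    """Stand-in for real tag reading; a real daemon would use mutagen here."""
    st = os.stat(path)
    return {
        "file": os.path.basename(path),
        "size_bytes": st.st_size,
        # artist/title/album read from the MP3 tags would be added here
    }

if __name__ == "__main__" and len(sys.argv) > 1:
    print(json.dumps(extract_metadata(sys.argv[1])))
```

On the PHP side, something like `$meta = json_decode(shell_exec('python meta_bridge.py ' . escapeshellarg($path)), true);` reads the result back as an array.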
| 0 | 0 | 0 | 1 |
2014-03-20T18:56:00.000
| 1 | 0.197375 | false | 22,542,566 | 0 | 0 | 1 | 1 |
Is it possible to exchange data between a PHP page and a Python application? How can I implement a PHP page that reacts to a Python application?
EDIT:
My application is divided in 2 parts: the web backend and a Python daemon. Via the web backend I upload MP3s to my server; these MP3s are processed by my Python daemon, which fetches metadata from MusicBrainz.
Now:
I need to show the user the results of the "Python fetch" so they can choose the right metadata.
Is this possible?
|
How to retrieve Facebook friend's information with Python-Social-auth and Django
| 41,433,786 | 1 | 11 | 7,106 | 0 |
python,django,facebook,facebook-graph-api,python-social-auth
|
Just some extra for the reply above. To get the token from extra_data you need to import the model with that data (took me a while to find this):
from social.apps.django_app.default.models import UserSocialAuth
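Once you have the token, the friends call itself is just a Graph API GET. A sketch follows; the ORM lines are commented out because they need a Django context, the endpoint shape is the standard Graph API one, and note that since Graph API v2.0 `/me/friends` only returns friends who also use your app:

```python
try:
    from urllib.parse import urlencode      # Python 3
except ImportError:
    from urllib import urlencode            # Python 2

GRAPH_ROOT = "https://graph.facebook.com"

def friends_url(access_token, fields="id,name,picture"):
    """Build the Graph API call for the logged-in user's friends."""
    query = urlencode({"access_token": access_token, "fields": fields})
    return "%s/me/friends?%s" % (GRAPH_ROOT, query)

# In a Django view, with python-social-auth wired up:
# social = request.user.social_auth.get(provider="facebook")
# token = social.extra_data["access_token"]
# then fetch friends_url(token) with urllib/requests and JSON-decode it
```

The `extra_data["access_token"]` lookup is the standard place python-social-auth stores the OAuth token for the backend.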
| 0 | 0 | 0 | 0 |
2014-03-21T00:57:00.000
| 2 | 0.099668 | false | 22,548,223 | 0 | 0 | 1 | 1 |
How can I retrieve Facebook friends' information using python-social-auth and Django? I can already retrieve profile information and authenticate the user, but I want to get more information about their friends and invite them to my app.
Thanks!
|
Paypal, encrypted add to cart button generate dynamically
| 22,621,678 | 2 | 0 | 383 | 0 |
python,paypal
|
Your question embodies a contradiction in terms. The purpose of so-called encrypted buttons is for Paypal to check that they exist as registered buttons. If you roll your own buttons, Paypal can't do that.
You're looking at the problem the wrong way. If someone chooses to send you money, that's very nice, but unless it's a price you recognize and advertise for one of your own items, you're not obliged to deliver anything. Your IPN handler should check that and fail the transaction if the price, item, etc. don't match your own catalog database.
You can either refund the money or even just let Paypal's normal processes do that for you if they have the cheek to raise an 'item not received' case. Or just keep it as a donation.
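The catalog check the answer recommends boils down to a comparison like this before honouring a (PayPal-verified) IPN. The item numbers and prices are illustrative, though `item_number` and `mc_gross` are standard IPN variable names:

```python
from decimal import Decimal, InvalidOperation

# Your own catalog, keyed by the item_number you generated the buttons from.
CATALOG = {
    "SKU-001": Decimal("9.99"),
    "SKU-002": Decimal("24.50"),
}

def ipn_matches_catalog(params):
    """True only if the IPN names a known item at the advertised price."""
    item = params.get("item_number")
    if item not in CATALOG:
        return False        # unknown item: treat as a donation or refund it
    try:
        paid = Decimal(params.get("mc_gross", "0"))
    except InvalidOperation:
        return False
    return paid == CATALOG[item]
```

A tampered button price then simply fails this check, and the handler can refuse to deliver the goods.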
| 0 | 0 | 0 | 0 |
2014-03-21T18:54:00.000
| 1 | 0.379949 | false | 22,567,255 | 0 | 0 | 1 | 1 |
In my current web application project, I'm developing a small e-commerce site with many products...
I need to implement PayPal for payments, and I have read a lot of documentation on the PayPal developer site.
The solution of implementing Payment Standard buttons (an Add to Cart button in my case) is fantastic, but I need to auto-generate the HTML for each product in my database.
If I auto-generate the button code in the clear (without encryption), the problem is that a malicious user can edit the amount for a product (in the rendered HTML page) and then purchase it (at their own price :D)...
What I want is to auto-generate the encrypted "Add to cart" button with some API.
What is an elegant solution to my problem?
I'm using Python to develop my application.
|
Sync data with Local Computer Architecture
| 22,629,711 | 0 | 0 | 63 | 0 |
python,django,data-binding,architecture
|
Sounds like you need a message queue.
You would run a separate broker server which is sent tasks by your web app. This could be on the same machine. On your two local machines you would run queue workers which connect to the broker to receive tasks (so no inbound connection required), then notify the broker in real time when they are complete.
Examples are RabbitMQ and Oracle Tuxedo. What you choose will depend on your platform & software.
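The moving parts can be sketched in-process with the stdlib. Here `queue.Queue` stands in for the broker and a thread for the remote worker; a real deployment would replace these with RabbitMQ (or similar) connections, but the message flow is the same:

```python
import queue
import threading

broker = queue.Queue()        # stands in for the message broker
results = {}

def worker():
    """Local-machine side: connects out to the broker and pulls tasks."""
    while True:
        task = broker.get()
        if task is None:                  # shutdown sentinel
            break
        results[task["id"]] = "done: %s" % task["payload"]
        broker.task_done()                # notify completion in real time

def submit(task_id, payload):
    """Web-app side: push a task for some local machine to pick up."""
    broker.put({"id": task_id, "payload": payload})

t = threading.Thread(target=worker)
t.start()
submit(1, "update config for site A")
broker.join()                 # wait until the worker reports completion
broker.put(None)
t.join()
print(results)                # {1: 'done: update config for site A'}
```

Because the worker only makes outbound connections to the broker, this model fits machines behind NAT, firewalls and proxies with no inbound port required.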
| 0 | 0 | 1 | 0 |
2014-03-24T06:24:00.000
| 2 | 0 | false | 22,602,390 | 0 | 0 | 1 | 1 |
The scenario is
I have multiple local computers running a python application. These are on separate networks waiting for data to be sent to them from a web server. These computers are on networks without a static IP and generally behind firewall and proxy.
On the other hand I have web server which gets updates from the user through a form and send the update to the correct local computer.
Question
What options do I have to enable this? Currently I am sending CSV files over FTP to achieve it, but this is not real time.
The application is built in Python, using Django for the web part.
Appreciate your help
|
How to code in openerp so that user can create his fields?
| 22,627,251 | 0 | 0 | 103 | 0 |
python,openerp
|
The user can add fields and models, and can customize the views etc., from the client side. Go to Settings/Technical/Database Structure; there you will find the Fields and Models menus where the user can add fields. The views can be customized under Settings/Technical/User Interface.
| 0 | 0 | 0 | 0 |
2014-03-24T08:46:00.000
| 1 | 0 | false | 22,604,620 | 0 | 0 | 1 | 1 |
I have been developing modules in OpenERP 7 using Python on Ubuntu 12.04. I want to give my users the ability to create whatever fields they want: they will set the name, data type, etc. for the field, and on a click the field will be created. I don't yet know how to implement this. My plan is to create a button that calls a function which creates a new field according to the details entered by the user. Is this approach right, and will it work? Please guide me so that I can work smartly.
Hoping for suggestions.
|
When I try to run 'cfx run' or 'cfx test' using the Mozilla Add-On SDK, my application binaries are not found
| 22,612,244 | 3 | 1 | 730 | 0 |
python,macos,firefox,firefox-addon,firefox-addon-sdk
|
It's looking for the Firefox binary file, not your application's binaries. You have to install Firefox because cfx run will open a browser with your add-on installed so you can use it and test it live.
If Firefox is already installed, then it is in a non-standard path, so you must tell the cfx command where to find it, this way:
cfx run -b /usr/bin/firefox
or
cfx run -b /usr/bin/firefox-trunk
These examples are only valid on some Linux distros like Ubuntu; on Mac OS X you will have to find the Firefox binary yourself (typically /Applications/Firefox.app/Contents/MacOS/firefox).
| 0 | 1 | 0 | 0 |
2014-03-24T13:19:00.000
| 1 | 1.2 | true | 22,610,616 | 0 | 0 | 1 | 1 |
I installed the the latest Add-On SDK by Mozilla (version 1.15). Installation was successful and when I execute cfx I get a list of all possible commands. I made a new separate empty folder, cd'd into it and ran cfx init. This was also successful and all necessary folders and files got created.
Now when I try to run the extension or test it, I get the following error:
I can't find the application binary in any of its default locations on
your system. Please specify one using the -b/--binary option.
I have tried looking up the docs to see what kind of file I should be looking for but was unsuccessful in solving the issue. I tried to create an empty bin folder within the add-on folder and i have tried initiating the template in different parents and sub-folders. I still get the same message.
I'm running on a Mac, OSX Mavericks 10.9.1
What's going on here exactly?
|
Django: Core library and South migrations
| 22,619,479 | 0 | 0 | 78 | 0 |
python,django,django-south
|
Well... the answer is to use apps.
That's what they're for. They were designed the way they are exactly because standard modules don't provide the level of integration needed.
If you start hacking away on your library to make it work on its own, you'll end up with a mess of code and glue about the same size as a Django app, but with a considerably worse smell.
| 0 | 0 | 0 | 0 |
2014-03-24T20:00:00.000
| 1 | 1.2 | true | 22,619,437 | 0 | 0 | 1 | 1 |
I'm currently working on a Django project which tends to get pretty complex over time. Therefore I'm planning to encapsulate basic core models and utilities that are going to be reused throughout the application in a separate space. Since these models are mostly base models needed by other apps, imho there's no need to create a Django app; instead I'd place them in a standard Python package (so the package acts just like a simple library).
Since I'm using south for migrations I'm running into problems when not creating an app and instead use my 'library', because south only considers apps for migrations.
What is the django way to avoid this 'problem' and to be able to also create migrations for my core models?
|
Flask accept request data as a stream without processing?
| 22,659,358 | 1 | 0 | 92 | 0 |
python,flask
|
The Werkzeug Request object heavily relies on properties and anything that touches request data is lazily cached; e.g. only when you actually access the .form attribute would any parsing take place, with the result cached.
In other words, don't touch .files, .form, .get_data(), etc. and nothing will be sucked into memory either.
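So in the view you just read `request.stream` yourself. Stripped down to plain WSGI (which is what Flask's `request.stream` wraps), the streaming read looks like this; the chunk size and response text are illustrative:

```python
import io

def raw_upload_app(environ, start_response):
    """Plain-WSGI sketch of what reading Flask's request.stream amounts to:
    consume the body in chunks, never parse it as form data."""
    total = 0
    stream = environ["wsgi.input"]
    remaining = int(environ.get("CONTENT_LENGTH") or 0)
    while remaining > 0:
        chunk = stream.read(min(64 * 1024, remaining))
        if not chunk:
            break
        total += len(chunk)          # real code would write the chunk to disk
        remaining -= len(chunk)
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [("received %d bytes" % total).encode("ascii")]
```

Because nothing ever calls the form/files accessors, the body is never buffered or parsed; memory use stays bounded by the chunk size.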
| 0 | 0 | 0 | 0 |
2014-03-25T05:36:00.000
| 1 | 1.2 | true | 22,626,203 | 0 | 0 | 1 | 1 |
I have an endpoint in my Flask application that accepts large data as the content. I would like to ensure that Flask never attempts to process this body, regardless of its content-type, and always lets me read it with the Request.stream interface.
This applies only to a couple of endpoints, not my entire application.
How can I configure this?
|
Good places to deploy a simple Django website
| 22,655,516 | 0 | 0 | 125 | 0 |
python,django,hosting
|
Hosting it yourself can be cheaper, but you will have to spend some time maintaining the system to keep it safe. Choosing a service may be a bit more expensive, but you don't have to deal with the system itself.
Choose what suits you best.
| 0 | 0 | 0 | 0 |
2014-03-26T08:39:00.000
| 3 | 0 | false | 22,655,420 | 0 | 0 | 1 | 2 |
I am looking for options on places to host a Django site.
Should I find a service that already has the proper programs and dependencies installed?
Or can I gain access to a server and install them myself?
|
Good places to deploy a simple Django website
| 22,655,606 | 1 | 0 | 125 | 0 |
python,django,hosting
|
Webfaction
Heroku
Google App Engine
AWS Elastic Beanstalk
Windows Azure
But it's cheaper to do it yourself. VPSes these days are quite cheap (digitalocean.com, $5/month). An easy-to-manage combination is Ubuntu + Nginx + Gunicorn; follow some tutorials on how to secure and update your VPS.
| 0 | 0 | 0 | 0 |
2014-03-26T08:39:00.000
| 3 | 0.066568 | false | 22,655,420 | 0 | 0 | 1 | 2 |
I am looking for options on places to host a Django site.
Should I find a service that already has the proper programs and dependencies installed?
Or can I gain access to a server and install them myself?
|
Display single record in django modeladmin
| 22,660,278 | 2 | 0 | 166 | 0 |
python,django,django-models,django-admin
|
I suggest you do something like what I am doing below.
The Django admin lets you define a method named after an entry in list_display and override the content returned for that column, like below.
class AAdmin(admin.ModelAdmin):
    list_display = ('id', 'email_settings')

    def email_settings(self, obj):
        return '<a href="%s">%s</a>' % ('/admin/core/emailsetting/?id=' + str(obj.email_setting.id), obj.email_setting.id)
    email_settings.allow_tags = True
    email_settings.short_description = "Email Setting Link"
Here you can see the URL is hardcoded.
You can use _meta to get the app name and model name.
Example: obj._meta.app_label
| 0 | 0 | 0 | 0 |
2014-03-26T11:53:00.000
| 1 | 1.2 | true | 22,660,190 | 0 | 0 | 1 | 1 |
I want to implement something like this:
I have model A admin with a status field which is a link to the model B admin.
Now when I click on the column for a row linking to the model B admin, it goes to the model B admin (which it currently does), but it should display only the single model B record I clicked, out of all the records.
Model A contains a foreign key to model B's record, and that is the record which should be displayed in the admin view.
|
Database engine choice for Django/ Python application
| 22,664,876 | 1 | 1 | 759 | 1 |
python,database,django,sqlite,postgresql
|
Use PostgreSQL. Our team worked with sqlite3 for a long time; however, when you import data into the db, it often gives the message 'database is locked!'
The advantages of sqlite3:
it is small and, as you put it, no server setup is needed
max_length in models.py is not strict: if you set max_length=10 and put 100 chars in the field, sqlite3 never complains about it, and never truncates it.
But PostgreSQL is faster than sqlite3, and if you use sqlite3,
some day you will want to migrate to PostgreSQL. This matters because sqlite3 never truncates strings but PostgreSQL complains about them!
| 0 | 0 | 0 | 0 |
2014-03-26T13:26:00.000
| 2 | 0.099668 | false | 22,662,456 | 0 | 0 | 1 | 1 |
I am working on a Python/Django application. The core logic rests in a Python application, and the web UI is taken care of by Django. I am planning on using ZMQ for communication between the core and UI apps.
I am using a time-series database, which uses PostgreSQL in the background to store string data, and another time-series tool to store time-series data. So I already have a PostgreSQL requirement as part of the time-series db. But I need another db to store data (other than time-series), and I started work using PostgreSQL.
sqlite3 was a suggestion from one of my team members. I have not worked on either, and I understand there are pros and cons to each one of them, I would like to understand the primary differences between the two databases in question here, and the usage scenarios.
|
request.method == "post" returns a false, but
| 22,666,767 | 0 | 0 | 164 | 0 |
django,python-2.7,django-forms
|
request.POST is a dictionary-like object. When it is not empty, it evaluates to True.
request.method == 'POST' (note the upper-case POST and the double equals sign ==) is how you check the method.
I believe that you compared against 'post' in lower-case, which is clearly not what you meant: request.method is always upper-case, so request.method == 'post' is False even for a POST request.
2014-03-26T16:01:00.000
| 1 | 0 | false | 22,666,642 | 0 | 0 | 1 | 1 |
I am adding data from a ModelForm to the db, but "if request.POST" returns a true, and "if request.method == 'post'" returns a false. How can that be? From what I understand it is supposed to work the other way around.
|
Can't get Django/Postgres app settings working on Heroku
| 22,693,845 | 5 | 3 | 1,923 | 1 |
python,django,postgresql,heroku
|
Have you set your DJANGO_SETTINGS_MODULE environment variable? I believe what is happening is this: by default Django is using your local.py settings, which is why it's trying to connect on localhost.
To make Django detect and use your production.py settings, you need to do the following:
heroku config:set DJANGO_SETTINGS_MODULE=settings.production
This will make Django load your production.py settings when you're on Heroku :)
| 0 | 0 | 0 | 0 |
2014-03-26T22:12:00.000
| 1 | 1.2 | true | 22,674,128 | 0 | 0 | 1 | 1 |
I'm making a Django app with the Two Scoops of Django template. Getting this Heroku error, are my Postgres production settings off?
OperationalError at /
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
Exception Location: /app/.heroku/python/lib/python2.7/site-packages/psycopg2/__init__.py
foreman start works fine
Procfile: web: python www_dev/manage.py runserver 0.0.0.0:$PORT --noreload
local.py settings:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'www',
'USER': 'amyrlam',
'PASSWORD': '*',
'HOST': 'localhost',
'PORT': '5432',
}
}
production.py settings: commented out local settings from above, added standard Heroku Django stuff:
import dj_database_url
DATABASES['default'] = dj_database_url.config()
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
ALLOWED_HOSTS = ['*']
import os
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
STATIC_ROOT = 'staticfiles'
STATIC_URL = '/static/'
STATICFILES_DIRS = (
os.path.join(BASE_DIR, 'static'),
)
UPDATE: production settings, tried changing:
import dj_database_url
DATABASES['default'] = dj_database_url.config(default=os.environ["DATABASE_URL"])
(named my Heroku color URL to DATABASE_URL, same link in heroku config)
|
Flask Session will not Persist
| 22,688,640 | 6 | 5 | 2,289 | 0 |
python,flask,session-variables
|
The problem was that I had made the key static in my __init__, which caused it to work in dev, but in production the .wsgi was still generating it dynamically. I have changed this and all seems to be working now.
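The failure mode is easy to demonstrate: a key generated at process start (e.g. with `os.urandom`) differs between the dev server and each mod_wsgi process, so a session cookie signed by one process fails validation in another. A sketch, where the key string is a placeholder and not a real secret:

```python
import os

def dynamic_key():
    """What the broken setup effectively did: a fresh key per process start."""
    return os.urandom(24)

# The fix: one fixed value, identical in every process that serves the app.
STATIC_SECRET_KEY = "change-me-but-keep-me-constant"   # placeholder
# app.secret_key = STATIC_SECRET_KEY                   # Flask side of the fix
```

With several WSGI processes each calling `dynamic_key()` at startup, which cookie validates becomes a matter of which process handles the request, exactly the intermittent KeyError described above.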
| 0 | 0 | 0 | 0 |
2014-03-26T23:18:00.000
| 1 | 1.2 | true | 22,675,084 | 0 | 0 | 1 | 1 |
I have recently deployed my first Flask application (first web application ever actually), one problem I am running into and haven't had luck tracking down is related to sessions.
What I am doing is when the user logs in I set session['user'] = user_id and what is happening is I occasionally get a key error when making a request involving that session key. If I try to make the request again the session key is there and the request works fine. I have done research and set the app.config['SERVER_NAME'] to my domain and made sure the secret_key was static, it was dynamic before.
This does not happen when on my local development server so I am a bit stumped at this point.
|
High CPU usage for DJango 1.4 on Windows 2012
| 22,773,695 | 0 | 0 | 94 | 0 |
performance,python-2.7,32-bit,windows-server-2012
|
The issue was in the 32-bit version of Python shipped with Zoo.
Installing a 64-bit version and modifying the Zoo engine to use it has boosted things significantly.
| 0 | 1 | 0 | 0 |
2014-03-27T06:26:00.000
| 1 | 1.2 | true | 22,679,857 | 0 | 0 | 1 | 1 |
We migrated to Helicon Zoo on Windows 2012 (from ISAPI on 2008). Problem is that the users started complaining about random slowdowns and timeouts with the application.
The Python is 2.7 32-bit (due to Zoo requirements).
That said, the problem is not Zoo related, as the runserver seems to exhibit same issues.
The CPU shows to be the highest usage, practically reaching 80%-90% on every request.
On Linux, same application works just fine.
Are there any known caveats with Python 2.7 32-bit on Windows 2012?
|
Django dev server request.META has all my env vars
| 54,917,188 | 2 | 11 | 703 | 0 |
python,django
|
I just ran into this as well, and it caught me by surprise; I thought my page was sending all my env variables to the server. I use the env to store credentials, so I was concerned.
Any application running in your environment has access to your env variables, and therefore the server has access to them. Bottom line: the browser is not sending your env variables to the server; the request object is built on the server side.
| 0 | 0 | 0 | 0 |
2014-03-27T12:48:00.000
| 2 | 0.197375 | false | 22,688,151 | 0 | 0 | 1 | 1 |
Why do I see all my environment variables in request.META when using the dev server?
|
Multiple assignment of variables in coffee
| 22,692,398 | 16 | 6 | 3,111 | 0 |
javascript,python,coffeescript
|
Try with [a, b, c] = ['this', 'is', 'variables'].
| 0 | 0 | 0 | 0 |
2014-03-27T15:33:00.000
| 1 | 1.2 | true | 22,692,291 | 1 | 0 | 1 | 1 |
Can I assign multiple variables in coffee like in python:
a, b, c = 'this', 'is', 'variables'
print c >>>variables
|
Security concerning MongoDB on ec2?
| 22,716,431 | 2 | 1 | 182 | 1 |
python,mongodb,security,amazon-web-services,amazon-ec2
|
EC2 security groups by default block all incoming ports except the ones you have sanctioned, so the firewall will actually stop someone from getting directly to your MongoDB instance; as such, yes, it is secure enough.
Since the instances are physically isolated, there is no chance of the problem you can get on shared hosting of someone being able to route through the back of their instance to yours (though some things are still shared, like IO read heads).
| 0 | 0 | 0 | 0 |
2014-03-28T14:39:00.000
| 2 | 0.197375 | false | 22,715,888 | 0 | 0 | 1 | 2 |
I'm using Flask on an EC2 instance as the server, and on that same machine Flask talks to a MongoDB.
On the EC2 instance I only leave ports 80 and 22 open, without opening the mongo port (27017), because all the clients are supposed to talk to the Flask server via HTTP calls. Only the Flask app has code to insert into or query the database.
What I'm wondering is
Is it secure enough? I'm using a key file to ssh to that ec2 machine, but I do need to be 99% sure that nobody else could query/insert into the mongodb
If not, what shall I do?
Thanks!
|
Security concerning MongoDB on ec2?
| 22,716,299 | 2 | 1 | 182 | 1 |
python,mongodb,security,amazon-web-services,amazon-ec2
|
Should be secure enough. If I understand correctly, you don't have port 27017 open to the world, i.e. you have (or should have) blocked it through your AWS security group and perhaps the local firewall on the EC2 instance; then the only access to that port will be from calls originating on the same server.
Nothing is 100% secure, but I don't see any holes in what you have done.
| 0 | 0 | 0 | 0 |
2014-03-28T14:39:00.000
| 2 | 1.2 | true | 22,715,888 | 0 | 0 | 1 | 2 |
I'm using Flask on an EC2 instance as the server, and on that same machine Flask talks to a MongoDB.
On the EC2 instance I only leave ports 80 and 22 open, without opening the mongo port (27017), because all the clients are supposed to talk to the Flask server via HTTP calls. Only the Flask app has code to insert into or query the database.
What I'm wondering is
Is it secure enough? I'm using a key file to ssh to that ec2 machine, but I do need to be 99% sure that nobody else could query/insert into the mongodb
If not, what shall I do?
Thanks!
|
How to check if there are errors in python
| 22,719,539 | 3 | 1 | 182 | 0 |
python,google-app-engine
|
If you are using the App Engine Launcher then by clicking on the Logs you can see all the logs and errors.
An alternative way is to start the development server via the command line (as already mentioned) and you will see all the logs there, which makes it much easier to work with, because the Logs window is not that flexible.
| 0 | 1 | 0 | 0 |
2014-03-28T15:45:00.000
| 2 | 1.2 | true | 22,717,414 | 1 | 0 | 1 | 1 |
While using Google App Engine, if there is an error in the Python code the result is a blank page. It is difficult to debug since you don't get the line number on which the error occurred. It is extremely frustrating when you get a blank page because of an indentation error. Is there any way to execute a Google App Engine script in the Python interpreter so I get the Python error there itself?
|
Using Javascript variables in Python
| 22,718,068 | 3 | 1 | 202 | 0 |
javascript,python,tornado
|
No.
What you are doing in Tornado is constructing some HTML and javascript as text, ready to be sent to the user's browser to be interpreted. On the server, it is only text. You can put values from Python into the text, because the Python is running on the server. There is a clear and complete separation between what happens on the server (Tornado, python) and what happens later on the client (HTML, Javascript).
| 0 | 0 | 0 | 0 |
2014-03-28T16:11:00.000
| 1 | 0.53705 | false | 22,717,928 | 1 | 0 | 1 | 1 |
While working with Tornado templates, I know we can use Python variables in HTML/Javascript using {{python_variable}}.
Similarly, is it possible to use a Javascript variable in Python code, without passing it to another file?
|
openERP 7 need to export data in UTF-8 CSV , but how?
| 22,756,295 | 1 | 0 | 574 | 0 |
python,postgresql,openerp,erp,openerp-7
|
Encodings are a complicated thing, and it is difficult to answer an encoding-related question without precise facts. ANSI is not an encoding, I assume you actually mean ASCII. And ASCII itself can be seen as a subset of UTF-8, so technically ASCII is valid UTF-8.
OpenERP 7.0 only exports CSV files in UTF-8, so if you do not get the expected result you are probably facing a different kind of issue, for example:
The original data was imported using a wrong encoding (you can choose the encoding when you import, but again the default is UTF-8), so it is actually corrupted in the database, and OpenERP cannot do anything about it
The CSV file might be exported correctly in UTF-8 but you are opening it with a different encoding (for example on Windows most programs will assume your files are ISO-8859-1/Latin-1/Windows-1252 encoded). Double-check the settings of your program.
If you need more help you'll have to be much more specific: what result do you get (what does the data look like), what did you expect, etc.
| 0 | 0 | 0 | 0 |
2014-03-28T16:46:00.000
| 1 | 1.2 | true | 22,718,646 | 0 | 0 | 1 | 1 |
I can export a CSV with OpenERP 7, but it is encoded in ANSI. I would like to have it as a UTF-8 encoded file. How can I achieve this? The default export option in OpenERP doesn't have any extra options. What files should be modified? Or is there an app for this? Any help would be appreciated.
|
Creating messenger for django
| 22,729,332 | 1 | 0 | 3,320 | 0 |
python,django,messenger
|
Or you can install an XMPP server (like ejabberd) and write a server-side interface over it. It will be an easier, faster and more optimal solution.
Gmail and Facebook both use the XMPP protocol. People using your application will also be able to send chat requests to their friends on Gmail.
You won't even have to write a website interface; there are JavaScript libraries (like Converse.js) available which you can plug directly into your website, and you will be good to go.
| 0 | 0 | 0 | 0 |
2014-03-29T08:23:00.000
| 2 | 0.099668 | false | 22,728,758 | 0 | 0 | 1 | 1 |
I'm a learning Python/Django programmer and want to try to create a simple web messenger. Is it realistic to write a web messenger in Django? And do any modules for that exist, or any open-source protocols that support Python?
|
django difference between validator and clean_field method
| 22,736,818 | 0 | 5 | 1,315 | 0 |
python,django,django-forms,django-validation
|
A field can have several validators (such as those behind min_length and max_length), which are standalone callables run automatically when the field's value is cleaned. A clean_<field> method, by contrast, is defined on the form, runs after the field's own validation, and can transform the value or raise its own validation errors.
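The division of labour can be sketched in plain Python (a conceptual model, not Django's actual internals): validators are callables attached to a field and run first; `clean_<field>` is a method you write on the form that runs afterwards and may also transform the value.

```python
def not_empty(value):                      # a "validator": raises on bad input
    if not value:
        raise ValueError("This field is required.")

class MiniForm:
    def __init__(self, data):
        self.data = data
        self.validators = {"name": [not_empty]}

    def clean_name(self):                  # like Django's clean_<field>
        return self.data["name"].strip().title()

    def full_clean(self):
        cleaned = {}
        for field, value in self.data.items():
            for validate in self.validators.get(field, []):
                validate(value)            # validators run first
            method = getattr(self, "clean_%s" % field, None)
            if method:
                value = method()           # then clean_<field> runs
            cleaned[field] = value
        return cleaned

form = MiniForm({"name": "  ada lovelace  "})
print(form.full_clean())                   # {'name': 'Ada Lovelace'}
```

In short: use a validator for a reusable, single-value check; use clean_<field> for per-form logic or for normalising the value.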
| 0 | 0 | 0 | 0 |
2014-03-29T20:52:00.000
| 2 | 0 | false | 22,736,754 | 0 | 0 | 1 | 1 |
In a form in django, what is the difference between a validator for a field and a clean_<field> method for that field?
|
How to communicate between Django and Twisted when implementing a publish-subscribe pattern?
| 22,750,492 | 1 | 3 | 578 | 0 |
python,django,websocket,twisted,publish-subscribe
|
There are dozens or hundreds of ways to do inter-process communication. For example, you could use HTTP by running an HTTP server in one process and using an HTTP client in the other.
The specific choice of protocol probably doesn't matter a whole lot. The particular details of the kind of communication you need might suggest one protocol over the others. If the extent of your requirements are just to provide notification that "something has happened" then a very simple protocol will probably do the job just fine.
| 0 | 1 | 0 | 0 |
2014-03-30T03:48:00.000
| 2 | 0.099668 | false | 22,740,033 | 0 | 0 | 1 | 1 |
I'm implementing a WebSocket server using Python and Autobahn (something that builds off of Twisted). What's the best way to let my Autobahn/Twisted server know that something has happened from within my Django application?
More specifically, I'm implementing a notifications service and instant update service that automatically let's my client side application know when things have changed and what it needs to update.
Is there any way to allow Django to "publish" to my Twisted server and then update the client side? I'm not really sure how this should all look.
Thanks
|
How to get a redditors most down voted comment using PRAW?
| 23,044,806 | 0 | 0 | 585 | 0 |
python,praw
|
The sorting types available in PRAW are equivalent to those available in the web interface, such as 'new', 'top' or 'controversial'. There isn't a special sort for retrieving worst comments. It may seem silly to loop through all of them, but that's the only way to do what you want.
| 0 | 0 | 0 | 0 |
2014-03-30T03:53:00.000
| 2 | 0 | false | 22,740,059 | 0 | 0 | 1 | 2 |
Is there a way to get a redditors worst comment using praw?
I have tried redditor.get_comments(sort="worst").next().body with different sorts but nothing produces the desired result. I suppose I could get all their comments and then loop through them but that seems silly.
|
How to get a redditors most down voted comment using PRAW?
| 40,520,320 | 1 | 0 | 585 | 0 |
python,praw
|
This is a little late, but probably the best way to do this is to sort by top, then use after=t1_d9pvq54 (for example) and a high count to quickly page through the comments until you get to the last one which will be the worst comment.
| 0 | 0 | 0 | 0 |
2014-03-30T03:53:00.000
| 2 | 0.099668 | false | 22,740,059 | 0 | 0 | 1 | 2 |
Is there a way to get a redditors worst comment using praw?
I have tried redditor.get_comments(sort="worst").next().body with different sorts but nothing produces the desired result. I suppose I could get all their comments and then loop through them but that seems silly.
|
Scrapy: scraping website where targeted items are populated using document.write
| 22,757,917 | 2 | 0 | 428 | 0 |
python,web-scraping,scrapy
|
You can't do this, as scrapy will not execute the JavaScript code.
What you can do:
Rely on a headless browser like Selenium, which will execute the JavaScript. Afterwards, use XPath (or simple DOM access) as before to query the web page once the scripts have run.
Understand where the contents come from, and load and parse the source directly instead. Chrome Dev Tools / Firebug might help you with that, have a look at the "Network" panel that shows fetched data.
Especially look for JSON, sometimes also XML.
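Once you've located the XHR endpoint in the Network panel, its response can often be parsed directly instead of scraping HTML. A sketch with a made-up response body; in Scrapy you would fetch the endpoint and read response.body instead of the literal string:

```python
import json

# Hypothetical body of an XHR response found via the Network panel.
body = '{"items": [{"title": "First post"}, {"title": "Second post"}]}'

data = json.loads(body)
titles = [item["title"] for item in data["items"]]
```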
| 0 | 0 | 1 | 0 |
2014-03-31T09:14:00.000
| 1 | 1.2 | true | 22,757,755 | 0 | 0 | 1 | 1 |
I am trying to scrap a website where targeted items are populated using document.write method. How can I get full browser html rendered version of the website in the Scrapy?
|
Openshift overrides email header 'from', 'reply-to' fields. How to send email without having to use SendGrid nor other paid email service.?
| 24,405,097 | 0 | 0 | 403 | 0 |
python,django,email,openshift,mezzanine
|
I myself am looking for a free SMTP library just to send emails. So far, not much luck.
I tried the Java embedded SMTP library Aspirin. I am able to send mails, but I'm not very comfortable working with it, as I keep getting some unknown exceptions.
Apache James is another Java-based SMTP server, but I don't think it can be embedded in code yet.
| 0 | 0 | 0 | 0 |
2014-03-31T09:26:00.000
| 1 | 0 | false | 22,757,997 | 0 | 0 | 1 | 1 |
I have django 1.6 and python 2.7 deployed on Openshift. I notice that when the application sends emails out, Openshift overrides the email header by changing the 'From' field to '[email protected]' and ignore all 'Reply-to' field that has been set in the application.
I have searched around and it seems like Openshift overrides the email header and recommendation is to use their email service partner which is NOT FREE.
Is there any other way to avoid this ie. deploy Django application on Openshift while still having the application sends email as per dictated in the program. This exact program runs with no issues on test environment and localhost.
Any pointers are much appreciated. Thank you.
|
Where should I make heavy computations ? Client or server side?
| 22,761,056 | 6 | 2 | 1,700 | 0 |
javascript,python,web-applications,numpy
|
Consider both situations: If the computation is client-side, then your client gets loaded, the computation power of the client computer (which maybe is just a mobile phone or whatever) comes into play, and it won't matter much whether other users of the site are doing computations at the same time.
On the other hand, if the computation is done server-side, then your server gets loaded; the computation time in a single-user situation is probably smaller (because your server is probably more powerful than the average client computer), but throughput will drop dramatically if you have lots of users accessing your server at the same time.
Other aspects come into play:
If you do it server-side, you should ensure that no private data gets leaked in the process of transmitting the parameters or the results (so use https or similar).
Doing it server-side allows for later upgrading of the computational power (maybe split the task onto several nodes in order to have smaller computation time for higher server costs).
Doing it client-side might allow to do it even off-line, given a proper caching mechanism.
So, all in all, your question is too broad and underspecified to give a clear answer.
| 0 | 0 | 0 | 0 |
2014-03-31T11:50:00.000
| 1 | 1.2 | true | 22,760,837 | 0 | 0 | 1 | 1 |
I have a desktop application, made in Python, with PyQT and scipy / numpy.
The aim of the program is, to find the optimal set of parameters for a differential equation, given some data.
Thus, we use a numerical solver and an optimization routine from numpy. The computation is quite heavy but usually quick (30 sec max), though it can take longer (several hours) if we use custom parameter-space exploration.
The next step is to "put it on the cloud", so the user doesn't have to bother how to install the application.
Thus, we want to create a Flask application, with display using d3.js or something like that.
I have never done any JS, so I wanted to know what is the best architecture :
the user uploads his data, it is sent to the server, which performs the computations and sends the results back => we can use scipy / numpy on the server, but too many simultaneous connections can shut down everything.
the user uploads his data, and it is processed in JavaScript on the client side => no more problem on the server, but I have to learn a new language and implement the scientific computations myself (and I think it will be slower than the Fortran routines behind numpy)
Using / learning JS is not the real problem, being efficient with it is more problematic.
Which is the best option for future modifications (the computations are longer, we want to provide a clustering of the results...) and for development time.
What would you do ?
Thanks.
|
Use production App Engine datastore on development machine?
| 22,772,893 | 2 | 0 | 334 | 0 |
python,google-app-engine
|
TL;DR: We do not support having the dev_appserver use the real app-engine datastore. Even with the suggested use of "remote_api", AFAIK, the dev_appserver does not know how to use it.
If you really want to make this work, you could write your own low-level API and have your own datastore abstraction that uses your API instead of the actual datastore, however this is a non trivial amount of work.
Another option is to have a servlet that can pre-populate your dev datastore with the data you need from checked in files. The checked in raw data could be non-real data or obfuscated real data. At dev_appserver startup, you hit this URL and your database becomes pre-populated with data. If you take this route, you get the bonus of not operating on your live data with dev code.
HTH!
| 0 | 1 | 0 | 0 |
2014-03-31T15:22:00.000
| 2 | 1.2 | true | 22,765,563 | 0 | 0 | 1 | 1 |
Is it possible to setup the App Engine SDK on my local machine to use the live datastore while developing? Sometimes it's just easier for my workflow to work live.
If not, is there an easy way to download or sync the live data to development machine?
Thanks!
|
Disabling alerts and errors in Django File Uploader
| 22,808,378 | 0 | 0 | 17 | 0 |
javascript,python,django,file-upload
|
I've noticed that passing a showMessage function when initializing the plugin overrides its default behaviour. Problem solved.
| 0 | 0 | 0 | 0 |
2014-04-02T09:59:00.000
| 1 | 1.2 | true | 22,807,853 | 0 | 0 | 1 | 1 |
I'm using this plugin: https://github.com/zmathew/django-ajax-upload-widget and I'm wondering if there is any way of disabling alerts/notifications when upload fails without changing plugin code?
I want to use bootstrap notifications instead of this ugly default alert popups, but also I have to use Django Eggs so I can't change the plugin code/files.
In documentation I've seen that I can set plugin behaviour when upload success, but can't see anything about upload fail. Please help.
|
Are there any three-way data binding frameworks between the DOM, JavaScript, and server-side database for AngularJS and Django?
| 22,821,198 | 0 | 4 | 1,890 | 0 |
javascript,python,django,angularjs,data-binding
|
JSON is the way to go. I would look at libraries like Tastypie and Django REST framework to reduce the amount of code you have to write.
| 0 | 0 | 0 | 0 |
2014-04-02T19:05:00.000
| 2 | 0 | false | 22,820,723 | 0 | 0 | 1 | 1 |
One of the features hawked by AngularJS aficionados is the two-way data binding between DOM contents and JavaScript data that the framework offers.
I'm presently working on a couple of learning projects integrating AngularJS and Django, and one of the pain points is that the problem AngularJS solves between data in JavaScript and DOM representation is not immediately solved for the pairing of AngularJS and Django. Ergo, coordinating AngularJS and Django (AFAICT as an AngularJS novice) involves the kind of programming that is common in jQuery DOM manipulations and Angular seems to be written to obviate the need for. This is great for learning, but leads me to ask, "Has anyone tried to do for AngularJS + Django what AngularJS and Django individually offer to developers, namely obviating the need for this kind of stitching-up code?" AngularJS is more explicit about "Let two-way binding do the work," but Django as "the web framework for perfectionists with deadlines" seems intended to decrease manual labor.
At present I am building JSON to send to the client, but I was wondering if there were any projects to reconcile AngularJS to Django.
|
Multiple websites using the same app - how to set this up?
| 22,856,740 | 2 | 1 | 60 | 0 |
django,python-2.7
|
I recently had something similar to do.
I have a specific settings file for each domain, with a unique SITE_ID, and also a wsgi file per site. Then in my http.conf (I'm using Apache on WebFaction) I set up multiple VirtualHost instances, each pointing to the specific wsgi file.
My configuration looks something like this:
random_django_app/
    __init__.py
    models.py
    ...
another_app/
    ...
settings_app/
    settings/
        __init__.py
        base.py
        example_co_uk.py
        example_ca.py
        ...
    wsgis/
        __init__.py
        example_co_uk.py
        example_ca.py
    __init__.py
    urls.py
2014-04-04T03:14:00.000
| 2 | 1.2 | true | 22,852,845 | 0 | 0 | 1 | 1 |
I have an app that shows products available in the US. If i want to change the country, I simply modify the value of a variable in my settings.py file.
Now... each country I serve needs to have its own site, e.g. example.co.uk, example.ca, etc. They'll all be hosted on the same server and use the same database. The views, static files,etc. would be almost the same for each country.
What's the best way of setting this up? Should I have one main app and then have per-country apps that extend the app?
(Using Django 1.6.2/Python 2.7)
|
Security optimal file permissions django+apache+mod_wsgi
| 24,634,526 | 1 | 4 | 1,003 | 1 |
python,django,apache,security,permissions
|
In regards to serving the application from your home directory, this is primarily preference based. However, deployment decisions may be made depending on the situation. For example, if you have multiple users making use of this server to host their website, then you would likely have the files served from their home directories. From a system administrator's perspective that is deploying the applications; you may want them all accessible from /var/www... so they are easier to locate.
The permissions you set for serving the files seem fine; however, the applications may need to run as different users, depending on the number of people using this machine. For example, let's say you have one other application running on the server and that both applications run as www-data. If the www-data user has read permissions on Django's config file, then the other user could deploy a script that reads your database credentials.
| 0 | 0 | 0 | 0 |
2014-04-04T20:58:00.000
| 1 | 0.197375 | false | 22,872,888 | 0 | 0 | 1 | 1 |
I'm just about getting started on deploying my first live Django website, and I'm wondering how to set the Ubuntu server file permissions in the optimal way for security, whilst still granting the permissions required.
Firstly a question of directories: I'm currently storing the site in ~/www/mysite.com/{Django apps}, but have often seen people using /var/www/... or /srv/www; is there any reason picking one of these directories is better than the other? or any reason why keeping the site in my home dir is a bad idea?
Secondly, the permissions of the dir and files themselves. I'm serving using apache with mod_wsgi, and have the file WSGIScriptAlias / ~/www/mysite.com/mainapp/wsgi.py file. Apache runs as www-data user. For optimal security who should own the wsgi.py file, and what permissions should I grant it and its containing dir?
Similarly, for the www, www/mysite.com, and www/mysite.com/someapp directories? What are the minimal permissions that are needed for the dirs and files?
Currently I am using 755 and 644 for dirs and files respectively, which works well enough and allows the site to function, but I wonder if it is optimal or too liberal. My Ubuntu user is the owner of most files, and www-data owns the sqlite dbs.
|
pydev Google App run Path for project must have only one segment
| 23,118,828 | 8 | 4 | 1,009 | 0 |
eclipse,google-app-engine,python-2.7
|
This is clearly a bug, but there's a possible workaround: In a .py file in your project, right-click and go to "Run As." Then, select "Python Run" (not a custom configuration). Let it run and crash or whatever this particular module does. Now, go look at your run configurations - you'll see one for this run. You can customize it as if you had made it anew.
| 0 | 1 | 0 | 0 |
2014-04-05T05:40:00.000
| 1 | 1.2 | true | 22,877,052 | 0 | 0 | 1 | 1 |
I had trouble to run the pyDev Google App run on Eclipse. I can't create a new run configuration and I get this error message: Path for project must have only one segment.
Any ideas about how to fix it? I am running Eclipse Kepler on Ubuntu 13.10
|
Find Imported Python Modules
| 22,879,658 | 1 | 1 | 67 | 0 |
python,flask
|
You can use sys.modules.keys() but you will need to import sys to use it.
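A quick sketch: sys.modules maps every module name imported so far in the process to the module object, so you can list or filter it at runtime (the "flask" prefix filter is just an example):

```python
import sys

# Snapshot of everything imported so far in this interpreter.
loaded = sorted(sys.modules)

# Narrow it down, e.g. to all flask-related modules (if any are loaded):
flask_modules = [name for name in loaded if name.startswith("flask")]
```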
| 0 | 0 | 0 | 0 |
2014-04-05T10:24:00.000
| 3 | 1.2 | true | 22,879,593 | 1 | 0 | 1 | 1 |
I'm building a flask application and I want to remove the redundancy on importing modules. So, on runtime I want to print all the imported modules.
Is there a way to do that?
|
Exchanging NDB Entities between two GAE web apps using URL Fetch
| 22,880,568 | 2 | 0 | 131 | 0 |
python,google-app-engine,google-cloud-datastore,app-engine-ndb,urlfetch
|
You can use the NDB to_dict() method on an entity and use JSON to exchange the data.
If it is a lot of data you can use a cursor.
To exchange the entity keys, you can add the safe key to the dict.
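The exchange itself then needs no custom delimiters. A sketch with a plain dict standing in for the entity.to_dict() output (datetime and Key properties would need converting to strings first; the field names and key value here are made up):

```python
import json

# Stand-in for what entity.to_dict() would return, plus the urlsafe key.
entity_dict = {"index": 42, "name": "Widget", "key": "agxkZXZ-ZXhhbXBsZQ"}

payload = json.dumps([entity_dict])   # sending web app: HTTP response body
received = json.loads(payload)        # receiving web app: parse, then put()
```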
| 0 | 1 | 0 | 0 |
2014-04-05T10:53:00.000
| 2 | 0.197375 | false | 22,879,890 | 0 | 0 | 1 | 1 |
I am planning to exchange NDB Entities between two GAE web apps using URL Fetch.
One Web app can initiate the HTTP POST Request with the entity model name, starting entity index number and number of entities to be fetched. Each entity would have an index number which would be incremented sequentially for new entities.
To Send an Entity:
Some delimiter could be added to separate different entities as well as to separate properties of an entity. The HTTP Response would have a variable (say "content") containing the entity data.
Receiving Side Web APP:
The receiver web app would parse the received data and store the entities and their property values by creating new entities and "put"ting them
Both the web apps are running GAE Python and have the same models.
My Questions:
Is there any disadvantage with the above method?
Is there a better way to achieve this in automated way in code?
I intend to implement this for some kind of infrequent data backup design implementation
|
Clean retry in deferred.defer
| 22,900,378 | 0 | 0 | 165 | 0 |
python,google-app-engine
|
Just relaunch the task from within itself with another deferred.defer call.
| 0 | 1 | 0 | 0 |
2014-04-06T21:04:00.000
| 2 | 0 | false | 22,900,026 | 0 | 0 | 1 | 2 |
I am using deferred.defer quite heavily to schedule tasks using push queues on AppEngine.
Sometimes I wish I would have a clean way to signal a retry for a task without having to raise an Exception that generates a log warning.
Is there a way to do this?
|
Clean retry in deferred.defer
| 22,905,035 | 4 | 0 | 165 | 0 |
python,google-app-engine
|
If you raise a deferred.SingularTaskFailure it will set an error HTTP-status, but there won't be an exception in the log.
| 0 | 1 | 0 | 0 |
2014-04-06T21:04:00.000
| 2 | 1.2 | true | 22,900,026 | 0 | 0 | 1 | 2 |
I am using deferred.defer quite heavily to schedule tasks using push queues on AppEngine.
Sometimes I wish I would have a clean way to signal a retry for a task without having to raise an Exception that generates a log warning.
Is there a way to do this?
|
Django request.POST.get SQL injection
| 22,902,662 | 1 | 1 | 784 | 0 |
python,django
|
If you're feeding the result of request.POST right into a SQL query (i.e., without using the Django ORM), you will most definitely be vulnerable to SQL injection.
But, if you are using the Django ORM (or another well-written ORM, such as SQLAlchemy), all of your input data will be sanitized.
tldr; you're safe
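The principle the ORM relies on, binding values as parameters instead of splicing them into the SQL string, can be seen with the stdlib sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

# A classic injection payload stays inert when bound as a parameter:
evil = "x'; DROP TABLE users; --"
rows = conn.execute("SELECT name FROM users WHERE name = ?", (evil,)).fetchall()
```

The query simply matches no rows, and the table survives; Django's ORM applies the same parameterization for you when you use querysets like User.objects.filter(name=value).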
| 0 | 0 | 0 | 0 |
2014-04-07T02:22:00.000
| 1 | 1.2 | true | 22,902,616 | 0 | 0 | 1 | 1 |
I'm currently getting POST data using the method request.POST.get(). I'd like to know if this method gives me raw POST data or if it's correctly escaping and protected against SQL injection.
Thank you in advance for your help.
Galaf
|
Reading Javascript Variable in Python
| 22,907,980 | 1 | 1 | 672 | 0 |
javascript,python,parameter-passing
|
I'm trying to code my bot in Python, then, since it can run synchronously with the browser. How can I pass the JavaScript variables to a client-side Python program?...
You can pass the JavaScript variables only via the query string.
I created the server in CherryPy (CherryPy is an object-oriented web application framework for Python) and the client as an HTML file.
To repeat: the data can only be passed via the query string, because the server works statically while the client works dynamically.
That is a crude way to put it, but it is how a generic client/server exchange works: the server receives a call or message once, performs the service, and responds.
I may also be wrong; this is just my opinion.
There are also Mako Templates, with which you can include HTML pages (helpful for building the structure of the site) or pass variables from the server to the client.
I don't know of any program or language that lets you pass a JavaScript variable back to the server this way (I tried with Mako Templates, but it didn't work).
| 0 | 0 | 1 | 0 |
2014-04-07T04:29:00.000
| 1 | 0.197375 | false | 22,903,625 | 1 | 0 | 1 | 1 |
I'm trying to create a bot for an online game. The values for the game are stored in Javascript variables, which I can access. However, running my bot code in Javascript freezes the browser, since my code is the only thing that executes.
I'm trying to code my bot in Python, then, since it can run synchronously with the browser. How can I pass the Javascript variables to a client-side Python program?
|
If I install Django CMS will it still show my current work in Django admin
| 22,915,407 | 0 | 0 | 34 | 0 |
python,django
|
The Django CMS is a totally different environment. You can't install it on top of your current project. So if you want your models inside django cms, you have to migrate them manually to the new environment. Maybe there are solutions for it, but I'm not aware of them.
| 0 | 0 | 0 | 0 |
2014-04-07T12:56:00.000
| 1 | 1.2 | true | 22,913,080 | 0 | 0 | 1 | 1 |
I'm totally new to Python and I've been learning how to use Django and its admin functionality to work with my models. My question really is: if I were to install Django CMS, how would it work with the admin?
My understanding is limited, so I wanted to check, as I'm struggling to know whether it will still show the models I've been making at the same /admin/ URL (I read that you log in to the CMS part via the /admin/ URL).
Would installing the CMS overwrite anything current in my /admin/ view, or would the data management merely appear within the CMS control panel?
|
django: Generic Views without Templates
| 22,943,192 | 1 | 0 | 1,412 | 0 |
python,django,oop
|
Templates need HTML. If you want to be generic, I would use a base template and then, for each model and CRUD operation, a partial that takes an object or a list of objects and knows exactly how to render that model. The block notation is also good for arranging HTML content.
However, writing a standardized interface like the admin is a lot of work and not appropriate for a frontend.
From my vantage point: use template tags and filters, blocks, and partials wherever you can, and standardize the variables you pass to your templates.
This gives you a very pluggable template system where you can reuse loads of your HTML code.
| 0 | 0 | 0 | 0 |
2014-04-08T13:46:00.000
| 1 | 0.197375 | false | 22,939,011 | 0 | 0 | 1 | 1 |
AFAIK you need a template to use generic views in django.
Is there a way or third party app to use generic views without HTML templates?
I love the django admin interface, since you can use and configure it without writing HTML.
I prefer the object oriented way which is used in django admin to customize it. In most cases you can stay in nice python code, without any HTML/template files.
Update
The django admin uses templates. That's true. But everybody uses the same proven templates from django.contrib.admin. With generic views everybody writes his own templates. I think this is a drawback and a waste of time. Good and extensible default templates would be nice.
I guess someone already has a generic view system for django where you only need to write templates if you want to modify the default. But I could not find such an app with my favourite search engine.
|
Using Python to communicate with JavaScript?
| 22,950,323 | 0 | 0 | 645 | 0 |
javascript,python
|
For security reasons, javascript in a browser is usually restricted to only communicate with the site it was loaded from.
Given that, that's an AJAX call, a very standard thing to do.
| 0 | 0 | 1 | 0 |
2014-04-08T23:41:00.000
| 3 | 0 | false | 22,950,275 | 0 | 0 | 1 | 1 |
Is there a way to send data packets from an active Python script to a webpage currently running JavaScript?
The specific usage I'm looking for is to give the ability for the webpage, using JavaScript, to tell the Python script information about the current state of the webpage, then for the Python script to interpret that data and then send data back to the webpage, which the JavaScript then uses to decide which function to execute.
This is for a video game bot (legally), so it would need to happen in real time. I'm fairly proficient in Python and web requests, but I'm just getting into JavaScript, so hopefully a solution for this wouldn't be too complex in terms of Javascript.
EDIT: One way I was thinking to accomplish this would be to have Javascript write to a file that the Python script could also read and write to, but a quick google search says that JavaScript is very limited in terms of file I/O. Would there be a way to accomplish this?
|
Storing user data in one big database or in a different file for each user - which is more efficient?
| 22,951,848 | 1 | 0 | 242 | 1 |
python,json,database,performance,security
|
I don't think efficiency should be part of your calculus.
I don't like either of your proposed designs.
One table? That's not normalized. I don't know what data you're talking about, but you should know about normalization.
Multiple copies? That's not scalable. Every time you add a user you add a table? Sounds like the perfect way to ensure that your user population will be small.
Is all the data JSON? Document based? Maybe you should consider a NoSQL document based solution like MongoDB.
| 0 | 0 | 0 | 0 |
2014-04-09T02:39:00.000
| 1 | 0.197375 | false | 22,951,806 | 1 | 0 | 1 | 1 |
I'm trying to store user data for a website in Python I'm making. Which is more efficient:
-Storing all the user data in one huge table
-Storing all the user data in several tables, one per user, in one database.
-Storing each user's data in a XML or JSON file, one file per user. Each file has a unique name based on the user id.
Also, which is safer? I'm biased towards storing user data in JSON files because that is something I already know how to do.
Any advice? I'd post some code I already have, but this is more theoretical than code-based.
|
Is django.db.reset_queries required for a (nonweb) script that uses Django when DEBUG is False?
| 23,063,519 | 4 | 3 | 801 | 1 |
python,django,memory-leaks,daemon
|
After running a debugger: indeed, reset_queries() is required for a non-web Python script that uses Django to make queries. For every query made in the while loop, I found its string representation appended to one of the queries lists in connections.all(), even when DEBUG was set to False.
| 0 | 0 | 0 | 0 |
2014-04-10T01:25:00.000
| 1 | 1.2 | true | 22,976,981 | 0 | 0 | 1 | 1 |
I have a script running continuously (using a for loop and time.sleep). It performs queries on models after loading Django. Debug is set to False in Django settings. However, I have noticed that the process will eat more and more memory. Before my time.sleep(5), I have added a call to django.db.reset_queries().
The very small leak (a few K at a time) has come to an almost full stop, and the issue appears to be addressed. However, I still can't explain why this solves the issue, since when I look at what reset_queries does, it seems to clear a list of queries located in each of connections.all().queries. When I try to output the length of these, it turns out to be 0. So the reset_queries() method seems to clear lists that are already empty.
Is there any reason this would still work nevertheless? I understand reset_queries() is run when using mod wsgi regardless of whether DEBUG is True or not.
Thanks,
|
Plone store form inputs in a lightweight way
| 22,986,589 | 0 | 2 | 153 | 1 |
python,forms,plone
|
One approach is to create a browser view that accepts and retrieves JSON data and then just do all of the form handling in custom HTML. The JSON could be stored in an annotation against the site root, or you could create a simple content type with a single field for holding the JSON and create one per record. You'll need to produce your own list and item view templates, which would be easier with the item-per-JSON-record approach, but that's not a large task.
If you don't want to store it in the ZODB, then pick whatever file store you want - like shelf - and dump it there instead.
| 0 | 0 | 0 | 0 |
2014-04-10T10:33:00.000
| 3 | 0 | false | 22,985,483 | 0 | 0 | 1 | 1 |
I need to store anonymous form data (string, checkbox, FileUpload,...) for a Conference registration site, but ATContentTypes seems to me a little bit oversized.
Is there a lightweight alternative to save the inputs -
SQL and PloneFormGen are not an option
I need to list, view and edit the data inputs in the backend...
Plone 3.3.6
python 2.4
Thanks
|
How to add many to one relationship with model from external application in django
| 22,990,016 | 1 | 5 | 564 | 0 |
python,django,foreign-key-relationship
|
There are (at least) two ways to accomplish it:
More elegant solution: Use a TicketProfile class which has a one-to-one relation to Ticket, and put the Client foreign key into it.
Hacky solution: Use a many-to-many relation, and manually edit the automatically created table and make ticket_id unique.
| 0 | 0 | 0 | 0 |
2014-04-10T13:38:00.000
| 2 | 0.099668 | false | 22,989,689 | 0 | 0 | 1 | 1 |
My django project uses django-helpdesk app.
This app has Ticket model.
My app got a Client model, which should have one to many relationship with ticket- so I could for example list all tickets concerning specific client.
Normally I would add models.ForeignKey(Client) to Ticket
But it's an external app and I don't want to modify it (future update problems etc.).
I wold have no problem with ManyToMany or OneToOne but don't know how to do it with ManyToOne (many tickets from external app to one Client from my app)
|
mod_wsgi Error on CentOS 6.5
| 23,104,951 | 1 | 0 | 1,164 | 1 |
python,linux,apache,mod-wsgi
|
I think I figured it out. I needed to load the module and define the VirtualHost in the same include file. I was trying to load the module in the first include file and define the VirtualHost in the second. Putting them both in one file kept the error from happening.
| 0 | 0 | 0 | 0 |
2014-04-10T15:48:00.000
| 1 | 0.197375 | false | 22,992,857 | 0 | 0 | 1 | 1 |
folks. I'm very new to coding and Python. This is my second Stack question ever. Apologies if I'm missing the obvious. But, I've researched this and am still stuck.
I've been trying to install and use mod_wsgi on CentOS 6.5 and am getting an error when trying to add a VirtualHost to Apache.
The mod_wsgi install seemed to go fine and my Apache status says:
Server Version: Apache/2.2.26 (Unix) mod_ssl/2.2.26 OpenSSL/1.0.1e-fips DAV/2 mod_wsgi/3.4 Python/2.6.6 mod_bwlimited/1.4
So, it looks to me like mod_wsgi is installed and running.
I have also added this line to the my pre-main include file for httpd.conf:
LoadModule wsgi_module modules/mod_wsgi.so
(I have looked, and mod_wsgi is in apache/modules.)
And, I have restarted Apache several times.
The error comes when I try to add a VirtualHost to any of the include files for https.conf.
I always get an error message that says:
Invalid command 'WSGIScriptAlias', perhaps misspelled or defined by a module not included in the server configuration
If I try to use a VirtualHost with a WSGIDaemonProcess reference, I get a similar error message about WSGIDaemonProcess.
From reading on Stack and other places, it sounds like I don't have mod_wsgi installed, or I don't have the Apache config file loading it, or that I haven't restarted Apache since doing those things. But, I really think I have taken all of those steps.
What am I missing here? Thanks!
Marc :-)
|
How to run multiple spiders in the same process in Scrapy
| 22,997,392 | 4 | 3 | 2,104 | 0 |
python,python-2.7,scrapy
|
Every spider has a name, set in its file with name="yourspidername"; when you call it using scrapy crawl yourspidername, it will crawl only that spider. You then have to give another command, scrapy crawl yourotherspidername, to run the next one.
The other way is to mention all the spiders in the same command, like scrapy crawl yourspidername,yourotherspidername,... (this method is not supported in newer versions of Scrapy)
| 0 | 0 | 0 | 0 |
2014-04-10T18:09:00.000
| 3 | 0.26052 | false | 22,995,746 | 0 | 0 | 1 | 1 |
I'm beginner in Python & Scrapy. I've just create a Scrapy project with multiple spiders, when running "scrapy crawl .." it runs only the first spider.
How can I run all spiders in the same process?
Thanks in advance.
|
Easiest way to store a single timestamp in appengine
| 23,025,323 | 0 | 0 | 651 | 0 |
python,google-app-engine
|
Another way to solve this, I found, is to use memcache. It's super easy. Though it should probably be noted that memcache can be cleared at any time, so NDB is probably a better solution.
Set the timestamp:
memcache.set("timestamp", current_timestamp)
Then, to read the timestamp:
memcache.get("timestamp")
| 0 | 1 | 0 | 0 |
2014-04-10T23:36:00.000
| 2 | 0 | false | 23,000,998 | 0 | 0 | 1 | 1 |
I am running a Python script on Google's AppEngine. The python script is very basic. Every time the script runs i need it to update a timestamp SOMEWHERE so i can record and keep track of when the script last ran. This will allow me to do logic based on when the last time the script ran, etc. At the end of the script i'll update the timestamp to the current time.
Using Google's NDB seems to be overkill for this, but it also seems to be the only way to store ANY data in App Engine. Is there a better/easier way to do what I want?
|
Pass username/password (Formdata) in a scrapy shell
| 23,269,542 | 0 | 1 | 407 | 0 |
python,web-scraping,scrapy
|
One workaround is to first log in using Scrapy (using FormRequest) and then invoke inspect_response(response) in the parse method.
| 0 | 0 | 1 | 0 |
2014-04-11T21:59:00.000
| 1 | 0 | false | 23,023,294 | 0 | 0 | 1 | 1 |
Is there a way to pass formdata in a scrapy shell?
I am trying to scrape data from an authenticated session, and it would be nice to check xpaths and so on through a scrapy shell.
|
Adding private messaging to a Django project
| 23,041,136 | 1 | 0 | 225 | 0 |
python,django
|
I guess you'll have to roll your own, but really doesn't sound hard.
Is there a reason you'd like to do it in Django? Django is not a simple CMS that you just install and enable some features to make it work. It's a framework, which means you'll have to do some things yourself. And doing this yourself, if you're at least a bit proficient, shouldn't be that hard.
Let's say you have a site / an app written in Django and want to implement private messages. All you'd need are two models, User and Message, and you save each private message as a Message row with two foreign keys, one for the sender and one for the receiver.
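That two-foreign-key shape can be sketched framework-agnostically with plain Python (names are illustrative; in Django these would be ForeignKey fields on a Message model):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class User:
    username: str

@dataclass
class Message:
    sender: User     # in Django: models.ForeignKey(User, related_name="sent")
    receiver: User   # in Django: models.ForeignKey(User, related_name="received")
    body: str
    sent_at: datetime = field(default_factory=datetime.utcnow)

def inbox(messages: List[Message], user: User) -> List[Message]:
    """All messages addressed to `user`."""
    return [m for m in messages if m.receiver == user]

alice, bob = User("alice"), User("bob")
msgs = [Message(alice, bob, "hi"), Message(bob, alice, "hello back")]
assert [m.body for m in inbox(msgs, bob)] == ["hi"]
```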
You'll have to be more specific with your question to get more specific answers.
| 0 | 0 | 0 | 0 |
2014-04-13T09:22:00.000
| 1 | 1.2 | true | 23,040,983 | 0 | 0 | 1 | 1 |
I'm trying to develop an HTML app with Django using python 3.3.3, and was wondering if there was a simple way to implement a user - to - user private messaging system. I've searched for preexisting apps, but most are out of active development, and other online answers were mostly not useful at all. If possible, I would like it so there are no external dependencies. If there is a simple way to implement this function I would love to know. Thanks.
|
clone element with beautifulsoup
| 27,881,018 | 8 | 21 | 10,039 | 0 |
python,beautifulsoup
|
It may not be the fastest solution, but it is short and seems to work...
clonedtag = BeautifulSoup(str(sourcetag)).body.contents[0]
BeautifulSoup creates an extra <html><body>...</body></html> around the cloned tag (in order to make the "soup" a sane html document). .body.contents[0] removes those wrapping tags.
This idea was derived from Peter Woods' comment above and Clemens Klein-Robbenhaar's comment below.
| 0 | 0 | 1 | 0 |
2014-04-14T10:21:00.000
| 3 | 1 | false | 23,057,631 | 0 | 0 | 1 | 1 |
I have to copy a part of one document to another, but I don't want to modify the document I copy from.
If I use .extract() it removes the element from the tree. If I just append the selected element like document2.append(document1.tag), it still removes the element from document1.
As I use real files I can just not save document1 after modification, but is there any way to do this without corrupting a document?
|
How to change a sqlite table column value type from Django model?
| 49,475,275 | 0 | 1 | 1,433 | 0 |
python,django,sqlite
|
There is an easy way to do this. (in Django 2)
After making the necessary changes to the model.py file of your app, run command:
python manage.py makemigrations - This will generate a new file in the migrations folder of your app.
python manage.py migrate - This will apply those edits on actual databse.
To check if the changes have been applied, run command : .schema <tablename> in your terminal, after entering the sqlite command-line program.
| 0 | 0 | 0 | 0 |
2014-04-15T12:21:00.000
| 2 | 0 | false | 23,083,599 | 0 | 0 | 1 | 1 |
I created a table before I coded the Django app, and then merged both the app and the table with the command python manage.py inspectdb > models.py. However, after a while I really needed to change the value type of one of the columns. Is it enough to change it in the model file, or do I need some additional steps?
|
What is best way to save data with appengine/HTML5/JavaScript/Python combo?
| 23,128,324 | 0 | 0 | 133 | 0 |
javascript,html,google-app-engine,python-2.7
|
My thanks to all of you for taking the time to respond. Each response was useful in its own way.
The AJAX/jQuery route looks promising to me, so many thanks for the link on that. I'll stop equivocating, stick with Python rather than trying Go, and start working through the tutorials and courses.
Gary
| 0 | 1 | 0 | 0 |
2014-04-15T13:42:00.000
| 4 | 0 | false | 23,085,522 | 0 | 0 | 1 | 1 |
I want to build an application with an HTML5 interface that persists data using google-app-engine and could do with some some advice to avoid spending a ton of time going down the wrong path.
What is puzzling me is the interaction between HTML5, Javascript/JQuery and Python.
Let's say I have designed an HTML5 site. I believe I can use prompts and forms to collect data entered by users. I also know that I can use JavaScript to grab that data and keep it in the form of objects... I need objects for reasons I'll not go into.
But when I look at the app-engine example, it has HTML form information embedded in the Python code which is what is used to store the data in the cloud Datastore.
This raises the following questions in my mind:
do I simply use Python to get user entered information?
how does python interact with a separately described HTML5/CSS2 forms and prompts?
does Javascript/Jquery play any role with respect to data?
are forms and prompts the best way to capture use data? (Is there a better alternative)
As background:
It is a while since I programmed but have used HTML and CSS a fair bit
I did the JavaScript and jQuery courses at Codecademy
I was considering using Go, which looks funky, but "experimental" worries me and I cannot find a good IDE such as devTable
I can do the Python course at Codecademy pretty quickly if I need it? I think I may need to understand its object syntax
I appreciate this is basic, basic stuff, but if I can get my plumbing sorted, I doubt that I'll have to ask too many more really stupid questions
Gary
|
Pass information between requests webapp2
| 23,091,888 | 1 | 0 | 345 | 0 |
python,google-app-engine,webapp2
|
You can do it in many ways.
Set a cookie in the first response and it will be sent with the next request. This is unsafe: even an encrypted cookie can be tampered with.
The first GET page will send the author to the second POST page (as a hidden field).
The first GET will send the author to the POST URL as a parameter (same idea as above).
Create a session id, save it in the datastore together with the author; the GET sets a session id cookie, the POST sends it back, and you look up the author by session id in the datastore.
You can use memcache as the store, but that is risky (it can be flushed, and cached data is not persistent by design).
You can pass the session id from GET to POST using a hidden field or the URL instead of a cookie.
The simplest is a GET that redirects to the POST with the variable in the URL or in a hidden field; the other methods are more complex but are needed for a chain of GETs/POSTs.
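Because a plain cookie or hidden field can be tampered with, one common hardening step is to sign the value before passing it between requests. A stdlib-only sketch (the secret and field contents are assumptions; webapp2's request/response objects are omitted):

```python
import hashlib
import hmac

# Assumption: a server-side secret; in practice kept in config, not in source.
SECRET = b"server-side-secret"

def sign(value):
    """Return 'value:signature', suitable for a cookie or hidden form field."""
    sig = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value + ":" + sig

def verify(token):
    """Return the original value if the signature checks out, else None."""
    value, _, sig = token.rpartition(":")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(sig, expected) else None

# A GET handler would embed sign("alice") in a hidden field or cookie;
# the POST handler recovers the author with verify().
token = sign("alice")
assert verify(token) == "alice"          # untouched token round-trips
assert verify("alice:deadbeef") is None  # forged signature is rejected
```

hmac.compare_digest is used instead of == to avoid timing side channels when comparing signatures.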
| 0 | 0 | 0 | 0 |
2014-04-15T15:19:00.000
| 2 | 0.099668 | false | 23,087,949 | 0 | 0 | 1 | 1 |
Is it possible to pass information between requests with webapp2?
I have a class that has to set the author variable on HTTP GET. The HTTP POST will check if author exists, and then continue posting. I tried by having a global variable author=None and then setting author in the HTTP GET, but I think the object is destroyed when the HTTP POST request is made to the same controller.
Any help would be great, thanks!
|
How do I break down files in a similar way to torrents?
| 25,778,457 | 0 | 0 | 350 | 0 |
python,bittorrent
|
It's trivial to "break down" files as you put it. You'll need an algorithm to disassemble them and then to reassemble them later, presumably in a browser since you mentioned HTML and CSS. BitTorrent implements this, along with the ability to upload to and download from a distributed "swarm" of peers interested in the same data. Rather than reinventing the wheel by creating your own version of BitTorrent (and again assuming you want to use this data in a browser), you'll want to create a torrent of all the HTML, CSS and other files relevant to your web application and seed it using BitTorrent. Then create a bootstrap page that uses one of the several JavaScript BitTorrent clients now available to download the torrent, and load the desired pages and resources when the client completes the download.
| 0 | 0 | 0 | 0 |
2014-04-15T23:59:00.000
| 1 | 0 | false | 23,096,631 | 0 | 0 | 1 | 1 |
I am trying to make a program that breaks down files like HTML or CSS into chunks, like those of a torrent. I am completely unsure how to do this. They need to be broken down, then later reassembled in order. Does anybody know how to do this?
It doesn't have to be in Python, that was just my starting point.
|
Google Analytics reports API - Insufficient Permission 403
| 29,837,598 | 0 | 3 | 5,740 | 0 |
python,django,python-2.7,google-analytics-api,http-status-code-403
|
You should use the View ID, not the account ID. To find the View ID, go to:
Admin -> Select Site -> under "View" -> View Settings; if that doesn't work,
you can go: Admin -> Profiles -> Profile Settings
| 0 | 0 | 1 | 0 |
2014-04-17T09:07:00.000
| 3 | 0 | false | 23,128,964 | 0 | 0 | 1 | 2 |
I am trying to access data from Google Analytics. I am following the guide and am able to authorize my user and get the code from OAuth.
When I try to access data from GA I only get 403 Insufficient Permission back. Do I somehow have to connect my project in the Google API console to my Analytics project? How would I do this? Or is there some other reason why I get 403 Insufficient Permission back?
I am doing this in Python with Django, and I have the Analytics API turned on in my API console!
|
Google Analytics reports API - Insufficient Permission 403
| 24,274,077 | 9 | 3 | 5,740 | 0 |
python,django,python-2.7,google-analytics-api,http-status-code-403
|
Had the same problem, but it is now solved.
Use the View ID, not the account ID. The View ID can be found in the Admin -> Profiles -> Profile Settings tab.
UPDATE
Now, if you have more than one account, go: Admin -> Select account -> under View -> click on View Settings
| 0 | 0 | 1 | 0 |
2014-04-17T09:07:00.000
| 3 | 1.2 | true | 23,128,964 | 0 | 0 | 1 | 2 |
I am trying to access data from Google Analytics. I am following the guide and am able to authorize my user and get the code from OAuth.
When I try to access data from GA I only get 403 Insufficient Permission back. Do I somehow have to connect my project in the Google API console to my Analytics project? How would I do this? Or is there some other reason why I get 403 Insufficient Permission back?
I am doing this in Python with Django, and I have the Analytics API turned on in my API console!
|
Python logging with thread locals
| 23,138,661 | 1 | 2 | 989 | 0 |
python,django,logging,thread-local-storage
|
You can create a logging.Filter() object that grabs the thread-local variable (or a suitable default when it's not there) and adds it to the log record. Attach that filter to your handler(s); note that a filter attached only to the root logger is not applied to records that propagate up from child loggers, whereas a handler-level filter sees every record the handler processes. Once the variable is in the log record, it can be used in the formatters you use to display/save the information.
| 0 | 0 | 0 | 0 |
2014-04-17T14:36:00.000
| 1 | 1.2 | true | 23,136,122 | 1 | 0 | 1 | 1 |
I'd like to prepend the user email to all web app logs.
I can store the email (taken from the cookie and such) in threading.local(). But I can't be always sure the variable will be there in the thread locals.
Is there a way to tell all the loggers in my app to act like that?
|
How Django framework works behind the scenes?
| 23,143,062 | 4 | 1 | 651 | 0 |
python,django,uwsgi,gunicorn,django-middleware
|
You don't say where your "understanding" comes from, but it's not really accurate. Django itself is pretty agnostic about how it runs - it depends on the server - but it's very unusual for it to be invoked from scratch on each request. About the only method where that's the case is CGI, and it'll run like a dog.
Speaking in very general terms, there are two ways Django can be run. Either it runs inside a process of the web server itself - as with mod_wsgi on Apache - or it runs in a completely separate process and receives requests via reverse proxy from the server, as with uwsgi/gunicorn. Either way, the lifetime of the Django process is not directly connected with the request, but is persistent across many requests. In the case of mod_wsgi for example, the server starts up threads and/or processes (depending on the configuration) and each one lasts for a large number of consecutive requests before being killed and restarted.
For each process, this means that any modules that have been loaded stay in memory for the lifetime of the process. Everything from the middleware onwards is executed once per request, but they wouldn't usually need to be re-imported and run each time.
| 0 | 0 | 0 | 0 |
2014-04-17T20:24:00.000
| 1 | 1.2 | true | 23,142,757 | 0 | 0 | 1 | 1 |
This might sound like a stupid question, so apologies in advance.
I am trying to understand how Django framework actually works behind the scenes. It's my understanding that Django does not run all the time and gets invoked by uwsgi/gunicorn or anything else when a request comes in and processed as follows:
WsgiHandler or ModPythonHandler
Import settings, custom exceptions
Load middleware
Middleware -> URLResolver
Middleware -> View -> Template
Middleware -> HttpResponse
But what I cannot understand that is there any part of Django which keeps running all the time like cache management or some other functions or instances rather being created per request. I would really appreciate if you can explain a bit or give pointers.
|
google endpoint custom auth python
| 23,223,929 | 0 | 1 | 95 | 0 |
python,facebook,google-app-engine,authentication,google-cloud-endpoints
|
For request details, add HttpServletRequest (Java) to your API function parameters.
For Google authentication, add User (Java) to your API function parameters and integrate with Google login on the client.
For Twitter integration, use Google App Engine OpenID.
For Facebook or a login form, it's all on you to develop a custom auth.
| 0 | 0 | 1 | 0 |
2014-04-18T12:26:00.000
| 1 | 1.2 | true | 23,154,120 | 0 | 0 | 1 | 1 |
I'm trying to implement a secure Google Cloud Endpoint in Python for multiple clients (JS / iOS / Android).
I want my users to be able to log in three ways: login form / Google / Facebook.
I read a lot of documentation about this, but I didn't really understand how I have to handle the connection flow and the session (or something else) to keep my users logged in.
I'm also looking for a way to debug my endpoint by displaying objects like Request, for example.
If someone knows a good tutorial about this, it would be very helpful.
thank you
|
How can I get all the plain text from a website with Scrapy?
| 50,809,934 | 1 | 23 | 27,552 | 0 |
python,html,xpath,web-scraping,scrapy
|
The xpath('//body//text()') doesn't always dig deeper into the nodes of your last-used tag (in your case body). If you type xpath('//body/node()/text()').extract() you will see the nodes which are in your HTML body. You can try xpath('//body/descendant::text()').
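The "gather all descendant text" idea can be illustrated with the standard library's ElementTree; it supports only a subset of XPath, so itertext() stands in for //body/descendant::text() here (the sample markup is invented):

```python
import xml.etree.ElementTree as ET

html = "<body><p>Hello <b>world</b></p><div>more <i>text</i></div></body>"
body = ET.fromstring(html)

# itertext() walks every descendant node and yields its text content,
# much like selecting descendant::text() from the body element.
texts = [t.strip() for t in body.itertext() if t.strip()]
assert texts == ["Hello", "world", "more", "text"]
```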
| 0 | 0 | 1 | 0 |
2014-04-18T15:03:00.000
| 3 | 0.066568 | false | 23,156,780 | 0 | 0 | 1 | 1 |
I would like to have all the text visible from a website, after the HTML is rendered. I'm working in Python with Scrapy framework.
With xpath('//body//text()') I'm able to get it, but with the HTML tags, and I only want the text. Any solution for this?
|
Make python script to run forever on Amazon EC2
| 23,166,196 | 17 | 13 | 8,060 | 0 |
python,amazon-web-services,ssh,amazon-ec2
|
You can run the program using the nohup command, so that even when the SSH session closes your program continues running.
Eg: nohup python yourscriptname.py &
For more info you can check the man page for it using
man nohup.
| 0 | 1 | 0 | 1 |
2014-04-19T05:16:00.000
| 2 | 1 | false | 23,166,158 | 0 | 0 | 1 | 1 |
I have a python script that basically runs forever and checks a webpage every second and notifies me if any value changes. I placed it on an AWS EC2 instance and ran it through ssh. The script was running fine when I checked after half an hour or so after I started it.
The problem is that after a few hours when I checked again, the ssh had closed. When I logged back in, there was no program running. I checked all running processes and nothing was running.
Can anyone teach me how to make it run forever (or until I stop it) on AWS EC2 instances? Thanks a lot.
Edit: I used the Java SSH Client provided by AWS to run the script
|
Automatic SignOut in OpenERP 7 during System Shutdown
| 23,171,711 | -1 | 0 | 183 | 0 |
python,windows,openerp,openerp-7
|
Can we make it in such way that when we Shutdown the System it should
SignOut Automatically without the User interference
There is no need to log off the users. HTTP is a transactional protocol: all work is done once the client has made a request, and after any client request the system is always in a clean state. There is no state in the clients that must be flushed to the server before switching off.
When you shut down and start up the OpenERP server again, all clients lose their "session", and if they make a new request they will be redirected to the login page.
Of course, this can be annoying when a user starts to fill in a form (still in the browser), sends the request and then gets redirected to the login page because there was no valid session.
| 0 | 1 | 0 | 0 |
2014-04-19T05:41:00.000
| 1 | -0.197375 | false | 23,166,349 | 0 | 0 | 1 | 1 |
To sign in/out of OpenERP 7 we have to log into OpenERP and click on the icon at the top right, just beside the "Compose New Message" icon. Most users forget to sign out of the ERP. Can we make it so that signing out happens automatically when the system shuts down, without user intervention, just like a Windows service? Is there any way to do that?
Please help me out.
|
Issue with PyCharm and RubyMine, JRE folder?
| 23,365,722 | 1 | 1 | 200 | 0 |
java,python,ruby,pycharm,rubymine
|
The IDE comes with its own version of the JRE on Windows.
You can easily configure your environment to use your system-wide or any custom JRE (and then delete the bundled one if so desired). Just check the .bat file in the INSTALL_FOLDER\bin folder and see which environment variables, and in what order, it consults when searching for a JRE.
By overriding one of them (the IDE-specific one has priority) you can point to the desired JRE installation.
| 0 | 0 | 0 | 0 |
2014-04-20T11:44:00.000
| 1 | 1.2 | true | 23,181,615 | 1 | 0 | 1 | 1 |
The PyCharm and RubyMine IDEs come with a folder named JRE in the root installation dir; the JRE folder adds around 150 MB to the size of the installation. I suppose that this folder contains exactly the same Java Runtime Environment that an official JRE installer downloaded from Java.com installs, so my question is:
If I've previously installed the JRE from the Java site, can I permanently delete the JRE folder from the PyCharm and/or RubyMine installation directories to reduce the total size?
I've tried deleting the JRE folder from the PyCharm and RubyMine root directories to test whether the IDEs really depend on that folder, and both IDEs seem to work normally with the JRE folder deleted, but I need to be sure whether it is safe to delete the JRE folder from the PyCharm/RubyMine directories if I currently have the JRE installed.
|
How do I get xhtml2pdf working on GAE?
| 23,335,617 | 0 | 0 | 113 | 0 |
python,google-app-engine,app.yaml,xhtml2pdf
|
I got it now! Don't use XHTML2PDF - use ReportLab on its own instead.
| 0 | 1 | 0 | 0 |
2014-04-20T16:17:00.000
| 1 | 0 | false | 23,184,702 | 0 | 0 | 1 | 1 |
I am new to GAE, web dev and python, but am working my way up.
I have been trying to get xhtml2pdf working on GAE for some time now but have had no luck. I have downloaded various packages but keep getting errors of missing modules. These errors vary depending on what versions of these packages and dependencies I use. I have even tried using the xhtml2pdf "required dependency" versions.
I know xhtml2pdf used to be hosted on GAE according to a stackoverflow post from 2010, but I don't know if this is the case anymore. Have they replaced it with something else that the GAE team think is better?
I have also considered that the app.yaml is preventing my app from running. As soon as I try importing the pisca module, my app stops.
Could anyone please give me some direction on how to get this working? In the sense of how to install these packages with dependencies and where they should be placed in my project folder (note that I am using Windows). And what settings I would need to add to my app.yaml file.
|
django-tables2 'module' object has no attribute 'LinkColumn'
| 67,400,201 | 1 | 0 | 1,133 | 0 |
django,python-2.7,django-tables2
|
You can use tables.columns.LinkColumn instead of tables.LinkColumn.
I solved my problem this way.
| 0 | 0 | 0 | 0 |
2014-04-21T02:08:00.000
| 1 | 0.197375 | false | 23,189,794 | 0 | 0 | 1 | 1 |
I have this problem when using django-tables2 and a custom template rendering.
The issue arises when I added another column, one that is not specified in the model, and the error AttributeError: 'module' object has no attribute 'LinkColumn' pops up.
The table and the custom rendering worked when just the model columns were used.
|
How to check if Django 1.3 project is also compatible with Django 1.6
| 23,219,502 | 5 | 1 | 66 | 0 |
python,django,compatibility
|
The best way is to build a virtualenv with Django 1.6, install your app, and run its tests. There will likely be some small breaks—Django has changed since 1.3—but they should be relatively easy to patch up.
| 0 | 0 | 0 | 0 |
2014-04-22T12:22:00.000
| 1 | 1.2 | true | 23,219,456 | 0 | 0 | 1 | 1 |
I'm working on a project where I must use parts of an existing Django application. The application is written with Django 1.3. Is there a way to determine whether it can also be used in a project running Django 1.6?
|
No distributions at all found for some package
| 23,223,408 | 8 | 26 | 83,294 | 0 |
python,django
|
I got the solution: try --allow-unverified.
Syntax: pip install packagename==version --allow-unverified packagename
Some packages contain insecure or unverifiable files, which pip will not download by default; passing --allow-unverified works around this and allows the installation.
Eg: pip install django-ajax-filtered-fields==0.5 --allow-unverified
django-ajax-filtered-fields
| 0 | 0 | 0 | 0 |
2014-04-22T14:14:00.000
| 9 | 1.2 | true | 23,222,104 | 0 | 0 | 1 | 1 |
I get an error when installing some packages even though they actually exist, for example django-ajax-filtered-fields==0.5:
Downloading/unpacking django-ajax-filtered-fields==0.5 (from -r
requirements.example.pip (line 13)) Could not find any downloads
that satisfy the requirement django-ajax-filtered-fields==0.5(from
-r requirements.example.pip (line 13))
No distributions at all found for django-ajax-filtered-fields==0.5 Storing debug log for failure in /home/pd/.pip/pip.log
(peecs)pd@admin:~/proj/django/peecs$ pip install
django-ajax-filtered-fields==0.5 --allow-unverified
django-ajax-filtered-fields==0.5 Downloading/unpacking
django-ajax-filtered-fields==0.5 Could not find any downloads that
satisfy the requirement django-ajax-filtered-fields==0.5 Some
externally hosted files were ignored (use --allow-external
django-ajax-filtered-fields to allow). Cleaning up... No distributions
at all found for django-ajax-filtered-fields==0.5 Storing debug log
for failure in /home/pd/.pip/pip.log
|
Flash content not loading using headless selenium on server
| 23,436,481 | 0 | 0 | 574 | 0 |
python,selenium,flash
|
It turns out, I needed to use selenium to scroll down the page to load all the content.
| 0 | 0 | 1 | 0 |
2014-04-22T18:41:00.000
| 1 | 1.2 | true | 23,227,680 | 0 | 0 | 1 | 1 |
I am running selenium webdriver (firefox) using python on a headless server. I am using pyvirtualdisplay to start and stop the Xvnc display to grab the image of the sites I am visiting. This is working great except flash content is not loading on the pages (I can tell because I am taking screenshots of the pages and I just see empty space where flash content should be on the screenshots).
When I run the same program on my local unix machine, the flash content loads just fine. I have installed flash on my server, and have libflashplayer.so in /usr/lib/mozilla/plugins. The only difference seems to be that I am using the Xvnc display on the server (unless flash wasn't installed properly? but I believe it was, since I used to get a message asking me to install flash when I viewed a site that had flash content, but since installing flash I don't get that message anymore).
Does anyone have any ideas or experience with this- is there a trick to getting flash to load using a firefox webdriver on a headless server? Thanks
|
How to override Django model creation without affecting update?
| 23,249,113 | 3 | 2 | 24 | 0 |
python,django
|
You should not override __init__, because that is called in all cases when a model is being instantiated, including when you load it from the database.
A good way to do what you want is to check the value of self.pk within your save method: if it is None, then this is a new instance being created.
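A plain-Python illustration of that pattern, with a tiny stand-in for Django's Model (not real ORM code; in a real model you would do the same pk check at the top of your overridden save() and then call super().save()):

```python
class Model:
    """Tiny stand-in for django.db.models.Model."""
    def __init__(self):
        self.pk = None

    def save(self):
        if self.pk is None:   # not yet in the database: INSERT
            self.pk = 1       # the real ORM assigns the new primary key
        # an UPDATE would happen here for existing rows

class Foo(Model):
    def __init__(self):
        super().__init__()
        self.created_hook_ran = False

    def save(self):
        is_new = self.pk is None  # check BEFORE super().save() assigns a pk
        super().save()
        if is_new:
            self.created_hook_ran = True  # creation-only logic goes here

foo = Foo()
foo.save()                      # first save: creation logic runs
assert foo.created_hook_ran
foo.created_hook_ran = False
foo.save()                      # subsequent saves skip it
assert not foo.created_hook_ran
```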
| 0 | 0 | 0 | 0 |
2014-04-23T15:24:00.000
| 1 | 1.2 | true | 23,248,765 | 0 | 0 | 1 | 1 |
I overrode the save() method of my Foo class so that when I create a Foo instance, some logic occurs. It works well.
Nevertheless, I have other methods in other classes that update Foo instances, and of course I have to save the changes by calling the save() method. But I want them to update directly, without going through the logic I wrote for object creation.
Is there an elegant solution to this?
What about overriding the __init__() method instead of save()? (I was told it was bad practice, but I'm not sure I understand why.)
Thank you.
|