| Column | Type | Min | Max |
|---|---|---|---|
| Title | string (length) | 11 | 150 |
| A_Id | int64 | 518 | 72.5M |
| Users Score | int64 | -42 | 283 |
| Q_Score | int64 | 0 | 1.39k |
| ViewCount | int64 | 17 | 1.71M |
| Database and SQL | int64 | 0 | 1 |
| Tags | string (length) | 6 | 105 |
| Answer | string (length) | 14 | 4.78k |
| GUI and Desktop Applications | int64 | 0 | 1 |
| System Administration and DevOps | int64 | 0 | 1 |
| Networking and APIs | int64 | 0 | 1 |
| Other | int64 | 0 | 1 |
| CreationDate | string (length) | 23 | 23 |
| AnswerCount | int64 | 1 | 55 |
| Score | float64 | -1 | 1.2 |
| is_accepted | bool (2 classes) | | |
| Q_Id | int64 | 469 | 42.4M |
| Python Basics and Environment | int64 | 0 | 1 |
| Data Science and Machine Learning | int64 | 0 | 1 |
| Web Development | int64 | 1 | 1 |
| Available Count | int64 | 1 | 15 |
| Question | string (length) | 17 | 21k |
Why would running scheduled tasks with Celery be preferable over crontab?
| 18,451,537 | 4 | 47 | 15,773 | 0 |
python,django,celery,django-celery
|
Celery is indicated any time you need to coordinate jobs across multiple machines, ensure jobs run even as machines are added or dropped from a workgroup, have the ability to set expiration times for jobs, define multi-step jobs with graph-style rather than linear dependency flow, or have a single repository of scheduling logic that operates the same across multiple operating systems and versions.
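For illustration, one of those capabilities (a single repository of scheduling logic, with per-job expiration) can be sketched as a beat schedule entry. The task path is hypothetical and assumes a Celery app is already configured (e.g. via django-celery):

```python
# Sketch of a periodic-task entry; "myapp.tasks.cleanup" is a hypothetical
# task path, and the schedule lives in code rather than in a crontab.
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    "cleanup-every-hour": {
        "task": "myapp.tasks.cleanup",
        "schedule": timedelta(hours=1),
        "options": {"expires": 60 * 25},  # seconds; skip runs that go stale
    },
}
```

Because the schedule is plain Python shipped with the project, it behaves the same on every OS the workers run on, unlike per-machine crontabs.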
| 0 | 1 | 0 | 0 |
2013-08-12T13:05:00.000
| 2 | 0.379949 | false | 18,187,751 | 0 | 0 | 1 | 2 |
Considering that Celery is already part of the stack to run task queues (i.e. it is not being added just for running crons; that seems like overkill, IMHO):
How can its "periodic tasks" feature be beneficial as a replacement for crontab?
Specifically, I'm looking at the following points:
Major pros/cons over crontab
Use cases where Celery is a better choice than crontab
Django-specific use case: Celery vs crontab to run Django-based periodic tasks, when Celery has been included in the stack as django-celery for queuing Django tasks.
|
Detecting multiple sessions from same user on Google App Engine
| 18,197,132 | 0 | 0 | 84 | 0 |
python,google-app-engine,user-management
|
Yes you can, but you'll have to build the session tracking functionality yourself.
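A rough sketch of what building it yourself could look like: keep one active session token per user (a plain dict stands in for the datastore here) and treat only the most recently issued token as valid.

```python
import uuid

# Stand-in for a datastore entity mapping user id -> active session token.
active_sessions = {}

def login(user_id):
    """Issue a fresh token; any previously issued token becomes stale."""
    token = uuid.uuid4().hex
    active_sessions[user_id] = token
    return token

def is_current(user_id, token):
    """A request is honoured only if it carries the latest token."""
    return active_sessions.get(user_id) == token
```

On App Engine the dict would be an ndb/db entity (or a memcache entry) keyed by user, checked on each request.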
| 0 | 1 | 0 | 0 |
2013-08-12T18:23:00.000
| 1 | 0 | false | 18,193,967 | 0 | 0 | 1 | 1 |
I am writing an application which uses the Google user API, and anyone with a Google account can log in. I want to prevent multiple users from using the same Google account to log in simultaneously; basically, I would like to allow only one user per account to be using my application at a time. As I am running a subscription service, I need to stop users from sharing accounts and logging in simultaneously.
Can I accomplish this somehow in App Engine using Users module? If not, can someone please suggest an alternate mechanism?
I am using Python on App Engine.
|
connect server-side computing with client-side visualization
| 18,213,493 | 0 | 0 | 288 | 0 |
javascript,python,plot,visualization,data-visualization
|
First of all, I would suggest JSON rather than XML as the exchange format; it is much easier to parse JSON on the JavaScript side.
Then, speaking about the architecture of your app, I think that it is better to write a server web application in Python to generate JSON content on the fly than to modify and serve static files (at least that is how such things are done usually).
So, that gives us three components of your system:
A client app (javascript).
A web application (it does not matter what framework or library you prefer: Django, gevent, even Twisted will work fine, as well as some others). What it should do is, firstly, give the state of the points to the client app when asked, and, secondly, accept updates of the points' state from the next app and store them in a database (or in global variables: that strongly depends on how you run it; a single-process gevent app may use variables, while an app running within a multi-process web server should use a database).
An app performing calculations that periodically publishes the points' state by sending it to the web app, probably as JSON body in a POST request. This one most likely should be a separate app due to the typical environment of the web applications: usually it is a problem to perform background processes in a web app, and, anyway, the way this can be done strongly depends on the environment you run your app in.
Of course, this architecture is based on a "server publishes data, clients ask for data" model. That model is simple and easy to implement, and the main problem with it is that the animation may not be as smooth as one may want. Also, you cannot notify clients immediately if some changes require an urgent update of the client's interface. However, smoothness and immediate client notifications are usually hard to implement when a JavaScript client runs within a browser.
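The middle component's two responsibilities can be sketched framework-free; accept_update is what the POST handler would call, and serve_points is what the GET handler would return (a real multi-process deployment would swap the module variable for a database):

```python
import json
import threading

# Stand-in for the web app's storage: a module-level variable guarded by a
# lock. Works for a single-process server; use a database otherwise.
_points = []
_lock = threading.Lock()

def accept_update(json_body):
    """What the POST endpoint does with the computing app's payload."""
    global _points
    with _lock:
        _points = json.loads(json_body)

def serve_points():
    """What the GET endpoint returns to the JavaScript client."""
    with _lock:
        return json.dumps(_points)
```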
| 0 | 0 | 1 | 0 |
2013-08-13T15:19:00.000
| 1 | 1.2 | true | 18,212,995 | 0 | 0 | 1 | 1 |
I am working on a project which animates points on a plane by certain methods. I intend to compute the movements of the points in Python on the server side and to do the visualization on the client side with a JavaScript library (raphaeljs.com).
At first I thought of the following: run the process (Python) and save the states of the points into an XML file, then load that from JavaScript and visualize it. Now I have realized that it may run indefinitely, so I would need real-time data exchange between the visualization part and the computing part.
How would you do that?
|
django internationalization didn't work on webfaction
| 20,224,198 | 0 | 0 | 65 | 0 |
python,django,internationalization,web-deployment
|
It works now. I was passing the wrong path to LOCALE_PATHS.
| 0 | 0 | 0 | 0 |
2013-08-13T16:12:00.000
| 1 | 1.2 | true | 18,214,098 | 0 | 0 | 1 | 1 |
I have a Django project that works fine on my local server but when I
deploy it to web faction, internationalization doesn't work anymore.
How can I resolve this issue?
|
Reusing an object from a context after submitting a form
| 18,214,393 | 0 | 0 | 51 | 0 |
python,django
|
This is not related to django. It's how the web works. HTTP is stateless.
When you generate the page, you've finished with that task.
The model instance is destroyed.
When the user submits the form or sends the modifications in any other way, a new connection starts with a new request and a new context.
At this point you need to re-instantiate the object you want to modify.
How depends on the application and the model itself.
You can pass the unique id of the object, if it has one, and get it back in your new context by querying for it.
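A framework-free sketch of that round trip; the dict stands in for Model.objects, and the hidden input carries the unique id back so the POST view can re-query the instance:

```python
# The dict stands in for Model.objects; "obj_id" is a hypothetical hidden
# field name carrying the instance's primary key back to the server.
DATABASE = {1: {"name": "original"}}

def render_form(obj_id):
    # In a real template: <input type="hidden" name="obj_id" value="{{ obj.id }}">
    return '<input type="hidden" name="obj_id" value="%d">' % obj_id

def handle_post(post_data):
    # Re-instance the object: in Django, Model.objects.get(pk=obj_id)
    obj = DATABASE[int(post_data["obj_id"])]
    obj["name"] = post_data["name"]  # apply the user's modification
    return obj
```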
| 0 | 0 | 0 | 0 |
2013-08-13T16:12:00.000
| 3 | 1.2 | true | 18,214,104 | 0 | 0 | 1 | 1 |
I'm trying to do something that may appear to be simple, but I can't figure it out. As always, django surprises me with its complexity...
My view generates an instance of a model and "passes it on" in a context to a template. On that template, the user fills a form and submits it. And this is what should happen next: the object that was in the context when the page loaded is modified a bit and submitted in a context once again (to the same template). However, I can't get the instance of the object that was in the context when the page loaded. Is it possible to do? Maybe as a hidden input? Or with some fancy django function? Any other idea is appreciated as well, even workarounds (it's not really a professional project, I'm doing it for fun and for experience).
I'm sorry if this question is stupid, but I'm new to django and my brain still has troubles with understanding everything. Thanks for your help!
|
C pointer equivalents on other languages
| 18,257,064 | 0 | 0 | 188 | 0 |
php,javascript,python,c,pointers
|
In Java:
Instead of having a pointer to a struct that you allocate with malloc, you have a reference to an instance of a class that you instantiate with "new". (In Java, you cannot allocate memory for objects on the heap directly as you can in C/C++)
Primitives have no pointers, BUT there are wrapper classes built into the standard library for wrapping int, double, etc. in objects (Integer, Double).
| 0 | 0 | 0 | 1 |
2013-08-15T16:13:00.000
| 3 | 0 | false | 18,256,915 | 0 | 0 | 1 | 2 |
As an undergraduate in CS, I started with C, where the pointer is an important data type. Thereafter I touched on Java, JavaScript, PHP, and Python. None of them has pointers per se.
So why? Do they have some equivalents in their languages that perform like pointer, or is the functionality of pointer not important anymore?
I roughly have some idea but I'd love to hear from more experienced programmers regarding this.
|
C pointer equivalents on other languages
| 18,257,037 | 5 | 0 | 188 | 0 |
php,javascript,python,c,pointers
|
So why?
In general, pointers are considered too dangerous, so modern languages try to avoid their direct use.
Do they have some equivalents in their languages that perform like pointer, or is the functionality of pointer not important anymore?
The functionality is VERY important. But to make them less dangerous, the pointer has been abstracted into less virulent types, such as references.
Basically, this boils down to stronger typing, and the lack of pointer arithmetic.
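Python is a concrete example of this: every name is a reference, assignment copies the reference rather than the object, and there is no pointer arithmetic.

```python
# Assignment copies the reference, never the object -- much like a pointer
# you can dereference but never do arithmetic on.
a = [1, 2, 3]
b = a                # b now refers to the same list object
b.append(4)
assert a == [1, 2, 3, 4]   # the change is visible through both names
assert a is b              # identity: roughly "same address"

c = list(a)          # an explicit copy is needed for a distinct object
assert c == a and c is not a
```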
| 0 | 0 | 0 | 1 |
2013-08-15T16:13:00.000
| 3 | 1.2 | true | 18,256,915 | 0 | 0 | 1 | 2 |
As an undergraduate in CS, I started with C, where the pointer is an important data type. Thereafter I touched on Java, JavaScript, PHP, and Python. None of them has pointers per se.
So why? Do they have some equivalents in their languages that perform like pointer, or is the functionality of pointer not important anymore?
I roughly have some idea but I'd love to hear from more experienced programmers regarding this.
|
App Engine deserializing records in python: is it really this slow?
| 18,281,029 | 7 | 6 | 264 | 0 |
python,google-app-engine,app-engine-ndb
|
Short answer: yes.
I find deserialization in Python to be very slow, especially where repeated properties are involved. Apparently, GAE-Python deserialization creates boatloads of objects. It's known to be inefficient, but also apparently, no one wants to touch it because it's so far down the stack.
It's unfortunate. We run F4 Front Ends most of the time due to this overhead (i.e., faster CPU == faster deserialization).
| 0 | 1 | 0 | 0 |
2013-08-15T18:58:00.000
| 1 | 1.2 | true | 18,259,697 | 0 | 0 | 1 | 1 |
In profiling my python2.7 App Engine app, I find that it's taking an average of 7ms per record to deserialize records fetched from ndb into python objects. (In pb_to_query_result, pb_to_entity and their descendants—this does not include the RPC time to query the database and receive the raw records.)
Is this expected? My model has six properties, one of which is a LocalStructuredProperty with 15 properties, which also includes a repeated StructuredProperty with four properties, but the average object should have less than 30 properties all told, I think.
Is it expected to be this slow? I want to fetch a couple of thousand records to do some simple aggregate analysis, and while I can tolerate a certain amount of latency, over 10 seconds is a problem. Is there anything I can do to restructure my models or my schema to make this more viable? (Other than the obvious solution of pre-calculating my aggregate analysis on a regular basis and caching the results.)
If it's unusual for it to be this slow, it would be helpful to know that so I can go and look for what I might be doing that impairs it.
|
Python Development Environment
| 18,260,621 | 0 | 3 | 163 | 0 |
python,django,web
|
The answer to this question would be pretty subjective, but lets try.
Minimal requirements
Knowledge about Python (basics, idioms, language characteristics),
Some server solution (if you want to put it live; otherwise local development is possible without web server),
At that point you are already able to code. You can write your code in even the simplest text editor, so no need for an IDE.
Good to have
Good IDE with autocompletion and inspections (I recommend PyCharm, but any decent one would do),
Knowledge about how to install Python modules,
At that point you are more efficient with your coding and see some errors before you execute your code.
Best practices (not necessarily all at once)
Virtualenv,
Vagrant,
Configured web server matching the one that will serve your Python app,
At that point you should have clean and separate environments for every project. They should also resemble the target environment as much as possible.
List could probably be completed with more items, though.
| 0 | 0 | 0 | 0 |
2013-08-15T19:44:00.000
| 6 | 0 | false | 18,260,514 | 1 | 0 | 1 | 1 |
I guess I am having a hard time understanding what is needed to start web development with Python. I am new to both web development and Python, and I am having a hard time figuring out what is really needed for a "Python development environment". I have heard that I should use virtualenv for all my development. Others say a good IDE. Some say a VM with all the tools you need. It is all a bit overwhelming.
So, from a Python developer's standpoint, I ask: what is the way to start? What do I need? What don't I need? Should I just get a good IDE, or use a VM?
|
Django-sphinx How to get a list of keywords
| 18,271,689 | 0 | 0 | 87 | 0 |
javascript,jquery,python,django,sphinx
|
You can't get a list of words as such. Morphology processing to produce a stem is a one-way process.
But Sphinx does include a BuildExcerpts function! This understands morphology settings and will highlight the relevant matching words.
| 0 | 0 | 0 | 0 |
2013-08-16T06:53:00.000
| 1 | 0 | false | 18,267,445 | 1 | 0 | 1 | 1 |
If I search for "Home", the results also match the word "Homes", etc.
I need to highlight the matched words on the client, as part of the search results. How can I get a list of keywords, based on morphology, for the client?
|
Do I need to use virtualenv with Vagrant?
| 18,271,644 | 12 | 22 | 13,000 | 0 |
python,django,virtual-machine,virtualenv,vagrant
|
If you run one vagrant VM per project, then there is no direct reason to use virtualenv.
If other contributors do not use vagrant, but do use virtualenv, then you might want to use it and support it to make their lives easier.
| 0 | 0 | 0 | 0 |
2013-08-16T10:09:00.000
| 3 | 1.2 | true | 18,270,859 | 1 | 0 | 1 | 2 |
I used to run Django projects on my local machine with manually set up VirtualBox VMs, with virtualenvs inside them. Recently I discovered Vagrant and decided to switch to it, because it seems very easy and useful.
But I cannot figure it out: do I still need to use virtualenv inside a Vagrant VM? Is it encouraged practice, or forbidden?
|
Do I need to use virtualenv with Vagrant?
| 28,601,794 | 9 | 22 | 13,000 | 0 |
python,django,virtual-machine,virtualenv,vagrant
|
Virtualenv and other forms of isolation (Docker, a dedicated VM, ...) are not necessarily mutually exclusive. Using virtualenv is still a good idea, even in an isolated environment, to shield the VM's system Python from your project packages. *nix systems use a plethora of Python-based utilities that depend on specific versions of packages being available in the system Python, and you don't want to mess with these.
Mind that virtualenv can still only go as far as pure Python packages; it doesn't solve the situation with native extensions, which will still mix with the system.
| 0 | 0 | 0 | 0 |
2013-08-16T10:09:00.000
| 3 | 1 | false | 18,270,859 | 1 | 0 | 1 | 2 |
I used to run Django projects on my local machine with manually set up VirtualBox VMs, with virtualenvs inside them. Recently I discovered Vagrant and decided to switch to it, because it seems very easy and useful.
But I cannot figure it out: do I still need to use virtualenv inside a Vagrant VM? Is it encouraged practice, or forbidden?
|
django inspectdb not getting all my tables
| 18,277,208 | 0 | 0 | 847 | 0 |
python,django,legacy,inspectdb
|
Figured it out: the 'NAME' in my settings.py was incorrect, so it was looking at the wrong database.
| 0 | 0 | 0 | 0 |
2013-08-16T15:25:00.000
| 1 | 0 | false | 18,276,893 | 0 | 0 | 1 | 1 |
Running inspectdb with Django returns a model file, but the model file is missing some of the tables in my DB. It actually has a table that was put in a while ago but later deleted or replaced. Do I need to update my DB or something? It seems like Django is looking at an older "version" of the DB.
|
Caching large objects in a python Flask/Gevent web service
| 18,286,160 | 0 | 3 | 1,511 | 0 |
python,caching,flask,nlp,gevent
|
Can't you unpickle the files when the server starts, and then keep the unpickled data in the global namespace? This way, it will be available for each request, and as you're not planning to write anything to it, you do not have to fear any race conditions.
| 0 | 0 | 0 | 0 |
2013-08-16T19:08:00.000
| 2 | 0 | false | 18,280,454 | 1 | 0 | 1 | 2 |
I am building a python based web service that provides natural language processing support to our main app API. Since it's so NLP heavy, it requires unpickling a few very large (50-300MB) corpus files from the disk before it can do any kind of analyses.
How can I load these files into memory so that they are available to every request? I experimented with memcached and redis but they seem designed for much smaller objects. I have also been trying to use the Flask g object, but this only persists throughout one request.
Is there any way to do this while using a gevent (or other) server to allow concurrent connections? The corpora are completely read-only so there ought to be a safe way to expose the memory to multiple greenlets/threads/processes.
Thanks so much and sorry if it's a stupid question - I've been working with python for quite a while but I'm relatively new to web programming.
|
Caching large objects in a python Flask/Gevent web service
| 18,284,611 | 1 | 3 | 1,511 | 0 |
python,caching,flask,nlp,gevent
|
If you are using Gevent you can have your read-only data structures in the global scope of your process and they will be shared by all the greenlets. With Gevent your server will be contained in a single process, so the data can be loaded once and shared among all the worker greenlets.
A good way to encapsulate access to the data is by putting access function(s) or class(es) in a module. You can do the unpickling of the data when the module is imported, or you can trigger this task the first time someone calls a function in the module.
You will need to make sure there is no possibility of introducing a race condition, but if the data is strictly read-only you should be fine.
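A minimal sketch of such a module (file name and path are illustrative); the corpus is unpickled on first use and every greenlet in the gevent process shares the same object:

```python
# corpus.py -- hypothetical module; the corpus is loaded once and cached at
# module level, so all greenlets in the single process share one object.
import pickle

_CORPUS = None

def get_corpus(path="corpus.pkl"):
    """Lazily unpickle on the first call; later calls reuse the object."""
    global _CORPUS
    if _CORPUS is None:
        with open(path, "rb") as f:
            _CORPUS = pickle.load(f)
    return _CORPUS
```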
| 0 | 0 | 0 | 0 |
2013-08-16T19:08:00.000
| 2 | 1.2 | true | 18,280,454 | 1 | 0 | 1 | 2 |
I am building a python based web service that provides natural language processing support to our main app API. Since it's so NLP heavy, it requires unpickling a few very large (50-300MB) corpus files from the disk before it can do any kind of analyses.
How can I load these files into memory so that they are available to every request? I experimented with memcached and redis but they seem designed for much smaller objects. I have also been trying to use the Flask g object, but this only persists throughout one request.
Is there any way to do this while using a gevent (or other) server to allow concurrent connections? The corpora are completely read-only so there ought to be a safe way to expose the memory to multiple greenlets/threads/processes.
Thanks so much and sorry if it's a stupid question - I've been working with python for quite a while but I'm relatively new to web programming.
|
How to run custom python code when server starts with django framework?
| 18,290,626 | 3 | 2 | 895 | 0 |
python,django
|
Put your code in the __init__.py file of your app folder.
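For example, a sketch of such an __init__.py; module-level code runs once, when Django first imports the app package at server start (the startup work itself is hypothetical):

```python
# myapp/__init__.py -- module-level code executes once, on first import.

def run_startup_tasks():
    # hypothetical startup work: warm caches, open connections, etc.
    return "started"

STARTUP_RESULT = run_startup_tasks()
```

One caveat: this runs on any import of the app (management commands included), not only when the web server starts.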
| 0 | 0 | 0 | 0 |
2013-08-17T15:19:00.000
| 1 | 1.2 | true | 18,290,296 | 0 | 0 | 1 | 1 |
I have a Python file in my app folder in my Django project, and I want it to run when the server starts. How do I do that?
|
Python: running process in the background with ability to kill them
| 18,293,596 | 1 | 0 | 124 | 0 |
python,subprocess,gevent
|
It depends on your application logic. If you just feed the data into the database without any CPU-intensive tasks, then most of your application time will be spent on IO and threads would be sufficient. If you are doing some CPU-intensive stuff, then you should use the multiprocessing module so you can use all your CPU cores, which threads won't allow because of the GIL.
Using subprocess would just add the extra task of implementing the same stuff that's already implemented in the multiprocessing module, so I would skip that (why reinvent the wheel). And gevent is just an event loop; I don't see how that would be better than using threads. But if I'm wrong, please correct me; I have never used gevent.
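Assuming the IO-bound case, loading the feeds concurrently with a thread pool might be sketched like this; load_feed is a placeholder for the real 20-30 second fetch (concurrent.futures is standard from Python 3.2; on 2.7 it is available as the futures backport):

```python
# Sketch: fetch many feeds concurrently; IO-bound work releases the GIL,
# so a thread pool is enough here.
from concurrent.futures import ThreadPoolExecutor

def load_feed(url):
    # placeholder for the real network fetch
    return "data from %s" % url

def load_all(urls, max_workers=20):
    # pool.map preserves the input order of the results
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(load_feed, urls))
```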
| 0 | 1 | 0 | 0 |
2013-08-17T21:05:00.000
| 1 | 0.197375 | false | 18,293,345 | 0 | 0 | 1 | 1 |
I need to constantly load a number of data feeds. The data feeds can take 20-30 seconds to load. I know what feeds to load by checking a MySQL database every hour.
I could have up to 20 feeds to load at the same time. It's important that none of the feeds block each other, as I need to refresh them constantly.
When I no longer need to load the feeds the database that I'm reading gets updated and I thus need to stop loading the feed which I would like to do from my main program so I don't need multiple connections to the db.
I'm aware that I could probably do this using threading, subprocess or gevent. I wanted to ask which of these would be best.
Thanks
|
Building the web app with json only data with javascript and ORM
| 18,331,119 | 1 | 2 | 191 | 0 |
php,javascript,jquery,python,extjs
|
I'm working on a Django project for the past six months, where I'm using Django for the backend service, returning only json responses, and the frontend code is completely separate.
jQuery by itself would result in unmaintainable code, even on a smaller scale, so you definitely need a high level frontend framework. I settled with Durandal.js, which includes:
Knockout.js for the ui bindings
Sammy.js for view routing
Require.js to modularize the code
I think it was a good choice at the time, and I feel very productive with that tech stack. If I were to start from scratch again, it would be very likely a similar stack.
As for ExtJS, it's a component/widget based framework, whose philosophy I don't much like. I saw the future, and it wasn't written in ExtJS :)
Although I see AngularJS and EmberJS as the titans that will very likely win the battle of frameworks, at least for now.
| 0 | 0 | 0 | 0 |
2013-08-20T01:18:00.000
| 1 | 1.2 | true | 18,325,521 | 0 | 0 | 1 | 1 |
I have been given a new project to complete, in which I have separate components that talk to each other via service calls.
They are not linked directly.
The technical head wants to build the entire frontend in ExtJS or jQuery and then use JSON to load the data. I mean, all forms, login, etc. will be driven by JSON.
Now, I have not done anything like that before. I have always generated forms and data from server-side controllers and views, as in PHP or Django (Python).
I want to know whether this approach is good and achievable, because I don't want to change things after spending time on it initially.
But if it is a good way, then I can start with it.
|
How to authenticate android user POST request with Django REST API?
| 18,334,430 | 0 | 4 | 9,898 | 0 |
java,android,python,django,authentication
|
You may want to use the Django sessions middleware, which will set a cookie with a Django session_id. On the following requests, the sessions middleware will set an attribute on your request object called user, and you can then test whether the user is authenticated with request.user.is_authenticated() (or the login_required decorator). Also, you can set the session timeout to whatever you like in the settings.
This middleware is enabled in default django settings.
| 0 | 0 | 0 | 0 |
2013-08-20T08:57:00.000
| 3 | 0 | false | 18,330,916 | 0 | 0 | 1 | 1 |
As of now, I have a Django REST API and everything is hunky dory for the web app, wherein I have implemented User Auth in the backend. The "login_required" condition serves well for the web app, which is cookie based.
I have an Android app now that needs to access the same API. I am able to sign in the user. What I need to know is how to authenticate every user when they make GET/POST request to my views?
My research shows a couple of solutions:
1) Cookie-backed sessions
2) Send the username and password with every GET/POST request (might not be secure)
Any ideas?
|
Celery - how to susbstitute 'None' in logs
| 18,336,614 | 1 | 0 | 281 | 0 |
python,django,celery
|
Are you defining your tasks with ignore_result=True (or did you set CELERY_IGNORE_RESULT to True)? If you did, you should try disabling it.
| 0 | 1 | 0 | 0 |
2013-08-20T12:19:00.000
| 1 | 1.2 | true | 18,334,877 | 0 | 0 | 1 | 1 |
In Celery's logs there are lines like
Task blabla.bla.bla[arguments] succeeded in 0.757446050644s: None
How do I replace this None with something more meaningful? I tried to set a return value in my tasks, but no luck.
|
Safely retrieving data from POST or GET in django
| 18,354,920 | 3 | 2 | 2,083 | 0 |
python,django,security
|
r prefixes to strings are not retained in the string value. 'a\\b' and r'a\b' are exactly the same string, which has a single backslash. u prefixes determine whether the string holds bytes or Unicode characters. In general strings in Django apps should be Unicode strings, but Python will automatically convert bytes to characters where necessary (this can blow up if you use non-ASCII characters).
None of this determines whether a string is ‘safe’.
Using the cleaned_data store on a Form means that the data has been validated for the particular type of field it is associated with. If you have an e-mail field, then the cleaned_data value is sure to look like a valid e-mail address. If you have a plain text field then cleaned_data can be any string. Neither of those provide you any guarantee that a string is ‘safe’; input validation is a good thing to do in general and a useful defense-in-depth but it does not make an application secure against injection.
Since these values are not escaped as far as I can see is it possible that they are not safe?
Input values should never be escaped and are never ‘safe’. It is not the job of the input handling phase to do escaping; it is when you drop the value into a string with a different context that you have to worry about escaping.
So, when you create an HTML response with a string in, you HTML-escape that string. (But better: use a templating language that automatically escapes for you, like Django's autoescape.)
When you create an SQL query with a string in, you SQL-escape that string. (But better: use parameterised queries or an ORM so that you never have to create a query with string variables.)
When you create a JavaScript variable assignment with a string in, you JS-escape that string. (But better: pass the data in a DOM data- attribute and read it from JS instead of using inline code.)
And so on. There are many different forms of escaping and there is no global escaping scheme which can protect you against the range of possible injection attacks. So leave the input as it is, and escape at the output phase, or better use existing framework tools to avoid having to explicitly escape at all.
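A small sketch of the same point in Python: one raw input string, escaped differently per output context (SQL is deliberately left as a comment, since parameterised queries should do it for you):

```python
# Escape at the point of output, not at input: the same raw string needs a
# different treatment per context.
import html
import json

raw = '<script>alert("hi")</script>'

html_safe = html.escape(raw)   # for an HTML response
js_safe = json.dumps(raw)      # a valid JS string literal for inline use
# For SQL, don't escape by hand -- parameterise instead:
#   cursor.execute("SELECT * FROM t WHERE name = %s", [raw])
```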
| 0 | 0 | 0 | 0 |
2013-08-20T20:23:00.000
| 1 | 1.2 | true | 18,344,338 | 0 | 0 | 1 | 1 |
I was wondering about the safest way to retrieve data from the POST or GET variables in Django. Sometimes I use the variable that is passed directly into the view function by the URL patterns in urls.py. I am told (not sure) that they are safe to use when I start the pattern with an r prefix, but I don't know why this is the case.
For retrieving POST data I know of two options:
Using a form; Django forms have cleaned_data, which should make the data safe to use.
Using request.POST.get('someval'). Since these values are not escaped as far as I can see, is it possible that they are not safe? Secondly, does putting a u or r prefix make it safe, and if so, why?
|
How to get IP address of hostname inside jinja template
| 37,210,237 | 4 | 12 | 21,366 | 0 |
python,jinja2,salt-stack
|
This is a very old post, but it is highly ranked in Google for getting the ipv4 address. As of salt 2015.5.8, the best way to get the primary ipv4 address is {{ grains['ipv4'][0] }}.
| 0 | 0 | 0 | 0 |
2013-08-21T14:42:00.000
| 7 | 0.113791 | false | 18,360,528 | 0 | 0 | 1 | 1 |
Our saltstack is based on hostnames (webN., dbN., etc.). But for various things I need IPs of those servers. For now I had them stored in pillars, but the number of places I need to sync grows.
I tried to use publish + network.ip_addrs, but that kinda sucks, because it needs to do the whole salt-roundtrip just to resolve a hostname. Also it's dependent on the minions responding. Therefore I'm looking for a way to resolve hostname to IP in templates.
I assume that I could write a module for it somehow, but my python skills are very limited.
|
how to define reserved template for python file for eclipse
| 18,384,978 | 2 | 1 | 379 | 0 |
python,eclipse,ide,pydev
|
You can define the templates used in PyDev (both for code-completion and for new modules) in window > preferences > pydev > editor > templates.
Anything with the context 'new module' there will be shown to you when you create a new module (and you can have many templates, such as one for unittests, empty modules, class modules, etc).
Note that the templates are only presented when you create a module with Alt+Shift+N > pydev module (or file > new > pydev module), not when you create a regular 'file' (even if it ends with .py)
| 0 | 0 | 0 | 0 |
2013-08-21T23:35:00.000
| 1 | 1.2 | true | 18,369,347 | 1 | 0 | 1 | 1 |
I need a default template for my files, with things like a first heading with a description of the file, the author, a shebang line, and so on. But PyDev and Eclipse don't do this for me.
When I create a new file in my project, how can I have such a template applied?
|
Google App engine change parent of entity that is not stored
| 18,377,368 | 1 | 0 | 35 | 0 |
python,google-app-engine
|
Create a new model instance with the data from the existing one,
or don't create the instance until you have all the facts.
| 0 | 1 | 0 | 0 |
2013-08-22T09:57:00.000
| 2 | 0.099668 | false | 18,377,222 | 0 | 0 | 1 | 2 |
I know we can't change the parent of an entity that is stored, but can we change the parent of an entity that is not stored? For example, I am declaring a model as
my_model = MyModel(parent = ParentModel1.key)
but after some checks I may have to change the parent of my_model (I have not run my_model.put()) to ParentModel2. How can I do this?
|
Google App engine change parent of entity that is not stored
| 18,377,371 | 1 | 0 | 35 | 0 |
python,google-app-engine
|
You still can't do it. You should probably delay instantiation of the MyModel object until you know its parent. Perhaps you could collect the attributes in a dictionary, then when it comes to instantiation you can do my_instance = MyModel(parent=parent_instance, **kwargs).
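A sketch of that approach with a stand-in class (not the real ndb model): gather the attributes in a dict, pick the parent after the checks, then instantiate once.

```python
# MyModel here is a stand-in, not the real ndb model; the point is the
# deferred instantiation via a kwargs dict.
class MyModel(object):
    def __init__(self, parent=None, **kwargs):
        self.parent = parent
        self.__dict__.update(kwargs)

attrs = {"name": "example", "count": 3}   # gathered during the checks
parent_key = "ParentModel2-key"           # decided only after the checks
my_model = MyModel(parent=parent_key, **attrs)
```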
| 0 | 1 | 0 | 0 |
2013-08-22T09:57:00.000
| 2 | 1.2 | true | 18,377,222 | 0 | 0 | 1 | 2 |
I know we can't change the parent of an entity that is stored, but can we change the parent of an entity that is not stored? For example, I am declaring a model as
my_model = MyModel(parent = ParentModel1.key)
but after some checks I may have to change the parent of my_model (I have not run my_model.put()) to ParentModel2. How can I do this?
|
I need a script that searches files for SSI and replaces the include with the actual HTML
| 22,432,278 | 0 | 1 | 417 | 0 |
python,html,ruby,bash,ssi
|
On your dev machine, use your browser to display the web page, then save the result with an appropriate file name in an output directory.
Thus, if you had mainfile.html, which executed various time/last-modified directives and included fileA.inc and fileB.inc at the appropriate places, the resulting display (and saveable HTML file) would comprise all four or five components.
| 0 | 1 | 0 | 0 |
2013-08-22T10:11:00.000
| 2 | 0 | false | 18,377,549 | 0 | 0 | 1 | 1 |
I am developing the front end code of a website which I will be handing over to some developers for them to integrate it with the backend. The site will be written in .NET but I'm developing the front end code with static HTML files (and a bit of javascript).
Because the header, footer and a few other elements are the same across all pages I am using Server Side Includes in my development environment. However, every time I hand the code to the developers I need to manually replace each SSI with the actual HTML by copying and pasting. This is starting to get tedious.
I have tried writing a bash script to do this but my bash knowledge is extremely limited so I have failed miserably (I'm not really sure where to start).
What I tried to achieve was:
Loop through all the HTML files in my project
Look for an include ( <!--#include file="myfile.html"--> )
If one is found, replace the include with the HTML from the file specified in the include
Keep doing this until there are no more includes and move on to the next file
Does anyone know of a script that can do this, or can point me in the right direction for achieving this myself? I'm happy for it to be in any language as long as I can run it on my Mac.
Thanks.
EDIT
It is safe to assume that all instances of <!--#include file="myfile.html"--> are on their own line.
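A sketch of those four steps in Python (function and variable names are mine; it assumes include paths are relative to the top-level file, and adds a pass limit since the original steps have no guard against include cycles):

```python
import re
from pathlib import Path

# Expand <!--#include file="..."--> directives with the referenced
# file's contents, repeating until no directives remain.
INCLUDE_RE = re.compile(r'<!--#include file="([^"]+)"\s*-->')

def expand_ssi(path, max_passes=100):
    path = Path(path)
    text = path.read_text()
    for _ in range(max_passes):          # guard against include cycles
        match = INCLUDE_RE.search(text)
        if match is None:
            return text
        included = (path.parent / match.group(1)).read_text()
        text = text[:match.start()] + included + text[match.end():]
    raise RuntimeError("too many include expansions; include cycle?")
```

You would loop this over every HTML file in the project and write the result to a build directory rather than overwriting the sources.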
|
How to install NLTK modules in Heroku
| 62,639,159 | 0 | 13 | 8,697 | 0 |
python,heroku,nltk
|
You need to follow the steps below:
nltk.txt needs to be present at the root folder
Add the modules you want to download, like punkt or stopwords, as separate rows
Change the line endings from Windows to Unix
Changing the line endings is a very important step. It can easily be done with Sublime Text or Notepad++; in Sublime Text it is under the View menu, then Line Endings.
Hope this helps
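For example, a minimal nltk.txt might look like this (the corpora listed are only examples; include whatever your code actually downloads):

```text
punkt
stopwords
wordnet
averaged_perceptron_tagger
```

Make sure the file is saved with Unix (LF) line endings, as described above.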
| 0 | 0 | 0 | 0 |
2013-08-22T15:51:00.000
| 4 | 0 | false | 18,385,303 | 0 | 0 | 1 | 3 |
Hey, I'd like to install the NLTK pos_tag on my Heroku server. How can I do so? Please give me the steps, as I'm new to the Heroku server system.
|
How to install NLTK modules in Heroku
| 42,257,701 | 17 | 13 | 8,697 | 0 |
python,heroku,nltk
|
I just added official nltk support to the buildpack!
Simply add a nltk.txt file with a list of corpora you want installed, and everything should work as expected.
| 0 | 0 | 0 | 0 |
2013-08-22T15:51:00.000
| 4 | 1 | false | 18,385,303 | 0 | 0 | 1 | 3 |
Hey, I'd like to install the NLTK pos_tag on my Heroku server. How can I do so? Please give me the steps, as I'm new to the Heroku server system.
|
How to install NLTK modules in Heroku
| 44,803,942 | 2 | 13 | 8,697 | 0 |
python,heroku,nltk
|
If you want to use simple functionality like pos_tag, tokenizers, stemming, etc., then you can do the following:
mention nltk in requirements.txt
mention the following modules in nltk.txt
wordnet
pros_cons
reuters
hmm_treebank_pos_tagger
maxent_treebank_pos_tagger
universal_tagset
punkt
averaged_perceptron_tagger_ru
averaged_perceptron_tagger
snowball_data
rslp
porter_test
vader_lexicon
treebank
dependency_treebank
| 0 | 0 | 0 | 0 |
2013-08-22T15:51:00.000
| 4 | 0.099668 | false | 18,385,303 | 0 | 0 | 1 | 3 |
Hey, I'd like to install the NLTK pos_tag on my Heroku server. How can I do so? Please give me the steps, as I'm new to the Heroku server system.
|
How to get content of element in embeded python codes in web2py view
| 18,414,025 | 0 | 0 | 79 | 0 |
javascript,python,web2py
|
I used a Form to achieve this. Working quite well.
| 0 | 0 | 1 | 0 |
2013-08-22T16:24:00.000
| 1 | 1.2 | true | 18,386,023 | 0 | 0 | 1 | 1 |
How can I get the content of an element with a dynamic id/name in embedded Python code in a web2py view page?
Basically I want something like:
{{for task in tasks:}}
...
{{=TEXTAREA(task['remark'], _name='remark'+str(task['id']), _id='remark'+str(task['id']), _rows=2)}}
{{=A('OK', _class='button', _href=URL('update_remark', vars=dict(task_id=task['id'], new_remark=['remark'+str(task['id'])])))}}
What I want ['remark'+str(task['id'])] to do is get the content automatically, but obviously it won't work. I'm wondering how I can achieve this; is there any API that can help?
Thanks in advance!
|
Scapy fields under encryption
| 19,122,626 | 1 | 2 | 635 | 0 |
python,scapy
|
OK, at the beginning I put the fields behind the encryption in a packet and did all the encryption magic in post_build (encrypt) and pre_dissect (decrypt), but that was really tricky... so instead I created another packet class (EncryptedPacket) which overloads addfield and getfield to do all the encryption work. This solution is much cleaner and nicer than the previous one. I will add examples later.
| 0 | 0 | 0 | 0 |
2013-08-22T21:12:00.000
| 1 | 1.2 | true | 18,390,935 | 0 | 0 | 1 | 1 |
I have a protocol with encrypted fields.
I want to be able when dissecting the packet, decrypt them
and when building it will encrypt them (lets say I know the private\public key...).
I need this for changing the fields under the encryption.
What is the best way to do this with Scapy?
I couldn't find anything useful.
Maybe something with post_build / post_dissect?
|
Is it possible to use both cheaper and emperor with uWSGI
| 18,396,906 | 2 | 1 | 304 | 0 |
python,django,wsgi,uwsgi
|
There are no problems: each "vassal" can be configured with its own cheaper mode. In this way you can have QoS for your customers.
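As an illustration (paths and numbers are examples, not a recommended configuration): the Emperor ini just points at a directory of vassal inis, and each vassal carries its own cheaper settings.

```ini
; emperor.ini -- the Emperor only watches the vassals directory
[uwsgi]
emperor = /etc/uwsgi/vassals

; /etc/uwsgi/vassals/site-a.ini -- one Django site with its own
; cheaper tuning (a premium customer could get higher numbers)
[uwsgi]
chdir = /srv/site-a
module = site_a.wsgi:application
processes = 8        ; upper bound on workers
cheaper = 2          ; keep at least 2 workers alive
cheaper-initial = 2  ; start with 2 workers
```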
| 0 | 1 | 0 | 0 |
2013-08-22T21:48:00.000
| 1 | 0.379949 | false | 18,391,405 | 0 | 0 | 1 | 1 |
I need to host multiple Django sites (quite a lot of sites actually) and currently I am using Apache+mod_wsgi but I want to switch to uWSGI.
One of the nice features of uWSGI is cheaper mode that spawns processes as needed and shuts them down as needed as well. On the other hand, it seems that the way to make it run multiple sites is to use emperor mode.
Can emperor mode be used together with cheaper subsystem? Are there any quirks/problems I should be aware of? Has anyone ever done this?
|
Feeding ES directly - Is a queue needed?
| 18,465,751 | 3 | 2 | 205 | 0 |
python,performance,search,queue,elasticsearch
|
I am new here, but I will try to share my own experience with ES.
Here, we are using CouchDB to store the JSON we are indexing into ES. However, we do heavy modifications on those docs, like creating new nodes, etc.
The docs are big: hundreds of fields, more than 15 nested collections.
Finally, there are thousands of docs.
So, yes, in my humble opinion, if you can create your docs via your application, I do not see why ES would have trouble with that.
However, for ES, I would
use the bulk api. ES is (much, much) more efficient that way.
I'd probably store the ids of the docs that couldn't be indexed due to random errors in another index (or in a file, or somewhere else) so that you can reconstruct and reindex them afterwards, instead of retrying on error. (Though I can't speak to the feasibility of that in Python.)
not use a replica for an index currently indexing.
For the retry on errors, I have mixed feelings. If the error is due to a wrong construction of the doc or to a mapping error, it will fail on each retry.
Here, we are indexing thousands of those docs a minute, and still can issue search and facet requests (those could be slightly slower, though).
This is not much, but I hope it helps.
Good luck.
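To make the bulk-API point concrete, here is a sketch that builds a bulk request body by hand (the index and type names are examples; in practice a client library can assemble this for you). Each document becomes one action line plus one source line, and the body must end with a newline:

```python
import json

def bulk_body(docs, index="myindex", doc_type="doc"):
    """Build an NDJSON body for the Elasticsearch _bulk endpoint."""
    lines = []
    for doc_id, source in docs:
        # action line describing what to do with the next line
        lines.append(json.dumps({"index": {"_index": index,
                                           "_type": doc_type,
                                           "_id": doc_id}}))
        # source line with the document itself
        lines.append(json.dumps(source))
    return "\n".join(lines) + "\n"

body = bulk_body([(1, {"title": "a"}), (2, {"title": "b"})])
# POST this body to http://host:9200/_bulk in a single request instead
# of issuing one HTTP call per document.
```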
| 0 | 0 | 0 | 0 |
2013-08-23T12:59:00.000
| 2 | 0.291313 | false | 18,403,526 | 0 | 0 | 1 | 1 |
I am looking at the possibility of using ES without a database, constructing my data from my python application and sending it straight to ES in real time. It says me a lot of complexity, however my concern is that I might be generating data very quickly and sending requests relentlessly, even when ES might not be ready to accept it.
My question is, in this case does it makes sense to use a queue system as a buffer between the two, so my application sends everything to a queue, and then queue tries to add it to ES, retrying if it doesn't have success.
I am not sure if this is the most logical or efficient method. If anyone has any information or ideas on what queue systems would be suited, or if I even need one, I'd be very interested to hear.
James
|
Connecting iOS app to Windows/Linux apps
| 18,773,687 | 0 | 4 | 1,265 | 0 |
python,ios,windows,web,bonjour
|
I would definitely suggest a webapp. And the answer to your questions are given below:
How would I receive and send notifications over a local network.
Use a REST based web service to communicate with the server.
You have to use polling to receive data:-(
How could I connect to the server using NSURLConnection if it does not have a static ip?
If possible, configure a domain name in your network which points to the server IP. (Configure the local DHCP server to give your server the same IP every time, based on its MAC address!)
Have an IP range and, when the app starts, try to reach a specific URL and check whether it responds.
Ask the user to enter the server IP every time the app starts!
| 1 | 1 | 0 | 0 |
2013-08-23T14:46:00.000
| 2 | 0 | false | 18,405,726 | 0 | 0 | 1 | 1 |
Background:
I am just about to start development on an mobile and desktop app. They will both be connected to a local wifi network (no internet connection) and will need to communicate with one another. At the outset we are targeting iOS and Windows as the two platforms, with the intention of adding Linux, OSX, and Android support in that order. The desktop app will largely be a database server/notification center for receiving updates from the iOS apps and sending out the data to other iOS apps. There may be a front end to the desktop app, but we could also incorporate it into the iOS app if needed.
For the moment we just want the iOS app to automatically detect when it is on the same network as the server and then display the data that is sent by that server (bonjour like).
As far as I see it there are two paths we could take to implement this
Create a completely native app for each platform (Windows, Linux, OSX).
Pro: We like the ideas of having native apps for performance and ease of install.
Con: I know absolutely nothing about Windows or Linux development.
Create an app that is built using web technologies (probably python) and create an easy to use installer that will create a local server out of the desktop machine which the mobile apps can communicate with.
Pro: Most of the development would be cross-platform and the installer should be easy enough to port.
Con: If we do want to add a front-end to the server app it will not be platform native and would be using a css+html+javascript GUI.
Question:
My question is how I would implement the connection between the iOS app and the server app in each circumstance.
How would I receive and send notifications over a local network.
How could I connect to the server using NSURLConnection if it does not have a static ip?
I hope this is clear. If not please ask and I will clarify.
Update 09/06/2013
Hopefully this will clear things up. I need to have a desktop app that will manage a database; this app will connect to iOS devices on a local wireless network that is not connected to the internet. I can do this either with the HTTP protocol (preferably with a Flask app) or with a direct socket connection between the apps and the server. My question is which of the above two choices is best? My preference would be for a web-based app using Python+Flask, but I have no idea how to connect the iOS app to a Flask app running on a local network without a static IP. Any advice on this would be appreciated.
|
Google App Engine, Change which python version
| 18,955,756 | 1 | 2 | 1,807 | 0 |
python,google-app-engine,path,google-cloud-storage,sys.path
|
In GAE, change the Python path via the Preferences settings: set the Python Path to match your Python 2.7 path.
| 0 | 1 | 0 | 0 |
2013-08-23T16:05:00.000
| 1 | 0.197375 | false | 18,407,249 | 0 | 0 | 1 | 1 |
I'm trying to use the GCS client library with my app engine app and I ran into this -
"In order to use the client library in your app, put the /src/cloudstorage directory in your sys.path so Python can find it."
First, does this mean I need to move the directory into my sys.path, or do I need to add ~/src/cloudstorage/ to my PATH environment variable?
Second, when I print sys.version and sys.path from the App Engine Interactive Console, I see a Python Version of 2.7.2, but when I print from my Terminal (on a Mac), I get the Python I want to use and installed via Homebrew - 2.7.5. The sys.path in the Console shows all App Engine paths and the default Python installation - /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7
On my terminal - /usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/
I need help understanding how to change this.
** UPDATE **
Okay, I figured out part of this answer. "In order to use the client library in your app, put the /src/cloudstorage directory in your sys.path so Python can find it." means moving the actual directory to the App Engine project directory.
The second piece still remains: why is my Mac PATH environment variable not used in App Engine? How can I change the default version of Python used by App Engine (from 2.7.2 to 2.7.5)? This is not related to changing the version in the YAML file.
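On the first part: "put it in your sys.path" means adding the directory that contains the cloudstorage package to Python's module search path at runtime, typically at the top of appengine_config.py or your entry module. A sketch (the "src" path follows the question's layout; it is not tied to the shell's PATH variable at all):

```python
import os
import sys

# Make the bundled library importable: the GCS client ships as
# src/cloudstorage, so adding "src" to sys.path lets Python resolve
# "import cloudstorage". sys.path is Python's module search path;
# the PATH environment variable is the shell's and is unrelated.
lib_dir = os.path.abspath("src")
if lib_dir not in sys.path:
    sys.path.insert(0, lib_dir)

# afterwards you can do:  import cloudstorage
```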
|
Python Frameworks vs Firebase
| 18,420,953 | 4 | 0 | 2,929 | 0 |
python,django,pyramid,firebase,bottle
|
Some contras of using Firebase:
Your data is in an external server (deal-breaker for sensitive data)
It costs money
You have an additional dependency that you don't fully control (if they go out of service/business you might be in trouble)
You know the pros. If you think these are not relevant to you then go for it.
| 0 | 0 | 0 | 1 |
2013-08-24T16:48:00.000
| 1 | 0.664037 | false | 18,420,854 | 0 | 0 | 1 | 1 |
Since Firebase can do user login as well as hold a lot of other stuff about users and their interactions with my app.
What are some of the advantages and disadvantages of using Firebase solely as a web framework, instead of using django, pyramids, bottle, etc etc?
HTTP routing, etc.... I have that sort of stuff handled by another process...
So, if I'm looking basically to hold some user stuff and allow for user logins and user to user private/personal communications.
It seems firebase is an almost total solution, no?
I know this isn't a technical question, but I'm just looking for opinions from a realtime crowd....stackoverflow seems the best fit.
|
How to debug Django unittests with PyDev?
| 19,337,234 | 5 | 1 | 1,988 | 0 |
python,django,unit-testing,debugging,pydev
|
Setup a new debug configuration.
Run -> Debug Configurations...
Select 'PyDev Django'
Click 'New Launch Configuration (top left corner)
Name your new configuration
Set the project to your project
Set the module to your manage.py (browse to your manage.py)
Go to the 'Arguments' tab and enter 'test' under 'Program arguments'
Click 'Apply'
This will allow you to run 'manage.py test' and be able to stop on your breakpoints.
Unfortunately, you'll have to create different configurations if you only want to run a subset of tests.
| 0 | 0 | 0 | 1 |
2013-08-25T00:28:00.000
| 1 | 0.761594 | false | 18,424,495 | 0 | 0 | 1 | 1 |
I've written a few unit tests for a Django project. I'd like to debug them. I've set a breakpoint on the server side. What should I click to run the Django unit test with debugging enabled in PyDev Eclipse?
It seems I can run the manage.py test command from PyDev, but then there's no debugging. If I run the unit test with right-click debug unittest, I get all sorts of Internal Server Errors, presumably because the test environment wasn't set up correctly.
|
Django not installed in venv?
| 51,107,469 | 0 | 1 | 1,802 | 0 |
django,python-venv
|
What happened to me was that I was trying to install django from outside the environment directory/folder.
So make sure you are inside the environment directory and then use pip install django
| 0 | 0 | 0 | 0 |
2013-08-25T10:50:00.000
| 3 | 0 | false | 18,428,204 | 1 | 0 | 1 | 2 |
Would anyone know possible reasons why Django is being installed in the global site package and not my venv's site package folder?
Here's my set up and what I did, this is a bit detailed since I'm new to Python/Django and not sure which information is important:
Python 3.3 is installed in c:\python33
I have virtualenv, pip, easy_install installed in C:\Python33\Scripts.
My venv is c:\users\username\projects\projB
This venv was created using pyvenv, not virtualenv.
I activated the venv.
I changed directory to C:\Python33\Scripts to run "pip install django".
Django was created inside C:\Python33\Lib\site-packages and not inside C:\users\username\projects\projB\Lib\site-packages.
Do I need to install pip inside my venv and use that to install Django?
|
Django not installed in venv?
| 18,429,225 | 1 | 1 | 1,802 | 0 |
django,python-venv
|
Pip should be installed when you create the virtual environment. Don't change directory into C:\Python33\Scripts before running pip. It looks like that means you use the base install's pip instead of your virtual environment's pip.
You should be able to run pip from any other directory. However I'm not familiar with python on Windows, so I'm not certain that pip is added to the path when you activate the environment. If that doesn't work, you'll have to change directory into the bin directory of your virtual environment, then run pip.
| 0 | 0 | 0 | 0 |
2013-08-25T10:50:00.000
| 3 | 1.2 | true | 18,428,204 | 1 | 0 | 1 | 2 |
Would anyone know possible reasons why Django is being installed in the global site package and not my venv's site package folder?
Here's my set up and what I did, this is a bit detailed since I'm new to Python/Django and not sure which information is important:
Python 3.3 is installed in c:\python33
I have virtualenv, pip, easy_install installed in C:\Python33\Scripts.
My venv is c:\users\username\projects\projB
This venv was created using pyvenv, not virtualenv.
I activated the venv.
I changed directory to C:\Python33\Scripts to run "pip install django".
Django was created inside C:\Python33\Lib\site-packages and not inside C:\users\username\projects\projB\Lib\site-packages.
Do I need to install pip inside my venv and use that to install Django?
|
Does Google App Engine's git Push-to-Deploy also update backends?
| 18,455,561 | 2 | 1 | 157 | 0 |
git,google-app-engine,python-2.7
|
No -- It doesn't update backends.
(My cron jobs ran last night and failed because they were running old code.)
Nothin' like good ol' appcfg.py update ./ --backends
| 0 | 1 | 0 | 0 |
2013-08-25T23:15:00.000
| 1 | 1.2 | true | 18,434,685 | 0 | 0 | 1 | 1 |
When using appcfg.py, I had to specify backends to update them.
What about when I'm using Push-to-Deploy?
I ask because I see two of my Versions don't have the same "deployed" date -- the backend still says "6 days ago". I didn't change backends.yaml, but I did change the code that runs on that backend.
Should I see a new "deployed" date? Is git Push-to-Deploy working?
|
What's the difference between self.browse() and self.pool.get() in OpenERP development?
| 18,457,696 | 8 | 4 | 11,039 | 0 |
python,odoo
|
self.pool.get is used to get the Singleton instance of the orm model from the registry pool for the database in use. self.browse is a method of the orm model to return a browse record.
As a rough analogy, think of self.pool.get as getting a database cursor and self.browse as a SQL select of records by id. Note that if you pass browse a single integer you get a single browse record; if you pass a list of ids you get a list of browse records.
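The analogy can be made concrete in plain Python (illustration only; this is not the OpenERP API itself, and the class and data are made up):

```python
# self.pool behaves like a per-database registry of singleton model
# instances; browse() turns ids into record wrappers.
class FakeModel:
    def __init__(self, rows):
        self._rows = rows                       # id -> fields, standing in for SQL

    def browse(self, ids):
        if isinstance(ids, int):                # single id -> single record
            return self._rows[ids]
        return [self._rows[i] for i in ids]     # list of ids -> list of records

pool = {"res.partner": FakeModel({1: {"name": "Alice"},
                                  2: {"name": "Bob"}})}

partner_model = pool.get("res.partner")   # like self.pool.get('res.partner')
record = partner_model.browse(1)          # like model.browse(cr, uid, 1)
records = partner_model.browse([1, 2])    # like model.browse(cr, uid, [1, 2])
```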
| 0 | 0 | 1 | 0 |
2013-08-26T05:27:00.000
| 2 | 1.2 | true | 18,437,202 | 0 | 0 | 1 | 1 |
I have been working on developing a module in OpenERP 7.0. I have been using Python and the Eclipse IDE for development. I wanted to know the difference between self.browse() and self.pool.get() in OpenERP development.
Thanks.
|
How do I make files downloadable for a particular role in Plone?
| 18,516,979 | 2 | 1 | 155 | 0 |
python,plone
|
The codeless way to do this is to make use of Plone's workflow system.
Out-of-the-box, Plone's file and image content types do not have their own workflow. That means that files and images will simply inherit the publication state of their parent folder. This is easy and sensible, but it doesn't meet the need you're describing.
To change the situation, you may use the "types" configuration panel to turn on independent workflow for files and images. Then, their publication status may be set separately from their containing folders. Typically, you'd choose the same workflow that you're using for documents. Then, you may publish a folder and list its contents while having the files within be private -- thus requiring login for viewing.
If you need this to work differently in different places, you may turn on "placeful" workflow (turn it on by adding it in the add-ons panel; it's pre-installed, but not active). This allows different workflows in different parts of a site. It increases complexity, but is often an ideal solution to this kind of puzzle.
| 0 | 0 | 0 | 0 |
2013-08-26T05:36:00.000
| 2 | 1.2 | true | 18,437,284 | 0 | 0 | 1 | 1 |
I wish to make the contents of a folder in Plone downloadable only for certain roles. Can this be done easily? At present anybody who clicks the hyperlink for file name in the folder contents can download the file easily. I know about the site-wide option of overriding the at_download code using ZMI.
|
Can i generate reporting using google calendar
| 52,007,377 | 0 | 3 | 3,005 | 0 |
python,django,reporting,google-calendar-api
|
I'm probably going to write a Python script for this soon.
I had written a reporting app before in C#, but it's badly written and I think Google has changed their API again so it's not working anymore.
The way I did it was to use tags. I wanted my total work hours per client. I would enter them into Google Calendar as:
@clientname description of work
Where clientname can be the first few letters only and the software matches it to a full name from a list of clients. My software would then allow you to chose a time-period and one or more clients and would output them into a neat looking Word file.
PS: to be honest, the suggestion of using gtimereport.com seems very bad to me. You're basically uploading all of your calendars to strangers. That's why I'm going to write a script for this.
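The tag-and-sum idea above can be sketched in a few lines. The events here are hard-coded as (title, hours) pairs; with the real Calendar API you would derive the hours from each event's start/end times, and the client names are invented:

```python
import re
from collections import defaultdict

# Titles look like "@client description of work"; untagged events
# (lunch, sleep, ...) are simply skipped.
TAG_RE = re.compile(r'^@(\w+)\s+(.*)$')

def hours_per_client(events):
    totals = defaultdict(float)
    for title, hours in events:
        match = TAG_RE.match(title)
        if match:
            totals[match.group(1)] += hours
    return dict(totals)

report = hours_per_client([
    ("@acme fix login page", 2.5),
    ("@acme deploy", 1.0),
    ("@globex kickoff call", 1.5),
    ("lunch", 1.0),               # untagged, ignored
])
```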
| 0 | 0 | 0 | 0 |
2013-08-27T00:42:00.000
| 3 | 0 | false | 18,455,213 | 0 | 0 | 1 | 1 |
I am planning all my daily activities in Google Calendar, like:
Office time
sleeping
playing
Gym
I am happy with Google, but the issue is that I don't get reporting, so I can't see how much time is spent in each category.
I know Python and Django, so I was thinking: is it possible to keep logging all events in Google Calendar, then have a daily cron job fetch the events from Google Calendar and put them in a MySQL database?
The main issue is that I want to define separate categories for different things, like WORK, SLEEP, SHOPPING etc.
But how can I do that from the event name only? Do I need to enter some keywords in the events which I can grab and use as categories? Any ideas on that?
|
Extracting links from HTML in Python
| 18,456,494 | 1 | 1 | 718 | 0 |
python,html,python-3.x,html-parsing
|
Try the html.parser module from the standard library, or the re module; either will help you do that.
With a regex, something like
r'https?://[^\s<>"]+|www\.[^\s<>"]+'
will match most links.
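Here is a standard-library-only sketch (class name and the extension buckets are mine) that collects links and sorts them into separate arrays by type, matching the question's constraint of built-in Python 3 modules:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href/src values and bucket them by file extension."""
    def __init__(self):
        super().__init__()
        self.links = {"images": [], "audio": [], "other": []}

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                lower = value.lower()
                if lower.endswith((".png", ".jpg", ".jpeg", ".gif")):
                    self.links["images"].append(value)
                elif lower.endswith(".mp3"):
                    self.links["audio"].append(value)
                else:
                    self.links["other"].append(value)

parser = LinkCollector()
parser.feed('<a href="http://example.com"><img src="logo.png">'
            '<a href="song.mp3">play</a></a>')
```

You would feed it the string you already fetched with urllib.request, then write parser.links out in whatever HTML format you need.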
| 0 | 0 | 1 | 0 |
2013-08-27T02:27:00.000
| 2 | 0.099668 | false | 18,455,991 | 0 | 0 | 1 | 1 |
I basically have to make a program that takes a user-input web address and parses the HTML to find links, then stores all the links in another HTML file in a certain format. I only have access to built-in Python modules (Python 3). I'm able to get the HTML code from the link using urllib.request and put that into a string. How would I actually go about extracting links from this string and putting them into a string array? Also, would it be possible to identify links (such as an image link / mp3 link) so I can put them into different arrays (then I could categorize them when creating the output file)?
|
Most efficient way to migrate from one model to another?
| 18,468,452 | 0 | 0 | 46 | 0 |
python,google-app-engine,app-engine-ndb
|
You haven't described many limitations, so I assume it's just a simple copy operation you're after. "Best way" is kinda vague, I don't know what you're comparing against. The only thing that you'd want to be careful about is to do the actual work of creating the new entity, copying data over, and deleting the old entity in a transaction. This is simple to do, and will prevent you from creating duplicates in case something goes wrong.
The remote API shell is definitely the least-coding-effort way to do it. You can write simple python functions to do your transactional copy, and run it in the shell. You don't need to write any extra handlers, and you don't even need to deploy a new version of your app. The problem with the remote shell is that it's probably 100x slower in accessing your datastore, so it could take a long time. If you let it run overnight, it potentially could stop if you have a hiccup in your internet connection - though this shouldn't be a huge problem if you copied your entities in a transaction, you can just restart the operation. Just as a reference, I recently ran an operation via remote API that uploaded 6000 entities, it took maybe 5 minutes. If you're ok with letting the operation run overnight, this is probably the way to go unless you have > 100K entities.
The mapreduce API method will run faster, since the load will be spread across a number of instances. A bit more effort to get mapreduce set up, and you'll have to deploy a new version of your app with the functionality, kick it off, wait until it finishes, and maybe clean out the code, as well as a bunch of logging entities that mapreduce automatically generates.
| 0 | 0 | 0 | 0 |
2013-08-27T04:17:00.000
| 2 | 0 | false | 18,456,841 | 0 | 0 | 1 | 1 |
I want to consolidate my logging data into a single StatisticStore model. Right now, my logging data is scattered around 3 models, which is a mess.
What would be the best way to iterate over all those records of all 3 models, and create a copy of each in the new StatisticStore model?
|
Reducing Google App Engine costs
| 18,468,740 | 3 | 2 | 638 | 0 |
javascript,python,google-app-engine
|
I'm assuming you're paying a lot in instance hours. Reading from the GAE filesystem is rather slow, so the easiest way to optimize is to read the static file only once on instance startup, keep the js in memory (i.e. a global variable), and print that.
Secondly, make sure your js is being cached by the customers so when they reload your page, you don't have to serve the js to them again unnecessarily.
Next way is to serve the js file as a static file if possible. This would save you some money if the js file is big and you're consuming CPU cycles just printing it. In this case have your handler that generates the HTML insert the appropriate URL to the appropriate js file instead of regenerating the entire js each time. You'll save money because you won't get charged instance hours for files served as static files, plus they can get cached in the edge cache (GAE's CDN), and you won't get billed anything at all for them.
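The first suggestion (read the file once per instance and serve it from a module-level global) can be sketched like this. The names are mine, and the injectable _read parameter exists only to make the sketch easy to exercise without a real file:

```python
# Module-level cache: survives across requests on the same instance,
# so the filesystem is hit at most once per file per instance.
_JS_CACHE = {}

def load_js(path, _read=lambda p: open(p).read()):
    if path not in _JS_CACHE:
        _JS_CACHE[path] = _read(path)   # only on the first request
    return _JS_CACHE[path]
```

Your handler would then print load_js("static/app.js") followed by the per-customer custom code.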
| 0 | 1 | 0 | 0 |
2013-08-27T13:43:00.000
| 3 | 0.197375 | false | 18,467,222 | 0 | 0 | 1 | 1 |
We have a piece of Javascript which is served to millions of browsers daily.
In order to handle the load, we decided to go for Google App Engine.
One particular thing about this piece of Javascript is that it is (very) slightly different per company using our service.
So far we are handling this by serving everything through main.py which basically goes:
- Read the JS static file and print it
- Print custom code
We do this on every load, and costs are starting to really add-up.
Apart from having a static version of the file per customer, is there any other way that you could think about in order to reduce our bill? Would using memcache instead of reading a file reduce the price in any way?
Thanks a lot.
|
hasattr() throws AttributeError
| 18,474,264 | 3 | 0 | 361 | 0 |
python,django
|
The error is complaining about LargeImage. That's being caused by this expression: product.LargeImage. You might want to check for that first, or even better, put this in a try/except block.
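A minimal reproduction of that failure mode (Product here is a stand-in for the parsed Amazon response object): hasattr(x, 'URL') only guards the attribute lookup on x itself, and evaluating product.LargeImage to produce x raises before hasattr ever runs.

```python
class Product:
    @property
    def LargeImage(self):
        # the XML library raises AttributeError for a missing child element
        raise AttributeError("no such child: LargeImage")

product = Product()

try:
    hasattr(product.LargeImage, "URL")  # product.LargeImage raises first
    raised = False
except AttributeError:
    raised = True

# Guard the whole chain instead: check each level, or wrap the entire
# expression in try/except AttributeError.
safe = hasattr(product, "LargeImage")   # hasattr swallows the error: False
```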
| 0 | 0 | 1 | 0 |
2013-08-27T19:50:00.000
| 1 | 1.2 | true | 18,474,222 | 0 | 0 | 1 | 1 |
I'm trying to check whether some XML in a Django app has certain elements/nodes, and if not, skip that code block. I'm checking for the elements' existence using hasattr(), which should return False if the element doesn't exist:
if hasattr(product.ItemAttributes, 'ListPrice') \
    and hasattr(product.Offers.Offer.OfferListing, 'PercentageSaved') \
    and hasattr(product.LargeImage, 'URL'):
Except in my case it's throwing an attribute error:
AttributeError at /update_products/
no such child: {http://webservices.amazon.com/AWSECommerceService/2011-08-01}LargeImage
I don't understand why it's throwing an error instead of just returning false and letting me skip the code block?
|
virtualenv/virtualenvwrapper confusion - how to properly use
| 18,475,165 | 2 | 0 | 329 | 0 |
python,django,virtualenv,virtualenvwrapper
|
When i do tutorials like "Effective Django" they use the virtualenv command on an empty folder, then activate it. That works, until tomorrow when I want to work on the app again at which point the virtualenv is gone.
I strongly doubt that this is the case, unless something is deleting your directories overnight. If that is the case, stop putting your code where it is being deleted.
Assuming that is not the case, the solution is for you to go back to the directory you created as a virtualenv, and reactivate it.
| 0 | 0 | 0 | 0 |
2013-08-27T20:43:00.000
| 1 | 0.379949 | false | 18,475,116 | 1 | 0 | 1 | 1 |
Sorry if this is dumb, but no documentation I read seems to answer this question directly: how do I properly use virtualenv so that I have an environment I can call with workon?
When I follow tutorials like "Effective Django", they use the virtualenv command on an empty folder and then activate it. That works until the next day, when I want to work on the app again, at which point the virtualenv is "gone". What do I do at that point? I've used mkvirtualenv before, and that creates a "permanent" virtualenv I can call with workon, but I don't understand how I would use mkvirtualenv on an existing project, or whether that is a good idea. As it stands, I have a project I ran virtualenv on yesterday that has a bin folder in it, and I'm not sure if I need to source it again or what. Ideally I want to just workon project and get to work.
|
Digits in Eclipse's console after HTTP status codes
| 18,476,086 | 1 | 0 | 45 | 0 |
python,django,eclipse,http,pydev
|
It's the size of the response, in bytes.
Note that this has nothing to do with Eclipse, it's just the way Django's runserver formats its output.
| 0 | 0 | 1 | 0 |
2013-08-27T20:56:00.000
| 1 | 1.2 | true | 18,475,321 | 0 | 0 | 1 | 1 |
I am a programming newbie, and I recently installed Python + Django, and successfully created a very small web app. Everything works fine, but I am puzzled about 4 digits that appear after HTTP status codes in Eclipse's console following any request I make to my server.
Example: [27/Aug/2013 22:53:32] "GET / HTTP/1.1" 200 1305
What does 1305 represent here and in every other request?
|
Py4J has bigger overhead than Jython and JPype
| 21,098,619 | 1 | 7 | 6,465 | 0 |
java,python,mahout,py4j
|
I don't know Mahout. But think about this: at least with JPype and Py4J you will have a performance impact when converting types from Java to Python and vice versa. Try to minimize calls between the languages. Maybe an alternative for you is to code a thin wrapper in Java that condenses many Java calls into one Python-to-Java call.
| 0 | 0 | 0 | 0 |
2013-08-28T10:03:00.000
| 5 | 0.039979 | false | 18,484,879 | 1 | 0 | 1 | 1 |
After searching for an option to run Java code from a Django (Python) application, I found that Py4J is the best option for me. I tried Jython, JPype, and Python subprocess, and each of them has certain limitations:
Jython: my app runs in (C)Python.
JPype is buggy: you can start the JVM just once; after that it fails to start again.
Python subprocess: you cannot pass Java objects between Python and Java through a regular console call.
The Py4J web site says:
In terms of performance, Py4J has a bigger overhead than both of the previous solutions (Jython and JPype) because it relies on sockets, but if performance is critical to your application, accessing Java objects from Python programs might not be the best idea.
In my application performance is critical, because I'm working with the machine-learning framework Mahout. My question is: will Mahout itself also run slower because of the Py4J gateway server, or does this overhead just mean that invoking Java methods from Python is slower? (In the latter case Mahout's performance will not be a problem and I can use Py4J.)
|
Django MVT design: Should I have all the code in models or views?
| 18,494,672 | 0 | 3 | 166 | 0 |
python,django
|
Experienced Django users seem to always err on the side of putting code in models. In part, that's because it's a lot easier to unit test models - they're usually pretty self-contained, whereas views touch both models and templates.
Beyond that, I would just ask yourself if the code pertains to the model itself or whether it's specific to the way it's being accessed and presented in a given view. I don't entirely understand your example (I think you're going to have to post some code if you want more specific help), but everything you mention sounds to me like it belongs in the model. That is, creating a new Operation sounds like it's an inherent part of what it means to do something called add_operation()!
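A minimal plain-Python sketch of that "fat model" approach (the class and method names mirror the question; in a real Django project these would subclass models.Model, and the save() calls noted in the comments would persist the rows):

```python
class Operation:
    """Stand-in for the Operation model from the question."""
    def __init__(self, account, amount):
        self.account = account
        self.amount = amount

class Account:
    """Stand-in for the Account model; the business rule lives here."""
    def __init__(self, amount=0):
        self.amount = amount
        self.operations = []

    def add_operation(self, amount):
        # The model owns the rule: creating an Operation *is* what
        # add_operation means, so it belongs on the model, not the view.
        op = Operation(self, amount)
        self.operations.append(op)
        self.amount += amount
        # In Django you would call op.save() and self.save() here,
        # or leave saving to the caller -- both are common choices.
        return op

account = Account()
account.add_operation(100)
account.add_operation(-30)
print(account.amount)  # 70
```

The view then reduces to a thin layer that parses the request, calls account.add_operation(...), and renders a response, which keeps the logic unit-testable without touching templates.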
| 0 | 0 | 0 | 0 |
2013-08-28T14:39:00.000
| 2 | 0 | false | 18,491,040 | 0 | 0 | 1 | 1 |
I'm a novice, so I'll try to explain this as clearly as I can.
I'm coding a simple application in Django to track cash operations, track amounts, etc.
So I have an Account model (with an amount field to track how much money is inside) and an Operation model (with an amount field as well).
I've created a model helper called Account.add_operation(amount). Here is my question:
Should the code that creates the new Operation live inside Account.add_operation(amount), or should I create it in the views?
And, should I call the save() method in the models (for example at the end of Account.add_operation() or must it be called in the views?)
What's the best approach, to have code inside the models or inside the views?
Thanks for your attention and your patience.
|
Forking Django DB connections
| 18,496,589 | 0 | 2 | 941 | 1 |
python,django,postgresql
|
libpq, which the psycopg2 driver used by Django is built on, does not support forking an active connection. I'm not sure whether any other driver does, but I would assume not - the protocol does not support multiplexing multiple sessions over the same connection.
The proper solution to your problem is to make sure each forked process uses its own database connection. The easiest way is usually to wait until after the fork to open the connection.
| 0 | 1 | 0 | 0 |
2013-08-28T15:42:00.000
| 2 | 0 | false | 18,492,467 | 0 | 0 | 1 | 2 |
I have an application which receives data over a TCP connection and writes it to a Postgres database. I then use a Django web front end to provide a GUI for this data. Since Django provides useful database access methods, my TCP receiver also uses the Django models to write to the database.
My issue is that I need to use a forking TCP server. Forking results in both child and parent processes sharing handles. I've read that Django does not support forking, and indeed the shared database connections are causing problems, e.g. these exceptions:
DatabaseError: SSL error: decryption failed or bad record mac
InterfaceError: connection already closed
What is the best solution to make the forked TCP server work?
Can I ensure the forked process uses its own database connection?
Should I be looking at other modules for writing to the postgres database?
|
Forking Django DB connections
| 18,531,322 | 1 | 2 | 941 | 1 |
python,django,postgresql
|
So one solution I found is to create a new thread to spawn from. Django opens a new connection per thread, so spawning from a new thread ensures you pass a new connection to the new process.
In retrospect I wish I'd used psycopg2 directly from the beginning rather than Django. Django is great for the web front end, but not so great for a standalone app where all I'm using it for is the model layer. Using psycopg2 would have given me greater control over when to close and open connections - not just because of the forking issue, but also because I found Django doesn't keep persistent Postgres connections, something we should have better control of in 1.6 when it is released, and which for my specific app should give a huge performance gain. Also, in this type of application I found Django intentionally leaks memory - something that can be fixed by setting DEBUG to False. Then again, I've written the app now :)
| 0 | 1 | 0 | 0 |
2013-08-28T15:42:00.000
| 2 | 0.099668 | false | 18,492,467 | 0 | 0 | 1 | 2 |
I have an application which receives data over a TCP connection and writes it to a Postgres database. I then use a Django web front end to provide a GUI for this data. Since Django provides useful database access methods, my TCP receiver also uses the Django models to write to the database.
My issue is that I need to use a forking TCP server. Forking results in both child and parent processes sharing handles. I've read that Django does not support forking, and indeed the shared database connections are causing problems, e.g. these exceptions:
DatabaseError: SSL error: decryption failed or bad record mac
InterfaceError: connection already closed
What is the best solution to make the forked TCP server work?
Can I ensure the forked process uses its own database connection?
Should I be looking at other modules for writing to the postgres database?
|
Can a Django application authenticate with MySQL using its linux user?
| 18,496,083 | 1 | 3 | 691 | 1 |
python,mysql,django
|
MySQL controls access to tables from its own list of users, so it's better to create MySQL users with permissions. You might want to create roles instead of users so you don't have as many to manage: an Admin, a read/write role, a read-only role, etc.
A Django application always runs as the web server user. You could change that to "impersonate" an Ubuntu user, but what if that user is deleted? Leave it as "www-data" and manage the database role that way.
| 0 | 0 | 0 | 0 |
2013-08-28T18:39:00.000
| 1 | 0.197375 | false | 18,495,773 | 0 | 0 | 1 | 1 |
The company I work for is starting development of a Django business application that will use MySQL as the database engine. I'm looking for a way to keep from having database credentials stored in a plain-text config file.
I'm coming from a Windows/IIS background where a vhost can impersonate an existing Windows/AD user, and then use those credentials to authenticate with MS SQL Server.
As an example: If the Django application is running with apache2+mod_python on an Ubuntu server, would it be sane to add a "www-data" user to MySQL and then let MySQL verify the credentials using its PAM module?
Hopefully some of that makes sense. Thanks in advance!
|
How to separate languages in Django Model / Database
| 18,507,451 | 2 | 1 | 82 | 0 |
python,database,django,model,internationalization
|
I have already built a project like this. I used one table with mixed languages, with a column specifying which language each row is in. I have had no problems with this implementation.
Another approach I considered is to dynamically create a per-language table named like content_ and fill it in. But that is tedious (you have to manage id dependencies with the other tables) and was not necessary for me.
Do you have a fixed number of languages?
| 0 | 0 | 0 | 0 |
2013-08-29T09:35:00.000
| 2 | 0.197375 | false | 18,507,246 | 0 | 0 | 1 | 1 |
In my Django project I am using i18n internationalization to translate all templates. Now, depending on the chosen language, I also want to separate the data that users submit to the database. I do not want mixed languages in one table. What is the best approach to solve this problem? I am developing with Django 1.5.2.
|
Is there a efficient way to override get() and put() method in an appengine entity to make it use memcache?
| 18,520,162 | 2 | 1 | 148 | 0 |
python,google-app-engine,optimization,memcached,entity
|
If you're not using NDB, use NDB. Your data won't change, just the way you interface with the datastore will. NDB entities are automatically cached so any requests by key are searched for in memcache first and then the datastore if the entity is not found.
NDB is the new standard anyways, so you might as well switch now instead of later.
| 0 | 1 | 0 | 0 |
2013-08-29T14:14:00.000
| 1 | 1.2 | true | 18,513,411 | 0 | 0 | 1 | 1 |
I have several App Engine entities that are frequently read in different places of my application and not so frequently updated.
I'd like to use memcache to reduce the number of datastore reads in my app, but I don't really want to update my code everywhere.
I was wondering if there is a decent way to override the get() method of my entity to check whether it is stored in memcache before doing a datastore read, and to have put() delete that memcache entry.
Does someone have a good solution for that?
|
How do I package a Scrapy script into a standalone application?
| 18,535,442 | 0 | 3 | 2,202 | 0 |
python,scrapy,desktop-application,py2exe,pyinstaller
|
The simplest way is to write a Python script for them, I guess...
If you are running a Windows server you can even schedule the command that you use (scrapy crawl yourspider) to run the spiders.
| 0 | 0 | 0 | 0 |
2013-08-30T12:09:00.000
| 3 | 0 | false | 18,532,596 | 0 | 0 | 1 | 1 |
I have a set of Scrapy spiders. They need to be run daily from a desktop application.
What is the simplest way (from the user's point of view) to install and run them on another Windows machine?
|
Python Integration with Java / C
| 18,609,443 | 0 | 1 | 251 | 0 |
java,c++,python,interface,integration
|
gcj (the GCC compiler for Java) supports Java 1.5 syntax (1.4 works better on it), and therefore some Java programs can be compiled to native code. gcjh (or javah) can produce headers for Java libraries, so you can write C extensions for Python. Of course, some libraries cannot be compiled with gcj (like Apache Commons Logging) because they use com.sun packages. It has not been updated since 2009.
There is another Java-to-native compiler, the commercial Excelsior JET (it is another Java VM; it supports Java 1.6 and soon Java 1.7). They said a linux-64bit version of their product will be available in 2013-Q4. But I have not tried it thoroughly, so I don't know whether headers for a compiled library can be produced.
There are a lot of packages on PyPI, like JCC (from the PyLucene creator) or Py4J, that can use the Oracle Java VM through JNI or sockets.
| 1 | 0 | 0 | 0 |
2013-08-30T17:09:00.000
| 3 | 1.2 | true | 18,538,170 | 0 | 0 | 1 | 2 |
I implemented most of my projects in C++ and Python. However, we recently got a new database interface for which I can only use Java to retrieve data.
I want to stay with my Python/C++ tools, but I am wondering if there is a good solution for integrating Java into my Python application. I have heard about Jython, but it is a different Python implementation, and I am concerned some of my C++ tools will not work well with it. JPype seems simple, but it hasn't been updated since 2011, so I am a little concerned about its compatibility with current Python/Java.
Is there a good solution to this? All opinions are welcome.
|
Python Integration with Java / C
| 18,538,315 | 0 | 1 | 251 | 0 |
java,c++,python,interface,integration
|
One way to do this is to write web services. A web service can accept an HTTP request, marshal it into a data request, pass that to a Java class that gets the data out, map the query results into a response of some kind, and send it back.
Any client that can send an HTTP request and accept and unmarshal the response can interact with that service. It need not know that the service is implemented in Java.
You pay the price of an extra network round trip to get the benefit of language interoperability.
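As a sketch of the idea - the service could be written in any language, so here is a hypothetical stand-in using only the Python standard library, with a client call showing that the caller only ever sees HTTP and JSON (the `/data` path and payload are made up):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class DataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Marshal a (hypothetical) data-layer result into a JSON response.
        payload = json.dumps({"rows": [1, 2, 3]}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

# Port 0 lets the OS pick a free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), DataHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client only speaks HTTP + JSON; it need not know whether the
# service is implemented in Java or (as here) Python.
with urllib.request.urlopen("http://127.0.0.1:%d/data" % port) as resp:
    result = json.loads(resp.read().decode("utf-8"))
server.shutdown()
print(result)  # {'rows': [1, 2, 3]}
```

In the question's setup, the Java side would run such a service over the new database interface, and the Python/C++ tools would consume it unchanged.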
| 1 | 0 | 0 | 0 |
2013-08-30T17:09:00.000
| 3 | 0 | false | 18,538,170 | 0 | 0 | 1 | 2 |
I implemented most of my projects in C++ and Python. However, we recently got a new database interface for which I can only use Java to retrieve data.
I want to stay with my Python/C++ tools, but I am wondering if there is a good solution for integrating Java into my Python application. I have heard about Jython, but it is a different Python implementation, and I am concerned some of my C++ tools will not work well with it. JPype seems simple, but it hasn't been updated since 2011, so I am a little concerned about its compatibility with current Python/Java.
Is there a good solution to this? All opinions are welcome.
|
Fastest text search in Python
| 18,541,075 | 0 | 0 | 1,008 | 0 |
python,sqlite,search,flask,typeahead
|
You are looking for "partial matches". I would load all possible names into an array and sort it. Then I would separately create a 26x26 lookup array giving the index of the first element in the list of names for each combination of the first two letters; you might also keep a dict (rather than an exhaustive array) for three-letter combinations, which would speed up your search further (because it limits it to a much smaller slice of the array).
In other words, you would not really be searching at all (for the two- and three-letter combos); you would just be returning a slice of the array. Once you have a match of more than three characters, you can search within that slice (it is probably not worth building tables beyond three characters).
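Because the names are sorted, the "return a slice" idea works for any prefix length with the standard bisect module, without even building the lookup tables - a small sketch (the sample names are made up):

```python
import bisect

# In the real app this would be the ~200k names loaded once at startup.
names = sorted(["Jack", "Jane", "Janet", "Joan", "John"])

def prefix_matches(prefix):
    # All entries >= prefix start at `lo`; sorted order keeps every
    # name sharing the prefix contiguous, so the matches end before
    # the first string that sorts after prefix + a maximal character.
    lo = bisect.bisect_left(names, prefix)
    hi = bisect.bisect_right(names, prefix + "\uffff")
    return names[lo:hi]

print(prefix_matches("Ja"))  # ['Jack', 'Jane', 'Janet']
```

Each lookup is O(log n) plus the size of the slice, which should be well under a millisecond for 200k names - the list just needs to live in the Flask process (loaded at module import time), which works as long as you run a single worker or accept one copy per worker.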
| 0 | 0 | 0 | 1 |
2013-08-30T20:28:00.000
| 2 | 0 | false | 18,540,987 | 0 | 0 | 1 | 1 |
I am developing my first Flask application (with SQLite as the database). It takes a single name from the user as a query and displays information about that name in response.
Everything works well, but I want to implement typeahead.js for a better user experience. typeahead.js sends requests to the server as the user types and suggests possible names in a dropdown. Right now I'm searching the database with select * from table_name where name like 'QUERY%'. But this is of course not as fast as I would like - it works, but with noticeable input lag (around a second or less, I suppose).
In order to speed things up, I looked at some memory-caching options (like Redis or memcached), but they are key-value stores and therefore, I think, do not fit my needs. A possible option would be to make a list of names (["Jane", "John", "Jack"], around 200k names total), load it into RAM, and do the searches there. But how do I load something into memory in Flask?
Anyway, my question is: what is the best way to make such a search (by the first few letters) faster (in Python/Flask)?
|
Efficient session variable server-side caching with Python+Flask
| 21,949,675 | 1 | 7 | 2,346 | 0 |
python,flask,web,session,caching
|
Your instinct is correct, it's probably not the way to do it.
Session data should only be ephemeral information that is not too troublesome to lose and recreate. For example, the user will just have to login again to restore it.
Configuration data or anything else that's necessary on the server and that must survive a logout is not part of the session and should be stored in a DB.
Now, if you really need to keep this information client-side easily and it's not too much of a problem if it's lost, then use a session cookie for the logged-in/out state and a permanent cookie with a long lifespan for the rest of the configuration information.
If the information is too large, then the only option I can think of is to store everything other than the logged-in/out state in a DB.
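If an in-process store is nonetheless attempted for the non-critical data, a lock is what addresses the question's multi-thread worry - a sketch (hypothetical API; this only works with a single worker process, which is one reason a DB or external store is the usual answer):

```python
import threading
import uuid

class SessionStore:
    """In-process, thread-safe session store. Valid for one worker
    process only: with several workers or servers, each would see a
    different copy of the data."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def create(self, initial=None):
        sid = uuid.uuid4().hex
        with self._lock:
            self._data[sid] = dict(initial or {})
        return sid

    def get(self, sid):
        with self._lock:
            # Return a copy so callers can't mutate shared state
            # outside the lock.
            return dict(self._data.get(sid, {}))

    def set(self, sid, key, value):
        with self._lock:
            self._data.setdefault(sid, {})[key] = value

store = SessionStore()
sid = store.create({"user_id": 42})
store.set(sid, "permissions", ["read", "write"])
print(store.get(sid)["permissions"])  # ['read', 'write']
```

The sid would be the only thing kept in the (cookie-based) Flask session, keeping the cookie tiny.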
| 0 | 0 | 0 | 0 |
2013-09-01T19:18:00.000
| 2 | 0.099668 | false | 18,562,006 | 0 | 0 | 1 | 1 |
Scenario:
Major web app w. Python+Flask
Flask login and Flask.session for basic session variables (user-id and session-id)
Flask.session and limitations? (Cookies)
Cookie based and basically persist only at the client side.
For some session variables that will be regularly read (e.g. user permissions, custom application config), it feels awkward to carry all that info around in a cookie with every single page request and response.
Database is too much?
Since the session can be identified on the server side by introducing a unique session id at login, some server-side session-variable management can be used. But reading this data from a database on the server side also feels like unnecessary overhead.
Question
What is the most efficient way to handle the session variables at the server side?
Perhaps that could be a memory-based solution, but I am worried that different Flask requests could be executed in different threads that would not share the memory-stored session data, or could conflict when reading and writing simultaneously.
I am looking for advice and best practice for planning the basic level architecture.
|
How to execute a function when uwsgi is stopped
| 18,570,411 | 2 | 0 | 890 | 0 |
python,flask,uwsgi
|
You can use the Python atexit module or the uwsgi.atexit hook.
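A minimal sketch of the atexit route (the uwsgi.atexit assignment is commented out because the uwsgi module only exists when the process runs under uWSGI):

```python
import atexit

cleanup_log = []

def on_shutdown():
    # Close connections, flush buffers, etc. atexit runs this when
    # the interpreter -- i.e. the uWSGI worker -- exits normally.
    cleanup_log.append("resources released")

atexit.register(on_shutdown)

# Under uWSGI you could instead (or additionally) assign the hook:
#   import uwsgi
#   uwsgi.atexit = on_shutdown
```

Note that neither hook fires if the worker is killed with SIGKILL, so critical cleanup should not rely on it alone.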
| 0 | 0 | 0 | 0 |
2013-09-02T08:49:00.000
| 1 | 1.2 | true | 18,569,056 | 0 | 0 | 1 | 1 |
I am using Flask and uWSGI. At some point I need to know when uWSGI is stopped or when my (Flask) app object is destroyed, and execute a function when that happens.
Any ideas?
Please
|
Startup building and mokup
| 18,577,154 | 2 | 0 | 70 | 0 |
php,python,mysql,mongodb,startup
|
The idea that PHP/MySQL is easier or simpler than, say, Python/MongoDB doesn't really hold up.
If you compare, for example, Django (the most popular Python web framework) with Symfony (PHP), you will find that they are almost identical in terms of features and architecture (Symfony is actually slightly more complex, but also has more very advanced features).
For mockups, if I were you, I would use solely HTML/jQuery/CSS.
Build your pages just as you would like to have them in your beta version, and use jQuery to load sample data written in JSON.
That's all you need. You can even find a WYSIWYG application to speed up the process.
Later on, you can build the back-end application using either Python or PHP; it won't matter.
The integration process will be identical: create your models, create the controllers, and use the HTML you already have as templates.
Building your app in PHP/MySQL and then converting it to Python/MongoDB would make you rewrite almost all the code, simply because Python is so different from PHP (easier, I would say, but that's just my opinion) and because MongoDB is not a relational database, meaning you would also have to partially rethink your architecture.
| 0 | 0 | 0 | 1 |
2013-09-02T15:44:00.000
| 1 | 0.379949 | false | 18,576,838 | 0 | 0 | 1 | 1 |
About two years ago I discovered my interest in code (hardware/systems/web), and now I've found a project which motivates me a lot (indeed, it takes all my free time).
Starting from this point, and because the project could soon switch from a free-time project to a daily job, I'm currently developing a mockup of it based on PHP/MySQL and jQuery.
Even though I'm a true Python/MongoDB lover and a systems engineer, I preferred those technologies for my mockup because of how simple it is to build a complete, functional private stack at home with them.
I'm pretty far along on my mockup and it seems to work as I want it to.
Now I'm wondering: from your point of view, would it have been better to build my mockup directly with the target technologies (Python/MongoDB) rather than using the easy PHP/MySQL couple?
Obviously, because I plan to make this project my daily job, I had to have something visually functional to be able to raise a little bit of money, and for me an easier stack makes that easier; but I would like your feedback on this kind of question.
|
Django bootstrap/middleware/enter-exit
| 18,594,433 | 0 | 0 | 216 | 0 |
python,django,request
|
Middleware is most certainly not thread-safe. You should not store anything per-request on the middleware object or in the global namespace.
The usual way to do this sort of thing is to annotate it onto the request object. Middleware and views have access to the request, but to get at it anywhere else (e.g. in the model) you'll need to pass it around.
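A sketch of that annotate-the-request pattern, using the old-style process_request/process_response middleware hooks from Django 1.x (the SimpleNamespace and dict stand in for Django's real request/response objects, and names like audit are made up):

```python
from types import SimpleNamespace

class RequestAuditMiddleware:
    """Per-request state lives on the request object, never on the
    middleware instance: Django creates ONE instance shared by all
    threads, but each request object belongs to a single request."""
    def process_request(self, request):
        request.audit = {"events": []}  # setup

    def process_response(self, request, response):
        # Teardown: dump the collected data (e.g. to the database)
        # before the response goes out.
        response["X-Audit-Events"] = len(request.audit["events"])
        return response

# Simulate one request/response cycle with stand-in objects:
mw = RequestAuditMiddleware()
request = SimpleNamespace()
mw.process_request(request)
request.audit["events"].append("created Operation")  # view code
response = {}
mw.process_response(request, response)
print(response["X-Audit-Events"])  # 1
```

Views get the request handed to them, so no global registry keyed by session id is needed; only code that never sees the request (models, helpers) requires passing the state along explicitly.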
| 0 | 0 | 0 | 0 |
2013-09-03T13:58:00.000
| 2 | 0 | false | 18,594,248 | 0 | 0 | 1 | 1 |
I have the following problem: I want to add to Django some kind of setup/teardown for each request. For example, at the beginning of each user request I want to start data collection, and at the end of the request dump all the data to the database (1).
What comes to mind right now: at the start of the middleware, instantiate an object (like a singleton) that every other part of the code can import and use; then, before returning the response, the same middleware scraps the object. My only concern is thread safety, so maybe create a global dict and register keys built from a hash of the URL plus the session id - or maybe the request object's id (the internal Python object id; maybe that is a good way to go?). At the end of the request the key would be removed from the dict.
Any recommendations, thoughts, ideas?
(1) Please do not ask me why I cannot access the DB directly or anything like that. This is only an example. I'm looking for a general idea for something like enter and exit, but request-response-wise, that can be imported anywhere in the code and safely used.
|
ImportError: Could not import settings 'company.foo.settings' (Is it on sys.path?): No module named foo.settings
| 18,644,769 | 0 | 2 | 1,119 | 0 |
python,django,namespaces,setuptools
|
The only way I found for now is the following:
Do not use the namespace_packages parameter in setuptools.setup()
Instead, explicitly define "namespace" packages with this in the __init__.py file:
__import__('pkg_resources').declare_namespace(__name__)
I have not done a lot of testings as to if this will work if I don't install my libraries in the same order, but at least, I am able to import my django settings (and everything else).
EDIT
This ended up not being a viable solutions because if librairies are uninstalled and reinstalled, they will delete more than they should, so ending up loosing modules.
I ended up using namespace_packages in setuptools.setup() and use a different first level "namespace" for the running django project.
EDIT 2
Scrap all this, this namespace thing seems to be a good idea, but ended up just being a nightmare with django. So i reverted everything back to non-namespace code. I'm not happy that I have to do this, but that's the price to work with django.
| 0 | 0 | 0 | 0 |
2013-09-03T16:16:00.000
| 3 | 1.2 | true | 18,596,971 | 0 | 0 | 1 | 1 |
I was asked to split a big project into some reusable libraries (packages). So the idea was to do this:
company-django-shared
company-django-shared-dev
company-python-shared
company-python-shared-dev
These are installable with setuptools and namespaces:
company
company.packagename
company.packagename.tests
company.util
etc...
All this works fine. I can start a shell and do any of the import i need. The problem arrives when I now want to use this in a django project. My settings are in:
company.foo.settings
At this point, since setuptools installed some packages, when I try to
$ ./manage.py shell
I get the error::
ImportError: Could not import settings 'company.foo.settings' (Is it on sys.path?): No module named foo.settings
I really can't figure out how to use namespaces within Django apps. If I fire up a shell and do:
import company
company.__path__
The installed paths are found, but not the current directory. What am I missing?
EDIT
I would like to point out that the problem is that Python cannot find any package under company because the setuptools-installed packages define company as a namespace.
EDIT 2
Django is just unhappy with namespaces. It seems there are no viable solutions.
|
Pip doesn't install packages to activated virtualenv, ignores requirements.txt
| 25,869,799 | 3 | 11 | 6,391 | 0 |
python,django,git,virtualenv,virtualenvwrapper
|
Struggled with some variation of this issue not long ago; it ended up being my cluttered .bash_profile file.
Make sure you don't have anything that might mess up your virtualenv inside your .bash_profile/.bashrc, such as $VIRTUAL_ENV or $PYTHONHOME or $PYTHONPATH environment variables.
| 0 | 1 | 0 | 0 |
2013-09-03T23:57:00.000
| 3 | 0.197375 | false | 18,603,302 | 1 | 0 | 1 | 2 |
I am attempting to set up a development environment on my new dev machine at home. I have just installed Ubuntu, and now I am attempting to clone a remote repo from our web server and install its dependencies so I can begin work.
So far I have manually installed virtualenv and virtualenvwrapper from PyPI and edited my .bashrc appropriately to source my virtualenvs when I start my terminal. I then cloned my repo to ~/projects/project-name/websitename.com. Then I used virtualenvwrapper to mkvirtualenv env-name from ~/projects/project-name/websitename.com. This reflects exactly the file structure/setup of the web server I am cloning from. So far so good.
I logged into the dev server, activated the virtualenv there, and used pip freeze -l > req.txt to render a dependencies list, which I scp'd to my local machine. I activated the virtualenv on my local machine, navigated to ~/projects/project-name/websitename.com, and executed pip install -r path-to-req.txt, and it ran through all of the dependencies as if nothing was wrong. However, when I attempt manage.py syncdb I get an error about not finding core Django packages. What the hell? So I figure somehow Django failed to install; I run pip install Django==1.5.1 and it completes successfully. I go to set up my site again and get another error about no module named django_extensions. Okay, what the hell - I just installed all of these packages with pip?!
So I pip freeze -l > test.txt and cat test.txt; what does it list? Django==1.5.1, the one package I just manually installed. Why isn't pip installing my dependencies from my specified list into my virtualenv? What am I messing up here?
-EDIT-------------
Running which pip gives me the path to the pip inside my virtualenv
I have only 1 virtualenv and it is activated
|
Pip doesn't install packages to activated virtualenv, ignores requirements.txt
| 32,925,897 | 2 | 11 | 6,391 | 0 |
python,django,git,virtualenv,virtualenvwrapper
|
I know this is an old post, but I just encountered a similar problem. In my case the cause was that I was running the pip install command using sudo. This made the command run globally and install the packages into the global Python path.
Hope that helps somebody.
| 0 | 1 | 0 | 0 |
2013-09-03T23:57:00.000
| 3 | 0.132549 | false | 18,603,302 | 1 | 0 | 1 | 2 |
I am attempting to set up a development environment on my new dev machine at home. I have just installed Ubuntu, and now I am attempting to clone a remote repo from our web server and install its dependencies so I can begin work.
So far I have manually installed virtualenv and virtualenvwrapper from PyPI and edited my .bashrc appropriately to source my virtualenvs when I start my terminal. I then cloned my repo to ~/projects/project-name/websitename.com. Then I used virtualenvwrapper to mkvirtualenv env-name from ~/projects/project-name/websitename.com. This reflects exactly the file structure/setup of the web server I am cloning from. So far so good.
I logged into the dev server, activated the virtualenv there, and used pip freeze -l > req.txt to render a dependencies list, which I scp'd to my local machine. I activated the virtualenv on my local machine, navigated to ~/projects/project-name/websitename.com, and executed pip install -r path-to-req.txt, and it ran through all of the dependencies as if nothing was wrong. However, when I attempt manage.py syncdb I get an error about not finding core Django packages. What the hell? So I figure somehow Django failed to install; I run pip install Django==1.5.1 and it completes successfully. I go to set up my site again and get another error about no module named django_extensions. Okay, what the hell - I just installed all of these packages with pip?!
So I pip freeze -l > test.txt and cat test.txt; what does it list? Django==1.5.1, the one package I just manually installed. Why isn't pip installing my dependencies from my specified list into my virtualenv? What am I messing up here?
-EDIT-------------
Running which pip gives me the path to the pip inside my virtualenv
I have only 1 virtualenv and it is activated
|
How to deserialize xml created by to_xml() in google appengine
| 20,461,649 | 0 | 4 | 190 | 0 |
python,xml,google-app-engine
|
Just to clarify, I'm going to assume that you're asking about the Model.to_xml() method, and that by efficient you mean a single method you can call that will give you back a model object.
As you noted, there is no such method on the Model class in the datastore API. I think the intention is that to_xml() exists to make a model easily exportable to another application, such as a JavaScript client, or for importing into another database or storage mechanism, similar to using the remote API.
It should be possible to create a function or static method on a specific Model class which would generate a new model of that type from parsed XML. You will then most likely want to perform a get_or_insert() to write the resulting object.
If you're looking for a native Python-to-Python serialization method, you could consider pickle.
| 0 | 1 | 0 | 0 |
2013-09-04T17:33:00.000
| 1 | 0 | false | 18,620,311 | 0 | 0 | 1 | 1 |
In Google App Engine, I can serialize an object by calling its to_xml() method.
There doesn't appear to be an equivalent from_xml() method to deserialize the XML.
Is there an efficient way to deserialize back to an object?
|
GAE Python Server initiating Angular Routes from Responses
| 18,626,669 | 0 | 0 | 298 | 0 |
python,google-app-engine,angularjs,routes,webapp2
|
I'm working on the same stack. The way we do it is that the main pages (index, login, signup) are configured as regular individual pages, where we use Angular without routing. Any page that anonymous users will access is one of these pages, which work over server-side routing. But once the user's login is successful, we serve a page which starts serving the other views through client-side routing.
| 0 | 1 | 0 | 0 |
2013-09-04T23:31:00.000
| 2 | 0 | false | 18,625,446 | 0 | 0 | 1 | 2 |
I'm currently building a web application using AngularJS, Webapp2, and the Python Google App Engine environment. This app is supposed to have all the features of modern social networks (users, posts, likes, comments). I want the page hierarchy to look like this - the main pages come from the server and the sub-pages are supposed to be Angular routes:
Index
Learn More
Sign up
Log in
Feed Page
Popular Feed
Following Feed
Profile
Interactions
Posts
Settings
Profile
Account
The problem is that when a user wants to sign up, I want them to be able to go to /signup and get the index page with the signup route loaded. How can I get the server to preload an Angular route in the response?
|
GAE Python Server initiating Angular Routes from Responses
| 21,590,989 | 0 | 0 | 298 | 0 |
python,google-app-engine,angularjs,routes,webapp2
|
Make both GAE and Angular understand your routes. You will need to define them for one, why not both?
You just have to organise your markup and structure so it can support complete page loading and ajax loading. For example, initial load is done on any route by GAE, then Angular can take over, loading each page "content" as it goes.
This has the additional advantage of public pages being crawler friendly while real users get ajax loading (which should reduce bandwidth once you scale).
You may need to load user state in via the server, and/or force a page reload on log in or out to do so.
I have done the above on a few apps, and it works well.
| 0 | 1 | 0 | 0 |
2013-09-04T23:31:00.000
| 2 | 0 | false | 18,625,446 | 0 | 0 | 1 | 2 |
I'm currently building a web application using AngularJS, Webapp2, and the Python Google App Engine environment. This app is supposed to have all the features of modern social networks (users, posts, likes, comments). I want the page hierarchy to look like this, the main pages are from the server and the sub pages are supposed to be angular routes:
Index
Learn More
Sign up
Log in
Feed Page
Popular Feed
Following Feed
Profile
Interactions
Posts
Settings
Profile
Account
The problem is that when a user wants to signup I want them to be able to go to /signup and get the index page with the signup route loaded. How can I get the server to preload an angular route from the response
|
Django custom user VS user profile
| 18,625,653 | 1 | 0 | 907 | 0 |
python,django,django-authentication,django-registration,django-1.5
|
When you have it all in one table, then database access is faster. With the old way you had to join on the auxiliary table to get all the information of the user.
Usually when you see a One-to-One relation, it would be better just to merge them in one table.
But the new custom User model also solves another problem: what attributes should a User have? What attributes are essential for your application? Is an email required? Should the email also be the username with which a user logs in?
You couldn't do these things before this feature was introduced.
Regarding your question about where to put additional user information like "hobbies" and such, it really depends on how often you will query/need these attributes. Are they gonna be only on the user's profile page? Well then you could have them in a separate table and there wouldn't be much problem or performance hit. Otherwise prefer to store them on the same table as the User.
| 0 | 0 | 0 | 0 |
2013-09-04T23:47:00.000
| 1 | 1.2 | true | 18,625,601 | 0 | 0 | 1 | 1 |
I'm currently using Django 1.5.1 and using a custom user as described in the official documentation. I realized everything is stored under one table, the auth_user one.
My question is, why is it better to have everything in one big table, instead of having 2 tables like it used to be prior to 1.5 by using a user_profile table for all additional data? It seems smarter the way it used to be, in case we want to add 20 new fields for information about the user, it is weird to have everything in auth_user.
In my case, for now I have class MyUser(AbstractUser) with 2 additional fields gender and date_of_birth, so it's all good with this, but now I would like to have many other information (text fields) like "favorite movies", "favorite books", "hobbies", "5 things I could not live without", etc. etc., to have way more information about my user. So I was just wondering if I should put that under MyUser class, or should I define a UserProfile one? And why?
Thanks!
|
Get Shape from selected Edges
| 18,669,628 | 0 | 0 | 1,151 | 0 |
python,maya,pymel
|
We can find the shape from the edge by simply listing its immediate connection with the use of node()
PYMEL:
pm.PyNode(selection[0].node().getParent())
No need to split the string, or re map the array.
| 0 | 0 | 0 | 0 |
2013-09-05T12:13:00.000
| 2 | 1.2 | true | 18,636,019 | 0 | 0 | 1 | 1 |
Im using Maya to perform a certain task on selected edges.
Let's say I save these edges like this:
edges = pm.filterExpand(sm=32)
From here, I can just select the first edge, and get the object by splitting the unicode string:
'pSphere1.e[274]'
Here's how I split it, and it gave me pSphere1, however calling getShape() on that still doesn't work because it's a unicode object.
object = edges[0].split('.')[0].getShape()
Is there a better way to do this?
|
Django 1.3 authentication
| 19,995,173 | 0 | 0 | 31 | 0 |
django,python-2.7,django-1.3
|
I will try to answer why this was happening for us; I worked on it a long time ago, so I will try to recollect as much as I can.
We were allowing admins to modify the login id of a user. This would go and change the email id in the partial digest table. A lot of times they would use this to disable an account by changing the login id of that user. Now what would happen is that this user, who's not able to log in since his id was changed, did a trial registration with us using the same email id/password as before, and hence the partial digest table now has two entries.
| 0 | 0 | 0 | 0 |
2013-09-05T12:20:00.000
| 1 | 0 | false | 18,636,159 | 0 | 0 | 1 | 1 |
We have digest authentication in our application. For a few users, the django_digest_partialdigest table has entries where the user_id is different from the one in the auth_user table, but the "login" column has the same username.
I am not able to find out what scenario would lead to this kind of entry in the DB.
We allow signup, account activation, and password resetting.
|
JavaScript client failed to send WebSocket frame to WebSocket server on Amazon EC2
| 20,952,263 | 1 | 0 | 1,419 | 0 |
python,amazon-web-services,amazon-ec2,websocket
|
This could be caused by a Chrome Extension.
| 0 | 1 | 1 | 0 |
2013-09-05T22:46:00.000
| 1 | 0.197375 | false | 18,647,154 | 0 | 0 | 1 | 1 |
I have a web socket server running on an Ubuntu 12.04 EC2 Instance. My web socket server is written in Python, I am using Autobahn WebSockets.
I have a JavaScript client that uses WebRTC to capture webcam frames and send them to the web socket server.
My webserver (where the JavaScript is hosted) is not deployed on EC2. The Python web socket server only does video frame processing and is running over TCP on port 9000.
My Problem:
The JS client can connect to the web socket, and the server receives and processes the webcam frames. However, after 5 or 6 minutes the client stops sending the frames, displaying the following message:
WebSocket connection to 'ws://x.x.x.x:9000/' failed: Failed to send
WebSocket frame.
When I print the error data I got "undefined".
Of course, this never happens when I run the server on my local testing environment.
|
How to properly decouple Django from an AJAX application?
| 18,650,699 | 1 | 1 | 275 | 0 |
javascript,python,django,client-server,tastypie
|
A mobile client doesn't care if the Javascript comes from Django or any other web server. So go ahead and put all your JavaScript and static HTML on another server.
If you want your mobile app to see if the user is logged in, it should make an AJAX call to your Django backend (where the request is authenticated). The data returned should indicate if the session is active (user is logged in).
Another AJAX call can perform the Django logout function.
| 0 | 0 | 0 | 0 |
2013-09-06T05:30:00.000
| 1 | 1.2 | true | 18,650,551 | 0 | 0 | 1 | 1 |
I'm using TastyPie and Django to build out my backend for an application that will have browser and mobile (native iOS) clients.
I have been through the TastyPie and Django docs, can authenticate successfully either using the TastyPie resources I set up, or using Djangos built in views. I see a lot of examples on including the CSRF token on the page and grabbing it with your JavaScript, and that works, but I don't understand now to actually determine whether a user is logged in on initial page load (from JavaScript).
Example:
If I want to serve static HTML from a separate, fast web server, and cache my application JavaScript, and only interact with Django through TastyPie views, how do I determine if the user is logged in (and know to render a login form or the app views using JavaScript), and after logout, is there any session information I need to remove from the client browser?
If I were to serve up HTML through Django's template engine, I could render the login form through there appropriately, but that seems not ideal if I want to truly decouple my JavaScript app from Django (and behave like a mobile client).
Edit: I am using Backbone.js, but I don't think that should matter.
UPDATE:
I think I figured it out reading through Django's CSRF documentation again.
If your view is not rendering a template containing the csrf_token template tag, Django might not set the CSRF token cookie. This is common in cases where forms are dynamically added to the page. To address this case, Django provides a view decorator which forces setting of the cookie: ensure_csrf_cookie().
If I do not want to render Django templates, this reads like I can still use the cookie and pull that into my Backbone or jQuery AJAX methods. I'm not sure if TastyPie ensures the cookie will be sent or how to tie into it.
If I use AJAX to logout, will the cookie automatically be removed or does it become invalid? Are these CSRF tokens unique to each user session? I'll have to test some things tomorrow with it. Is it possible to use Django decorators on TastyPie views?
|
Test an app in django and connect to local database
| 18,655,003 | 0 | 0 | 129 | 0 |
python,django
|
Can't you specify the details of your development database in settings.py? This would connect you to the existing database and remove the need to create a new test instance.
| 0 | 0 | 0 | 0 |
2013-09-06T06:49:00.000
| 1 | 0 | false | 18,651,617 | 0 | 0 | 1 | 1 |
How can I test my app in django, so that it connects to my local database and does NOT require me to create a test database?
|
Light Weight Python Web FrameWork
| 18,658,647 | 0 | 0 | 379 | 0 |
python,html
|
Flask. Just use Flask.
Honestly you'll finish your project in 20 minutes.
| 0 | 0 | 0 | 0 |
2013-09-06T12:21:00.000
| 1 | 1.2 | true | 18,657,777 | 0 | 0 | 1 | 1 |
I have a database whose data can be fetched using a MySQL query and sent as the response to an AJAX jQuery request. Which Python framework would be best suited to this primary need?
Thanks in advance.
|
Convert Web Friendly Django app to a Mobile friendly django app
| 29,824,864 | 1 | 4 | 4,667 | 0 |
python,django,user-interface,packages
|
Use Bootstrap. I developed my Django app using Bootstrap too, and it works fine on all smartphone, tablet and desktop devices.
| 0 | 0 | 0 | 0 |
2013-09-06T20:40:00.000
| 2 | 0.099668 | false | 18,666,243 | 0 | 0 | 1 | 1 |
I've made a desktop-friendly django app and would prefer not to have to rewrite all of the html/css to allow proper view on mobile browsers.
I'm on django 1.5 and python 2.7
Is there a package or library or quicker way to efficiently create a mobile version of my django (web) app instead of having to re-write a whole new template with html/css ?
Thank you!
|
Python: Parsing out publication dates from html pages
| 18,669,461 | 1 | 1 | 506 | 0 |
python,regex
|
I don't see why a collection of regexes wouldn't work. There are a variety of different formats, but there are really only a handful that are most common. With, say, a dozen easy regexes, you could probably scrub 90% of the dates out there.
Another (partial) approach would be to scan for month names and abbreviations, and then scan the surrounding text for days and year.
For numeric-only dates, the hardest part would be figuring out whether it's month then date or date then month. It will be easy if the date part is greater than 12, but otherwise there's not really any way of knowing.
You could also look for <time> elements with the datetime attribute, which is supposed to follow an unambiguous format (though not necessarily consistent).
Bottom line, I don't think there is any one way to find all the dates in a document, unless you know they all follow the same format, which obviously isn't going to be the case in general. To have a good shot of finding them, you'll just need to employ several different strategies.
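The multi-strategy approach above can be sketched with a handful of regexes; the patterns below are illustrative, not exhaustive:

```python
import re

# A few common publication-date shapes; each pattern is one "strategy".
MONTHS = "jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec"
PATTERNS = [
    # "September 13, 2013" or "Sep. 13, 2013"
    re.compile(r"(%s)[a-z]*\.?\s+\d{1,2},\s+\d{4}" % MONTHS, re.I),
    # "13 September 2013"
    re.compile(r"\d{1,2}\s+(%s)[a-z]*\.?\s+\d{4}" % MONTHS, re.I),
    # "09/13/2013" or "13/09/13" (month/day order stays ambiguous)
    re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    # ISO-style "2013-09-13"
    re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
]

def find_dates(html_text):
    """Collect every substring matching any of the known date shapes."""
    hits = []
    for pattern in PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(html_text))
    return hits

print(find_dates("Published September 13, 2013 and updated 2013-09-14"))
```

A real scraper would still need to rank the hits (e.g. prefer dates near the headline or in a `<time>` element) since body text can contain many dates.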
| 0 | 0 | 0 | 0 |
2013-09-07T03:25:00.000
| 2 | 0.099668 | false | 18,669,374 | 1 | 0 | 1 | 1 |
For html pages, and news-related pages especially, it would be very helpful and incredibly useful if there were a mechanism for parsing out the publication dates.
Unfortunately, there is not one set regex/pattern for dates on the internet. CNN may publish it like MONTH DD, YYYY and HuffingtonPost may publish as MM/DD/YY, and so on.
Does anyone have any strategies which are better than just pure regex parsing for extracting publication dates out of html pages?
Thank you.
|
Persistence of a large number of objects
| 18,674,706 | 0 | 0 | 95 | 1 |
python,persistence
|
Martijn's suggestion could be one of the alternatives.
You may consider storing the pickled objects directly in a SQLite database, which you can still manage from the Python standard library.
Use a StringIO object to convert between the database column and the Python object.
You didn't mention the size of each object you are pickling now. I guess it should stay well within SQLite's limit.
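A minimal sketch of the SQLite-plus-pickle idea, using an in-memory database for illustration (the table and column names are made up):

```python
import pickle
import sqlite3

# Keep each scraped-page object pickled in its own row, keyed by URL,
# so updating one day's entries doesn't re-pickle the whole collection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (url TEXT PRIMARY KEY, blob BLOB)")

def save(url, obj):
    data = pickle.dumps(obj, pickle.HIGHEST_PROTOCOL)
    conn.execute(
        "INSERT OR REPLACE INTO pages VALUES (?, ?)", (url, sqlite3.Binary(data))
    )

def load(url):
    row = conn.execute("SELECT blob FROM pages WHERE url = ?", (url,)).fetchone()
    return pickle.loads(row[0]) if row else None

save("http://example.com/a", {"title": "A", "price": 9.99})
print(load("http://example.com/a"))
```

Unlike one giant pickle of the whole list, each read/write touches only the rows involved, which keeps update time roughly constant as the collection grows.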
| 0 | 0 | 0 | 0 |
2013-09-07T15:02:00.000
| 1 | 0 | false | 18,674,630 | 0 | 0 | 1 | 1 |
I have some code that I am working on that scrapes some data from a website, and then extracts certain key information from that website and stores it in an object. I create a couple hundred of these objects each day, each from unique url's. This is working quite well, however, I'm inexperienced in what options are available to me in Python for persistence and what would be best suited for my needs.
Currently I am using pickle. To do so, I am keeping all of these webpage objects and appending them to a list as new ones are created, then saving that list to a pickle (and reloading it whenever the list is to be updated). However, as I'm dealing with data on the order of some GB, I'm finding pickle to be somewhat slow. It's not unworkable, but I'm wondering if there is a better-suited alternative. I don't really want to break apart the structure of my objects and store them in a SQL-type database, as it's important to me to keep the methods and the data together as a single object.
Shelve is one option I've been looking into, as my impression is then that I wouldn't have to unpickle and pickle all the old entries (just the most recent day that needs to be updated), but am unsure if this is how shelve works, and how fast it is.
So to avoid rambling on, my question is: what is the preferred persistence method for storing a large number of objects (all of the same type), to keep read/write speed up as the collection grows?
|
openERP restrict users
| 18,828,239 | 0 | 0 | 404 | 0 |
python,openerp,restrict
|
For your cases:
You can try using Many2Many relation so as to choose the record of users.
Use Groups to obtain your desired result.
For example:
<field name="user_id" groups="your_group" />
By doing this you can provide what fields to be visible to which user based on access rights provided in your GROUPS.
| 0 | 0 | 0 | 0 |
2013-09-09T14:48:00.000
| 1 | 0 | false | 18,700,969 | 0 | 0 | 1 | 1 |
I'm facing a complex problem, at least for me.
I have a form called "Task", which contains all the normal info, and I would like to add users to that Task.
If I want to add multiple users to that task, I should use the one2many widget, am I right? If so, is it possible to display a dropdown or something to add users who are already registered? With the default one2many, I have to register the users (like a form) and only then can I add them... but if they are already in the table, a dropdown menu or something should appear instead.
After the task is created, each user should only see the tasks with their name; only the administrator can view them all. I think that to achieve this I need to create rules, right? If so, do I need to create them in code, or could I use the OpenERP rule menu? And will this be enough: [('user_id', '=', user.id)]? Is the user_id column created on the "Task" table?
Do I not need an auxiliary table that would contain something like id, task_id, user_id, from which I could get which tasks belong to which users?
Thanks guys
|
BeautifulSoup installation or alternative without easy_install
| 18,701,653 | 1 | 0 | 4,503 | 0 |
python,beautifulsoup,windows-7-x64
|
You just need to download it and add it to your Python search path directly. (Which is in sys.path, if you need to check it.)
From the documentation:
Beautiful Soup is licensed under the MIT license, so you can also download the tarball, drop the bs4/ directory into almost any Python application (or into your library path) and start using it immediately. (If you want to do this under Python 3, you will need to manually convert the code using 2to3.)
| 0 | 0 | 0 | 0 |
2013-09-09T15:11:00.000
| 3 | 0.066568 | false | 18,701,464 | 1 | 0 | 1 | 1 |
I wanted to write a program to scrape a website from python. Since there is no built-in possibility to do so, I decided to give the BeautifulSoup module a try.
Unfortunately I encountered some problems using pip and ez_install, since I use Windows 7 64-bit and Python 3.3.
Is there a way to get the BeautifulSoup module into my Python 3.3 installation on Windows 7 64-bit without ez_install or easy_install, since I have too much trouble with those, or is there an alternative module which can be easily installed?
|
Run unit tests for two Django applications in PyCharm
| 18,716,562 | 0 | 1 | 701 | 0 |
python,django,pycharm
|
OK, I've found solution.
If you want to launch many test suites with one test launcher in PyCharm, you should add the names of your applications, separated by a space, to the Target field of your Django tests Run/Debug configuration.
| 0 | 0 | 0 | 0 |
2013-09-09T19:02:00.000
| 1 | 1.2 | true | 18,705,107 | 0 | 0 | 1 | 1 |
I've got a Django project with two applications that have unit tests (django.test.TestCase). In PyCharm I have created two Django test launchers (one for each app). In this configuration everything works correctly. Now I want to create one test launcher which will be able to start the unit tests for both applications at once.
Is it possible?
|
How to create text box in plone site
| 18,714,130 | 5 | 0 | 111 | 0 |
python,html,plone
|
You can create a static text portlet in that context you need it: folder, page.
| 1 | 0 | 0 | 0 |
2013-09-10T06:12:00.000
| 1 | 1.2 | true | 18,711,815 | 0 | 0 | 1 | 1 |
Hi, I want to add a text box and label to my Plone site,
but the Plone site does not display the text box.
How can I create a text box in a Plone site?
Thanks!
|
How to print report in EXCEL format (XLS)
| 18,716,823 | 0 | 1 | 2,902 | 1 |
python,openerp
|
Python libraries are available to export data in PDF and Excel formats.
For Excel you can use:
1) xlwt
2) ElementTree
For PDF generation you can use:
1) PyPDF
2) ReportLab
| 0 | 0 | 0 | 0 |
2013-09-10T10:34:00.000
| 3 | 0 | false | 18,716,623 | 0 | 0 | 1 | 1 |
I'm a beginner with OpenERP 7. I just want to know the details of how to generate a report in XLS format in OpenERP 7.
The formats supported by OpenERP report types are: pdf, odt, raw, sxw, etc.
Is there any direct feature available in OpenERP 7 for printing a report in Excel (XLS) format?
|
How can I deploy a Flask application on Koding.com
| 18,719,851 | 5 | 1 | 626 | 0 |
python,flask
|
Use 0.0.0.0 as the source IP. Also remember that your VM will be turned off 15 minutes after logout.
| 0 | 0 | 0 | 0 |
2013-09-10T11:32:00.000
| 1 | 1.2 | true | 18,717,844 | 0 | 0 | 1 | 1 |
I am working on a Flask app and want to deploy it on Koding so that my other team members can also view/edit it. I cloned the git repository inside a VM (on Koding.com), installed pip, installed dependencies, but when I start the Flask server, it displays that the server has started and is running on 127.0.0.1:5000.
But when I go to :5000, it says VM is not active.
NOTE : normally works and displays the files under VM's "Web" folder.
|
Python/Cherrypy: set cookie on redirect
| 18,746,486 | 2 | 2 | 816 | 0 |
python,redirect,cookies,cherrypy
|
To answer my own question: It would appear that if I add cherrypy.response.cookie[<tag>]['path'] = '/' after setting the cookie value, it works as desired.
| 0 | 0 | 1 | 0 |
2013-09-11T16:13:00.000
| 1 | 1.2 | true | 18,746,272 | 0 | 0 | 1 | 1 |
I am trying to figure out how to set a cookie just before a redirect from Cherrypy. My situation is this:
when a user logs in, I would like to set a cookie with the user's username for use in
client-side code (specifically, inserting the user's name into each
page to show who is currently logged in).
The way my login system works is that after a successful login, the user is redirected to whatever page they were trying to access before logging in, or the default page. Technically they are redirected to a different domain, since the login page is secure while the rest of the site is not, but it is all on the same site/hostname. Redirection is accomplished by raising a cherrypy.HTTPRedirect(). I would like to set the cookie either just before or just after the redirect, but when I tried setting cherrypy.response.cookie[<tag>]=<value> before the redirect, it does nothing. At the moment I have resorted to setting the cookie in every index page of my site, in the hopes that that will cover most of the redirect options, but I don't like this solution. Is there a better option, and if so what?
|
Storing Images In DB Using Django Models
| 70,710,224 | 0 | 13 | 27,585 | 0 |
python,database,django
|
I think the best approach is to store the 'main file' in your project's media path and save its address (the path to the file) in your model. This way you don't need to convert anything.
| 0 | 0 | 0 | 0 |
2013-09-11T17:34:00.000
| 4 | 0 | false | 18,747,730 | 0 | 0 | 1 | 1 |
I am using Django for creating one Web service and I want that web service to return Images. I am deciding the basic architecture of my Web service. What I came up to conclusion after stumbling on google is:
I should store images in the DB after encoding them to Base64 format.
Transferring the images would be easy, since the Base64-encoded string can be transmitted directly.
But I have one issue: how can I store a Base64-encoded string in the DB using Django models? Also, if you see any flaw in my basic architecture, please guide me.
I am new to Web services and Django
Thanks!!
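The Base64 round trip described in point 1 is plain standard library; in Django you would typically keep the resulting string in a models.TextField. A minimal sketch of the encode/decode step alone:

```python
import base64

# Stand-in for raw image bytes read from an uploaded file.
raw = b"\x89PNG\r\n\x1a\n...fake image bytes..."

# Encode to a text string that can be stored in an ordinary text column.
encoded = base64.b64encode(raw).decode("ascii")

# Decode back to bytes when serving the image in a response.
decoded = base64.b64decode(encoded)

print(decoded == raw)
```

Note the ~33% size overhead of Base64, which is one reason the answer above recommends storing a file path instead and leaving the bytes on disk.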
|
Upgrading to Python 2.7 Google App Engine 500 server error
| 18,774,464 | 2 | 0 | 198 | 0 |
python,google-app-engine,python-2.7,server-error
|
I'm not sure if this is your formatting when you loaded your code here, but where you define app in main.py should not be part of the contacts class. If it is, your reference to main.app in your app.yaml won't work and your page won't load.
| 0 | 1 | 0 | 0 |
2013-09-12T02:08:00.000
| 3 | 0.132549 | false | 18,754,202 | 0 | 0 | 1 | 3 |
I just started using Google App Engine and I am very new to Python. I may have made a stupid mistake or a fatal error, I don't know, but I realized that the basic "template" I downloaded from a website was old and used Python 2.5.
So, I decided to update to Python 2.7 (after receiving a warning in the site's dashboard).
I have no idea how to do this, but I blindly followed some instructions on how to update but I'm not sure what I did wrong.
I know that I downloaded Python 2.7 (as the download path is C:/Python27/), so there shouldn't be a problem there. Can anybody tell what I'm doing wrong?
|
Upgrading to Python 2.7 Google App Engine 500 server error
| 18,778,368 | 0 | 0 | 198 | 0 |
python,google-app-engine,python-2.7,server-error
|
Thank you everyone for your respective answers and comments, but I recently stumbled upon GAE boilerplate and decided to use that and everything's fine. I kept having very odd problems with GAE beforehand, but the boilerplate is simple and seems to be working fine so far. Anyways, thanks again. (Note: I would delete the question but two people have already answered and received rep from +1s, and they are in fact helpful answers, so I'll leave it be).
| 0 | 1 | 0 | 0 |
2013-09-12T02:08:00.000
| 3 | 1.2 | true | 18,754,202 | 0 | 0 | 1 | 3 |
I just started using Google App Engine and I am very new to Python. I may have made a stupid mistake or a fatal error, I don't know, but I realized that the basic "template" I downloaded from a website was old and used Python 2.5.
So, I decided to update to Python 2.7 (after receiving a warning in the site's dashboard).
I have no idea how to do this, but I blindly followed some instructions on how to update but I'm not sure what I did wrong.
I know that I downloaded Python 2.7 (as the download path is C:/Python27/), so there shouldn't be a problem there. Can anybody tell what I'm doing wrong?
|
Upgrading to Python 2.7 Google App Engine 500 server error
| 18,754,606 | 2 | 0 | 198 | 0 |
python,google-app-engine,python-2.7,server-error
|
I'm submitting as an answer because I'm relatively new to SO and don't have enough rep to comment, so sorry about that... But line 7 of your new main.py uses webapp instead of webapp2, so that may be causing some troubles, but likely isn't the reason that it's not working. Could you also provide the contact.html template?
| 0 | 1 | 0 | 0 |
2013-09-12T02:08:00.000
| 3 | 0.132549 | false | 18,754,202 | 0 | 0 | 1 | 3 |
I just started using Google App Engine and I am very new to Python. I may have made a stupid mistake or a fatal error, I don't know, but I realized that the basic "template" I downloaded from a website was old and used Python 2.5.
So, I decided to update to Python 2.7 (after receiving a warning in the site's dashboard).
I have no idea how to do this, but I blindly followed some instructions on how to update but I'm not sure what I did wrong.
I know that I downloaded Python 2.7 (as the download path is C:/Python27/), so there shouldn't be a problem there. Can anybody tell what I'm doing wrong?
|
Django Clear All Admin List Filters
| 18,755,121 | 0 | 3 | 1,347 | 0 |
python,django,list
|
If you have at least one entry in search_fields and therefore are showing a search box on your admin changelist page, if you have any filters or search terms in effect you should see information to the right of it showing the number of rows that match your current filter and search criteria. It'll be worded as something like "5 results (50 total)". The "50 total" text will be a link to an unfiltered version of the list, showing the whole set. Possibly paginated, but all filters will be cleared.
This doesn't appear to be automatically exposed without the search box. The filter settings are simple arguments in the URL querystring, so it should be easy to add a link similar to the one in the search box that just drops the querystring, but you'd have to learn a little about the admin templates to do so. Setting a search_fields entry is probably simpler, if you have anything reasonable to search over.
| 0 | 0 | 0 | 0 |
2013-09-12T03:49:00.000
| 1 | 0 | false | 18,755,024 | 0 | 0 | 1 | 1 |
I have a Django admin control panel and in each list of objects there are lots and lots of list filters. I want to be able to clear all the filters with a click of a button, but can't find where this ability is, if it already exists in Django.
Routes I'm considering (but cannot figure out):
Make the last item in the breadcrumb link to the full list
Make a direct hyperlink as a filter list option
Find some way to access all the query options and remove them or simply return a blank one (queryset.all() isn't working; I'm probably barking up the wrong tree.)
That kind of thing should already exist! Find out how to use it.
Does anybody know how to accomplish this? I've been trying to figure it out all day.
|
How to run django tests in Eclipse to make debugging possible, but on test database
| 19,786,166 | 1 | 2 | 1,473 | 0 |
python,django,eclipse,unit-testing,pydev
|
You can create a new PyDev django debug configuration in eclipse and set the program arguments to 'test'.
In this case, the debug configuration will execute the command `python manage.py test`, and your breakpoints inside test cases will get hit.
| 0 | 0 | 0 | 1 |
2013-09-12T15:58:00.000
| 1 | 1.2 | true | 18,769,092 | 0 | 0 | 1 | 1 |
I've got a problem that has been bothering me for a long time. I either run tests from Eclipse (Python unittest) using PyDev or the Nose test runner. That way it's possible to debug tests and watch them in the PyUnit view. But that way the test database is not created, and manage.py is not used.
Or I run them via manage.py test: the test DB is created, but the above features are not available that way.
Is it possible to debug tests in Eclipse which are being run against the test DB?
Regards,
okrutny
|
How to avoid repeated pre-calculation in django view
| 18,794,763 | 1 | 0 | 161 | 0 |
python,django
|
You could add a model (with a db table) that stores values for a, b and x. Then for each query, you could look for an instance with a and b and return the associated x.
| 0 | 0 | 0 | 0 |
2013-09-13T15:43:00.000
| 2 | 0.099668 | false | 18,790,301 | 1 | 0 | 1 | 2 |
I am writing an API which returns JSON according to queries, for example: localhost/api/query?a=1&b=2. To return the JSON, I need to do some pre-calculation to compute a value, say, x. The pre-calculation takes a long time (several hundred milliseconds). For example, the JSON returns the value of x+a+b.
When the user queries localhost/api/query?a=3&b=4, x will be calculated again, and this is a waste of time since x won't change for any query. The question is how I can do this pre-calculation of x once for all queries (in the real app, x is not a value but a complex object returned by wrapped C++ code).
|
How to avoid repeated pre-calculation in django view
| 18,790,769 | 2 | 0 | 161 | 0 |
python,django
|
If you are using some sort of cache (memcached, redis) you can store it there. You can try to serialize the object with pickle, msgpack, etc. Then you can retrieve and deserialize it.
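A stdlib-only sketch of the compute-once idea (an alternative to an external cache when x fits in process memory): hide x behind functools.lru_cache so only the first request pays the cost. The sleep and the numbers below are stand-ins for the real wrapped C++ call:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=1)
def get_x():
    """Expensive pre-calculation; runs only on the first call."""
    time.sleep(0.2)  # simulate several hundred milliseconds of work
    return 40

def answer(a, b):
    # Every query reuses the cached x; only a and b vary per request.
    return get_x() + a + b

start = time.time()
first = answer(1, 2)    # pays the 0.2 s cost once
second = answer(3, 4)   # served from the cached x
elapsed = time.time() - start
print(first, second)
```

One caveat: with multiple server processes each process computes its own x once, which is where a shared cache like memcached or redis becomes useful.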
| 0 | 0 | 0 | 0 |
2013-09-13T15:43:00.000
| 2 | 0.197375 | false | 18,790,301 | 1 | 0 | 1 | 2 |
I am writing an API which returns JSON according to queries, for example: localhost/api/query?a=1&b=2. To return the JSON, I need to do some pre-calculation to compute a value, say, x. The pre-calculation takes a long time (several hundred milliseconds). For example, the JSON returns the value of x+a+b.
When the user queries localhost/api/query?a=3&b=4, x will be calculated again, and this is a waste of time since x won't change for any query. The question is how I can do this pre-calculation of x once for all queries (in the real app, x is not a value but a complex object returned by wrapped C++ code).
|
Import XML data into a PDF form using Python
| 18,794,296 | 0 | 2 | 935 | 0 |
python,xml,pdf
|
Perhaps what you are looking for is whether Adobe LiveCycle Designer supports command-line arguments to do that. You could then automate this with Python by issuing those commands from a script.
| 0 | 0 | 1 | 0 |
2013-09-13T18:03:00.000
| 1 | 1.2 | true | 18,792,536 | 0 | 0 | 1 | 1 |
At my office we had PDFs designed using Adobe LiveCycle Designer that allows you to import xml data into the form to populate it. I would like to know if I could automate the process of importing the xml data into the form using python.
Ideally I would like it if I didn't have to re-create the form using python since the form itself is quite complex. I've looked up several different modules and they all seem to be able to read pdfs or create them from scratch, but not populate them.
Is there a python module out there that would have that kind of functionality?
Edit: I should mention that I don't have access to LiveCycle.
|
Do I need to reinstall Django for new virtualenv?
| 18,795,383 | 2 | 5 | 3,791 | 0 |
python,django,virtualenv,virtualenvwrapper
|
I would recommend just starting from scratch with a new virtualenv. That is the reason that they are built: one virtualenv can house a project that uses one version of Django, but another project can use a separate version of Django (perhaps an older version because an app you're using doesn't yet work with the newer version).
If you are attempting to completely recreate the same environment (probably because you want to run the project in another spot), you can use the pip freeze in alexcxe's answer. This will install everything again from scratch, attempting to install the exact same version. You may or may not want to do this, for the reasons I mentioned in the first paragraph.
This is the entire point of virtual environments. I have 20 different projects on my computer, each with their own virtualenv. It's fairly common to work in this manner.
| 0 | 0 | 0 | 0 |
2013-09-13T20:54:00.000
| 4 | 1.2 | true | 18,795,081 | 1 | 0 | 1 | 1 |
Using virtualenvwrapper, I installed Django for one virtualenv. Now I can't reach it outside that environment. I want to be able to start new Django projects both outside any virtualenv, and inside new virtualenvs.
Do I need to reinstall Django or can I somehow import the installation from my first virtualenv?
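As a quick sanity check, you can verify from Python whether you are inside a virtualenv and whether Django is importable in the current environment. This is a small sketch using only the standard library:

```python
import importlib.util
import sys

def in_virtualenv():
    # Inside a venv, sys.prefix differs from sys.base_prefix (Python 3.3+);
    # the older virtualenv tool sets sys.real_prefix instead.
    return hasattr(sys, "real_prefix") or sys.prefix != getattr(sys, "base_prefix", sys.prefix)

def django_available():
    # True if "import django" would succeed in the *current* environment.
    return importlib.util.find_spec("django") is not None

print(in_virtualenv(), django_available())
```

Running this inside and outside your virtualenv makes the isolation concrete: Django installed in one environment simply does not exist in another.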
|
Debugging Cloud Endpoints: Error making request, but no request appears in logs
| 18,817,455 | 0 | 0 | 177 | 0 |
python,google-app-engine,google-cloud-endpoints
|
Check if you are running out of resources.
| 0 | 1 | 0 | 0 |
2013-09-14T14:48:00.000
| 2 | 0 | false | 18,802,940 | 0 | 0 | 1 | 1 |
I have an issue with debugging and Cloud Endpoints. I'm using tons of endpoints in my application, and one endpoint consistently returns with error code 500, message "Internal Error".
This endpoint does not appear in my app's logs, and when I run its code directly in the interactive console (in production), everything works fine.
There might be a bug in my code that I am failing to see, however, the real problem here is that the failing endpoints request is NOT showing up in my app's logs – which leaves me with no great way to debug the problem.
Any tips? Is it possible to force some kind of "debug" mode where more information (such as a stack trace) is conveyed back to me in the 500 response from endpoints? Why isn't the failing request showing up in my app's logs?
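One way to make server-side failures visible is to wrap each endpoint method in a decorator that logs the full traceback before re-raising. This is a hedged sketch, independent of the Endpoints API itself; explicit logging like this usually survives even when the framework swallows the traceback in its 500 response:

```python
import functools
import logging

logging.basicConfig(level=logging.ERROR)

def log_exceptions(fn):
    """Log a full traceback before re-raising, so 500s leave a trace in the logs."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception:
            logging.exception("Endpoint %s raised:", fn.__name__)
            raise
    return wrapper

@log_exceptions
def flaky_endpoint():
    # Stand-in for the real endpoint method that returns 500.
    raise ValueError("something went wrong server-side")
```

Decorating the suspect endpoint this way should at least pin down whether the failure happens inside your handler or before the request ever reaches it.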
|
Where to hold static information for game logic?
| 18,807,184 | 1 | 0 | 209 | 1 |
python,google-app-engine,google-cloud-datastore
|
If the logic is fixed, keep it in your code. Maybe you can procedurally generate the dicts on startup. If there is a dynamic component to the logic (something you want to update frequently), a data store might be a better bet, but it sounds like that's not applicable here. Unless the number of combinations runs over the millions, and you'd want to trade speed in favour of a lower memory footprint, stick with putting it in the application itself.
| 0 | 0 | 0 | 0 |
2013-09-14T22:29:00.000
| 2 | 0.099668 | false | 18,807,022 | 1 | 0 | 1 | 1 |
The context for this question is:
A Google App Engine backend for a two-person multiplayer turn-based card game
The game revolves around different combinations of cards giving rise to different scores in the game
Obviously, one would store the state of a game in the GAE datastore, but I'm not sure on the approach for the design of the game logic itself. It seems I might have two choices:
Store entries in the datastore with a key that is a sorted list of the valid combinations of cards that can be played. These will then map to the score values. When a player tries to play a combination of cards, the server-side Python will sort the combination appropriately and look up the key. If it succeeds, it can do the necessary updates for the score; if it fails, the combination wasn't valid.
Store the valid combinations as a python dictionary written into the server-side code and perform the same lookups as above to test the validity/get the score but without a trip to the datastore.
From a cost point of view (datastore lookups aren't free), option 2 seems like it would be better. But then there is the performance of the instance itself - will the startup time, processing time, memory usage start to tip me into greater expense?
There's also the code maintenance issue of constructing that Python dictionary, but I can bash together some scripts to help me write the code for that on the infrequent occasions that the logic changes. I think there will be on the order of 1000 card combinations (that can produce a score) of between 2 and 6 cards, if that helps anyone who wants to quantify the problem.
I'm starting out with this design, and the summary of the above is whether it is sensible to store the static logic of this kind of game in the datastore, or simply keep it as part of the CPU bound logic? What are the pros and cons of both approaches?
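Option 2 can be sketched as a module-level dictionary keyed by sorted tuples of card names, so the lookup is order-independent. The cards and scores below are made up for illustration:

```python
# Hypothetical scoring table: keys are sorted tuples of card names.
SCORES = {
    ("ace", "king", "queen"): 50,
    ("ace", "ace"): 20,
}

def score_for(cards):
    """Return the score for a combination, or None if it is not a valid play."""
    return SCORES.get(tuple(sorted(cards)))

print(score_for(["king", "queen", "ace"]))  # order of play doesn't matter
```

At ~1000 entries this dictionary is tiny, loads with the module at instance startup, and each lookup is an in-memory hash probe with no datastore round trip or cost.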
|
Where does Web2py save project files OS X?
| 18,813,520 | 2 | 4 | 912 | 0 |
python,macos,web2py
|
If you are using the Mac binary, I think the applications are in /web2py/web2py.app/Contents/Resources/applications/.
Note, you can also run the source version of web2py, in which case, the applications will be in /web2py/applications/.
| 0 | 1 | 0 | 0 |
2013-09-15T14:07:00.000
| 1 | 1.2 | true | 18,813,288 | 0 | 0 | 1 | 1 |
I am pulling my hair out trying to figure out where web2py stores the project files by default in OS X. They are not located in the same directory as web2py.app.
I can launch the web interface and see the project in the admin view, but I want to edit the files from Sublime Text as opposed to the admin web interface.
I've looked through the web2py book and the Google users group with no luck. Any suggestions? This seems like it should be fairly obvious...
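A small helper illustrating the two layouts mentioned in the answer: the binary distribution nests applications inside the .app bundle, while the source distribution keeps them at the top level. The paths are the assumed defaults, so adjust for your install:

```python
import os

def candidate_app_dirs(root):
    # Binary distribution: applications live inside the .app bundle.
    # Source distribution: applications sit directly under the web2py root.
    candidates = [
        os.path.join(root, "web2py.app", "Contents", "Resources", "applications"),
        os.path.join(root, "applications"),
    ]
    return [p for p in candidates if os.path.isdir(p)]
```

Calling `candidate_app_dirs("/path/to/web2py")` returns whichever of the two layouts actually exists on disk, which is the directory to open in Sublime Text.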
|