Dataset columns (in record order):
Title: string, lengths 11 to 150
A_Id: int64, 518 to 72.5M
Users Score: int64, -42 to 283
Q_Score: int64, 0 to 1.39k
ViewCount: int64, 17 to 1.71M
Database and SQL: int64, 0 to 1
Tags: string, lengths 6 to 105
Answer: string, lengths 14 to 4.78k
GUI and Desktop Applications: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Networking and APIs: int64, 0 to 1
Other: int64, 0 to 1
CreationDate: string, length 23
AnswerCount: int64, 1 to 55
Score: float64, -1 to 1.2
is_accepted: bool, 2 classes
Q_Id: int64, 469 to 42.4M
Python Basics and Environment: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Web Development: int64, 1 to 1
Available Count: int64, 1 to 15
Question: string, lengths 17 to 21k
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Getting Started With Python and Django
| 6,498,394 | 5 | 1 | 2,576 | 0 |
python,django
|
I'd suggest learning the basics of Python and at least the MVC design pattern first.
From there, yes, go ahead and starting creating a project in Django, it's the best way to learn Django.
As far as "python fundamentals" that means the obvious basic syntax and keywords. From there, it's more important that you understand "programming in general" especially the Object-oriented programming paradigm.
| 0 | 0 | 0 | 0 |
2011-06-27T20:11:00.000
| 4 | 0.244919 | false | 6,498,373 | 0 | 0 | 1 | 1 |
I am newbie to the Django framework. I want to learn it and use it to develop applications. I am new to python. To learn Django, do I need knowledge of python and design patterns?
I don't know whether I should learn the design patterns, e.g. MVC, and think about writing applications or instead start to learn the language by writing complex web applications. Your suggestions are welcome.
|
Django: Is there a way to prevent a view from being called twice simultaneously?
| 21,431,038 | 0 | 3 | 5,538 | 0 |
python,django
|
That is the best solution; however, you could also disable the button on click by setting its disabled attribute to true.
With jQuery:
$('#yourButtonId').attr("disabled", true);
| 0 | 0 | 0 | 0 |
2011-06-27T21:21:00.000
| 3 | 0 | false | 6,499,096 | 0 | 0 | 1 | 1 |
There's some functionality on our Django app that displays a link to refresh some information from our version control system. This is simply a link, and when they hit that URL the actions of going to version control, getting the information, checking it against the database and updating the database are performed.
Recently we had a user click the 'refresh' button twice for one asset. This resulted in the URL being hit twice, so the operations were performed twice and eventually a duplicate entry was created in our database.
We need to do something to remove the possibility of the user clicking that button twice. Should we move from a link to a javascript button? Should we set some flag in request.session as soon as the first click happens, then unset it upon completion, and always check that flag when performing a refresh? Those are just two ideas that seem feasible, but I really don't know.
|
Checking status of Task Queue in Google App Engine
| 9,361,730 | 2 | 9 | 4,031 | 0 |
python,google-app-engine,queue,task,task-queue
|
You can use memcache. Use a unique key specific to this task group. Set a count when you kick off your tasks, and have each task atomically decrement it. When the value is 0, your tasks are complete. The task that finds this value to be 0 can call your callback.
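A minimal sketch of that counter pattern, assuming the App Engine memcache API; the key name and the final callback are hypothetical:
from google.appengine.api import memcache

COUNTER_KEY = 'my-task-group-counter'  # hypothetical key for this task group

def start_tasks(num_tasks):
    # seed the counter before enqueueing the tasks
    memcache.set(COUNTER_KEY, num_tasks)

def task_finished():
    # each task calls this when it completes; decr() is atomic
    remaining = memcache.decr(COUNTER_KEY)
    if remaining == 0:
        all_tasks_done()  # hypothetical callback for "all tasks are complete"

def all_tasks_done():
    pass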
| 0 | 1 | 0 | 0 |
2011-06-28T02:16:00.000
| 3 | 0.132549 | false | 6,500,878 | 0 | 0 | 1 | 1 |
I'm putting several tasks into a task queue and would like to know when the specific tasks are done. I haven't found anything in the API about call backs, or checking the status of a task, so I thought I'd see what other people do, or if there's a work around (or official) way to check. I don't care about individual tasks, if it helps, I'm putting 6 different tasks in, and want to know when all 6 are complete.
Thanks!
|
How to inject template code in Plone?
| 6,503,962 | -1 | 1 | 667 | 0 |
python,html,templates,plone,zope
|
Perhaps you could approach this from the JavaScript side? A lot of applications have a global JS file that is included in all pages. Starting from that, you could modify the DOM easily.
| 0 | 0 | 0 | 0 |
2011-06-28T08:44:00.000
| 2 | -0.099668 | false | 6,503,861 | 0 | 0 | 1 | 1 |
My goal is to inject some HTML-Code in front of every Plone article (between the page's header and the first paragraph)? I'm running Plone 4. Does anyone have a hint on how to realize that?
The other question is: is it possible to place some HTML code randomly in every Plone article?
|
Serve Django site using Nginx without proxy_pass
| 6,504,717 | 1 | 1 | 855 | 0 |
python,django,nginx,uwsgi
|
You don't state your OS, so... having deployed django behind both apache and nginx in Windows, I have to say that I found nginx to be infinitely easier. However, since nginx is more of a static file server with excellent proxying capability, I ran a separate wsgi server for the django app. After trying several (and finding they were unix-only), I found CheryPy's wsgi server (which can be used independently of rest of CherryPy) to work just fine (and it's pretty fast, to boot).
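A rough sketch of that setup, assuming Django's WSGIHandler and CherryPy's bundled WSGI server (the project name mysite is a placeholder, and exact import paths vary by CherryPy version); nginx would then proxy to this process:
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'  # hypothetical project

import django.core.handlers.wsgi
from cherrypy import wsgiserver

application = django.core.handlers.wsgi.WSGIHandler()
server = wsgiserver.CherryPyWSGIServer(('127.0.0.1', 8000), application)

if __name__ == '__main__':
    try:
        server.start()
    except KeyboardInterrupt:
        server.stop()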
| 0 | 0 | 0 | 0 |
2011-06-28T09:13:00.000
| 2 | 0.099668 | false | 6,504,182 | 0 | 0 | 1 | 1 |
I have done the nginx configuration for serving a Django app. I am able to serve the Django site using proxy_pass, but for that I have to start the application server manually before nginx serves the site. I want nginx to serve the site with the backend server starting automatically, without going through proxy_pass. Is this possible? Please suggest a solution.
Thanks.
|
Selecting the next formfield with Selenium RC and python
| 6,510,614 | 0 | 0 | 385 | 0 |
python,forms,selenium-rc,field,autofill
|
Hi, I think I have a solution for this.
The problem is with generating the user interaction on the addition field.
Use these statements:
focus("form:addition");
keyPressNative("10"); // this is the ENTER command
It should work.
| 0 | 0 | 1 | 0 |
2011-06-28T09:39:00.000
| 2 | 0 | false | 6,504,482 | 0 | 0 | 1 | 2 |
I am running an automated test to test a webform for my company. They have just installed a zipcode service which automatically adds the Street and City/Region when you fill in the Address and housenumber.
This autofill appears when you deselect the last form element (e.g. that of the housenumber).
This is the order of the fields I'm using;
form:zipcode
form:housenumber
form:addition (optional)
form:street (gets filled in by service after zipcode and housenumber are provided)
form:city (the other autofill field)
When you fill this form out manually, the address appears as soon as you click or tab into the addition field (as it is optional) but when it's done automated it doesn't work.
I have tried several things like;
focus('form:addition') or
select('form:addition') but these don't work. I have tried
type('\t') to tab to the form field, and
type('form:addition', ' ') to type a space into the add. field and even
type('form:addition', "") to leave it empty. None of these attempts have worked so far.
Is there anyone that can help me with this?
|
Selecting the next formfield with Selenium RC and python
| 6,531,214 | 0 | 0 | 385 | 0 |
python,forms,selenium-rc,field,autofill
|
Yesterday I found out that the zipcode service uses an Ajax call to retrieve the information. This call is executed when the zipcode and housenumber fields are both 'out of focus' (i.e. deselected).
The statement I found that lets me use this to my advantage is this:
selenium.fireEvent('form:number', 'blur') which deselects the last field where data was entered.
| 0 | 0 | 1 | 0 |
2011-06-28T09:39:00.000
| 2 | 0 | false | 6,504,482 | 0 | 0 | 1 | 2 |
I am running an automated test to test a webform for my company. They have just installed a zipcode service which automatically adds the Street and City/Region when you fill in the Address and housenumber.
This autofill appears when you deselect the last form element (e.g. that of the housenumber).
This is the order of the fields I'm using;
form:zipcode
form:housenumber
form:addition (optional)
form:street (gets filled in by service after zipcode and housenumber are provided)
form:city (the other autofill field)
When you fill this form out manually, the address appears as soon as you click or tab into the addition field (as it is optional) but when it's done automated it doesn't work.
I have tried several things like;
focus('form:addition') or
select('form:addition') but these don't work. I have tried
type('\t') to tab to the form field, and
type('form:addition', ' ') to type a space into the add. field and even
type('form:addition', "") to leave it empty. None of these attempts have worked so far.
Is there anyone that can help me with this?
|
Display streaming video and result of a python code at the same time in a web-browser
| 6,506,217 | 0 | 1 | 297 | 0 |
python,django
|
Are you actually sending the video, or are you sending an HTML page with an embedded video (either with Flash or HTML5)? If the former, showing other output would be impossible. If the latter, you can have it anywhere else on the page.
| 0 | 0 | 0 | 0 |
2011-06-28T11:59:00.000
| 1 | 0 | false | 6,506,064 | 0 | 0 | 1 | 1 |
I need to display a streaming video and also the output of a python code running simultaneously. What is the best framework to go about designing it.
|
Finding Only HTML Nodes Whose Attributes Match Exactly
| 6,506,453 | 0 | 1 | 146 | 0 |
python,xpath,selenium,beautifulsoup,lxml
|
There is no way to exclude unexpected attributes with XPath.
So you must find a safer way to locate the elements you want. Things you should consider:
In a form, each input should have a distinct name, and the same is true for the form itself. So you can try //form[@name='...']/input[@name='...']
Add a class to the fields that you care about. Classes don't have to be mentioned in any stylesheet. In fact, I used this for form field validation by using classes like decimal number or alpha number.
| 0 | 0 | 1 | 0 |
2011-06-28T12:28:00.000
| 3 | 0 | false | 6,506,372 | 0 | 0 | 1 | 1 |
I'm working through a Selenium test where I want to assert a particular HTML node is an exact match as far as what attributes are present and their values (order is unimportant) and also that no other attributes are present. For example given the following fragment:
<input name="test" value="something"/>
I am trying to come up with the a good way of asserting its presence in the HTML output, such that the following (arbitrary) examples would not match:
<input name="test" value="something" onlick="doSomething()"/>
<input name="test" value="something" maxlength="75"/>
<input name="test" value="something" extraneous="a" unwanted="b"/>
I believe I can write an XPath statement as follows to find all of these, for example:
//input[value='something' and @name='test']
But, I haven't figured out how to write in such a way that it excludes not exact matches in a generalize fashion. Note, it doesn't have to be an XPath solution, but that struck me as the most likely elegant possibility.
|
Apache mod_wsgi and python 2.7
| 6,514,333 | 4 | 3 | 8,016 | 0 |
python,django,apache,wsgi
|
To answer the specific question, no it is not possible to make a mod_wsgi installation compiled for one Python version to use a different version at run time.
Why don't you create a separate question for the actual problem you are having with compiling from source code? Better still, ask it on the mod_wsgi mailing list, where the people best able to help you can be found.
| 0 | 0 | 0 | 0 |
2011-06-28T16:10:00.000
| 2 | 1.2 | true | 6,509,571 | 0 | 0 | 1 | 1 |
My WSGI installation uses python2.6 and my django project requires python 2.7 to work properly. Is is possible to relink the python version WSGI uses without recompiling wsgi? I've been having some odd errors trying to compile wsgi and I'd prefer to sidestep that whole troubleshooting process if possible.
Thanks
|
Making an array Field in Django
| 54,308,907 | 0 | 4 | 6,477 | 0 |
python,django,arrays,list
|
I know it seems counter-intuitive, but you definitely want to use something general like a many-to-one relationship (foreign key) from your answers table to your poll table. You want your app to easily grow to support future ideas, like associating metadata with those answer-poll pairings, and being able to efficiently do queries on that metadata, like what people in Portland answer to this question.
If you really want a dead-end, dead-simple DB schema, just create an array of pks. But don't be tempted to use a postgres-only ArrayField. If you want your Django models to work with backends other than postgres (MySQL, MariaDB, or sqlite) you'll need to store your array as a string serialization. The easiest format would be a JSON string, but anything will do, even CSV.
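A minimal sketch of that many-to-one layout (model and field names are hypothetical; newer Django versions also require an on_delete argument on ForeignKey):
from django.db import models

class Poll(models.Model):
    question = models.CharField(max_length=200)

class Answer(models.Model):
    # each answer row points back at the poll it belongs to
    poll = models.ForeignKey(Poll, related_name='answers')
    text = models.CharField(max_length=200)

# usage: the full answer history for one poll
# poll.answers.all()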
| 0 | 0 | 0 | 0 |
2011-06-28T17:19:00.000
| 5 | 0 | false | 6,510,517 | 0 | 0 | 1 | 1 |
I need to make a model that stores the historical data of a Field, for example a poll that has a Field of its answers so it can make things with that.
I was looking to the ManytoMany field but its too general, it feels like im doing too much for something easy.
|
Can I have some code constantly run inside Django like a daemon
| 6,577,278 | 0 | 16 | 8,945 | 0 |
python,django,daemon,daemons,python-daemon
|
I previously used a cron job, but I'm telling you, you will switch to Celery after a while.
Celery is the way to go. Plus, you can run long work as asynchronous tasks, so you can speed up the request/response time.
| 0 | 0 | 0 | 0 |
2011-06-30T09:31:00.000
| 3 | 0 | false | 6,532,744 | 0 | 0 | 1 | 1 |
I'm using mod_wsgi to serve a django site through Apache. I also have some Python code that runs as a background process (daemon?). It keeps polling a server and inserts data into one of the Django models. This works fine, but can I have this code be a part of my Django application and yet be able to run constantly in the background? It doesn't need to be a process per se, but a part of the Django site that is active constantly. If so, could you point me to an example or some documentation that would help me accomplish this?
Thanks.
|
When should the save method be called in Django?
| 6,534,100 | 3 | 6 | 2,139 | 0 |
python,django
|
The save method should be used when you modify an object that you got by any other means than create, such as .objects.get. Otherwise, your modifications are lost.
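For illustration, with a hypothetical Tag model:
from django.db import models

class Tag(models.Model):
    name = models.CharField(max_length=50)

# create() builds and saves the object in one step -- no extra save() needed
tag = Tag.objects.create(name='django')

# an object fetched with get() must be saved after you modify it
tag = Tag.objects.get(name='django')
tag.name = 'python'
tag.save()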
| 0 | 0 | 0 | 0 |
2011-06-30T11:31:00.000
| 3 | 0.197375 | false | 6,534,038 | 0 | 0 | 1 | 1 |
Should the save method be called after every create method or does calling the create method automatically call the save method?
If the save method is called automatically after creating an object then what would be a good use-case for the save method?
Thanks.
|
Should I add methods to my classes that inherit db.Model, or should I inherit those classes into a new class?
| 6,543,279 | 2 | 1 | 335 | 0 |
python,google-app-engine,google-cloud-datastore,models
|
Adding methods to db.Model subclasses is perfectly fine practice. There's only a point in having your actual model subclass something that is itself a db.Model subclass if you have common functionality you want shared by several model classes, just like in standard inheritance.
I'm not sure how your proposed approach would help with "not saving to the datastore without required attributes", unless you're planning on creating your own data models that you translate to and from datastore models, which is just going to be a huge waste of time (both yours and the processor's). The way the datastore library works, it's not possible to create a model with values that don't validate, and I'm not sure why you'd want to.
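For illustration, a hypothetical Post model with a required property and an ordinary method; the datastore library refuses to construct it without the required value:
from google.appengine.ext import db

class Post(db.Model):
    title = db.StringProperty(required=True)
    body = db.TextProperty()

    def summary(self, length=100):
        # a plain Python method on the model -- perfectly fine practice
        return (self.body or '')[:length]

# Post(body='no title') raises BadValueError because title is required
post = Post(title='Hello', body='First post.')
post.put()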
| 0 | 0 | 0 | 0 |
2011-06-30T12:53:00.000
| 3 | 1.2 | true | 6,535,062 | 0 | 0 | 1 | 1 |
When working with classes that inherit db.Model, is it better practice to add methods, or should I instead create a separate class?
E.g., if I want to store information on posts, should I have Post extend db.Model, or should I have PostData extend db.Model and Post extend (or even reference?) PostData?
The difference, I think, is that classes that inherit db.Model won't create instances without all the required attributes. The behaviour I'd like to see is not saving to the datastore without required attributes. Which is cleaner? Which is preferred?
|
A question about sites downtime updates
| 6,535,604 | 0 | 2 | 109 | 0 |
python,django,apache
|
When using apachectl graceful, you minimize the time the website is unavailable when 'restarting' Apache. All children are 'kindly' requested to restart and get their new configuration when they're not doing anything.
The USR1 or graceful signal causes the parent process to advise the children to exit after their current request (or to exit immediately if they're not serving anything). The parent re-reads its configuration files and re-opens its log files. As each child dies off the parent replaces it with a child from the new generation of the configuration, which begins serving new requests immediately.
At a heavy-traffic website, you will notice some performance loss, as some children will temporarily not accept new connections. It's my experience, however, that TCP recovers perfectly from this.
Considering that some websites take several minutes or hours to update, that is completely acceptable. If it is a really big issue, you could use a proxy, running multiple instances and updating them one at a time, or update at an off-peak moment.
| 0 | 0 | 0 | 0 |
2011-06-30T13:16:00.000
| 5 | 0 | false | 6,535,333 | 0 | 0 | 1 | 4 |
I have a server where I run a Django application, but I have a small problem:
when I commit and push new changes to the server with Mercurial, there is a brief window (around a microsecond) where the home page is unreachable.
I have apache on the server.
How can I solve this?
|
A question about sites downtime updates
| 6,535,372 | -1 | 2 | 109 | 0 |
python,django,apache
|
I think it's normal, since Django may need to restart its server after your update.
| 0 | 0 | 0 | 0 |
2011-06-30T13:16:00.000
| 5 | -0.039979 | false | 6,535,333 | 0 | 0 | 1 | 4 |
I have a server where I run a Django application, but I have a small problem:
when I commit and push new changes to the server with Mercurial, there is a brief window (around a microsecond) where the home page is unreachable.
I have apache on the server.
How can I solve this?
|
A question about sites downtime updates
| 6,538,942 | 0 | 2 | 109 | 0 |
python,django,apache
|
If you're at the point of complaining about a 1/1,000,000th of a second outage, then I suggest the following approach:
Front end load balancers pointing to multiple backend servers.
Remove one backend server from the loadbalancer to ensure no traffic will go to it.
Wait until all traffic that the server was processing has been sent.
Shutdown the webserver on that instance.
Update the django instance on that machine.
Add that instance back to the load balancers.
Repeat for every other server.
This will ensure that the 1/1,000,000th of a second gap is removed.
| 0 | 0 | 0 | 0 |
2011-06-30T13:16:00.000
| 5 | 0 | false | 6,535,333 | 0 | 0 | 1 | 4 |
I have a server where I run a Django application, but I have a small problem:
when I commit and push new changes to the server with Mercurial, there is a brief window (around a microsecond) where the home page is unreachable.
I have apache on the server.
How can I solve this?
|
A question about sites downtime updates
| 6,535,569 | 1 | 2 | 109 | 0 |
python,django,apache
|
If you have significant traffic and care about downtime measured in microseconds, it's probably best to push new changes to your web servers one at a time, removing each machine from the load balancer rotation for the moment you're doing the upgrade on it.
| 0 | 0 | 0 | 0 |
2011-06-30T13:16:00.000
| 5 | 0.039979 | false | 6,535,333 | 0 | 0 | 1 | 4 |
I have a server where I run a Django application, but I have a small problem:
when I commit and push new changes to the server with Mercurial, there is a brief window (around a microsecond) where the home page is unreachable.
I have apache on the server.
How can I solve this?
|
How do I use django db API to save all the elements of a given column in a dictionary?
| 6,539,716 | 0 | 0 | 223 | 1 |
python,django
|
You can use cursor.fetchall() instead of cursor.fetchone() to retrieve all rows.
Then extract the necessary field (fetchall() returns a list of row tuples, so index into each row):
raw_items = cursor.fetchall()
items = [row[0] for row in raw_items]
| 0 | 0 | 0 | 0 |
2011-06-30T18:54:00.000
| 2 | 0 | false | 6,539,687 | 0 | 0 | 1 | 2 |
I've used a raw SQL Query to access them, and it seems to have worked. However, I can't figure out a way to actually print the results to an array. The only thing that I can find is the cursor.fetchone() command, which gives me a single row.
Is there any way that I can return an entire column in a django query set?
|
How do I use django db API to save all the elements of a given column in a dictionary?
| 6,539,798 | 1 | 0 | 223 | 1 |
python,django
|
dict(MyModel.objects.values_list('id', 'my_column')) will return a dictionary with all elements of my_column with the row's id as the key. But probably you're just looking for a list of all the values, which you should receive via MyModel.objects.values_list('my_column', flat=True)!
| 0 | 0 | 0 | 0 |
2011-06-30T18:54:00.000
| 2 | 1.2 | true | 6,539,687 | 0 | 0 | 1 | 2 |
I've used a raw SQL Query to access them, and it seems to have worked. However, I can't figure out a way to actually print the results to an array. The only thing that I can find is the cursor.fetchone() command, which gives me a single row.
Is there any way that I can return an entire column in a django query set?
|
Is it time.time() a safe approach when creating content types on plone programatically?
| 6,540,543 | 5 | 1 | 112 | 0 |
python,time,plone,race-condition,collision
|
You'll be perfectly safe doing this, even in the unlikely event two requests are processed at exactly the same time, in event of a conflict the ZODB will raise a ConflictError and retry your request.
Responding to the discussion below:
On a single computer, then by definition both transactions must overlap (you got the same result from time.time() in each thread). ZODB is MVCC, so each thread sees a consistent view of the database as it was when the transaction began. When the second thread commits, a conflict error will be raised because it will write to an object that has changed since the beginning of the transaction.
If you have clients running on multiple computers then you need to think about the possibility of clock drift between the clients. For its transaction ids, ZODB chooses whichever is the greater of either the current timestamp or the last transaction id + 1.
However, perhaps you should consider not using a timestamp as an id at all, as it will lead to conflicts under heavy load, as all requests will want to create entries in the same BTree bucket. Picking ids randomly will eliminate almost all of the conflicts, but will lead to inefficiently filled BTrees. The recommended approach is for each thread that creates objects to start at a random point in the number space and create ids sequentially. If it finds that an id has already been used then it should randomly pick another point in the number space and start again from there. I believe zope.intid contains an implementation of this strategy.
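A rough illustration of that last strategy (this is only a sketch, not the zope.intid implementation): each worker starts at a random point in the id space and hands out ids sequentially, jumping to a fresh random point whenever it hits an id that is already taken.
import random

class IdAllocator(object):
    """Sketch of 'random start, then sequential' id allocation."""

    def __init__(self, span=2 ** 31):
        self.span = span
        self.next_id = random.randrange(span)

    def allocate(self, is_taken):
        # is_taken is a callable reporting whether an id is already in use
        while is_taken(self.next_id):
            # collision: jump to another random point in the number space
            self.next_id = random.randrange(self.span)
        allocated = self.next_id
        self.next_id += 1
        return allocated

# usage, with a plain set standing in for the BTree of existing ids
used = set()
alloc = IdAllocator()
for _ in range(3):
    new_id = alloc.allocate(used.__contains__)
    used.add(new_id)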
| 0 | 0 | 0 | 0 |
2011-06-30T19:52:00.000
| 1 | 1.2 | true | 6,540,330 | 1 | 0 | 1 | 1 |
I have to use _createObjectByType on Plone. I have as an argument the id of the object. Is it going to be safe, in this scenario, to create an id based on time.time() to avoid collisions? Can two requests have exactly the same timestamp as shown by time.time()?
|
Python- is there a module that will automatically scrape the content of an article off a webpage?
| 6,543,634 | 0 | 0 | 323 | 0 |
python,algorithm,screen-scraping,beautifulsoup,lxml
|
Extracting the real content from a content-page can not be done automatically - at least not with the standard tools. You have to define/identify where the real content is stored (by specifying the related CSS ID or class in your own HTML extraction code).
| 0 | 0 | 1 | 0 |
2011-07-01T04:31:00.000
| 3 | 0 | false | 6,543,599 | 0 | 0 | 1 | 2 |
I know there is lxml and BeautifulSoup, but that won't work for my project, because I don't know in advance what the HTML format of the site I am trying to scrape an article off of will be. Is there a python-type module similar to Readability that does a pretty good job at finding the content of an article and returning it?
|
Python- is there a module that will automatically scrape the content of an article off a webpage?
| 6,567,206 | 0 | 0 | 323 | 0 |
python,algorithm,screen-scraping,beautifulsoup,lxml
|
Using HTQL, the query is:
&html_main_text
| 0 | 0 | 1 | 0 |
2011-07-01T04:31:00.000
| 3 | 0 | false | 6,543,599 | 0 | 0 | 1 | 2 |
I know there is lxml and BeautifulSoup, but that won't work for my project, because I don't know in advance what the HTML format of the site I am trying to scrape an article off of will be. Is there a python-type module similar to Readability that does a pretty good job at finding the content of an article and returning it?
|
class definition dependence on runtime
| 6,546,832 | 0 | 0 | 77 | 0 |
python,inheritance,superclass
|
I am not sure I entirely understood your question, but are you sure you need to use inheritance? Couldn't the class you need just be a member instead?
Changing the superclass at runtime does not sound like a very good design, if at all possible.
Just to be sure I got you, what you want to do is have Controller inherit from one of the SessionHandlers, but select which one at runtime? I would rather make Controller have a member variable of type SessionHandler.
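A sketch of that composition approach (all class and attribute names here are hypothetical): the Controller picks a handler class at runtime from the cookie value and keeps it as a member, while the handler keeps a reference back to the controller so it can call backend methods.
class SessionHandler(object):
    def __init__(self, controller):
        # keep a reference so the handler can call controller backend methods
        self.controller = controller

class CookieSessionHandler(SessionHandler):
    pass

class DbSessionHandler(SessionHandler):
    pass

HANDLERS = {'cookie': CookieSessionHandler, 'db': DbSessionHandler}

class Controller(object):
    def __init__(self, cookie_value):
        handler_cls = HANDLERS.get(cookie_value, CookieSessionHandler)
        self.session_handler = handler_cls(self)

controller = Controller('db')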
| 0 | 0 | 0 | 0 |
2011-07-01T10:31:00.000
| 1 | 1.2 | true | 6,546,786 | 0 | 0 | 1 | 1 |
In my webapp I made two different session handler classes inheriting from a class called SessionHandler.
Now I'd like to instantiate the appropriate handler (depending on a cookie value).
Background: my SessionHandler should be the base class of the Controller, as it needs to call a Controller backend method; otherwise I would assign the handler object to a ctrl member.
Is there a way to set the superclass at runtime?
Or is there another way to solve this? Hope you got what I meant!
|
Dealing with legacy django project in new localized projects
| 7,359,061 | 1 | 4 | 184 | 0 |
python,django,dependency-management,legacy-code,modularity
|
Start by decoupling the components where possible, and convert the legacy code to (portable) apps if not the case already, and the legacy code should not live under the main project tree.
Any new features should be well documented and decoupled apps or generic libraries themselves, even if they override/interact/depend or even monkeypatch the legacy code. You want most of your project to live outside of the main project itself, and installable via pip as if they were 3rd party apps.
The main project tree should be not much more than the main project templates, an urls.py, a settings.py, any configuration/deployment templates and a fabfile, and any core apps that will rarely be customized.
Every localized customization should be either a "customization" app itself, or a small tweak to the main project (made in a reproducible way via fab, or any provider of your choice).
Needless to say, if every team can commit to the core project, a good git/hg workflow is essential, and use a central CI server with a good test suite.
| 0 | 0 | 0 | 0 |
2011-07-01T12:20:00.000
| 1 | 1.2 | true | 6,547,863 | 0 | 0 | 1 | 1 |
I am right now in the situation to plan the internationalization of a django project that contains mainly legacy code. The old project itself has different applications which have a strong dependency to each other, so it is hard to separate them. Looking at the time left it is impossible at all.
The main requirements for the internationalization are:
Having separate projects for each country
Each country will later have different templates
each country will introduce new features which other countries may want to use as well
the main old codebase will still be maintained and should work with new features/changes to the country projects
Do you have any ideas/setups to deal with the old code AND starting new projects with the dependency to the old code and new features? I would like to start a discussion about this.
|
How to delete an object and all related objects with all imageFields insite them (photo gallery)
| 6,553,381 | 0 | 1 | 80 | 1 |
python,django,django-models,django-views
|
Here is a possible answer to the question that I figured out:
get the list of album ids as a string, in my case separated by commas.
You need to import shutil, then:
@login_required
def remove_albums(request):
    if request.is_ajax():
        if request.method == 'POST':
            # if the ajax call for the delete was ok, get the list of albums to delete
            albums_list = request.REQUEST['albums_list'].rsplit(',')
            for album in albums_list:
                obj_album = Album.objects.get(id=int(album))
                # the directory holding the images that need to be deleted
                dir_path = MEDIA_ROOT + '/images/galleries/%d' % obj_album.id
                # delete the DB record
                obj_album.delete()
                # there might be a record with no folder if no file was ever
                # uploaded (deleting the album before uploading images)
                try:
                    # delete the folder and all the files in it
                    shutil.rmtree(dir_path)
                except OSError:
                    pass
    return HttpResponse('')
Have fun and good luck :-)
| 0 | 0 | 0 | 0 |
2011-07-01T15:30:00.000
| 1 | 1.2 | true | 6,550,003 | 0 | 0 | 1 | 1 |
I have a photo gallery with an album model (just title and date and stuff) and a photo model with a foriegn key to the album and three imageFields in it (regular, mid and thumb).
When a user deletes an album, I need to delete all the photos related to the album (from the server), then all the DB records that point to the album, and then the album itself...
I couldn't find anything about this, and actually found many answers where one says the opposite of the other.
Can anyone please clarify this point: how is this done in the real world?
Thank you very much,
Erez
|
Function call using import in web2py
| 6,846,235 | 0 | 1 | 3,186 | 0 |
python,function,import,call,web2py
|
You can create Python files in the modules folder and import them just like you import Python libraries in your controllers. But you have to give the path to those files, like:
from applications.myApp.modules.myModule import *
This is my solution for my wrappers. Now you can use your functions by calling them by name:
myFunction
| 0 | 0 | 1 | 0 |
2011-07-02T12:32:00.000
| 2 | 0 | false | 6,557,000 | 0 | 0 | 1 | 1 |
I have split the code into multiple files. I have imported all the functions from all other files into admin.py. Lets say I want to call a function XYZ. If I give path to function as admin/XYZ it gives me error as invalid function and for this I have to give the path as file_with_XYZ_function/XYZ.
Is there a way to overcome this problem and simple call all the imported functions from one single file
|
Attachments in mails sent via App Engine not readable on every mail client/ device
| 6,602,467 | 1 | 2 | 391 | 0 |
python,google-app-engine,blackberry,attachment,vcf-vcard
|
Obviously as I mentioned on my comment above:
- it has nothing to do with Google App Engine
- some devices are just not able to read vcards in format 3.0
But I haven't found a good parser/ converter so far (from vcard 3.0 to vcard 2.1 in python) so if anyone knows one, please let me know. Otherwise, I'll have to build it myself...
| 0 | 1 | 0 | 0 |
2011-07-03T21:41:00.000
| 2 | 1.2 | true | 6,565,728 | 0 | 0 | 1 | 1 |
I'm using App Engine with Python. My application basically sends vcards (.vcf) by email when users request it.
Indeed, files with .vcf extension are supported by App Engine. I use the mail API to send them as attachment. Before, I stored them as db.Blob().
Problem:
Most of the time, Blackberry users cannot read the vcards sent as attachments by my application. At the bottom of the mail, it displays: "application/X-rimdeviceAddress Book:" and when you click on the file, it says: "This type of attachment cannot be opened on your device".
Exception:
A blackberry that receives a vcard serialized from a Blackberry can open it.
Fortunately, it perfectly works on the iPhone and (most of the time) on Android phones.
As vcards serialized from a Blackberry can be correctly opened by Blackberry users, I guess I'm doing something wrong during the storage and/ or the mail dispatch. Or maybe, the MIME type is not correctly set by App Engine methods...
Can someone give a few leads to investigate this pretty annoying problem (I was expecting a pretty big user base on Blackberry phones...)?
|
Dynamically loading and static loading modules in Django
| 6,569,873 | 0 | 0 | 910 | 0 |
python,django,django-models,loading
|
First, this is a Python question, not a Django one, as modules are a Python concept.
Secondly, there is no such thing as static loading in Python, because it is a dynamic language in essence.
Therefore, loading a module in Python is always dynamic.
The only things you should know are:
once a module is imported, its code is executed;
if you import the module again, Python returns the reference from the first load, so it's fast and does not execute the code again;
you can use import inside a function, but it's not recommended, and the imported module won't be available outside of the function's scope.
The last option is the closest thing to 'dynamic loading', as you can choose what to import at run time (see the sketch below).
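A minimal sketch of choosing what to import at run time, using importlib (available from Python 2.7); the module path is hypothetical:
import importlib

def load_backend(path):
    # the module name is only known at run time
    return importlib.import_module(path)

backend = load_backend('myapp.backends.redis_backend')  # hypothetical module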
| 0 | 0 | 0 | 0 |
2011-07-04T09:41:00.000
| 2 | 0 | false | 6,569,721 | 0 | 0 | 1 | 1 |
Is there any difference between dynamically loading and statically loading modules in Django?
What about efficiency? Can anyone explain the principles behind the mechanisms of dynamic and static module loading in Django?
|
separate django projects on different machines using a common database
| 6,574,660 | 1 | 2 | 253 | 1 |
python,django,deployment,architecture,amazon-web-services
|
This isn't really a Django question; it is more of a Python question.
However, to answer your question: Django is going to have to be able to import these files one way or another. If they are on separate machines, you really should refactor the code out into its own app and then install this app on each of the machines.
The only other way I can think of to do this is to make your own import hook that can import a file from across a network, but that is a really bad idea for a multitude of reasons.
| 0 | 0 | 0 | 0 |
2011-07-04T13:31:00.000
| 1 | 0.197375 | false | 6,572,203 | 0 | 0 | 1 | 1 |
Our site has two separate projects connected to the same database. This is implemented by importing the models from project1 into project2 and using it to access and manipulate data.
This works fine on our test server, but we are planning deployment and we decided we would rather have the projects on two separate machines, with the database on a third one.
I have been looking around for ideas on how to import the model from a project on another machine but that doesn't seem to be possible.
An obvious solution would be to put the models in a separate app and have it on both boxes, but that means code is duplicated and changes have to be applied twice.
I'm looking for suggestions on how to deal with this and am wondering if other people have encountered similar issues. We'll be deploying on AWS if that helps. Thanks.
|
Math Intensive, Calculation Based Website - Which Language Should I Use?
| 6,574,755 | 0 | 3 | 1,004 | 0 |
java,c++,python,math,matlab
|
I think you can use PHP or Java Web.
| 0 | 0 | 0 | 1 |
2011-07-04T18:04:00.000
| 5 | 0 | false | 6,574,740 | 0 | 0 | 1 | 2 |
I am very new to programming. I am familiar with HTML, C++ and learning PHP to start a database.
I want to make a website which tracks a stock price. I have written various algorithms in Matlab however, MATLAB only has a to-Java conversion.
I was wondering what language would be the best to do a lot of calculations. I want my calculations to be done in real time and plotted. Would Java be the best language for this?
I can do the calculations in C++ but I don't know how to put the plots on the website. Likewise I believe I can do everything in Matlab but the conversion looks a little sketchy.
I would be very thankful if someone with experience with Java, or I also heard python, would comment on my post.
|
Math Intensive, Calculation Based Website - Which Language Should I Use?
| 6,574,803 | 0 | 3 | 1,004 | 0 |
java,c++,python,math,matlab
|
I would do the calculations in C++ and write the results to a database; then, using PHP, you can grab them from the same database and show them online. Otherwise, Java can do all of that. Either way, make sure the calculations aren't done on the fly, since that will kill your server, especially with stocks, which can turn into a lot of data.
| 0 | 0 | 0 | 1 |
2011-07-04T18:04:00.000
| 5 | 0 | false | 6,574,740 | 0 | 0 | 1 | 2 |
I am very new to programming. I am familiar with HTML, C++ and learning PHP to start a database.
I want to make a website which tracks a stock price. I have written various algorithms in Matlab however, MATLAB only has a to-Java conversion.
I was wondering what language would be the best to do a lot of calculations. I want my calculations to be done in real time and plotted. Would Java be the best language for this?
I can do the calculations in C++ but I don't know how to put the plots on the website. Likewise I believe I can do everything in Matlab but the conversion looks a little sketchy.
I would be very thankful if someone with experience with Java, or I also heard python, would comment on my post.
|
Implementation of sso with django auth. Is this safe?
| 8,865,656 | 0 | 3 | 530 | 0 |
python,django,django-authentication
|
I don't profess to be a web security expert, but I've done something similar in the past. As long as you properly set the cookie domain (which you claim to be doing) I don't really see any security issues, especially since both sites are on the same domain.
If you really want to be safe I suppose you could set up your own OAuth portal or something, but quite frankly that seems to be overkill.
| 0 | 0 | 0 | 0 |
2011-07-04T19:12:00.000
| 1 | 0 | false | 6,575,280 | 0 | 0 | 1 | 1 |
I'm trying to implement single sign-on using only django auth.
Let's assume two django projects, on different sub-domains: site.com(auth) and app1.site.com(app1)
The auth table in site.com is master. site.com handles: login, logout, account registration, etc.
site.com sets SESSION_COOKIE_DOMAIN to .site.com to allow it to be read by subdomains
app1 will have login_url set to a view in the app1 project, which does the following:
retrieves site.com's session_id value(from cookie)
validates session_id by making a request to: site.com/validate/[session_id]/
If False, redirects to site.com/login?next=[...]
If True, request user data to: site.com/attributes/[session_id]/
site.com/attributes/ delivers a dictionary with all the User values, encrypted using a shared SSO_KEY(encryption done the same way django encodes and decodes session_id)
Now, app1 has a model SSO_User which has two fields, a foreign key to User model and an integer field. The SSO_User models links local auth User to the id of master auth table.
Using the id retrieved from site.com, we check SSO_User for existing local user, if true we simply update the values and login; if non existing, we create the user and SSO_User and login.
app1(or any other sub-domain) can keep their own profile information, without interfering with anything.
It seems simple to implement and safe, but before implementing I wanted some opinions. What do you think?
|
Accessing forms from one website to another, python/django
| 6,576,961 | 0 | 1 | 88 | 0 |
python,django,web-services,authentication
|
Not really sure what you're looking to do as it doesn't make much sense. But you need to validate data provided by users on your site against data available in another database that isn't accessible to your app.
This means you need to send the data your users are providing to you to the other service that is providing the validation. Perhaps this other service provides an API to do this, perhaps it just provides a form you can post the data to (with python urllib2).
Without having a lot more information on what you're looking to do, I can't even venture to guess whether either of these two things is feasible.
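If it comes down to posting a form, a rough Python 2 sketch with urllib2 (the URL and field names are made up; the real service would define them):
import urllib
import urllib2

# hypothetical endpoint and form fields
data = urllib.urlencode({'name': 'Jane Doe', 'member_id': '12345'})
response = urllib2.urlopen('http://example.com/members/validate', data)
result = response.read()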
| 0 | 0 | 0 | 0 |
2011-07-04T23:45:00.000
| 1 | 0 | false | 6,576,909 | 0 | 0 | 1 | 1 |
I'm trying to make a website that requires users to enter information about themselves. In order to check to see if this information is correct, it needs to enter the information on another website (that has an entire database of these types of users). It will then return the results found. How do I do such a thing? Where do I start? I tried googling but I couldn't even think of what this would be called?
|
Is it ok to spawn threads in a wsgi-application?
| 6,587,335 | 13 | 14 | 3,562 | 0 |
python,django,multithreading,wsgi,flask
|
WSGI does not specify the lifetime of an application process (a WSGI application is just a Python callable object). You can run it in a way that is completely independent of the web server, in which case only you control the lifetime.
There is also nothing in the WSGI that would prohibit you from spawning threads, or processes, or doing whatever the hell you want.
| 0 | 1 | 0 | 0 |
2011-07-05T07:53:00.000
| 2 | 1 | false | 6,579,467 | 0 | 0 | 1 | 1 |
To achieve something similar to google app engines 'deferred calls' (i.e., the request is handled, and afterwards the deferred task is handled), i experimented a little and came up with the solution to spawn a thread in which my deferred call is handled.
I am now trying to determine if this is an acceptable way.
Is it possible (according to the WSGI specification) that the process is terminated by the webserver after the actual request is handled, but before all threads run out?
(if there's a better way, that would be also fine)
|
Django installing packages via symlinking
| 6,584,353 | 0 | 0 | 138 | 0 |
python,django
|
If you can, it's better to include the packages as an svn external / git module / etc. That way they're always available, even to others that checkout your project.
Short of that, virtualenv is the better solution. You still "install" the packages, but they are added to the virtual environment (hence the name). You get all the benefits of packages installed right into site-packages, without actually adding anything to site-packages.
| 0 | 0 | 0 | 0 |
2011-07-05T13:41:00.000
| 1 | 0 | false | 6,583,593 | 0 | 0 | 1 | 1 |
Is it possible to "install" third party django packages by sym linking them into your site directory?
For example, I'd like to use the django-registration package. However, I don't want to install django-registration into my site-packages directory. I'd rather just link django-registration/registration into my project directory as /myproject/registration. However, this doesn't seem to work. I get the following:
Error: No module named registration
|
Is it possible to fetch a google profile nickname and use in app engine apps?
| 6,590,472 | 0 | 0 | 246 | 0 |
python,google-app-engine,login
|
No - the User API only provides the user's email address, an app-specific unique ID, and their nickname (Which, as you observe, is derived from the email address).
| 0 | 1 | 0 | 0 |
2011-07-05T16:51:00.000
| 2 | 0 | false | 6,586,106 | 0 | 0 | 1 | 1 |
Currently the method nickname() in the Users class seems to return either the way you initially typed your username when you signed up for Gmail (i.e. "UseRnAme" if that's how you chose to type it, rather than "username" as it would show up when you log in to a Google service) or the entire email address if you signed up for a Google account with another email provider.
Google profiles have a field called Nickname though, is it possible to use that inside an app engine app in any way?
|
Django: how to do get_or_create() in a threadsafe way?
| 6,586,594 | 11 | 24 | 9,325 | 0 |
python,database,django,concurrency,thread-safety
|
This must be a very common situation. How do I handle it in a threadsafe way?
Yes.
The "standard" solution in SQL is to simply attempt to create the record. If it works, that's good. Keep going.
If an attempt to create a record gets a "duplicate" exception from the RDBMS, then do a SELECT and keep going.
Django, however, has an ORM layer with its own cache. So the logic is inverted to make the common case work directly and quickly, and the uncommon case (the duplicate) raise a rare exception.
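A rough Django sketch of that try-create-then-fetch pattern, assuming a hypothetical Tag model with a unique column (depending on the Django version and database you may also need transaction handling around the failed insert):
from django.db import models, IntegrityError

class Tag(models.Model):
    name = models.CharField(max_length=50, unique=True)

def get_or_create_tag(name):
    try:
        # attempt the INSERT first; the unique constraint is the arbiter
        return Tag.objects.create(name=name)
    except IntegrityError:
        # another thread or process won the race -- fetch the existing row
        return Tag.objects.get(name=name)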
| 0 | 0 | 0 | 0 |
2011-07-05T17:31:00.000
| 4 | 1.2 | true | 6,586,552 | 0 | 0 | 1 | 1 |
In my Django app very often I need to do something similar to get_or_create(). E.g.,
User submits a tag. Need to see if that tag already is in the database. If not, create a new record for it. If it is, just update the existing record.
But looking into the doc for get_or_create() it looks like it's not threadsafe. Thread A checks and finds Record X does not exist. Then Thread B checks and finds that Record X does not exist. Now both Thread A and Thread B will create a new Record X.
This must be a very common situation. How do I handle it in a threadsafe way?
|
Django - Mac development, environment hell
| 6,590,829 | 1 | 2 | 2,030 | 0 |
python,django,macos,python-imaging-library
|
Second, would it be easier to set up the entire environment on Ubuntu than it is on a Mac?
To answer this question (though I have never used a Mac): I never had problems setting up a Python environment for Django development on Ubuntu. In any case, you should go with the built-in Python version if possible. Attempting to install any other Python versions usually ends up messy. Luckily, with Ubuntu 11.04 the standard version is already 2.7.
| 0 | 0 | 0 | 0 |
2011-07-05T22:44:00.000
| 5 | 0.039979 | false | 6,589,684 | 1 | 0 | 1 | 2 |
I was trying to setup Django dev environment on Mac and arrived into a hell. It all started when trying to install PIL, which failed after trying 15 or so different recipes I found on blogs. So I wanted to install the Python, this time 2.7, and reinstall setuptools, easy_install, pip from scratch.
After just installing Python 2.7, and easy_install with setuptools for 2.7, this all in turn created an unbelievable mess. Different versions of Python are installed everywhere, easy_install is installed everywhere and points randomly to different python hashbangs (sometimes to #!/usr/bin, #!/usr/local/, #!/Library/...)
Now I can't even do easy_install pip, which I always could. So I'm already in a hell and I haven't even attempted to install MySQL yet.
My question finally is did anyone bump into such problems, it would help enough to know that I'm not alone.
Second, would it be easier to set up the entire environment on Ubuntu than it is on a Mac?
Thirdly, is there any guide that can really clearly explain how to set up but also tear down the stack for Python development on a Mac?
|
Django - Mac development, environment hell
| 6,599,875 | 0 | 2 | 2,030 | 0 |
python,django,macos,python-imaging-library
|
Yes I have had problems with MacOS. I think rather than trying to figure it out I just switched to Ubuntu. I use a mac with Ubuntu installed in VMware Fusion. I have developed on both and prefer the Ubuntu because I'm just more comfortable with installing packages and the file structure.
I love using the VM because I'm never scared of having to start over. I can get a whole new OS installed and get the packages with what I use in just a few hours. Not to mention with 6month rollouts I can do complete installs of new versions instead of updates.
Depending on your production environment, it may be beneficial to use an OS that is similar, if you can install a package on ubuntu desktop, you already know how to do it on ubuntu server.
| 0 | 0 | 0 | 0 |
2011-07-05T22:44:00.000
| 5 | 0 | false | 6,589,684 | 1 | 0 | 1 | 2 |
I was trying to setup Django dev environment on Mac and arrived into a hell. It all started when trying to install PIL, which failed after trying 15 or so different recipes I found on blogs. So I wanted to install the Python, this time 2.7, and reinstall setuptools, easy_install, pip from scratch.
After just installing Python 2.7, and easy_install with setuptools for 2.7, this all in turn created such a mess that is unbelievable. Different version of Python are installed everywhere, easy_install is installed everywhere and points randomly to different python hashbangs (sometimes to #!/usr/bin, #!/usr/local/, #!/Library/...)
Now I can't even do easy_install pip, which I always could. So I'm already in a hell and I haven't even attempted to install MySQL yet.
My question finally is did anyone bump into such problems, it would help enough to know that I'm not alone.
Second, would it be easier to set up the entire environment on Ubuntu than it is on a Mac?
Thirdly, is there any guide that can really clearly explain how to set up but also tear down the stack for Python development on a Mac?
|
Is it bad to have my virtualenv directory inside my git repository?
| 6,591,061 | 39 | 367 | 149,954 | 0 |
python,django,virtualenv
|
I used to do the same until I started using libraries that are compiled differently depending on the environment such as PyCrypto. My PyCrypto mac wouldn't work on Cygwin wouldn't work on Ubuntu.
It becomes an utter nightmare to manage the repository.
Either way I found it easier to manage the pip freeze & a requirements file than having it all in git. It's cleaner too since you get to avoid the commit spam for thousands of files as those libraries get updated...
| 0 | 0 | 0 | 0 |
2011-07-06T01:42:00.000
| 8 | 1 | false | 6,590,688 | 1 | 0 | 1 | 4 |
I'm thinking about putting the virtualenv for a Django web app I am making inside my git repository for the app. It seems like an easy way to keep deploy's simple and easy. Is there any reason why I shouldn't do this?
|
Is it bad to have my virtualenv directory inside my git repository?
| 6,590,779 | 24 | 367 | 149,954 | 0 |
python,django,virtualenv
|
I think one of the main problems which occur is that the virtualenv might not be usable by other people. Reason is that it always uses absolute paths. So if you virtualenv was for example in /home/lyle/myenv/ it will assume the same for all other people using this repository (it must be exactly the same absolute path). You can't presume people using the same directory structure as you.
Better practice is for everybody to set up their own environment (be it with or without virtualenv) and install libraries there. That also makes your code more usable across different platforms (Linux/Windows/Mac), also because virtualenv is installed differently on each of them.
| 0 | 0 | 0 | 0 |
2011-07-06T01:42:00.000
| 8 | 1 | false | 6,590,688 | 1 | 0 | 1 | 4 |
I'm thinking about putting the virtualenv for a Django web app I am making inside my git repository for the app. It seems like an easy way to keep deploy's simple and easy. Is there any reason why I shouldn't do this?
|
Is it bad to have my virtualenv directory inside my git repository?
| 42,731,272 | 0 | 367 | 149,954 | 0 |
python,django,virtualenv
|
If you are just setting up a development environment, then use a pip freeze file, because that keeps the git repo clean.
Then, if you are doing a production deployment, check in the whole venv folder. That will make your deployment more reproducible, remove the need for those libxxx-dev packages, and avoid internet issues.
So there are two repos: one for your main source code, which includes a requirements.txt, and an env repo, which contains the whole venv folder.
| 0 | 0 | 0 | 0 |
2011-07-06T01:42:00.000
| 8 | 0 | false | 6,590,688 | 1 | 0 | 1 | 4 |
I'm thinking about putting the virtualenv for a Django web app I am making inside my git repository for the app. It seems like an easy way to keep deploy's simple and easy. Is there any reason why I shouldn't do this?
|
Is it bad to have my virtualenv directory inside my git repository?
| 59,330,598 | 0 | 367 | 149,954 | 0 |
python,django,virtualenv
|
I think the best approach is to install the virtual environment in a path inside the repository folder; it may even be better to use a subdirectory dedicated to the environment (I accidentally deleted my entire project when force-installing a virtual environment in the repository root folder; it's good that I had the latest version of the project saved on GitHub).
Either the automated installer or the documentation should indicate the virtualenv path as a relative path; this way you won't run into problems when sharing the project with other people. As for the packages, the packages used should be saved with pip freeze > requirements.txt.
| 0 | 0 | 0 | 0 |
2011-07-06T01:42:00.000
| 8 | 0 | false | 6,590,688 | 1 | 0 | 1 | 4 |
I'm thinking about putting the virtualenv for a Django web app I am making inside my git repository for the app. It seems like an easy way to keep deploy's simple and easy. Is there any reason why I shouldn't do this?
|
Following links, Scrapy web crawler framework
| 6,591,511 | 1 | 9 | 6,599 | 0 |
python,web-crawler,scrapy
|
If you want selective crawling, like fetching "Next" links for pagination etc., it's better to write your own crawler. But for general crawling, you should use CrawlSpider and filter out the links that you don't need to follow using Rules and the process_links function.
Take a look at the crawlspider code in \scrapy\contrib\spiders\crawl.py; it isn't too complicated.
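A small sketch of the rules-based approach, using the scrapy.contrib paths mentioned above (the spider name, URL, and allow pattern are made up, and exact import paths depend on your Scrapy version):
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class PaginationSpider(CrawlSpider):
    name = 'pagination'                      # hypothetical spider
    start_urls = ['http://example.com/list']

    rules = (
        # follow only "next page"-style links and parse every page reached
        Rule(SgmlLinkExtractor(allow=(r'\?page=\d+',)),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        self.log('Visited %s' % response.url)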
| 0 | 0 | 1 | 0 |
2011-07-06T03:27:00.000
| 2 | 0.099668 | false | 6,591,255 | 0 | 0 | 1 | 1 |
After several readings of the Scrapy docs, I'm still not catching the difference between using CrawlSpider rules and implementing my own link extraction mechanism in the callback method.
I'm about to write a new web crawler using the latter approach, but only because I had a bad experience in a past project using rules. I'd really like to know exactly what I'm doing and why.
Anyone familiar with this tool?
Thanks for your help!
|
Django In-Memory SQLite3 Database
| 6,600,219 | 3 | 1 | 1,200 | 1 |
python,database,django,sqlite,in-memory-database
|
Disconnect django.contrib.auth.management.create_superuser from the post_syncdb signal, and instead connect your own function that creates and saves a new superuser User with the desired password.
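A hedged sketch of that wiring (the exact signal arguments and dispatch_uid depend on your Django version, and the credentials here are placeholders), typically placed somewhere that gets imported during syncdb:
from django.db.models import signals
from django.contrib.auth import models as auth_models
from django.contrib.auth.management import create_superuser
from django.contrib.auth.models import User

# stop the interactive superuser prompt after syncdb
signals.post_syncdb.disconnect(
    create_superuser,
    sender=auth_models,
    dispatch_uid='django.contrib.auth.management.create_superuser')

def create_fixed_superuser(app, created_models, verbosity, **kwargs):
    # hard-coded credentials -- acceptable for a throwaway in-memory database
    if not User.objects.filter(username='admin').exists():
        User.objects.create_superuser('admin', 'admin@example.com', 'secret')

signals.post_syncdb.connect(create_fixed_superuser, sender=auth_models)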
| 0 | 0 | 0 | 0 |
2011-07-06T16:20:00.000
| 3 | 1.2 | true | 6,599,716 | 0 | 0 | 1 | 1 |
I'm trying to have a purely in-memory SQLite database in Django, and I think I have it working, except for an annoying problem:
I need to run syncdb before using the database, which isn't too much of a problem. The problem is that it needs to create a superuser (in the auth_user table, I think) which requires interactive input.
For my purposes, I don't want this -- I just want to create it in memory, and I really don't care about the password because I'm the only user. :) I just want to hard-code a password somewhere, but I have no idea how to do this programmatically.
Any ideas?
|
Django many-to-many relationships: prevailing naming convention
| 6,601,656 | 4 | 1 | 336 | 0 |
python,django
|
The _set suffix indicates that the attribute is a manager rather than a model, and should be retained in order to minimize confusion.
| 0 | 0 | 0 | 0 |
2011-07-06T19:08:00.000
| 1 | 1.2 | true | 6,601,626 | 0 | 0 | 1 | 1 |
I am hoping to hear from some people with experience developing Django sites as part of some sort of team.
What is the naming convention most commonly seen for many-to-many relationships? Do most people stick with the entry_set default, or is it more common and easier to use a more symmetrical related_name='entries' approach?
|
How can I show a "Hello"+username Screen to All other users who are not superuser in Django admin?
| 6,606,681 | 0 | 1 | 85 | 0 |
python,django,django-admin,django-views
|
You should really restrict access to the admin.
You can create a decorator that adds the functionality of @superuser_required to the admin site, or simply allow all people that have is_staff, as the admin does otherwise.
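One way to get a @superuser_required decorator plus the greeting screen, using Django's user_passes_test (the URL and view names are hypothetical):
from django.contrib.auth.decorators import user_passes_test
from django.http import HttpResponse

# send non-superusers to the hello page instead
superuser_required = user_passes_test(lambda u: u.is_superuser,
                                      login_url='/hello/')

def hello(request):
    # what staff users without superuser status get to see
    return HttpResponse('Hello ' + request.user.username)

@superuser_required
def restricted_view(request):
    return HttpResponse('Only superusers see this.')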
| 0 | 0 | 0 | 0 |
2011-07-07T05:32:00.000
| 1 | 1.2 | true | 6,606,224 | 0 | 0 | 1 | 1 |
I want only the superuser to see all models and to edit/add them. But if a normal user (e.g. a user with staff status) logs in, I want to show them a screen saying "Hello" + username.
|
use .pyc files via Jython
| 6,609,145 | 3 | 0 | 1,308 | 0 |
java,python,jython,pyc
|
The 'compiled' python code '.pyc' files are implementation-specific. Even CPython (the standard Python implementation) is not able to import .pyc files generated by a different version of CPython. And is not supposed to. So, I would be surprised if Jython had an ability to run .pyc files created by any of CPython version.
'.pyc' files are not the same as Java bytecode (which is designed to be portable).
Decompilation seems the only way. I think there are some .pyc decompilers available, they should be able to generate Python code that could be run by Jython.
| 0 | 0 | 0 | 0 |
2011-07-07T08:34:00.000
| 2 | 0.291313 | false | 6,607,858 | 1 | 0 | 1 | 1 |
I am working on building a web interface for a Python tool. It's being designed using J2EE (Spring).
In the process, I need to make calls to Python functions and hence I am using Jython for the same.
But for some modules I don't have the Python source files, I only have the .pyc files, and a document listing the methods of that file. I need to know how I can call these functions inside the .pyc file using jython.
I have tried to decompile the Python files, but since they have been compiled with Python 2.7, I am not able to find a decompiler to do the job.
|
Django: Context processors in views, bad practice?
| 6,622,725 | 1 | 0 | 2,056 | 0 |
python,django
|
Option 1. Delegate the context processor's work to another function, and call that function.
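A rough sketch of that (the FacebookUser wrapper, cookie name and template are placeholders, not real Facebook SDK calls):

from django.shortcuts import render_to_response

class FacebookUser(object):
    """Stand-in for whatever your Facebook SDK wrapper returns."""
    def __init__(self, cookie):
        self.cookie = cookie

def get_facebook_user(request):
    # Shared helper: the single place that parses the cookies.
    cookie = request.COOKIES.get("fb_cookie")      # hypothetical cookie name
    return FacebookUser(cookie) if cookie else None

def facebook_user(request):
    # The context processor just delegates to the helper.
    return {"facebook_user": get_facebook_user(request)}

def some_view(request):
    fb_user = get_facebook_user(request)           # same logic, no duplicated parsing
    return render_to_response("home.html", {"facebook_user": fb_user})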
| 0 | 0 | 0 | 0 |
2011-07-08T09:40:00.000
| 3 | 0.066568 | false | 6,622,630 | 0 | 0 | 1 | 1 |
In my Django project, I have a context processor which returns a FacebookUser object based on the cookies present in the request (using Facebook's Python SDK). This means that when the user is logged in, their corresponding FacebookUser object is always available in my templates.
However, what should I do when I want to access this in views too?
Option 1: In each view where I want to access this FacebookUser object, call the context processor method, or a method that does exactly the same thing.
Option 2: Again, in each view, call RequestContext(request) in order to get access to the existing object added to the context by the context processor.
Which is better practice, and are there any recommended ways of working here?
|
Django view gets called twice... sometimes
| 6,632,307 | 0 | 1 | 876 | 0 |
python,django,http,request
|
It's possible your view function is called once, but you are logging twice. I've had problems before where my logging add_handler call was being run twice, so the log had the same handler twice, so one log.debug() call would result in the same message in the log twice.
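A defensive sketch with the standard logging module (logger name and file path are examples) that avoids attaching the same handler twice when the module is imported more than once:

import logging

logger = logging.getLogger("myapp")
if not logger.handlers:                 # guard: don't add the handler again on re-import
    handler = logging.FileHandler("/tmp/myapp.log")
    handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
    logger.addHandler(handler)
logger.setLevel(logging.DEBUG)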
| 0 | 0 | 0 | 0 |
2011-07-08T23:28:00.000
| 1 | 0 | false | 6,631,601 | 0 | 0 | 1 | 1 |
I have several logging.debug statements in the home view of a web site, mainly to log the start time and the end time of a bunch of sql queries.
However, sometimes, I see those sets of debug statements pop up twice in a row in the debug log. Of course, during that time only one user has made ONE request (as in refresh, hit enter, etc.)
Any ideas on what might cause this? We think this may be related to another problem we're having.
|
cannot import modules from subdirectories that are in the django app parent directory
| 16,939,968 | 0 | 1 | 2,142 | 0 |
python,django
|
Put an import in the __init__.py file of your app (not of the subdirectory).
I have this structure, with the following import in the app's __init__.py file:
app/views_dir/group_views.py
The following line is the only line of code in my __init__.py file:
from views_dir.group_views import test
| 0 | 0 | 0 | 0 |
2011-07-09T22:04:00.000
| 2 | 0 | false | 6,637,783 | 0 | 0 | 1 | 2 |
Django doesn't seem to be able to import modules from a subdirectory.
I've got a file structure like this:
->project_folder
---->app_folder
------->subdir
when I store a script in app_folder, i can import it, so the command 'from project_folder.app_folder.module import *' works, but I get a module not found error when I do 'from project_folder.app_folder.subdir.module import *'
How do I get around this? I just want to keep my files nicely organized
|
cannot import modules from subdirectories that are in the django app parent directory
| 6,637,800 | 9 | 1 | 2,142 | 0 |
python,django
|
have you got an __init__.py in your subdir? Python needs this file to treat a directory as a package.
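For example (an empty __init__.py per directory is enough; the layout below mirrors the question):

# project_folder/
#     __init__.py
#     app_folder/
#         __init__.py
#         subdir/
#             __init__.py
#             module.py
from project_folder.app_folder.subdir.module import *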
| 0 | 0 | 0 | 0 |
2011-07-09T22:04:00.000
| 2 | 1 | false | 6,637,783 | 0 | 0 | 1 | 2 |
Django doesn't seem to be able to import modules from a subdirectory.
I've got a file structure like this:
->project_folder
---->app_folder
------->subdir
when I store a script in app_folder, i can import it, so the command 'from project_folder.app_folder.module import *' works, but I get a module not found error when I do 'from project_folder.app_folder.subdir.module import *'
How do I get around this? I just want to keep my files nicely organized
|
Live countdown clock with django and sql?
| 6,639,561 | 2 | 0 | 2,127 | 1 |
javascript,python,django,time,countdown
|
I don't think this question has anything to do with SQL, really--except that you might retrieve an expiration time from SQL. What you really care about is just how to display the timeout real-time in the browser, right?
Obviously the easiest way is just to send a "seconds remaining" counter to the page, either on the initial load, or as part of an AJAX request, then use Javascript to display the timer, and update it every second with the current value. I would opt for using a "seconds remaining" counter rather than an "end datetime", because you can't trust a browser's clock to be set correctly--but you probably can trust it to count down seconds correctly.
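As a server-side sketch of that (the Auction model, field names and template are invented, not from the question), the view just computes the seconds remaining and hands it to the page for a JavaScript countdown:

import datetime
from django.shortcuts import render_to_response
from myapp.models import Auction          # hypothetical model with an end_time field

def auction_detail(request, auction_id):
    auction = Auction.objects.get(pk=auction_id)
    delta = auction.end_time - datetime.datetime.now()
    seconds_remaining = max(0, delta.days * 86400 + delta.seconds)
    return render_to_response("auction.html",
                              {"seconds_remaining": seconds_remaining})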
If you don't trust Javascript, or the client's clock, to be accurate, you could periodically re-send the current "seconds remaining" value to the browser via AJAX. I wouldn't do this every second, maybe every 15 or 60 seconds at most.
As for deleting/moving data when the clock expires, you'll need to do all of that in Javascript.
I'm not 100% sure I answered all of your questions, but your questions seem a bit scattered anyway. If you need more clarification on the theory of operation, please ask.
| 0 | 0 | 0 | 0 |
2011-07-10T04:52:00.000
| 2 | 0.197375 | false | 6,639,247 | 0 | 0 | 1 | 2 |
If I make a live countdown clock like ebay, how do I do this with django and sql? I'm assuming running a function in django or in sql over and over every second to check the time would be horribly inefficient.
Is this even a plausible strategy?
Or is this the way they do it:
When a page loads, it takes the end datetime from the server and runs a javascript countdown clock against it on the user machine?
If so, how do you do the countdown clock with javascript? And how would I be able to delete/move data once the time limit is over without a user page load? Or is it absolutely necessary for the user to load the page to check the time limit to create an efficient countdown clock?
|
Live countdown clock with django and sql?
| 6,639,878 | 0 | 0 | 2,127 | 1 |
javascript,python,django,time,countdown
|
I also encountered the same problem a while ago.
First of all, your problem is related to neither Django nor SQL. It is a general problem, and it is not very easy to implement because of the overhead on the server.
One solution that comes to mind is keeping the start time of the process in the database.
When someone requests the remaining time, read it from the database, subtract the current time, serve that value, and in the browser initialize your JavaScript countdown function with it, counting down every second. After that, repeat the same operation with AJAX without waiting for a user request.
However, there could be other implementations depending on your application. If you explain your application in detail, there could be other solutions.
For example, if you implement a questionnaire with a time limit, then on every answer submit you should pass the calculated JavaScript value for that moment.
| 0 | 0 | 0 | 0 |
2011-07-10T04:52:00.000
| 2 | 0 | false | 6,639,247 | 0 | 0 | 1 | 2 |
If I make a live countdown clock like ebay, how do I do this with django and sql? I'm assuming running a function in django or in sql over and over every second to check the time would be horribly inefficient.
Is this even a plausible strategy?
Or is this the way they do it:
When a page loads, it takes the end datetime from the server and runs a javascript countdown clock against it on the user machine?
If so, how do you do the countdown clock with javascript? And how would I be able to delete/move data once the time limit is over without a user page load? Or is it absolutely necessary for the user to load the page to check the time limit to create an efficient countdown clock?
|
Feedparser Date parameter/Time-specific query
| 6,685,581 | 2 | 3 | 458 | 0 |
python,rss,feedparser
|
I don't understand the question as written. An RSS feed is an XML document. Feedparser retrieves and parses this entire document. It can't query just part of a document. It's up to you to write the code around feedparser to extract what you want (e.g., for each entry, you can look at d.entries[0].date and compare it with another date/time stamp or range to determine if you're interested in it or not).
I don't know what you mean by looking for entries newer then feed.updated, since there shouldn't be any (the newest entries would have been entered when the feed was last updated).
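A small sketch of doing that filtering yourself (the feed URL and the one-week cutoff are just examples):

import time
import feedparser

d = feedparser.parse("http://example.com/feed.rss")
cutoff = time.gmtime(time.time() - 7 * 24 * 3600)   # only entries from the last week
recent = [e for e in d.entries
          if getattr(e, "updated_parsed", None) and e.updated_parsed > cutoff]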
| 0 | 0 | 1 | 0 |
2011-07-10T17:16:00.000
| 1 | 0.379949 | false | 6,642,570 | 0 | 0 | 1 | 1 |
Is there an option in feedparser to query only new entries, newer than feed.updated?
Or can you set a parameter to get only entries from a specific date/today/week etc.? (Safari's RSS reader provides these options...)
|
What's the best way to assemble Javascript modules server-side in Python?
| 6,648,471 | 0 | 1 | 197 | 0 |
javascript,python
|
For one of my projects I simply wrote a small build script which handles both CSS and JS files.
For JS it reads the list of JS files from a text file, concatenates all scripts and submits them to google for minification.
For CSS I take my central CSS file and recursively replace all @import statements with the referenced file (since I only use absolute paths, no path rewriting is necessary); after this I have a single CSS file which is then run through cssmin.
Since you are using Django you could probably write such a script in a way to make it easily callable through manage.py
| 0 | 0 | 0 | 0 |
2011-07-11T09:42:00.000
| 3 | 1.2 | true | 6,648,312 | 1 | 0 | 1 | 1 |
I'm planning an extensible web app where base objects are extended by JavaScript modules which plug in. Each JavaScript object may just be an independent bit of code, or I might be building a large object to simulate a module namespace. I'll be using a Django back-end and Backbone.js, since you asked.
The system should be easily modular, so adding a new module should be a question of just dropping in another file or database row or whatever. The person installing the module shouldn't have to edit a large JavaScript file manually (or run a special script, in an ideal world).
I have two options: just serve each JS file separately or get the server to assemble them. The first option will get ungainly when a large number of plugins is reached.
I'm looking into the best way to implement the second option: assembling a JavaScript file from a lot of small JavaScript snippets. I could just do some blind string concatenation, but there may be a Python library that can do this and take account of problems I haven't foreseen.
|
MySQLdb to Excel
| 6,650,011 | 0 | 0 | 1,028 | 1 |
python,mysql,django,excel
|
phpMyAdmin has an Export tab, and you can export in CSV. This can be imported into Excel.
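If you would rather do it from the Django side, a minimal sketch using the csv module (model and field names are placeholders):

import csv
from django.http import HttpResponse
from myapp.models import Product          # hypothetical model

def export_csv(request):
    response = HttpResponse(mimetype="text/csv")   # content_type= on newer Django
    response["Content-Disposition"] = "attachment; filename=export.csv"
    writer = csv.writer(response)
    writer.writerow(["id", "name", "price"])
    for p in Product.objects.all():
        writer.writerow([p.id, p.name, p.price])
    return response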
| 0 | 0 | 0 | 0 |
2011-07-11T12:20:00.000
| 4 | 0 | false | 6,649,990 | 0 | 0 | 1 | 1 |
I have a Django project which has a mysql database backend. How can I export contents from my db to an Excel (xls, xlsx) format?
|
Django: split forms.py in several files
| 34,145,881 | 0 | 2 | 1,774 | 0 |
python,django,django-forms
|
If you really want to make as few changes as possible, splitting forms.py into multiple files by function (forms_user.py, forms_product.py) is the easiest.
If you have time to refactor or are starting a new project, the best solution is to make a new django app for each component of your project (e.g. users, projects...) each with its own forms.py file. If a single django app gets too big lots of things stop scaling (models.py, views.py etc.) and maintenance gets harder across the board.
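One way to wire up the first option (a sketch; file and app names are examples) is to turn forms.py into a package whose __init__.py re-exports everything, so existing imports keep working:

# myapp/forms/__init__.py   (forms.py becomes the package myapp/forms/)
from myapp.forms.forms_user import *      # re-export so that
from myapp.forms.forms_product import *   # "from myapp.forms import UserForm" still works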
| 0 | 0 | 0 | 0 |
2011-07-12T08:47:00.000
| 3 | 0 | false | 6,661,592 | 0 | 0 | 1 | 1 |
I have a very long forms.py and I'd like to split it into smaller parts with as few changes as possible in the code.
Any ideas?
|
Does zc.buildout offer a lot more than pip when dealing with packages/eggs
| 6,675,801 | 7 | 4 | 354 | 0 |
python,django,pip,buildout,egg
|
Buildout does much more than pip; each part is a separate recipe which can run arbitrary python code to get your tasks accomplished. Coupled with dependencies between the parts and update detection, zc.buildout is more comparable to make than to pip.
For example, using the zc.recipe.cmmi recipe, you can download and compile arbitrary "configure; make; make install" packages. There are recipes to generate files from templates, or create symlinks, or install specific software packages and configure these with buildout-controlled settings all in one step.
Installing eggs according to dependencies is just one of the tricks that zc.buildout supports.
| 0 | 0 | 0 | 0 |
2011-07-13T06:34:00.000
| 1 | 1.2 | true | 6,674,946 | 1 | 0 | 1 | 1 |
We're a development team working on a Django site. Recently we've begun using zc.buildout inside a virtualenv. I can see how virtualenv helps you by making a sandboxed environment. After creating a sandbox, one can simply use pip to install the necessary packages/eggs. I've read that pip can load all the eggs from a requirements file. This has made me question the benefits and additional functionality of zc.buildout. zc.buildout downloads the eggs that you mention in the buildout.cfg file but, as I wrote, you can already do that using pip and a requirements file. Does zc.buildout do something more that I'm missing? I can already do all my automation using Fabric.
|
Twill alternative for integration testing
| 6,681,186 | 2 | 3 | 1,072 | 0 |
python,google-app-engine,integration-testing,tipfy,twill
|
There are not many alternatives for headless JS testing, you could try selenium 2 web driver. Good luck :)
| 0 | 0 | 0 | 1 |
2011-07-13T12:02:00.000
| 1 | 1.2 | true | 6,678,523 | 0 | 0 | 1 | 1 |
I am using twill to do integration testing for an AppEngine (using the tipfy micro framework) application, but unfortunately twill is not maintained and I cannot test PUT and DELETE requests.
Is there any similar solution?
I am thinking of using PhantomJS; there are some Python bindings and it can execute JS (as it is a headless WebKit), but I have not found much.
|
How to directly publish only child items of my Container type in Plone?
| 6,680,001 | 4 | 8 | 304 | 0 |
python,workflow,plone
|
Create a new automatic transition in the workflow you're using that has the guard:
python:container.meta_type == 'ATFolder'
this will then only fire if the parent object is of the standard 'Folder' type (note the meta type and the type name are not the same).
The downside of this is that it will be fired relatively early in the creation process, so the user will see an error message if they haven't got enough permissions to finish creation on the published object.
If this doesn't meet what you want I think the Event is your closest bet.
| 0 | 0 | 0 | 0 |
2011-07-13T13:18:00.000
| 3 | 0.26052 | false | 6,679,530 | 0 | 0 | 1 | 1 |
I have a custom folderish Dexterity content-type in Plone. It can have only Documents as children. I want these documents to be directly published as they are created.
I can achieve this easily by setting an appropriate workflow for the Document type, but that would affect every document in my site. I want only the ones inside my container type to be directly published.
Two options come to my mind:
Custom page
Create basically just a copy of the stock Document type and set its workflow to something that has only published state.
Event
Add IObjectAdded event for Documents and check if the parent of the new Document is my container type and do manual publishing in python code.
Neither sounds too nice. Do I have other options?
|
Problem running PyDev-developed apps in terminal
| 6,681,756 | 2 | 2 | 408 | 0 |
python,eclipse,terminal,pydev
|
The key here is that PyDev and Eclipse manage a custom Python Path when you're launching within Eclipse. You can modify your environment variables to contain a more complete PYTHONPATH value that contains the locations where you're importing from, or you can use sys.path.append() to add directories to the path at run time so that the imports can be resolved.
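For instance, a sketch of the run-time approach at the top of the entry script (the relative layout is an assumption):

import os
import sys

# Make the project root importable when the script is run from the terminal.
PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if PROJECT_ROOT not in sys.path:
    sys.path.append(PROJECT_ROOT)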
| 0 | 1 | 0 | 1 |
2011-07-13T15:45:00.000
| 1 | 1.2 | true | 6,681,690 | 0 | 0 | 1 | 1 |
I'm having some import problems with an application I developed in Python with Eclipse/PyDev.
Running the app from within Eclipse is no problem, but when I try running it through the Linux terminal the imports (which come from other folders, i.e. packages in Eclipse) are broken and I get an ImportError: No module named xxx.
From previous experiences developing Java-apps in Eclipse I always solved this through exporting the project to a runnable jar-file but this isn't an option with Python.
Is there a way of circumventing this? I'd rather not put all my .py-files in a single folder since I very much like the package-system (guess Java has damaged me). Can I change the import statement to make it work in both Eclipse and the terminal or do I have to abandon PyDev if I want this to work in the terminal?
Thanks for any help!
Slim
|
issuing dos commands from inside ruby on rails controller in the same dos session
| 6,685,611 | 1 | 0 | 95 | 0 |
python,ruby-on-rails,session,command-line,dos
|
What you are proposing could be possible if Rails was friendlier about forked processes. A cleaner and better solution would be to write a python daemon that you could query so that you don't incur the startup penalty. (This could be a web-service or a daemon you communicate with standard network sockets or whatever).
| 0 | 0 | 0 | 1 |
2011-07-13T20:54:00.000
| 1 | 1.2 | true | 6,685,530 | 0 | 0 | 1 | 1 |
Here is my scenario.
I have a ajax call in my web site to find the elevation at particular point. Once this point comes into an action of a controller in Ruby on rails, I have to use python on command line to find the elevation.
The following sequence of commands in DOS does that for me.
python (starts a python session)
import arcpy (takes a lot of time)
function call (very fast).
Now if I put this into a script and run it, I do get the result, but its very slow, because the 'import' step takes a lot of time. But the actual function takes less than a second.
As all this is supposed to happen behind an Ajax call on a RoR web site, such a large delay is unacceptable.
Question:
Is it possible for me in Ror to open a 'command line session' when the application loads and issue the first two commands, and then use this session every time a request comes in a controller's action, and issue the third command, and return its output?
If yes can someone please post some samples?
Thanks
Shaunak
|
How to preserve database connection in a python web server
| 6,698,054 | 0 | 21 | 11,438 | 1 |
python,mysql,flask
|
In my experience, it's often a good idea to close connections frequently. In particular, MySQL likes to close connections that have been idle for a while, and sometimes that can leave the persistent connection in a stale state that can make the application unresponsive.
What you really want to do is optimize the "dead connection time", the fraction of the time a connection is up but isn't doing any work. In the case of creating a new connection with every request, that dead time is really just the setup and teardown time. If only make a connection once (per thread), and it never goes bad, then the dead time is the idle time.
When your application is serving only a few requests, the number of connections that occur will also be small, and so there's not much advantage of keeping a connection open, but idle. On the other extreme, when the application is very busy, connections are almost never idle, and closing a connection that will just be reopened immediately is also wasted. In the middle, when new requests sometimes follow in flight requests, but sometimes not, you'll have to do some performance tuning on things like pool size, request timeout, and so on.
A very busy app, which uses a connection pool to keep connections open will only ever see one kind of dead time; waiting for requests that will never return because the connection has gone bad. A simple solution to this problem is to execute a known, good query (which in MySQL is spelled SELECT 1) before providing a connection from the pool to a request and recycle the connection if it doesn't return quickly.
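A sketch of that check with MySQLdb (the connection parameters are placeholders):

import MySQLdb

def checkout(conn):
    """Return a usable connection, recycling it if the cached one went stale."""
    try:
        conn.cursor().execute("SELECT 1")
        return conn
    except (AttributeError, MySQLdb.OperationalError):
        # No connection yet, or the server dropped it: open a fresh one.
        return MySQLdb.connect(host="localhost", user="app",
                               passwd="secret", db="appdb")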
| 0 | 0 | 0 | 0 |
2011-07-14T04:13:00.000
| 3 | 0 | false | 6,688,413 | 0 | 0 | 1 | 1 |
I am looking at the Flask tutorial, and it suggests creating a new database connection for each web request. Is that the right way to do things? I always thought that the database connection should only be created once for each thread. Can that be done, while keeping the application thread-safe, with Flask or other Python web servers?
|
How to allow only selected user login in gae+python application?
| 6,697,664 | 2 | 3 | 755 | 0 |
python,google-app-engine,google-cloud-datastore
|
You basically need to do it in two steps:
Do what systempuntoout's answer said to only allow logged-in users to see your site.
On each of your routes (URL handlers), the first step should be to get their user object and check if they are on a list you're keeping of users allowed to see your app. For a first run, you could just have the list be a global variable, but this isn't very flexible (it makes you redeploy your app every time you want to update the list), so for a second run you should refactor it to perhaps read from the Datastore to see if a user is in the allowed list or not.
| 0 | 1 | 0 | 0 |
2011-07-14T14:17:00.000
| 2 | 1.2 | true | 6,694,662 | 0 | 0 | 1 | 1 |
I want to upload my application to Google App Engine
and want it to be usable only by selected users,
so I want to know how that is possible.
I want to use the users' Gmail accounts.
|
Show only python function def in PyDev Eclipse/Aptana
| 6,700,834 | 2 | 0 | 1,098 | 0 |
python,eclipse,pydev,aptana,outline-view
|
Look in Window > Preferences > PyDev > Editor > Code Folding. Enable Folding for Function Definitions. Then hit Ctrl-9 and Ctrl-0 to fold and unfold your code.
Edit: You can also use Ctrl-- (minus) and Ctrl-= to fold and unfold single levels.
| 0 | 0 | 0 | 1 |
2011-07-14T21:56:00.000
| 2 | 0.197375 | false | 6,700,416 | 0 | 0 | 1 | 1 |
Is there a way, in a huge Python file, to see only the function def you are interested in? I remember Eclipse has an option to do this in Java and it was pretty helpful. How about for Python in PyDev/Aptana Studio 3?
|
eclipse with reportlab
| 8,851,849 | 0 | 0 | 318 | 0 |
python,django,eclipse,eclipse-plugin,reportlab
|
rlextra is a proprietary addition on top of ReportLab. You only get the .pyc files but not the source .py files when you receive the package. I do remember a packaging issue with datacharts.pyc on version 2.4 that was reported by a client and fixed on the day. If you contact ReportLab they will be more than happy to help you.
| 0 | 0 | 0 | 1 |
2011-07-14T23:20:00.000
| 1 | 0 | false | 6,701,114 | 0 | 0 | 1 | 1 |
I have an application made in Django which finally uses Reportlab to generate the pdf file by accessing the data from some tables.
There are some imports like:
from rlextra.graphics.guiedit.datacharts import DataAwareDrawing, ODBCDataSource, DataAssociation
from reportlab.graphics.charts.barcharts import VerticalBarChart3D
from reportlab.graphics.shapes import _DrawingEditorMixin
in my reportlab file. I am using Eclipse for building the Django application. When I included the reportlab .py file in the application, it shows errors that these imports cannot be resolved.
I included the rlextra and reportlab folders in the application, so the last two imports can be resolved, but the first one couldn't be.
Digging through the rlextra folder, I found that the guiedit folder does not contain a datacharts folder but has a .pyc file of that name. My Eclipse does not understand what DataAwareDrawing, ODBCDataSource, and DataAssociation mean.
When I run the same reportlab .py outside Eclipse, separately as a Python file, it does the work properly. But Eclipse does not understand what is required and where to get it from.
|
How To Transfer data from a python web application to a java desktop applaction
| 6,701,836 | 1 | 0 | 747 | 0 |
java,python,port,web-applications
|
Why not use sockets in Python too and send the data to the Java server? Java does not know that the other end is Python; what it reads is just data (bytes). I have done this, and it works seamlessly.
See Python's struct module for more details on converting datatypes.
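A bare-bones sketch (host, port and message layout are assumptions, not anything the Java side mandates):

import socket
import struct

message = "hello from python"
payload = struct.pack("!I", len(message)) + message   # 4-byte big-endian length prefix
sock = socket.create_connection(("localhost", 9000))
sock.sendall(payload)
sock.close()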
| 0 | 0 | 0 | 0 |
2011-07-15T01:12:00.000
| 3 | 0.066568 | false | 6,701,732 | 0 | 0 | 1 | 1 |
I want some help on a way to transfer data from a python web application to a java desktop application.
What I am doing is having java listen on a port and receive data. But I have no idea how I would send data from python to an open port on a server.
My question is: how would I send data from a Python web app to an open port on a computer? And would there be any problems, such as data types or anything else?
|
Do i always need to start the new site in django by "django-admin"
| 6,706,799 | 3 | 2 | 53 | 0 |
python,django,django-admin
|
No you don't need to use startproject. You can copy files from another project or write the skeleton files yourself. You probably want to have a unique SECRET_KEY setting though.
| 0 | 0 | 0 | 0 |
2011-07-15T12:11:00.000
| 2 | 1.2 | true | 6,706,765 | 0 | 0 | 1 | 2 |
When I need to start a new Django project I use django-admin startproject site1 and it then creates the skeleton files.
Then I use syncdb.
I want to know: if I just copy the files from a previous site, e.g. site1, rename the folder, make the necessary changes in settings and URLs, and then use syncdb, will my site work? Or
do I always have to use startproject, i.e. does it do some database inserting other than making the skeleton files?
|
Do i always need to start the new site in django by "django-admin"
| 6,706,792 | 1 | 2 | 53 | 0 |
python,django,django-admin
|
Sure. You can definitely copy (or clone with mercurial, git, you name it) your project and make necessary changes and you are done.
Django-admin is just a nice utility to make your django development easier, but it doesn't setup or do anything ultra-special on your project location.
| 0 | 0 | 0 | 0 |
2011-07-15T12:11:00.000
| 2 | 0.099668 | false | 6,706,765 | 0 | 0 | 1 | 2 |
When I need to start a new Django project I use django-admin startproject site1 and it then creates the skeleton files.
Then I use syncdb.
I want to know: if I just copy the files from a previous site, e.g. site1, rename the folder, make the necessary changes in settings and URLs, and then use syncdb, will my site work? Or
do I always have to use startproject, i.e. does it do some database inserting other than making the skeleton files?
|
Ruby vs Python for Society Management System !
| 6,723,498 | 1 | 0 | 196 | 0 |
python,ruby-on-rails,content-management-system
|
I think learning Python is easier than learning Ruby (simpler syntax).
So in my opinion start with Python to learn the concepts of scripting languages and afterwards Ruby will be easier.
| 0 | 0 | 0 | 0 |
2011-07-17T10:51:00.000
| 1 | 0.197375 | false | 6,723,370 | 0 | 0 | 1 | 1 |
My graduation project is developing a web-based Society Management System, and I should deliver the final product by May 2012. The front-end will certainly be designed using JavaScript, HTML, CSS, AJAX, etc. The question is about the back-end: my choice will be between Ruby on Rails and Python. I don't need the difference between them explained; I need to know which of them is best given that I'm new to web development. In the end I should learn both, but I want to know which one I should start with first. My knowledge so far: I'm good at ASP.NET and front-end markup languages, and I'm currently learning the PHP programming language.
|
Mysql-python not installed with bitnami django stack? "Error loading MySQLdb module: No module named MySQLdb"
| 6,738,365 | 2 | 0 | 1,094 | 1 |
python,mysql,django,mysql-python,bitnami
|
You'll need to install MySQL for python as Django needs this to do the connecting, once you have the package installed you shouldn't need to configure it though as Django just needs to import from it.
Edit: from your comments there is a setuptools bundled but it has been replaced by the package distribute, install this python package and you should have access to easy_install which makes it really easy to get new packages. Assuming you've added PYTHONPATH/scripts to your environment variables, you can call easy_install mysql_python
| 0 | 0 | 0 | 0 |
2011-07-18T19:29:00.000
| 3 | 1.2 | true | 6,738,310 | 0 | 0 | 1 | 3 |
So I installed Bitnami Django stack, hoping as proclaimed 'ready-to-run' versions of python and mysql. However, I can't get python to syncdb: "Error loading MySQLdb module: No module named MySQLdb"
I thought the Bitnami package would already install everything necessary in Windows to make mysql and Python work together? Is this not true?
I don't want to have to deal with installing mysql-python components as that can be frustrating to get working alone as I have tried before.
|
Mysql-python not installed with bitnami django stack? "Error loading MySQLdb module: No module named MySQLdb"
| 6,981,742 | 0 | 0 | 1,094 | 1 |
python,mysql,django,mysql-python,bitnami
|
The BitNami DjangoStack already includes the mysql-python components. I guess you selected MySQL as the database when installing the BitNami Stack, right? (It also includes PostgreSQL and SQLite.) Do you receive the error at installation time? Or later, working with your Django project?
On which platform are you using the BitNami DjangoStack?
| 0 | 0 | 0 | 0 |
2011-07-18T19:29:00.000
| 3 | 0 | false | 6,738,310 | 0 | 0 | 1 | 3 |
So I installed Bitnami Django stack, hoping as proclaimed 'ready-to-run' versions of python and mysql. However, I can't get python to syncdb: "Error loading MySQLdb module: No module named MySQLdb"
I thought the Bitnami package would already install everything necessary in Windows to make mysql and Python work together? Is this not true?
I don't want to have to deal with installing mysql-python components as that can be frustrating to get working alone as I have tried before.
|
Mysql-python not installed with bitnami django stack? "Error loading MySQLdb module: No module named MySQLdb"
| 12,083,825 | 0 | 0 | 1,094 | 1 |
python,mysql,django,mysql-python,bitnami
|
So I got this error after installing Bitnami Django stack on Windows Vista. Turns out that I had all components installed, but easy_install mysql_python didn't unwrap the entire package... ?
I inst... uninst... inst... uninst multiple times, but no combination (using mysql for the startup Project) made any difference.
In the end, I simply renamed the egg file (in this case MySQL_python-1.2.3-py2.7-win32.egg) to .zip and extracted the missing parts into a directory on my PYTHONPATH, and everything worked like a charm.
| 0 | 0 | 0 | 0 |
2011-07-18T19:29:00.000
| 3 | 0 | false | 6,738,310 | 0 | 0 | 1 | 3 |
So I installed Bitnami Django stack, hoping as proclaimed 'ready-to-run' versions of python and mysql. However, I can't get python to syncdb: "Error loading MySQLdb module: No module named MySQLdb"
I thought the Bitnami package would already install everything necessary in Windows to make mysql and Python work together? Is this not true?
I don't want to have to deal with installing mysql-python components as that can be frustrating to get working alone as I have tried before.
|
Can i code browser games in python in my website
| 6,742,643 | 0 | 0 | 1,443 | 0 |
python,browser,artificial-intelligence,pygame
|
As far as I know it is not possible to execute python scripts in the browser. What you can do is generate the set of actions to be taken on the server side using python and then send these commands to the browser and interpret them in javascript. Or if you don't want a server side, you can just write the game in actionscript, silverlight or javascript/html5.
| 0 | 0 | 1 | 0 |
2011-07-19T05:19:00.000
| 2 | 0 | false | 6,742,589 | 0 | 0 | 1 | 1 |
I am not much into gaming, but I am learning and practising artificial intelligence algorithms. Since I can't yet develop a full-fledged application, it means that even if I learn various techniques, I won't have anything to show in an interview.
I have seen that AI techniques/algorithms are usually tested as simulations.
I saw one video from Google where they showed their AI techniques in small games, where very small characters were doing things based on their learning. So I think that by implementing them in small games, I can demonstrate what I have learned and have a small practical application.
But I want that on a website, so I want to know the best way to have simulations/games inside a browser.
|
Cross platform, stable, and with great feature web platform
| 6,743,097 | 2 | 1 | 262 | 0 |
c#,java,.net,python,linux
|
If you don't mind moving out of your comfort zone, I would recommend Ruby on Rails (on top of Django and JSP).
| 0 | 0 | 0 | 0 |
2011-07-19T06:23:00.000
| 3 | 0.132549 | false | 6,743,061 | 0 | 0 | 1 | 2 |
We are a group of 2 C# programmers who have experience with WCF web services, ASP.NET MVC 3, different architectural patterns, inversion of control, etc.
We have to implement a web site, which will run on Linux with minimum of 10k online users.
The problem is that it will run on Linux, and we don't know what language to choose, which is stable, maintained, free, cross platform, and reusable in business/desktop applications.
We think about Java servlets and Python's Django framework, are there other good frameworks?
Which one to choose, we are confused.
Thanks in advance.
EDIT:
What about the performance of JSP vs Django vs Asp.net MVC on Mono vs Ruby On Rails?
|
Cross platform, stable, and with great feature web platform
| 6,743,485 | 1 | 1 | 262 | 0 |
c#,java,.net,python,linux
|
I love Python and Django, and have used them in every project I've had at school/uni so far.
But that's just me, someone with some Python experience; everything you've mentioned has real-world, large-scale examples, so choose the one you like/know best.
You are going to deploy on Linux, so ASP.NET may be an acceptable choice, but not the best one. I would recommend Django (because I love it); Ruby on Rails is another good choice, but I don't like it because of some bad past experience with its dependencies. Python and Ruby both have their advantages and disadvantages.
There are other web development framework choices for Python, such as Pylons (Quora uses it), web.py...
| 0 | 0 | 0 | 0 |
2011-07-19T06:23:00.000
| 3 | 0.066568 | false | 6,743,061 | 0 | 0 | 1 | 2 |
We are a group of 2 C# programmers who have experience with WCF web services, ASP.NET MVC 3, different architectural patterns, inversion of control, etc.
We have to implement a web site, which will run on Linux with minimum of 10k online users.
The problem is that it will run on Linux, and we don't know what language to choose, which is stable, maintained, free, cross platform, and reusable in business/desktop applications.
We think about Java servlets and Python's Django framework, are there other good frameworks?
Which one to choose, we are confused.
Thanks in advance.
EDIT:
What about the performance of JSP vs Django vs Asp.net MVC on Mono vs Ruby On Rails?
|
Facebook Graph API: Have extended permission, need something odd
| 6,770,444 | 0 | 1 | 226 | 0 |
python,facebook-graph-api
|
We don't allow access to friend email addresses for obvious privacy reasons. You have to explicitly ask for this from each user.
| 0 | 0 | 1 | 0 |
2011-07-20T00:21:00.000
| 1 | 1.2 | true | 6,755,679 | 0 | 0 | 1 | 1 |
So, I have a FB app already, and when people connect I ask for the extended permission "email" and that is all well and good. I save the email and can get a list of their friends' ids and basic profiles.
I want to know if there is a way for me to figure out which of their friends have allowed their email to be visible to people/friends/public (not my app) and then get that email so I can give back a list of people they can connect to.
So, let's say you have 12 friends on Facebook, and of those 12, 4 have allowed anyone to see their email. I want to give you those 4 emails because they have allowed them to be publicly/friend visible. Other than those, I'd have to set up a custom "request permission to know your email" kind of thing.
It seems as though it's not possible, but I just wanted to make sure.
|
Dynamically create a message
| 6,854,128 | 0 | 0 | 103 | 0 |
python,openerp
|
Are you trying to raise some kind of error message, or do you want to show a confirmation?
If you want to raise an exception, then it's possible; check addons/account.py
and you'll find something like what you are looking for.
| 1 | 0 | 0 | 0 |
2011-07-20T01:24:00.000
| 2 | 0 | false | 6,755,995 | 0 | 0 | 1 | 1 |
I have a button on a new form. When this button is clicked I need to show some information on a new form using a label (or other component). How do I do this in OpenERP 6? I don't have any idea. Please give me a hand. Thanks.
|
Stop Piston's error catching
| 9,088,385 | 5 | 1 | 174 | 0 |
python,django,django-piston
|
In your settings.py file, add PISTON_DISPLAY_ERRORS = False this will cause exceptions to be raised allowing them to be shown as expected in the Django debug error page when you are using DEBUG = True.
There are a few cases when the exception won't propagate properly. I've seen it happen when Piston says that the function definition doesn't match, but haven't looked to see why...
| 0 | 0 | 0 | 0 |
2011-07-20T02:22:00.000
| 2 | 0.462117 | false | 6,756,308 | 0 | 0 | 1 | 1 |
I'm using Piston with Django. Anytime there's an error in my handler code, I get a simplified, text-only description of the error in my http response, which gives me much less information that Django does when it's reporting errors. How can I stop Piston catching errors in this way?
|
Asynchronous WSGI with Twisted
| 6,761,019 | 5 | 8 | 3,732 | 0 |
python,asynchronous,twisted,wsgi
|
Why do you want to use WSGI and do asynchronous things? The benefit of WSGI is that you can deploy your application on any WSGI container. If you start using Twisted APIs to do asynchronous things, then you can only deploy your application in Twisted's WSGI container.
You should probably just use Twisted Web without WSGI for your asynchronous code.
| 0 | 1 | 0 | 0 |
2011-07-20T08:34:00.000
| 2 | 1.2 | true | 6,759,115 | 0 | 0 | 1 | 2 |
I'm building a web interface for a twisted application and would like to use WSGI rather than twisted.web directly (since the rest of the website is WSGI and I already have a substantial WSGI codebase).
The Twisted documentation page I found about WSGIResource (http://twistedmatrix.com/documents/current/web/howto/web-in-60/wsgi.html) states:
Like any other WSGI container, you can't do anything asynchronous in your WSGI applications, even though this is a Twisted WSGI container.
Does this have to be true? Is there some less-than-hacky way of doing twisted.web style asynchronous web request handling in WSGI - perhaps as part of another free software project? Supposing there isn't, my plan is to have WSGI threads do their asynchronous work in the reactor thread and block by polling until the data is available. It's not pretty.
If there's a reasonably uncomplicated way of asynchronously handling WSGI requests in twisted I'd love to hear it.
|
Asynchronous WSGI with Twisted
| 7,313,910 | 5 | 8 | 3,732 | 0 |
python,asynchronous,twisted,wsgi
|
In principle, WSGI is not intrinsically incompatible with asynchronous program design; in fact, PEP 333 goes to some considerable length to specify how servers, applications and middleware must behave to support that kind of thing.
At the heart of this is returning an iterator to the container. Every time an asynchronous wsgi app_iter is invoked, it would check on all of its pending asyncronous tasks (database connections, etcetera) and if any of them have data, the app_iter yields some data; otherwise it yields an empty string. To support this, a wsgi container would need to keep track of all of the in-flight requests, and iterate each of them in turn to get more data, in addition to servicing any other deferred work that it is responsible for.
In principle, very few wsgi apps or frameworks actually do this. almost invariably, wsgi frameworks block for all sorts of reasons; reading files from disk or loading data from a database for any reason at all (Most ORM's make this a tough problem to prevent.) Twisted's wsgi container operates under the assumption that since some wsgi apps block, that perhaps any wsgi app may block, and therefore always runs them in a thread.
There are two things you can do; either explore Twisted's own web framework, which is fairly solid, or consider creating a wsgi wrapper for Twisted outside of Twisted's own container. Making sure the wsgi app is actually asynchronous is certainly a precondition of the latter, but wsgi itself is pretty simple, a thin wrapper over HTTP, and so it should be easy enough.
| 0 | 1 | 0 | 0 |
2011-07-20T08:34:00.000
| 2 | 0.462117 | false | 6,759,115 | 0 | 0 | 1 | 2 |
I'm building a web interface for a twisted application and would like to use WSGI rather than twisted.web directly (since the rest of the website is WSGI and I already have a substantial WSGI codebase).
The Twisted documentation page I found about WSGIResource (http://twistedmatrix.com/documents/current/web/howto/web-in-60/wsgi.html) states:
Like any other WSGI container, you can't do anything asynchronous in your WSGI applications, even though this is a Twisted WSGI container.
Does this have to be true? Is there some less-than-hacky way of doing twisted.web style asynchronous web request handling in WSGI - perhaps as part of another free software project? Supposing there isn't, my plan is to have WSGI threads do their asynchronous work in the reactor thread and block by polling until the data is available. It's not pretty.
If there's a reasonably uncomplicated way of asynchronously handling WSGI requests in twisted I'd love to hear it.
|
how to create a downloadable csv file in appengine
| 6,766,276 | 6 | 2 | 1,482 | 0 |
python,google-app-engine,csv
|
Pass a StringIO object as the first parameter to csv.writer; then set the content-type and content-disposition on the response appropriately (probably "text/csv" and "attachment", respectively) and send the StringIO as the content.
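A sketch of a webapp handler doing exactly that (the handler name and rows are examples; the StringIO/webapp modules assume the Python 2 App Engine runtime):

import csv
import StringIO
from google.appengine.ext import webapp

class ExportCsv(webapp.RequestHandler):
    def get(self):
        output = StringIO.StringIO()        # in-memory "file" instead of a real one
        writer = csv.writer(output)
        writer.writerow(["id", "name"])
        writer.writerow([1, "example row"])
        self.response.headers["Content-Type"] = "text/csv"
        self.response.headers["Content-Disposition"] = "attachment; filename=export.csv"
        self.response.out.write(output.getvalue())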
| 0 | 1 | 0 | 0 |
2011-07-20T17:49:00.000
| 2 | 1 | false | 6,766,199 | 0 | 0 | 1 | 1 |
I use python Appengine. I'm trying to create a link on a webpage, which a user can click to download a csv file. How can I do this?
I've looked at csv module, but it seems to want to open a file on the server, but appengine doesn't allow that.
I've looked at remote_api, but it seems that it's only for uploading or downloading using app config, and from the account owner's terminal.
Any help thanks.
|
How to avoid packet loss on server application restart?
| 6,770,063 | 3 | 2 | 335 | 0 |
python,linux,web-services,tcp,network-programming
|
To avoid this, have multiple application servers behind a load balancer. Before bringing one down, ensure the load balancer is not sending it new clients. Bring it down, traffic will go to the other applications servers, and when it comes back up traffic will begin getting sent to it again.
If you have only one application server, simply 'buffering' network traffic is a poor solution. When the server comes back up, it has none of the TCP state information anymore and the old incoming connections have nowhere to go anyway.
| 0 | 1 | 0 | 0 |
2011-07-20T22:41:00.000
| 1 | 0.53705 | false | 6,769,405 | 0 | 0 | 1 | 1 |
A typical situation with a server/web application is that the application needs to be shut down and restarted to implement an upgrade.
What are the possible/common schemes (and available software) to avoid losing data that clients sent to the server during the short time the application was gone?
An example scheme that could work is: For a simple web server where the client connects to port 80, rather than the client connecting directly to the web server application, there could be a simple application in between that listens to port 80 and seamlessly forwards/returns data to/from the "Actual" web server application (on some other port). When the web server needs to be shut down and restarted, the relay app could detect this and buffer all incoming data until the webserver comes back to life. This way there is always an application listening to port 80 and data is never lost (within buffer-size and time reason, of course). Does such a simple intermediate buffer-on-recipient-unavailable piece of software exist already?
I'm mostly interested in solutions for a single application instance and not one where there are multiple instances (in which case a clever rolling update scheme could be used), but in the interests of having a full answer set, any response would be great!
|
Django-guardian on DB with shared (non-exclusive) access
| 7,011,483 | 0 | 1 | 159 | 1 |
python,django,database-permissions,django-permissions
|
I decided to go with manually checking the permissions, caching it whenever I can. I ended up with get_perms_from_cache(self, user) model method which helps me a lot.
| 0 | 0 | 0 | 0 |
2011-07-21T11:34:00.000
| 1 | 1.2 | true | 6,775,359 | 0 | 0 | 1 | 1 |
I am developing a Django app being a Web frontend to some Oracle database with another local DB keeping app's data such as Guardian permissions. The problem is that it can be modified from different places that I don't have control of.
Let's say we have 3 models: User, Thesis and UserThesis.
UserThesis - a table specifying relationship between Thesis and User (User being co-author of Thesis)
Scenario:
User is removed as an author of Thesis by removing entry in UserThesis table by some other app.
User tries to modify Thesis using our Django app. And he succeeds, because Guardian and Django do not know about change in UserThesis.
I thought about some solutions:
Having some cron job look for changes in UserThesis by checking the modification date of entry. Easy to check for additions, removals would require looking on all relationships again.
Modifying Oracle DB schema to add Guardian DB tables and creating triggers on UserThesis table. I wouldn't like to do this, because of Oracle DB being shared among number of different apps.
Manually checking for relationship in views and templates (heavier load on Oracle).
Which one is the best? Any other ideas?
|
manage.py runserver Error: [Errno 10013]
| 39,829,247 | 0 | 6 | 18,673 | 0 |
python,django,python-2.7
|
[Errno 10013]
That error comes when the port you want to use is already in use by another program.
So just choose another port, for example port 8080, using the following command:
python manage.py runserver 8080
| 0 | 0 | 0 | 0 |
2011-07-21T15:30:00.000
| 3 | 0 | false | 6,778,638 | 0 | 0 | 1 | 2 |
I am having some problems running django. When I use the command manage.py runserver I receive an error that says: Error: [Errno 10013] An attempt was made to access a socket in a way forbidden by access permissions
I use postgreSQL as my database.
Edit: I run Windows Vista
|
manage.py runserver Error: [Errno 10013]
| 6,778,708 | 22 | 6 | 18,673 | 0 |
python,django,python-2.7
|
If you don't have permission to bind to a socket, you can try sudo manage.py runserver to do it with root privileges.
With Windows Vista / 7 you need to run the shell with administrator privileges. You can right click on the icon and select "Run as administrator" or go to c:\windows\system32\ and right click on cmd.exe and select "Run as administrator".
Edit: OK, this error occurs when another process is already using the same port. To change the port, do manage.py runserver 8080 where the number at the end is the port you want.
| 0 | 0 | 0 | 0 |
2011-07-21T15:30:00.000
| 3 | 1.2 | true | 6,778,638 | 0 | 0 | 1 | 2 |
I am having some problems running django. When I use the command manage.py runserver I receive an error that says: Error: [Errno 10013] An attempt was made to access a socket in a way forbidden by access permissions
I use postgreSQL as my database.
Edit: I run Windows Vista
|
Dynamic per-request database connections in Django
| 6,782,234 | 2 | 1 | 1,027 | 1 |
python,django
|
Rereading the file is a heavy penalty to pay when it's unlikely that the file has changed.
My usual approach is to use inotify to watch for configuration file changes, rather than trying to read a file on every request. Additionally, I tend to keep a "current" configuration, parsed from the file, and only replace it with a new value once I've finished parsing the config file and I'm certain it's valid. You could resolve some of your concerns about thread safety by setting the current configuration on each incoming request, so that the configuration can't change mid-way through a request.
| 0 | 0 | 0 | 0 |
2011-07-21T18:26:00.000
| 2 | 1.2 | true | 6,780,827 | 0 | 0 | 1 | 2 |
I'm building a centralised django application that will be interacting with a dynamic number of databases with basically identical schema. These dbs are also used by a couple legacy applications, some of which are in PHP. Our solution to avoid multiple silos of db credentials is to store this info in generic setting files outside of the respective applications. Setting files could be created, altered or deleted without the django application being restarted.
For every request to the django application, there will be a http header or a url parameter which can be used to deduce which setting file to look at to determine which database credentials to use.
My first thought is to use a custom django middleware that would parse the settings files (possibly with caching) and create a new connection object on each request, patching it into django.db before any ORM activity.
Is there a more graceful method to handle this situation? Are there any thread safety issues I should consider with the middleware approach?
|
Dynamic per-request database connections in Django
| 6,780,942 | 0 | 1 | 1,027 | 1 |
python,django
|
You could start different instances with different settings.py files (by setting different DJANGO_SETTINGS_MODULE) on different ports, and redirect the requests to the specific apps. Just my 2 cents.
| 0 | 0 | 0 | 0 |
2011-07-21T18:26:00.000
| 2 | 0 | false | 6,780,827 | 0 | 0 | 1 | 2 |
I'm building a centralised django application that will be interacting with a dynamic number of databases with basically identical schema. These dbs are also used by a couple legacy applications, some of which are in PHP. Our solution to avoid multiple silos of db credentials is to store this info in generic setting files outside of the respective applications. Setting files could be created, altered or deleted without the django application being restarted.
For every request to the django application, there will be a http header or a url parameter which can be used to deduce which setting file to look at to determine which database credentials to use.
My first thought is to use a custom django middleware that would parse the settings files (possibly with caching) and create a new connection object on each request, patching it into django.db before any ORM activity.
Is there a more graceful method to handle this situation? Are there any thread safety issues I should consider with the middleware approach?
|
Implement a feed stream in Django
| 9,685,570 | 0 | 1 | 374 | 0 |
python,django,django-models
|
I would suggest creating a new model (let's say UserAction) in which you trace all desired actions by adding an item in each of your other models' save() methods.
Then you can easily build a view for this model and generate a feed with all the actions in chronological order, as in the sketch below.
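A minimal sketch of such a model (field names are only suggestions):

from django.db import models
from django.contrib.auth.models import User

class UserAction(models.Model):
    user = models.ForeignKey(User)
    verb = models.CharField(max_length=50)        # e.g. "commented", "added to list"
    description = models.TextField(blank=True)
    created = models.DateTimeField(auto_now_add=True)

    class Meta:
        ordering = ["-created"]                    # newest actions first in the feed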
| 0 | 0 | 0 | 0 |
2011-07-21T19:26:00.000
| 2 | 0 | false | 6,781,595 | 0 | 0 | 1 | 1 |
I've a Django application that uses a Product model and a Comment model.
A User can add a Product to its favorite products list.
A User can leave a Comment to a Product.
I would implement a news feed in the home of my application, something like Facebook News feed.
Something like this:
user_1 just comments product_3: "this is beautiful!"
user_1 just added product_3 to its list
user_4 just added product_2 to its list
user_4 just added product_3 to its list
user_2 just comments product_1: "recommended!"
user_4 just added product_1
etc.
So it's a feed with various type of sentences.
Have you ideas to implement something like that in a good way?
|
Install BeautifulSoup for another Python version on Mac OS X
| 6,789,804 | 1 | 1 | 739 | 0 |
python,installation,beautifulsoup
|
Set your PYTHONPATH environmental variable to point to the installation you want to install it for, and make sure you're using that version of Python when you run python setup.py install. Something like PYTHONPATH=/usr/lib/python25 /usr/bin/python25 setup.py install.
| 0 | 1 | 0 | 0 |
2011-07-22T12:07:00.000
| 2 | 0.099668 | false | 6,789,757 | 1 | 0 | 1 | 1 |
I have three versions of Python on my Mac: 2.6.1 (built-in), 2.5.4 (Google App Engine development), and 2.7.2 (general Python programming).
I installed BeautifulSoup with python setup.py install. However, only 2.7.2 is able to work with it.
How do I install it for 2.5.4 as well?
|
Controlling serial port through a webapp(PHP, javascript) using MySQL and Python
| 9,482,670 | 2 | 1 | 743 | 0 |
javascript,php,python,mysql,serial-port
|
It seems there's a lot of places for things to go wrong.
Why not just cut out PHP altogether and use Python?
E.g. use a Python web framework and let your JavaScript communicate with that, while also reading the serial port and logging to MySQL.
That's just me though. I'd try and cut out as many points where it could fail as possible and keep it super simple.
| 0 | 0 | 0 | 1 |
2011-07-22T14:50:00.000
| 2 | 0.197375 | false | 6,791,799 | 0 | 0 | 1 | 2 |
Wanted to get some feedback on this implementation.
I'm developing an application on the PC to send and receive data to the serial port.
Some of the data received by the application will be solicited, while other data unsolicited.
Controlling the serial port and processing messages would be handled by a Python application that would reside between the serial port and the MySQL database. This would be a threaded application with one thread handling sending/receiving using the Queue library and other threads handling logic and the database chores.
The MySQL database would contain tables for storing data received from the serial port, as well as tables of outgoing commands that need to be sent to the serial port. A command sent out may or may not be received, so some means of handling retries would be required.
The webapp using HTML, PHP, and javascript would provide the UI. Users can query data and send commands to change parameters, etc. All commands sent out would be written into an outgoing table in the database and picked up by the python app.
My question: Is this a reasonable implementation? Any ideas or thoughts would be appreciated. Thanks.
|
Controlling serial port through a webapp(PHP, javascript) using MySQL and Python
| 10,899,487 | 0 | 1 | 743 | 0 |
javascript,php,python,mysql,serial-port
|
You might also want to check out pySerial (http://pyserial.sourceforge.net/). You might also want to think about your sampling rates, i.e. how much data you are going to be generating and at what frequency; in other words, how much data you are planning to store. That will give you some idea of system sizing.
| 0 | 0 | 0 | 1 |
2011-07-22T14:50:00.000
| 2 | 0 | false | 6,791,799 | 0 | 0 | 1 | 2 |
Wanted to get some feedback on this implementation.
I'm developing an application on the PC to send and receive data to the serial port.
Some of the data received by the application will be solicited, while other data unsolicited.
Controlling the serial port and processing messages would be handled by a Python application that would reside between the serial port and the MySQL database. This would be a threaded application with one thread handling sending/receiving using the Queue library and other threads handling logic and the database chores.
The MySQL database would contain tables for storing data received from the serial port, as well as tables of outgoing commands that need to be sent to the serial port. A command sent out may or may not be received, so some means of handling retries would be required.
The webapp using HTML, PHP, and javascript would provide the UI. Users can query data and send commands to change parameters, etc. All commands sent out would be written into an outgoing table in the database and picked up by the python app.
My question: Is this a reasonable implementation? Any ideas or thoughts would be appreciated. Thanks.
|
Upload file to NodeJS server every 30 minutes
| 6,798,369 | 2 | 0 | 598 | 0 |
python,perl,file-upload,node.js
|
If this isn't part of other program logic, a simple curl --upload-file <file> <url> would do the job. If it is, then, as Dan Grossman commented, any language capable of opening a socket and writing HTTP headers and a body would work (all assuming your Node.js server speaks HTTP).
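A minimal sketch of the same idea using only Python's standard library, assuming the Node.js server accepts a plain HTTP PUT; the host, port and path are placeholders and should be matched to whatever the Node.js endpoint actually expects:

    import httplib

    def upload(path, host='example.com', port=80, url='/upload'):
        f = open(path, 'rb')
        try:
            body = f.read()
        finally:
            f.close()
        conn = httplib.HTTPConnection(host, port)
        conn.request('PUT', url, body, {'Content-Type': 'application/octet-stream'})
        response = conn.getresponse()
        print response.status, response.reason
        conn.close()

    if __name__ == '__main__':
        upload('/path/to/file')

If the Node.js side expects a POST or a multipart form instead, the request method and headers would need to change accordingly.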
| 0 | 0 | 0 | 1 |
2011-07-23T04:50:00.000
| 2 | 1.2 | true | 6,798,305 | 0 | 0 | 1 | 2 |
I'm trying to figure out the best way to upload a file to a NodeJS server (any server I guess, but just being specific) every 30 mins.
I was thinking about using Perl or Python to achieve this, or even NodeJS or a CGI script?
Would it be best to just create a multi-part form?
Trying to figure out the best practice.
Thanks.
|
Upload file to NodeJS server every 30 minutes
| 6,798,370 | 1 | 0 | 598 | 0 |
python,perl,file-upload,node.js
|
I might recommend crontab for a job like this. It's a sort of job-scheduler for the operating system, and is designed for 'do this job every so often' tasks.
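A minimal sketch of the cron side, assuming a Unix-like host where both cron and curl are available (the file path and URL are placeholders): edit your crontab with crontab -e and add a line such as

    */30 * * * * curl --upload-file /path/to/file http://example.com/upload

Cron then runs the upload every 30 minutes without you having to keep a long-running process of your own.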
| 0 | 0 | 0 | 1 |
2011-07-23T04:50:00.000
| 2 | 0.099668 | false | 6,798,305 | 0 | 0 | 1 | 2 |
I'm trying to figure out the best way to upload a file to a NodeJS server (any server I guess, but just being specific) every 30 mins.
I was thinking about using Perl or Python to achieve this, or even NodeJS or a CGI script?
Would it be best to just create a multi-part form?
Trying to figure out the best practice.
Thanks.
|
Progress Update Between Python and jQuery.ajax Call
| 6,800,883 | 0 | 1 | 554 | 0 |
javascript,python,progress
|
You can split the Python part into two functions and call the first, then call the second once the first has finished, all with jQuery.ajax(...); see the sketch below for the server side.
Or use HTML5 WebSockets to talk to the server...
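A minimal, framework-agnostic sketch of the two-call idea on the server side, keeping the heavy intermediate data on the server between the two ajax calls. The helper functions and the in-memory dict are stand-ins for your real work; a real app would wire these into its web framework's views and use a session, cache or database instead of a module-level dict:

    import time
    import uuid

    _results = {}   # job id -> intermediate data kept server-side between the two calls

    def expensive_first_function():
        time.sleep(1)                  # stand-in for the slow first step
        return list(range(10000))      # stand-in for the large intermediate data

    def build_html(data):
        return '<p>%d items processed</p>' % len(data)   # stand-in for the parsing step

    def step_one():
        # First ajax call: do the heavy lifting, keep the result on the server,
        # and return only a small token so the client can report progress.
        data = expensive_first_function()
        job_id = str(uuid.uuid4())
        _results[job_id] = data
        return {'job_id': job_id, 'status': 'step one done'}

    def step_two(job_id):
        # Second ajax call: parse and build the HTML from the stored data.
        data = _results.pop(job_id)
        return {'html': build_html(data)}

    if __name__ == '__main__':
        reply = step_one()
        print step_two(reply['job_id'])

The JavaScript side would show its "step one complete, step two running" message between the two calls, and only the final small JSON/HTML payload ever crosses the wire.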
| 0 | 0 | 0 | 0 |
2011-07-23T13:19:00.000
| 1 | 0 | false | 6,800,641 | 0 | 0 | 1 | 1 |
I have two functions that I need to run, and both take quite some time to complete. The second depends on the first, so they need to run in order. The first MUST be done with an ajax call to Python from JavaScript (using jquery.ajax). The second is some parsing and HTML generation, so it can be done in either JavaScript or Python.
I would prefer the second to be in Python since this would mean the "heavy lifting" is done server-side. However, I would like to notify the user that the first function is complete and the second one is running. The problem is that the amount of data passed between the functions would not make sense to transfer from Python to JavaScript (server to client) and then back to Python (client to server). It'd be much faster to keep everything on the server side until the second function is complete, then transfer what information I do have back to JavaScript (containing the HTML code).
Again, I would like to notify the user (through JavaScript) that the first Python function is complete and the second one is running. Then once the second one completes, the data should be passed back to JavaScript to be displayed.
|
what's the difference between "python scriptname.py" and just "scriptname.py"
| 6,804,252 | 2 | 1 | 243 | 0 |
python
|
You mean when executing those statements on the command line? The difference is that "python scriptname.py" explicitly invokes the program named "python" in your path (in Linux, typing "which python" will tell you where the program lives), whereas "scriptname.py" just executes that file, handing it to the shell to work out how to run it. If you give it to the shell, it should have something at the top (a shebang line) defining what program to invoke when executed (in Linux, something like #!/usr/bin/python). The shell then executes that program with the rest of your script, effectively doing the same thing as the first statement.
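A minimal sketch, assuming a Unix-like system (the filename and message are placeholders):

    #!/usr/bin/env python
    # scriptname.py -- the first line (the shebang) tells the shell which interpreter to run
    print 'hello from scriptname.py'

After chmod +x scriptname.py, running ./scriptname.py and running python scriptname.py behave the same way; without the execute bit or the shebang, only the explicit python scriptname.py form works reliably.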
| 0 | 0 | 0 | 0 |
2011-07-24T00:40:00.000
| 1 | 1.2 | true | 6,804,243 | 1 | 0 | 1 | 1 |
What's the difference between "python scriptname.py" and just "scriptname.py"?
I ran into this problem when using commands in Django, and it really confuses me.
|
Should one minify server code when it's in production?
| 6,811,003 | 74 | 36 | 3,747 | 0 |
php,python,ruby,perl,node.js
|
You're not going to see any improvement, as the whitespace and all formatting are lost when your server-side code is translated to machine code (or interpreted). It's also not sent over the wire; it's read from the local filesystem, so while having fewer characters could lead to a faster startup, it would not make any difference in the long run, and the startup speed gain would be marginal (or even unnoticeable).
So, no, minifying your server-side code is basically useless; worse, it's probably going to make stack traces unusable, as there will be a lot of code on the same line (and not necessarily with the same formatting you used).
| 0 | 0 | 0 | 1 |
2011-07-25T01:24:00.000
| 4 | 1.2 | true | 6,810,977 | 0 | 0 | 1 | 3 |
When it comes to frontend code, you always minify it (removing whitespace, comments, etc.) in production.
Should one do the same with server code? I usually have a lot of comments in my server files, but I have never heard of people doing so.
Wouldn't the server run faster if the code was optimized in the same way?
|
Should one minify server code when it's in production?
| 6,811,008 | 18 | 36 | 3,747 | 0 |
php,python,ruby,perl,node.js
|
I think that minification has more to do with reducing bytes on the wire than it does with runtime efficiency.
| 0 | 0 | 0 | 1 |
2011-07-25T01:24:00.000
| 4 | 1 | false | 6,810,977 | 0 | 0 | 1 | 3 |
When it comes to frontend code, you always minify it (removing whitespace, comments, etc.) in production.
Should one do the same with server code? I usually have a lot of comments in my server files, but I have never heard of people doing so.
Wouldn't the server run faster if the code was optimized in the same way?
|