Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10,671,004 |
2012-05-20T05:24:00.000
| 4 | 0 | 0 | 0 |
python,http,wsgi
| 10,671,041 | 1 | false | 1 | 0 |
Apache HTTPd has a graceful-stop argument for -k that will bring down its workers only after they have completed their current requests. mod_wsgi is required to make it a WSGI container.
| 1 | 3 | 0 |
I've been experimenting with several WSGI servers and am unable to find a way for them to gracefully shut down. What I mean by graceful is that the server stops listen()'ing for new requests, but finishes processing all connections that have been accept()'ed. The server process then exits.
So far I have spent some time with FAPWS, Cherrypy, Tornado, and wsgiref. It seems like no matter what I do, some of the clients receive a "Connection reset by peer".
Can someone direct me to a WSGI server that handles this properly? Or does anyone know of a way to configure one of these servers to do a clean shutdown? I think my next step is to mock up a simple HTTP server that does what I want.
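The "mock up a simple http server" idea can be sketched with the standard library's wsgiref: shutdown() stops the serve_forever loop from accepting new work while letting the request it is currently handling finish. A minimal illustration, not a production server and not one of the servers tried above:

```python
import threading
from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello\n"]

# bind to an ephemeral port so the sketch is self-contained
server = make_server("127.0.0.1", 0, app)
thread = threading.Thread(target=server.serve_forever)
thread.start()

# stop listening for new requests; the serve_forever loop finishes
# whatever it is handling, then returns
server.shutdown()      # blocks until the serve_forever loop exits
server.server_close()  # close the listening socket
thread.join()
```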
|
How to gracefully shutdown any WSGI server?
| 0.664037 | 0 | 0 | 1,133 |
10,671,709 |
2012-05-20T07:49:00.000
| 2 | 0 | 1 | 1 |
python,shell,command-line
| 10,671,869 | 1 | true | 0 | 0 |
Exit IPython cleanly with Ctrl+D; IPython saves and restores command history by default.
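For the plain python prompt, which does not persist history on its own, a PYTHONSTARTUP file along these lines is a common approach (the file locations here are conventions, not requirements):

```python
# Save as e.g. ~/.pythonrc and point PYTHONSTARTUP at it:
#   export PYTHONSTARTUP=~/.pythonrc
import atexit
import os
import readline

histfile = os.path.join(os.path.expanduser("~"), ".python_history")
try:
    readline.read_history_file(histfile)   # load history from earlier sessions
except IOError:                            # no history file yet on first run
    pass
atexit.register(readline.write_history_file, histfile)  # save on exit
```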
| 1 | 3 | 0 |
When I open python or ipython from the command line, I don't have the command history from previous sessions when pressing the up key.
Is there a way to configure it to remember commands, like a .bash_history?
|
Keep command history between (i)python sessions
| 1.2 | 0 | 0 | 310 |
10,672,939 |
2012-05-20T11:16:00.000
| 6 | 0 | 0 | 0 |
python,database,dynamic,sqlalchemy,redis
| 10,792,940 | 4 | false | 1 | 0 |
What you're asking about is a common requirement in many systems -- how to extend a core data model to handle user-defined data. That's a popular requirement for packaged software (where it is typically handled one way) and open-source software (where it is handled another way).
The earlier advice to learn more about RDBMS design generally can't hurt. What I will add to that is, don't fall into the trap of re-implementing a relational database in your own application-specific data model! I have seen this done many times, usually in packaged software. Not wanting to expose the core data model (or permission to alter it) to end users, the developer creates a generic data structure and an app interface that allows the end user to define entities, fields etc. but not using the RDBMS facilities. That's usually a mistake because it's hard to be nearly as thorough or bug-free as what a seasoned RDBMS can just do for you, and it can take a lot of time. It's tempting but IMHO not a good idea.
Assuming the data model changes are global (shared by all users once admin has made them), the way I would approach this problem would be to create an app interface to sit between the admin user and the RDBMS, and apply whatever rules you need to apply to the data model changes, but then pass the final changes to the RDBMS. So for example, you may have rules that say entity names need to follow a certain format, new entities are allowed to have foreign keys to existing tables but must always use the DELETE CASCADE rule, fields can only be of certain data types, all fields must have default values etc. You could have a very simple screen asking the user to provide entity name, field names & defaults etc. and then generate the SQL code (inclusive of all your rules) to make these changes to your database.
Some common rules & how you would address them would be things like:
-- if a field is not null and has a default value, and there are already existing records in the table before that field was added by the admin, update existing records to have the default value while creating the field (multiple steps -- add the field allowing null; update all existing records; alter the table to enforce not null w/ default) -- otherwise you wouldn't be able to use a field-level integrity rule
-- new tables must have a distinct naming pattern so you can continue to distinguish your core data model from the user-extended data model, i.e. core and user-defined have different RDBMS owners (dbo. vs. user.) or prefixes (none for core, __ for user-defined) or somesuch.
-- it is OK to add fields to tables that are in the core data model (as long as they tolerate nulls or have a default), and it is OK for admin to delete fields that admin added to core data model tables, but admin cannot delete fields that were defined as part of the core data model.
In other words -- use the power of the RDBMS to define the tables and manage the data, but in order to ensure whatever conventions or rules you need will always be applied, do this by building an app-to-DB admin function, instead of giving the admin user direct DB access.
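A toy sketch of such an app-to-DB admin function; the allowed types, the identifier rule, and the user_ prefix are invented examples of the kinds of rules described above:

```python
import re

ALLOWED_TYPES = {"integer", "text", "boolean", "date"}
NAME_RE = re.compile(r"^[a-z][a-z0-9_]*$")

def add_user_field(table, field, sql_type, default):
    """Validate an admin request, then emit the multi-step DDL described
    above: add the column nullable, backfill, then enforce NOT NULL."""
    if not (NAME_RE.match(table) and NAME_RE.match(field)):
        raise ValueError("names must be lowercase identifiers")
    if sql_type not in ALLOWED_TYPES:
        raise ValueError("type %r is not allowed" % sql_type)
    user_table = "user_" + table   # naming rule: user-defined prefix
    return [
        "ALTER TABLE %s ADD COLUMN %s %s" % (user_table, field, sql_type),
        "UPDATE %s SET %s = %r" % (user_table, field, default),
        "ALTER TABLE %s ALTER COLUMN %s SET NOT NULL" % (user_table, field),
    ]
```

The generated statements would then be executed against the RDBMS by the app, never typed in by the admin directly.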
If you really wanted to do this via the DB layer only, you could probably achieve the same by creating a bunch of stored procedures and triggers that would implement the same logic (and who knows, maybe you would do that anyway for your app). That's probably more of a question of how comfortable are your admin users working in the DB tier vs. via an intermediary app.
So to answer your questions directly:
(1) Yes, add tables and columns at run time, but think about the rules you will need to have to ensure your app can work even once user-defined data is added, and choose a way to enforce those rules (via app or via DB / stored procs or whatever) when you process the table & field changes.
(2) This issue isn't strongly affected by your choice of SQL vs. NoSQL engine. In every case, you have a core data model and an extended data model. If you can design your app to respond to a dynamic data model (e.g. add new fields to screens when fields are added to a DB table or whatever) then your app will respond nicely to changes in both the core and user-defined data model. That's an interesting challenge but not much affected by choice of DB implementation style.
Good luck!
| 2 | 20 | 0 |
I am thinking about creating an open source data management web application for various types of data.
A privileged user must be able to
add new entity types (for example a 'user' or a 'family')
add new properties to entity types (for example 'gender' to 'user')
remove/modify entities and properties
These will be common tasks for the privileged user. He will do this through the web interface of the application. In the end, all data must be searchable and sortable by all types of users of the application. Two questions trouble me:
a) How should the data be stored in the database? Should I dynamically add/remove database tables and/or columns during runtime?
I am no database expert. I am stuck on the notion that, in terms of relational databases, the application has to be able to dynamically add/remove tables (entities) and/or columns (properties) at runtime. And I don't like this idea. Likewise, I am wondering whether such dynamic data should be handled in a NoSQL database.
Anyway, I believe that this kind of problem has an intelligent canonical solution, which I just haven't found or thought of so far. What is the best approach for this kind of dynamic data management?
b) How to implement this in Python using an ORM or NoSQL?
If you recommend using a relational database model, then I would like to use SQLAlchemy. However, I don't see how to dynamically create tables/columns with an ORM at runtime. This is one of the reasons why I hope that there is a much better approach than creating tables and columns during runtime. Is the recommended database model efficiently implementable with SQLAlchemy?
If you recommend using a NoSQL database, which one? I like using Redis -- can you imagine an efficient implementation based on Redis?
Thanks for your suggestions!
Edit in response to some comments:
The idea is that all instances ("rows") of a certain entity ("table") share the same set of properties/attributes ("columns"). However, it will be perfectly valid if certain instances have an empty value for certain properties/attributes.
Basically, users will search the data through a simple form on a website. They query for e.g. all instances of an entity E with property P having a value V higher than T. The result can be sorted by the value of any property.
The datasets won't become too large. Hence, I think even the stupidest approach would still lead to a working system. However, I am an enthusiast and I'd like to apply modern and appropriate technology as well as I'd like to be aware of theoretical bottlenecks. I want to use this project in order to gather experience in designing a "Pythonic", state-of-the-art, scalable, and reliable web application.
I see that the first comments tend to recommend a NoSQL approach. Although I really like Redis, it looks like it would be stupid not to take advantage of the Document/Collection model of Mongo/Couch. I've been looking into MongoDB and MongoEngine for Python. By doing so, am I taking steps in the right direction?
Edit 2 in response to some answers/comments:
From most of your answers, I conclude that the dynamic creation/deletion of tables and columns in the relational picture is not the way to go. This already is valuable information. Also, one opinion is that the whole idea of the dynamic modification of entities and properties could be bad design.
As exactly this dynamic nature should be the main purpose/feature of the application, I don't give up on this. From the theoretical point of view, I accept that performing operations on a dynamic data model must necessarily be slower than performing operations on a static data model. This is totally fine.
Expressed in an abstract way, the application needs to manage
the data layout, i.e. a "dynamic list" of valid entity types and a "dynamic list" of properties for each valid entity type
the data itself
I am looking for an intelligent and efficient way to implement this. From your answers, it looks like NoSQL is the way to go here, which is another important conclusion.
|
Which database model should I use for dynamic modification of entities/properties during runtime?
| 1 | 1 | 0 | 4,158 |
10,672,939 |
2012-05-20T11:16:00.000
| 3 | 0 | 0 | 0 |
python,database,dynamic,sqlalchemy,redis
| 10,707,420 | 4 | true | 1 | 0 |
So, if you conceptualize your entities as "documents," then this whole problem maps onto a no-sql solution pretty well. As commented, you'll need to have some kind of model layer that sits on top of your document store and performs tasks like validation, and perhaps enforces (or encourages) some kind of schema, because there's no implicit backend requirement that entities in the same collection (parallel to table) share schema.
Allowing privileged users to change your schema concept (as opposed to just adding fields to individual documents - that's easy to support) will pose a little bit of a challenge - you'll have to handle migrating the existing data to match the new schema automatically.
Reading your edits, Mongo supports the kind of searching/ordering you're looking for, and will give you the support for "empty cells" (documents lacking a particular key) that you need.
If I were you (and I happen to be working on a similar, but simpler, product at the moment), I'd stick with Mongo and look into a lightweight web framework like Flask to provide the front-end. You'll be on your own to provide the model, but you won't be fighting against a framework's implicit modeling choices.
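Such a model layer can start as small as this pure-Python sketch (class and names invented for illustration; the document store itself is not shown):

```python
class EntityType(object):
    """Minimal model layer for a document store: holds the dynamic list
    of properties for one entity type and validates documents before
    they reach the backend."""

    def __init__(self, name, properties):
        self.name = name
        self.properties = set(properties)

    def add_property(self, prop):
        # the privileged user extends the schema at runtime
        self.properties.add(prop)

    def validate(self, doc):
        unknown = set(doc) - self.properties
        if unknown:
            raise ValueError("unknown properties: %s" % sorted(unknown))
        return doc   # missing keys are fine: they are the "empty cells"

user = EntityType("user", ["name"])
user.add_property("gender")
user.validate({"name": "alice"})   # OK even though 'gender' is absent
```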
| 2 | 20 | 0 |
I am thinking about creating an open source data management web application for various types of data.
A privileged user must be able to
add new entity types (for example a 'user' or a 'family')
add new properties to entity types (for example 'gender' to 'user')
remove/modify entities and properties
These will be common tasks for the privileged user. He will do this through the web interface of the application. In the end, all data must be searchable and sortable by all types of users of the application. Two questions trouble me:
a) How should the data be stored in the database? Should I dynamically add/remove database tables and/or columns during runtime?
I am no database expert. I am stuck on the notion that, in terms of relational databases, the application has to be able to dynamically add/remove tables (entities) and/or columns (properties) at runtime. And I don't like this idea. Likewise, I am wondering whether such dynamic data should be handled in a NoSQL database.
Anyway, I believe that this kind of problem has an intelligent canonical solution, which I just haven't found or thought of so far. What is the best approach for this kind of dynamic data management?
b) How to implement this in Python using an ORM or NoSQL?
If you recommend using a relational database model, then I would like to use SQLAlchemy. However, I don't see how to dynamically create tables/columns with an ORM at runtime. This is one of the reasons why I hope that there is a much better approach than creating tables and columns during runtime. Is the recommended database model efficiently implementable with SQLAlchemy?
If you recommend using a NoSQL database, which one? I like using Redis -- can you imagine an efficient implementation based on Redis?
Thanks for your suggestions!
Edit in response to some comments:
The idea is that all instances ("rows") of a certain entity ("table") share the same set of properties/attributes ("columns"). However, it will be perfectly valid if certain instances have an empty value for certain properties/attributes.
Basically, users will search the data through a simple form on a website. They query for e.g. all instances of an entity E with property P having a value V higher than T. The result can be sorted by the value of any property.
The datasets won't become too large. Hence, I think even the stupidest approach would still lead to a working system. However, I am an enthusiast and I'd like to apply modern and appropriate technology as well as I'd like to be aware of theoretical bottlenecks. I want to use this project in order to gather experience in designing a "Pythonic", state-of-the-art, scalable, and reliable web application.
I see that the first comments tend to recommend a NoSQL approach. Although I really like Redis, it looks like it would be stupid not to take advantage of the Document/Collection model of Mongo/Couch. I've been looking into MongoDB and MongoEngine for Python. By doing so, am I taking steps in the right direction?
Edit 2 in response to some answers/comments:
From most of your answers, I conclude that the dynamic creation/deletion of tables and columns in the relational picture is not the way to go. This already is valuable information. Also, one opinion is that the whole idea of the dynamic modification of entities and properties could be bad design.
As exactly this dynamic nature should be the main purpose/feature of the application, I don't give up on this. From the theoretical point of view, I accept that performing operations on a dynamic data model must necessarily be slower than performing operations on a static data model. This is totally fine.
Expressed in an abstract way, the application needs to manage
the data layout, i.e. a "dynamic list" of valid entity types and a "dynamic list" of properties for each valid entity type
the data itself
I am looking for an intelligent and efficient way to implement this. From your answers, it looks like NoSQL is the way to go here, which is another important conclusion.
|
Which database model should I use for dynamic modification of entities/properties during runtime?
| 1.2 | 1 | 0 | 4,158 |
10,673,245 |
2012-05-20T12:06:00.000
| 0 | 1 | 0 | 0 |
python
| 10,674,012 | 1 | true | 1 | 0 |
Either you choose zrxq's solution, or you can do it with a thread, if you take care of two things:
you don't tamper with objects from the main thread (be careful of iterators),
you take good care of killing your thread once the job is done.
something that would look like:
import threading
class TwitterThreadQueue(threading.Thread):
    queue = []
    def run(self):
        while len(self.queue) != 0:
            post_on_twitter(self.queue.pop()) # here is your code to post on twitter
    def add_to_queue(self, msg):
        self.queue.append(msg)
and then you instantiate it in your code:
tweetQueue = TwitterThreadQueue()
# ...
tweetQueue.add_to_queue(message)
tweetQueue.start() # you can check if it's not already started
# ...
| 1 | 1 | 0 |
I'm writing an web app. Users can post text, and I need to store them in my DB as well as sync them to a twitter account.
The problem is that I'd like to response to the user immediately after inserting the message to DB, and run the "sync to twitter" process in background.
How could I do that? Thanks
|
Sync message to twitter in background in a web application
| 1.2 | 0 | 0 | 56 |
10,675,029 |
2012-05-20T16:08:00.000
| 7 | 0 | 0 | 0 |
python,ip-address
| 10,675,083 | 1 | false | 0 | 0 |
In general, you can't. If someone has a different computer make a request on behalf of their computer, then you only get network information about the machine you receive the connection from.
An HTTP proxy might add a X-Forwarded-For header.
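A sketch of that best-effort lookup for a WSGI app (header parsing only; whether to trust the header at all depends on controlling the proxy that sets it):

```python
def client_ip(environ):
    """Best-effort client address from a WSGI environ: trust
    X-Forwarded-For only when you control the proxy that sets it."""
    forwarded = environ.get("HTTP_X_FORWARDED_FOR", "")
    if forwarded:
        # the left-most entry is the original client (if not spoofed)
        return forwarded.split(",")[0].strip()
    return environ.get("REMOTE_ADDR", "")
```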
| 1 | 0 | 0 |
How can I find IP address of clients in python?
|
How can I find real client ip address?
| 1 | 0 | 1 | 319 |
10,676,079 |
2012-05-20T18:36:00.000
| -1 | 0 | 1 | 0 |
python,abstract-syntax-tree
| 10,676,101 | 1 | false | 0 | 0 |
You can always store each node's parent in a mapping (or on the node itself) for later use.
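A common workaround is to walk the tree once and attach the parent yourself (the parent attribute name is our own choice; the ast module does not define it):

```python
import ast

def attach_parents(tree):
    """ast nodes carry no parent pointer, so record one ourselves."""
    for node in ast.walk(tree):
        for child in ast.iter_child_nodes(node):
            child.parent = node
    return tree

tree = attach_parents(ast.parse("x = 1 + 2"))
assign = tree.body[0]
assert assign.parent is tree               # Assign's parent is the Module
assert assign.value.parent is assign       # the BinOp's parent is the Assign
```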
| 1 | 1 | 0 |
I use the ast module to parse source code and process it. But when I need to get back to a parent node from its child node, no such reference exists.
|
Why doesn't the tree produced by Python's ast module give nodes a pointer to their parent?
| -0.197375 | 0 | 0 | 413 |
10,680,724 |
2012-05-21T06:59:00.000
| 1 | 1 | 1 | 0 |
python,py2exe
| 10,716,585 | 1 | true | 0 | 0 |
I solved my own problem (kinda): I was able to avoid this error and successfully 'compile' my code by consolidating all my modules into a single file, so that no custom modules were imported. It resulted in some super messy code, but it worked!
| 1 | 1 | 0 |
I have a python program I wrote that I am trying to "compile" with py2exe, everything goes well and the executable is created. The first time I run the program I get this error:
Traceback (most recent call last):
File "IMGui.py", line 13, in
ImportError: No module named IMCrypt2
I found that if I manually add my custom modules to /lib/shared.zip and run the program again, I get THIS error:
Traceback (most recent call last):
File "IMGui.py", line 13, in
zipimport.ZipImportError: can't find module 'IMCrypt2'
I have been doing some extensive googling, 2 solutions I've found on the web were to delete the 'dist' and 'build' folders and try again, and to add "includes":"decimal" to my options, but neither of these solutions have worked for me D=
I'm using Python 2.5 (I was using a newer version, but building with it gave me other strange runtime errors, and the version I did successfully build on Windows 7 ONLY worked on Windows 7, so I'm trying again with Python 2.5 on Windows XP in an attempt to get a more 'universal' Windows executable).
I'm completely stumped! Any help would be greatly appreciated!
|
zipimport.ZipImportError: can't find module from program made with py2exe
| 1.2 | 0 | 0 | 2,876 |
10,681,740 |
2012-05-21T08:23:00.000
| 2 | 0 | 0 | 0 |
python,c,gtk,pygobject,gtktreeview
| 10,690,046 | 2 | true | 0 | 1 |
At the risk of being too basic (perhaps I misunderstand the problem), to manipulate treeview selections, you use the GtkTreeSelection object returned from GtkTreeView.get_selection. You can attach to signals on this object, change the current selection,etc.
| 1 | 3 | 0 |
I'm using PyGObject, but I think this is a question that could apply to all of GTK, so an answer using C or anything else should work in Python also.
I have two treeview, Active and Inactive, I load data from a Sqlite database and I can swap and drag & drop items from one to other.
This is just an aesthetic thing: if I click an item in one treeview, I want any previously selected item in the other to be deselected.
It appears that nobody has had to do something similar, because I didn't find anything about it on the net.
|
Gtk.Treeview deselect row via signals and code
| 1.2 | 0 | 0 | 1,990 |
10,688,601 |
2012-05-21T16:01:00.000
| 2 | 0 | 1 | 0 |
python,ubuntu,numpy,virtualenv
| 10,688,691 | 1 | false | 0 | 0 |
You have to install it inside of your virtual environment. The easiest way to do this is:
source [virtualenv]/bin/activate
pip install numpy
| 1 | 0 | 1 |
I have successfully installed NumPy on Ubuntu; however, when inside a virtualenv, NumPy is not available. I must be missing something obvious, but I do not understand why I can not import NumPy when using python from a virtualenv. Can anyone help? I am using Python 2.7.3 as my system-wide python and inside my virtualenv. Thanks in advance for the help.
|
Installed NumPy successfully, but not accessible with virtualenv
| 0.379949 | 0 | 0 | 394 |
10,689,273 |
2012-05-21T16:49:00.000
| 3 | 1 | 1 | 0 |
python,cryptography,rsa,pycrypto
| 10,689,441 | 3 | false | 0 | 0 |
No, you can't compute e from d.
RSA is symmetric in d and e: you can equally-well interchange the roles of the public and the private keys. Of course, we choose one specially to be private and reveal the other -- but theoretically they do the same thing. Naturally, since you can't deduce the private key from the public, you can't deduce the public key from the private either.
Of course, if you have the private key that means that you generated the keypair, which means that you have the public key somewhere.
| 2 | 8 | 0 |
I am a newbie in cryptography and pycrypto.
I have modulus n and private exponent d. From what I understand after reading some docs private key consists of n and d.
I need to sign a message and I can't figure out how to do that using pycrypto. RSA.construct() method accepts a tuple. But I have to additionally provide public exponent e to this method (which I don't have).
So here is my question. Do I have to compute e somehow in order to sign a message?
It seems I should be able to sign a message just by using n and d (that constitute private key). Am I correct? Can I do this with pycrypto?
Thanks in advance.
|
I have modulus and private exponent. How to construct RSA private key and sign a message?
| 0.197375 | 0 | 0 | 10,098 |
10,689,273 |
2012-05-21T16:49:00.000
| 2 | 1 | 1 | 0 |
python,cryptography,rsa,pycrypto
| 10,690,482 | 3 | false | 0 | 0 |
If you don't have the public exponent you may be able to guess it. Most of the time it's not a random prime but a static value. Try the values 65537 (hex 0x010001, the fourth Fermat number), 3, 5, 7, 13 and 17 (in that order).
[EDIT] Simply sign with the private key and verify with the public key to see if the public key is correct.
Note: if it is the random prime it is as hard to find as the private exponent; which means you would be trying to break RSA - not likely for any key sizes > 512 bits.
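The guess-and-verify loop can be sketched with plain modular arithmetic (toy-sized primes for illustration only; pow(e, -1, m) needs Python 3.8+):

```python
# Toy numbers so the sketch runs instantly; real keys are far larger.
p, q = 61, 53
n = p * q                              # 3233
e = 17                                 # candidate order: 65537, 3, 5, 7, 13, 17
d = pow(e, -1, (p - 1) * (q - 1))      # we derive d here; the asker has only (n, d)

def looks_like_public_exponent(n, d, e_guess, m=42):
    """Sign m with the private exponent, then verify with the guess."""
    signature = pow(m, d, n)
    return pow(signature, e_guess, n) == m

assert looks_like_public_exponent(n, d, 17)     # right guess verifies
assert not looks_like_public_exponent(n, d, 3)  # wrong guess does not
```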
| 2 | 8 | 0 |
I am a newbie in cryptography and pycrypto.
I have modulus n and private exponent d. From what I understand after reading some docs private key consists of n and d.
I need to sign a message and I can't figure out how to do that using pycrypto. RSA.construct() method accepts a tuple. But I have to additionally provide public exponent e to this method (which I don't have).
So here is my question. Do I have to compute e somehow in order to sign a message?
It seems I should be able to sign a message just by using n and d (that constitute private key). Am I correct? Can I do this with pycrypto?
Thanks in advance.
|
I have modulus and private exponent. How to construct RSA private key and sign a message?
| 0.132549 | 0 | 0 | 10,098 |
10,689,523 |
2012-05-21T17:10:00.000
| 9 | 0 | 1 | 0 |
python,django,python-3.x,python-2.x
| 10,689,607 | 1 | true | 0 | 0 |
I don't think it's really possible, no. The same instance of the interpreter has to handle every module imported in a given app, so there's no obvious way to mix and match interpreters. If you need to accomplish a discrete task with a Python 3 module, you could try making a command-line script to accomplish your task and then calling that script as a subprocess from your Python 2 app, but that would be awkward to say the least.
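The subprocess approach can be sketched with the standard library; here the child is just the current interpreter so the sketch stays runnable, but in the asker's case the command would be the python3 binary running the Python 3-only task:

```python
import subprocess
import sys

# run a task in a separate interpreter and capture whatever it prints;
# a Python 2 app would pass "python3" and a script path here instead
output = subprocess.check_output([sys.executable, "-c", "print(6 * 7)"])
assert output.strip() == b"42"
```

Communicating via stdout (or a temp file, or JSON) is what makes this awkward compared with a normal import.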
Note that I don't think there are really a whole lot of Python 3-only modules -- most modules at this point either support both versions, or only Python 2.
| 1 | 5 | 0 |
Is there a way to import Python 3 modules into Python 2 scripts? I want to use some Python 3 modules in a Django application and haven't seen anything on the Internet. Any clues?
|
Import some python3 modules in Python2
| 1.2 | 0 | 0 | 4,187 |
10,689,738 |
2012-05-21T17:27:00.000
| 0 | 0 | 1 | 0 |
python,pip
| 10,690,031 | 2 | false | 0 | 0 |
I've never tried it, but isn't that what the pip bundle command is for? From looking at the output of pip help bundle, it looks like it'll even take an input file containing the list of packages. Having never used it, I'm not sure just what it produces.
I think the idea is that you'd run pip bundle on a system that's the same as the target machine (os and such) but which is connected, then transfer the bundle made by it to the unconnected machine.
| 1 | 1 | 0 |
I'm basically trying to get a Python app going in my office for taking care of a task, but one of the requirements doesn't play well right now with pybundle and for some reason that doesn't seem to install correctly on their machine.
Are there any other easy options to get all the requirements to other people?
|
How would I get all the reqs of a requirements.txt file using pip to someone with blocked internet access?
| 0 | 0 | 0 | 163 |
10,689,818 |
2012-05-21T17:32:00.000
| 0 | 0 | 0 | 1 |
python,gnuplot
| 10,690,113 | 2 | false | 0 | 0 |
Non-python answer would be to use `script' command.
| 1 | 0 | 0 |
I am new to gnuplot. I am using Unix.
I see the commands/error and their output on the terminal but I want to save them on a file too for storage purposes.
There is a save command in gnuplot but it only saves the last plot or splot command given by the module and the final settings.
Suppose I plot a line with settings 'A' and after doing some calculation I went and re-plotted another line with setting 'B'
gnuplot.save() command would only save the last command and the latest settings. How can I save all the issued commands?
Kindly help...
|
Save commands sent over to the gnuplot program by Gnuplot-py package
| 0 | 0 | 0 | 231 |
10,690,416 |
2012-05-21T18:14:00.000
| 0 | 0 | 0 | 0 |
python,macos,cocoa,tkinter,tk
| 11,246,589 | 1 | false | 0 | 1 |
Try some variant of this command:
self.createcommand('tkAboutDialog', self.aboutProgram)
and put your app "about" dialog code in the aboutProgram() function.
| 1 | 1 | 0 |
I wrote a diff and merge program in Python using the Tkinter UI framework.
Running it on OS-X there are two problems:
When starting it, the window does not get displayed unless I switch back and forth with other running apps. There has been a thread here recommending
top.call('wm', 'attributes', '.', '-topmost', '1')
which is no solution since it keeps the window on top of all - always.
Other say that when packaging with py2app this behavior goes away - I tried and it does not!
There is no way to change the menu -- the first entry is dictated by Tk, so the first
"About xxx" brings up the Tcl credits and cannot be replaced with my own app's about info.
So my idea was to make a Cocoa app window which displays on start and on top, where I can define what is in the menu -- and integrate the Tk frame with my Python code somehow.
Is that possible?
|
use a Tkinter frame in a Mac Cocoa app
| 0 | 0 | 0 | 382 |
10,693,379 |
2012-05-21T22:14:00.000
| 0 | 0 | 0 | 0 |
python,django,tastypie
| 24,058,617 | 4 | false | 1 | 0 |
You can also use the dehydrate(self, bundle) method.
def dehydrate(self, bundle):
    del bundle.data['attr-to-del']
    return bundle
| 1 | 11 | 0 |
I would like for a particular django-tastypie model resource to have only a subset of fields when listing objects, and all fields when showing a detail. Is this possible?
|
Can django-tastypie display a different set of fields in the list and detail views of a single resource?
| 0 | 0 | 0 | 5,626 |
10,697,651 |
2012-05-22T07:32:00.000
| 1 | 0 | 0 | 1 |
python,google-app-engine
| 10,698,246 | 1 | false | 1 | 0 |
If you are running a unit test and using init_taskqueue_stub(), you need to pass the path of queue.yaml via the root_path parameter when calling it.
| 1 | 1 | 0 |
I've added a new queue to a Python GAE app and would like to add tasks to it, but I always get an UnknownQueueError when I run my tests. On the other hand, I see the queue present in the GAE admin console (both local and remote). So the question is: (1) am I missing something when I add a task to my queue? (2) if not, how can I run custom queues in a test?
Here is my queue.yaml
queue:
- name: requests
  rate: 20/s
  bucket_size: 100
  retry_parameters:
    task_age_limit: 60s
and my python call is the following:
taskqueue.add(queue_name="requests", url=reverse('queue_request', kwargs={"ckey":ckey}))
any ideas?
|
queues remain unknown or just don't know how to call them
| 0.197375 | 0 | 0 | 178 |
10,697,843 |
2012-05-22T07:44:00.000
| 0 | 0 | 0 | 0 |
python,html,web2py
| 10,697,999 | 3 | false | 1 | 0 |
Yes, it is possible: the incoming request contains the name of the button that was pressed.
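Independent of web2py, the pattern looks like this sketch (function and field names invented; in web2py the submitted values arrive in request.vars). Browsers only submit the button that was actually clicked:

```python
def handle(form):
    """Branch on which submit button's name appears in the POSTed data."""
    if "search" in form:
        return "search"
    if "lucky" in form:
        return "lucky"
    return "unknown"

assert handle({"q": "cats", "search": "Search"}) == "search"
assert handle({"q": "cats", "lucky": "I'm Feeling Lucky"}) == "lucky"
```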
| 1 | 3 | 0 |
I am using web2py to write a search-engine-like app. Is it possible to implement two submit buttons for one form, as Google has with its "search" and "I'm feeling lucky" buttons? Thanks in advance.
|
is it possible to have two submit button for one form?
| 0 | 0 | 0 | 2,325 |
10,703,616 |
2012-05-22T14:00:00.000
| 3 | 0 | 0 | 0 |
python
| 10,703,762 | 1 | true | 1 | 0 |
Pystache is a template library, not an HTTP server! If you want to make a web app, try using a ready-made web framework like Django or Pyramid.
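If all that's wanted is a static index.html, the template step is only string rendering; this sketch uses the stdlib's string.Template as a stand-in so it runs without pystache installed (with pystache the call would be pystache.render(template, context)):

```python
from string import Template  # stdlib stand-in for a Mustache template

html = Template("<h1>Hello, $name!</h1>").substitute(name="world")
with open("index.html", "w") as f:
    f.write(html)   # any web server can now serve this as a static file
```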
| 1 | 1 | 0 |
This is really a newbie question, but I don't know how to search for answers to this. I want to use pystache, and I am able to execute the .py file to print some rendered output from the .mustache file. But how exactly do I convert this into an .html file? Specifically, how do I put it on the server so that the browser is directed to the .html file, like index.html?
|
Get started with pystache
| 1.2 | 0 | 0 | 617 |
10,705,572 |
2012-05-22T15:52:00.000
| 1 | 0 | 0 | 0 |
python,sql-server,bigdata
| 10,713,425 | 1 | false | 0 | 0 |
I think that the answer is that there is no general recipe for doing this. In fact, I don't even think it makes sense to have a general recipe ...
What you need to do is to analyse the SQL schemas and work out an appropriate mapping to BigData schemas. Then you figure out how to migrate the data.
| 1 | 0 | 0 |
I have a large SQLServer database on my current hosting site, and I would like to import it into Google BigData.
Is there a method for this?
|
Porting data from SQLServer to BigData
| 0.197375 | 1 | 0 | 108 |
10,706,735 |
2012-05-22T17:07:00.000
| 1 | 0 | 1 | 0 |
python,strptime
| 10,706,871 | 2 | false | 0 | 0 |
'1/12/07 00:07 AM' has an incorrect format, because in the 12-hour format the hour can be in the range 1-12, not 0.
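A quick illustration: with %I the hour must read 12:07 AM for midnight, or the string can be parsed as 24-hour time with %H instead:

```python
from datetime import datetime

# %I (12-hour clock) accepts 1-12, so midnight is written as 12:xx AM
ok = datetime.strptime("1/12/07 12:07 AM", "%d/%m/%y %I:%M %p")
assert (ok.hour, ok.minute) == (0, 7)

# if the data really contains '00:07', treat it as 24-hour time with %H
alt = datetime.strptime("1/12/07 00:07", "%d/%m/%y %H:%M")
assert (alt.hour, alt.minute) == (0, 7)
```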
| 1 | 3 | 0 |
My string format currently is datetime.strptime(date_as_string, '%d/%m/%y %I:%M %p')
this unfortunately does not work with input such as 1/12/07 00:07 AM
How I can get strptime to recogize this format ?
EDIT:
ValueError: time data '1/12/07 00:07 AM' does not match format '%d/%m/%y %I:%M %p'
|
python strptime wrong format with 12-hour hour
| 0.099668 | 0 | 0 | 1,657 |
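The mismatch can be reproduced and worked around in a short sketch. The workaround assumes the data is really 24-hour despite the AM/PM suffix: parse the hour with `%H`, and the `%p` token is matched but ignored (per the strptime documentation, `%p` only affects the hour when `%I` is used).

```python
from datetime import datetime

raw = '1/12/07 00:07 AM'

# %I only accepts hours 1-12, so the original format fails on '00':
try:
    datetime.strptime(raw, '%d/%m/%y %I:%M %p')
except ValueError as exc:
    print(exc)

# Assuming the hour field is really 24-hour despite the AM/PM suffix,
# parse it with %H; %p is consumed but has no effect without %I:
dt = datetime.strptime(raw, '%d/%m/%y %H:%M %p')
print(dt)  # 2007-12-01 00:07:00
```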
10,707,259 |
2012-05-22T17:47:00.000
| 2 | 0 | 1 | 1 |
python,bash,shell,ipython
| 10,719,912 | 1 | true | 0 | 0 |
You will have to live with this.
If identifiers are handled across language boundaries (in this case bash/Python) you will have problems if the languages' rules for identifiers allow different things (in this case the - is allowed in bash but not in Python). One way to solve this is name mangling. Sometimes this is done, e.g. by replacing offending characters with allowed characters (e.g. xdg-open by xdg_open); to avoid name clashes (e.g. if there already is an xdg_open besides the xdg-open) the replacement often is escaped in some way, e.g. by the hex value of the character (e.g. - by _2d, _ by _5f etc.). You will probably know this from URL lines containing stuff like %20 and the like. This all becomes either unreadable very quickly, or the rules for the name mangling are very complicated (there's a trade-off).
| 1 | 1 | 0 |
I recently switched my default shell to IPython, rather than bash, by creating an IPython profile with automagic, autocall and other such features turned on. To make executables visible to the IPython environment, I've included %rehashx to run automatically in my config files. The trouble with this is that commands with dashes in their names, such as xdg-open, are not properly translated into magic commands, and thus require using the shell-escape syntax to run. Is there a way to automagic commands with dashes, so that I can more closely emulate bash-like calling of such commands?
|
IPython magic commands and dashes
| 1.2 | 0 | 0 | 429 |
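The hex-escape mangling scheme described in the answer above can be sketched concretely. This is an illustrative helper, not part of IPython: every character that is not a letter or digit, including `_` itself so the mapping stays reversible, is replaced by `_` plus its two-digit hex code.

```python
def mangle(name):
    """Turn an arbitrary command name into a valid Python identifier.

    Note: as a sketch this does not handle names starting with a digit,
    which would still not be valid identifiers.
    """
    result = []
    for ch in name:
        if ch.isalnum():
            result.append(ch)
        else:
            # escape '-', '_', '.', ... as _XX (hex code of the character)
            result.append('_%02x' % ord(ch))
    return ''.join(result)

print(mangle('xdg-open'))   # xdg_2dopen
print(mangle('xdg_open'))   # xdg_5fopen
```

Because `_` is escaped too, `xdg-open` and `xdg_2dopen` can never collide after mangling.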
10,707,815 |
2012-05-22T18:22:00.000
| 0 | 0 | 0 | 0 |
python,pygtk
| 10,715,877 | 2 | false | 0 | 1 |
If the specific strings are always the same, as in a programming language, then use GtkSourceView instead.
| 1 | 1 | 0 |
I have TextView with pages of text, but I want to be able to highlight (change the background color) of specific strings in different colors. Is this possible? If so, can anybody point me in the right direction? Thanks
|
Highlight certain text in a PyGTK TextView
| 0 | 0 | 0 | 411 |
10,711,030 |
2012-05-22T22:24:00.000
| 0 | 0 | 1 | 0 |
python,r,rpy2
| 10,711,224 | 1 | false | 0 | 0 |
I suspect you need to change the PYTHONPATH environment variable to include the directory containing rpy. Python knows where to search for modules when you import something by using the PYTHONPATH environment variable, much as the shell knows where to look for a program that you type the name of by using the PATH environment variable.
| 1 | 0 | 0 |
2ND question:
Thanks so much Ben! It works! I got at Error 13 message saying I couldn't make a temporary file in C:\Program Files so I movd the ARSER folder and put it under my user name. That took care of the Error 13 but now I get NameError: a global name 'RPyPException' is not defined. Is this because I moved the folder out of the Program Files folder where I have saved R, Python, and rpy? Thanks!
1ST question:
I am trying to analyze biorythm data with a program called ARSER (http://bioinformatics.cau.edu.cn/ARSER/) and when I try to run it I get the error:
File "C:\Program Files\ARSER\arser.py", line 9, in from rpy import * Import Error: no module named rpy
I am running WINDOWS 7 and have downloaded:
Python(x,y) running Python version 2.7.2.3
windows patch for Python 2.7 (pywin32-217.win32-py2.7.exe)
R version 2.8.1
rpy version 2.2.3
Under the My Computer Advanced Options I changed the environmental variable PATH to C:\Program Files\R\R-2.8.1\bin but this did not solve the above error. The help instructions I was reading were from an older version of R so maybe that's the problem?
I am new to all these programs and I appreciate any suggestions you have! Thanks so much!
|
NameError: a global name 'RPyPException' is not defined
| 0 | 0 | 0 | 391 |
10,711,918 |
2012-05-23T00:27:00.000
| 0 | 0 | 1 | 0 |
python,reduce
| 10,712,114 | 6 | false | 0 | 0 |
Why in the second case, the outcome is False
Because reduce(lambda x, y: x == y, (a, b, c, d)) does not mean (a == b) and (b == c) and (c == d); it means (((a == b) == c) == d). a == b will produce either True or False, which then gets compared to c.
| 2 | 10 | 0 |
Suppose a = [[1,2,3],[1,2,3]]
reduce(lambda x,y: x==y, a) returns True
But if a = [[1,2,3],[1,2,3],[1,2,3]]
reduce(lambda x,y: x==y, a) returns False
Why in the second case, the outcome is False?
please help
thanks
|
python reduce to check if all elements are equal
| 0 | 0 | 0 | 5,479 |
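A short demonstration of why the chained comparison collapses to False, and the `all()`-based check that was probably intended:

```python
from functools import reduce  # reduce is a builtin on Python 2

a = [[1, 2, 3], [1, 2, 3], [1, 2, 3]]

# reduce evaluates ((a[0] == a[1]) == a[2]); the first comparison yields
# True, and True == [1, 2, 3] is False:
print(reduce(lambda x, y: x == y, a))      # False

# To test that all elements are equal, compare each to the first:
print(all(x == a[0] for x in a))           # True
```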
10,711,918 |
2012-05-23T00:27:00.000
| 4 | 0 | 1 | 0 |
python,reduce
| 10,711,976 | 6 | false | 0 | 0 |
You are not reducing the lists. The return value of your lambda is True or False, which is then used as an input parameter to further calls to the same lambda function. So you end up comparing a boolean with a list. Therefore, the reducing function should return the same type as its input parameters.
You were probably looking for what other answers proposed instead: use all().
| 2 | 10 | 0 |
Suppose a = [[1,2,3],[1,2,3]]
reduce(lambda x,y: x==y, a) returns True
But if a = [[1,2,3],[1,2,3],[1,2,3]]
reduce(lambda x,y: x==y, a) returns False
Why in the second case, the outcome is False?
please help
thanks
|
python reduce to check if all elements are equal
| 0.132549 | 0 | 0 | 5,479 |
10,717,858 |
2012-05-23T10:15:00.000
| 0 | 0 | 0 | 0 |
python,copy
| 10,717,978 | 2 | false | 0 | 0 |
Stat the file to find the size
Divvy up the the start:end points that each reader will handle
Open your write file in binary mode
Open your readers in binary mode
Handle the merging/collating of data when writing it out
| 1 | 0 | 0 |
Python seems to have functions for copying files (e.g. shutil.copy) and functions for copying directories.This also works with network paths.
Is there a way to copy only part of the file from multiple sources and merge them afterwards
Like a download manager downloads parts of a single file from multiple sources increasing the overall download speed.
I want to achive the same over lan.
I have a file on more than two machines on my network.
How could i copy parts of file to a single destination from multiple sources ?
Can it be done with standard shutil libraries?
|
copy parts of a file from multiple sources over LAN using python
| 0 | 0 | 1 | 358 |
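There is no shutil helper for merging partial copies, but the low-level pieces (seek/read/write) are enough. A minimal sketch, assuming the `(path, start, end)` ranges are non-overlapping and together cover the whole file; the paths could be network/UNC paths, one per source machine:

```python
def merge_ranges(sources, dest_path, total_size):
    """Assemble `dest_path` from byte ranges of several source copies.

    `sources` is a list of (path, start, end) tuples, e.g. one range
    per machine on the LAN holding a full copy of the file.
    """
    with open(dest_path, 'wb') as dest:
        dest.truncate(total_size)          # pre-size the output file
        for path, start, end in sources:
            with open(path, 'rb') as src:
                src.seek(start)
                dest.seek(start)
                dest.write(src.read(end - start))
```

In a real downloader each range would be fetched in its own thread; here they are written sequentially for clarity.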
10,724,345 |
2012-05-23T16:44:00.000
| 2 | 0 | 0 | 0 |
python,copy,backup,neo4j
| 10,736,999 | 1 | true | 0 | 0 |
Yes, you can copy the whole DB directory when you have cleanly shut down the DB for backup.
| 1 | 1 | 0 |
I need to copy an existing neo4j database in Python. I even do not need it for backup, just to play around with while keeping the original database untouched. However, there is nothing about copy/backup operations in neo4j.py documentation (I am using python embedded binding).
Can I just copy the whole folder with the original neo4j database to a folder with a new name?
Or is there any special method available in neo4j.py?
|
Copy neo4j database from python
| 1.2 | 1 | 0 | 310 |
10,727,140 |
2012-05-23T20:12:00.000
| 2 | 0 | 0 | 0 |
python,neural-network,gpu,pybrain
| 10,727,166 | 2 | false | 0 | 0 |
Unless PyBrain is designed for that, you probably can't.
You might want to try running your trainer under PyPy if you aren't already -- it's significantly faster than CPython for some workloads. Perhaps this is one of those workloads. :)
| 1 | 1 | 1 |
I was wondering if there is a way to use my GPU to speed up the training of a network in PyBrain.
|
How can I speed up the training of a network using my GPU?
| 0.197375 | 0 | 0 | 2,082 |
10,727,447 |
2012-05-23T20:36:00.000
| -1 | 0 | 1 | 0 |
python,macports
| 10,727,518 | 3 | false | 0 | 0 |
One way to find Python on your Mac is to type in the command line:
which python
When I type this, I get:
/usr/bin/python
You can see other pythons there by typing
ls /usr/bin/python*
For example, I see:
/usr/bin/python
/usr/bin/python2.7
/usr/bin/python-config
/usr/bin/python2.7-config
/usr/bin/python2.5
/usr/bin/pythonw
/usr/bin/python2.5-config
/usr/bin/pythonw2.5
/usr/bin/python2.6
/usr/bin/pythonw2.6
/usr/bin/python2.6-config
/usr/bin/pythonw2.7
Then, you can run v2.6 by typing
/usr/bin/python2.6
Or, since /usr/bin/ is probably in your path, just:
python2.6
This isn't exactly MacPorts-specific, sorry.
| 1 | 0 | 0 |
Sorry, I'm sure this is a stupid question.
I have successfully installed python 2.6 with macports. How do I use that version of python? The version that shows when I type python in term is python 2.7.
Thanks!
|
macports use python install
| -0.066568 | 0 | 0 | 176 |
10,728,309 |
2012-05-23T21:42:00.000
| 2 | 0 | 0 | 1 |
python,eclipse
| 15,859,992 | 2 | false | 0 | 0 |
I had the same issue, but it turned out that my text file was in fact in the wrong place, even though it was in the same directory as my python script. I had to move it into the same package as the script, not just the same directory (I did this by simply dragging the text file onto the package name in the sidebar in Eclipse).
So, for example, this is what my setup looked like:
Hello World (project)
    helloworld (package)
        __init__.py
        hello_world.py
    hello_world.txt
Here's what it should have looked like (by moving hello_world.txt into the helloworld package):
Hello World (project)
    helloworld (package)
        __init__.py
        hello_world.py
        hello_world.txt
| 2 | 0 | 0 |
I keep getting this error when running this python script (that I know runs and works since I ran it in VI) within eclipse.
Traceback (most recent call last):
File "/home/kt/Documents/workspace/Molly's Scripts/src/ProcessingPARFuMSData.py", line 181, in
annotations = open(sys.argv[1], 'r')
IOError: [Errno 2] No such file or directory: 'Tarr32_Lane2_Next34_FinalAnnotations.txt'
I double checked to see that all of the txt files that I need to run the script with are included in the specific directory and yet it is still giving me a bit of trouble. I know it has to be something with eclipse or PyDev because like I mentioned previously it works in the other editor. Any help would be appreciated and I can try a screen shot if one is needed.
Thanks,
KT
|
Eclipse Error IOError: [Errno 2] No such file or directory: 'Tarr32_Lane2_Next34_FinalAnnotations.txt'
| 0.197375 | 0 | 0 | 2,472 |
10,728,309 |
2012-05-23T21:42:00.000
| 0 | 0 | 0 | 1 |
python,eclipse
| 10,746,692 | 2 | false | 0 | 0 |
Seems you're launching in the wrong dir. You can configure your launch in run > run configurations.
| 2 | 0 | 0 |
I keep getting this error when running this python script (that I know runs and works since I ran it in VI) within eclipse.
Traceback (most recent call last):
File "/home/kt/Documents/workspace/Molly's Scripts/src/ProcessingPARFuMSData.py", line 181, in
annotations = open(sys.argv[1], 'r')
IOError: [Errno 2] No such file or directory: 'Tarr32_Lane2_Next34_FinalAnnotations.txt'
I double checked to see that all of the txt files that I need to run the script with are included in the specific directory and yet it is still giving me a bit of trouble. I know it has to be something with eclipse or PyDev because like I mentioned previously it works in the other editor. Any help would be appreciated and I can try a screen shot if one is needed.
Thanks,
KT
|
Eclipse Error IOError: [Errno 2] No such file or directory: 'Tarr32_Lane2_Next34_FinalAnnotations.txt'
| 0 | 0 | 0 | 2,472 |
10,728,333 |
2012-05-23T21:44:00.000
| 10 | 0 | 0 | 0 |
python,pyramid
| 10,730,436 | 1 | true | 1 | 0 |
The brackets depend on the templating engine you are using, but request.route_url('home') is the Python code you need inside.
For example, in your desired template file:
jinja2--> {{ request.route_url('home') }}
mako/chameleon--> ${ request.route_url('home') }
If your route definition includes pattern matching, such as config.add_route('sometestpage', '/test/{pagename}'), then you would do request.route_url('sometestpage', pagename='myfavoritepage')
| 1 | 3 | 0 |
For example in Django if I have a url named 'home' then I can put {% url home %} in the template and it will navigate to that url. I couldn't find anything specific in the Pyramid docs so I am looking to tou Stack Overflow.
Thanks
|
Is there feature in Pyramid to specify a route in the template like Django templates?
| 1.2 | 0 | 0 | 1,122 |
10,730,795 |
2012-05-24T03:34:00.000
| 0 | 0 | 0 | 0 |
python,svn
| 10,730,861 | 2 | false | 0 | 0 |
You could actually copy the package source code from site-packages into your project folder; your project folder normally has a higher priority than site-packages.
Then you just need to check the library in to your SVN.
| 1 | 0 | 0 |
I'm trying to use SVN to manage my python project.
I installed many external Libs (the path is like:"C:\Python27\Lib\site-packages")on Computer A,then I upload the project to the SVN Server.
and then I use Computer B which just has python(v2.7) been installed.I checkout from the SVN server
:here comes the problem..there is no external Libs in computer B.Is there any solution to solve this problem,I don't want to install the external Libs on Computer B again!
Thanks advance!
|
external Libs pack to python project
| 0 | 0 | 0 | 109 |
10,732,171 |
2012-05-24T06:20:00.000
| 1 | 0 | 0 | 0 |
wxpython
| 10,733,286 | 1 | true | 0 | 1 |
Try the "persist" library in the AGW package. It will allow you to save the state of (almost) any wxPython widget across sessions. See the PersistentControls demo in the AGW library.
| 1 | 0 | 0 |
I am using customtreectrl in wxpython with checkboxes. Once I submit I would like to save the state of the checkboxes in the customtreectrl. How can I save the checked state of a customtreectrl with checkboxes.Please help me.
Sushma
|
preserving the state of check boxex in customtreectrl
| 1.2 | 0 | 0 | 101 |
10,733,418 |
2012-05-24T07:59:00.000
| 1 | 0 | 1 | 1 |
python,gdb
| 56,772,866 | 2 | false | 0 | 0 |
In mingw installer you need to install a special package called mingw32-gdb-python.
Which is the gdb compiled with python enabled
| 1 | 10 | 0 |
I am using gdb 7.4 on a windows 7 machine
When I attempt to execute python script I get
"Python scripting is not supported in this version of GDB"
I thought that it was supported in 7.4?
Where can I get a version of gdb that is python enabled for windows?
|
python enabled gdb for windows
| 0.099668 | 0 | 0 | 8,234 |
10,734,668 |
2012-05-24T09:26:00.000
| 2 | 0 | 0 | 1 |
python,celery
| 10,739,006 | 1 | true | 0 | 0 |
Celery implies a daemon using a broker (some data hub used to queue tasks). The celeryd daemon and the broker (RabbitMQ, Redis, MongoDB, or another) should always run in the background.
Your tasks will be queued, this means they won't happen all at the same time. You can choose how many at the same time can be run as a maximum. The rest of them will wait for the others to finish before starting. This also means some concurrency is often expected, and that you must create tasks that play nice with others doing the same thing.
Celery is not meant to run scripts but tasks, written as python functions. You can of course execute external scripts from Python, but your entry point is always a Python function.
Celery uses Kombu, which uses a message broker to dispatch the tasks. This implies the data you pass to your tasks should be serializable.
| 1 | 0 | 0 |
My task is it to write a script using opencv which will later run as a Celery task. What consequences does this have? What do I have to pay attention to? Is it enough in the end to include two lines of code or could it be, that I have to rewrite my whole script?
I read, that Celery is a "asynchronous task queue/job queuing system based on distributed message passing", but I wont pretend to know completely what that all entails.
I try to update the question, as soon as I get more details.
|
Script needs to be run as a Celery task. What consequences does this have?
| 1.2 | 0 | 0 | 2,052 |
10,735,998 |
2012-05-24T10:50:00.000
| 7 | 0 | 0 | 0 |
python,ruby-on-rails,ruby,django,interop
| 10,736,225 | 2 | true | 1 | 0 |
I suggest you either:
Expose a ruby service using REST or XML-RPC.
or
Shell out to a ruby script from Django.
To transfer data between Python and Ruby I suggest you use JSON, XML or plain text (depending on what kind of data you need to transfer).
I would recommend to use option 2 (start a ruby script from the Python process), as this introduces fewer moving parts to the solution.
| 2 | 2 | 0 |
Lets say I have a few Ruby gems that I'd like to use from my Python (Django) application. I know this isn't the most straightforward question but let's assume that rewriting the Ruby gem in Python is a lot of work, how can I use it?
Should I create an XML-RPC wrapper around it using Rails and call it? Is there something like a ruby implementation in Python within which I could run my gem code?
Are there other methods that I may have missed? I've never tackled anything like this before and was a bit lost in this area.
Thanks
|
Using a Ruby gem from a Django application
| 1.2 | 0 | 0 | 333 |
10,735,998 |
2012-05-24T10:50:00.000
| 3 | 0 | 0 | 0 |
python,ruby-on-rails,ruby,django,interop
| 10,737,263 | 2 | false | 1 | 0 |
It depends a little on what you need to do. The XML-RPC suggestion has already been made.
You might actually be able to use them together in a JVM, assuming you can accept running Django with jython and use jruby. But that is a bit of work, which may or may not be worth the effort.
It would perhaps be easier if you described exactly what the Ruby gem is and what problem it is supposed to solve. You might get suggestions that could help you avoid the problem altogether.
| 2 | 2 | 0 |
Lets say I have a few Ruby gems that I'd like to use from my Python (Django) application. I know this isn't the most straightforward question but let's assume that rewriting the Ruby gem in Python is a lot of work, how can I use it?
Should I create an XML-RPC wrapper around it using Rails and call it? Is there something like a ruby implementation in Python within which I could run my gem code?
Are there other methods that I may have missed? I've never tackled anything like this before and was a bit lost in this area.
Thanks
|
Using a Ruby gem from a Django application
| 0.291313 | 0 | 0 | 333 |
10,738,919 |
2012-05-24T13:57:00.000
| 5 | 0 | 1 | 1 |
python,virtualenv
| 37,116,291 | 6 | false | 0 | 0 |
You can also try to put symlink to one of your virtualenv.
eg.
1) activate your virtualenv
2) run python
3) import sys and check sys.path
4) you will find python search path there. Choose one of those (eg. site-packages)
5) go there and create symlink to your package like:
ln -s path-to-your-package name-with-which-you'll-be-importing
That way you should be able to import it even without activating your virtualenv. Simply try: path-to-your-virtualenv-folder/bin/python
and import your package.
| 1 | 120 | 0 |
I am trying to add a path to the PYTHONPATH environment variable, that would be only visible from a particular virtualenv environment.
I tried SET PYTHONPATH=... under a virtualenv command prompt, but that sets the variable for the whole environment.
How do I achieve that?
|
How do I add a path to PYTHONPATH in virtualenv
| 0.16514 | 0 | 0 | 122,179 |
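One concrete way to get a per-virtualenv path (rather than a global PYTHONPATH) is a `.pth` file in that environment's site-packages directory: every line in such a file is appended to `sys.path`, but only for interpreters using that site-packages. A small helper sketch; the filename `_extra_paths.pth` is an arbitrary choice:

```python
import os

def add_path_to_virtualenv(site_packages_dir, extra_path,
                           pth_name='_extra_paths.pth'):
    """Write `extra_path` into a .pth file under `site_packages_dir`
    (e.g. <venv>/lib/pythonX.Y/site-packages), so that only interpreters
    from that environment pick it up."""
    pth_file = os.path.join(site_packages_dir, pth_name)
    with open(pth_file, 'a') as f:
        f.write(extra_path + '\n')
    return pth_file
```

With the virtualenv active, its site-packages directory can be located with something like `sysconfig.get_paths()['purelib']`.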
10,739,843 |
2012-05-24T14:47:00.000
| 54 | 0 | 0 | 0 |
python,pep8
| 25,034,769 | 8 | false | 0 | 0 |
You can use the # noqa at the end of the line to stop PEP8/Flake8 from running that check. This is allowed by PEP8 via:
Special cases aren't special enough to break the rules.
| 1 | 100 | 0 |
In a block comment, I want to reference a URL that is over 80 characters long.
What is the preferred convention for displaying this URL?
I know bit.ly is an option, but the URL itself is descriptive. Shortening it and then having a nested comment describing the shortened URL seems like a crappy solution.
|
How should I format a long url in a python comment and still be PEP8 compliant
| 1 | 0 | 1 | 24,654 |
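In practice the marker goes at the end of the offending line. One common pattern is to bind the long URL to a name so it is referenced only once; the URL below is purely illustrative:

```python
# The descriptive URL is kept intact on a single line; the trailing
# marker tells flake8/pep8-style checkers to skip checks on that line:
DOCS_URL = "http://example.com/a/deliberately/long/and/descriptive/path/used/for/illustration"  # noqa
print(DOCS_URL)
```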
10,740,067 |
2012-05-24T14:59:00.000
| 2 | 0 | 0 | 1 |
python,linux,ubuntu,keyboard,x11
| 10,769,704 | 3 | false | 0 | 0 |
The canonical way to do this is by grabbing the input. For this, no window needs to actually be visible; an input-only window usually does the trick. However, you should give the user some sort of feedback about why their input no longer works. Doing this as a focus grab has the advantage that a crash of the program won't leave the system unresponsive.
BTW: I think forcibly interrupting the user, maybe in the middle of a critical operation, is a huge no-go! I never understood the purpose of those programs. The user will sit in front of the screen idling, maybe losing their train of thought. Just my 2 cents.
| 1 | 4 | 0 |
I am writing an anti-RSI/typing break programme for Ubuntu Linux in python. I would like to be able to "lock the keyboard" so that all keypresses are ignored until I "unlock" it. I want to be able to force the user to take a typing break.
I would like some programmatic way to "turn off" the keyboard (near instantaneously) until my programme releases it later (which could be 0.1 sec → 10 sec later). While I have "turned off the keyboard", no key presses should be sent to any windows, window managers, etc. Preferably, the screen should still show the same content. The keyboard should be locked even if this programme is not at the forefont and does not have focus.
Some programmes are able to do this already (e.g. Work Rave)
How do I do this on Linux/X11? (Preferable in Python)
|
How do I 'lock the keyboard' to prevent any more keypresses being sent on X11/Linux/Gnome?
| 0.132549 | 0 | 0 | 5,730 |
10,741,339 |
2012-05-24T16:14:00.000
| 19 | 0 | 0 | 0 |
python,django,api,security
| 16,702,510 | 5 | false | 1 | 0 |
They do apply if you're also using your API to support a website.
In this case you still need some form of CSRF protection to prevent someone embedding requests in other sites to have drive-by effects on an authenticated user's account.
Chrome seems to deny cross-origin POST requests by default (other browsers may not be so strict), but allows GET requests cross-origin so you must make sure any GET requests in your API don't have side-effects.
| 2 | 69 | 0 |
I'm writing a Django RESTful API to back an iOS application, and I keep running into Django's CSRF protections whenever I write methods to deal with POST requests.
My understanding is that cookies managed by iOS are not shared by applications, meaning that my session cookies are safe, and no other application can ride on them. Is this true? If so, can I just mark all my API functions as CSRF-exempt?
|
Do CSRF attacks apply to API's?
| 1 | 0 | 0 | 32,350 |
10,741,339 |
2012-05-24T16:14:00.000
| 72 | 0 | 0 | 0 |
python,django,api,security
| 10,741,650 | 5 | true | 1 | 0 |
That's not the purpose of CSRF protection. CSRF protection is to prevent direct posting of data to your site. In other words, the client must actually post through an approved path, i.e. view the form page, fill it out, submit the data.
An API pretty much precludes CSRF, because its entire purpose is generally to allow 3rd-party entities to access and manipulate data on your site (the "cross-site" in CSRF). So, yes, I think as a rule any API view should be CSRF exempt. However, you should still follow best practices and protect every API-endpoint that actually makes a change with some form of authentication, such as OAuth.
| 2 | 69 | 0 |
I'm writing a Django RESTful API to back an iOS application, and I keep running into Django's CSRF protections whenever I write methods to deal with POST requests.
My understanding is that cookies managed by iOS are not shared by applications, meaning that my session cookies are safe, and no other application can ride on them. Is this true? If so, can I just mark all my API functions as CSRF-exempt?
|
Do CSRF attacks apply to API's?
| 1.2 | 0 | 0 | 32,350 |
10,742,188 |
2012-05-24T17:10:00.000
| 1 | 1 | 0 | 1 |
python,openoffice.org,libreoffice
| 10,743,591 | 2 | true | 0 | 0 |
Maybe a nice way to go is to get familiarized with Python setup tools itself (http://packages.python.org/an_example_pypi_project/setuptools.html), and write a proper setup.py script which would place all needed files in the appropriate dirs.
Your macros could then even be installable with the "easy_install" Python framework
| 1 | 2 | 0 |
When developing macros in python for LibreOffice / OpenOffice on Linux at least, I've read that you have to place your py scripts in a particular directory.
Is there a preferred method among Python LibreOffice/OOo developers for deploying these scripts, or is there another way to specify within LibreOffice/OOo to specify where you want these scripts to be?
|
Preferred method of "deploying" python scripts to LibreOffice during macro development?
| 1.2 | 0 | 0 | 733 |
10,742,317 |
2012-05-24T17:20:00.000
| 2 | 0 | 0 | 0 |
python,pygtk,glade
| 10,743,355 | 1 | true | 0 | 1 |
You just have to set the fill and expand parameters of the Buttons to False (uncheck them in the Glade interface).
You would also want to put each button at the center of a 3x3 GtkTable, so it will appear centered and not aligned at the top of the cell.
| 1 | 0 | 0 |
I have a simple pygtk/glade window with a menu and a 3x3 grid. Each row of the grid consists on: two labels and a button.
When the Window is resized, the labels holds the same font size, but the buttons get resized, and they could become HUGE if the windows gets very big.
How could I manage to keep my buttons with the same size always (the "standar" size of a button, just like they are when the interface is just opened) no matter if the Window is resized?
|
PyGTK/Glade keep button size standard
| 1.2 | 0 | 0 | 787 |
10,742,820 |
2012-05-24T17:56:00.000
| 1 | 0 | 1 | 0 |
python,multithreading,multiprocess
| 10,743,293 | 2 | false | 1 | 0 |
First, profile your code to determine what is bottlenecking your performance.
If each of your threads are frequently writing to your MySQL database, the problem may be disk I/O, in which case you should consider using an in-memory database and periodically write it to disk.
If you discover that CPU performance is the limiting factor, then consider using the multiprocessing module instead of the threading module. Use a multiprocessing.Queue object to push your tasks. Also make sure that your tasks are big enough to keep each core busy for a while, so that the granularity of communication doesn't kill performance. If you are currently using threading, then switching to multiprocessing would be the easiest way forward for now.
| 1 | 2 | 0 |
I am working on a web backend that frequently grabs realtime market data from the web, and puts the data in a MySQL database.
Currently I have my main thread push tasks into a Queue object. I then have about 20 threads that read from that queue, and if a task is available, they execute it.
Unfortunately, I am running into performance issues, and after doing a lot of research, I can't make up my mind.
As I see it, I have 3 options:
Should I take a distributed task approach with something like Celery?
Should I switch to JPython or IronPython to avoid the GIL issues?
Or should I simply spawn different processes instead of threads using processing?
If I go for the latter, how many processes is a good amount? What is a good multi process producer / consumer design?
Thanks!
|
Python multiple processes instead of threads?
| 0.099668 | 0 | 0 | 624 |
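A minimal sketch of the multiprocessing producer/consumer shape suggested in the answer above; the squaring stands in for whatever each market-data task really does, and the worker count is an arbitrary choice:

```python
import multiprocessing as mp

def worker(tasks, results):
    # Pull items until the None sentinel arrives.
    for item in iter(tasks.get, None):
        results.put(item * item)   # stand-in for the real per-task work

def run(n_workers=4, n_tasks=20):
    tasks, results = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(tasks, results))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    for i in range(n_tasks):
        tasks.put(i)
    for _ in procs:                # one sentinel per worker
        tasks.put(None)
    out = sorted(results.get() for _ in range(n_tasks))
    for p in procs:
        p.join()
    return out

if __name__ == '__main__':
    print(run()[:5])               # [0, 1, 4, 9, 16]
```

A rough rule of thumb for CPU-bound work is one process per core (`mp.cpu_count()`); for I/O-bound work like this, more can help.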
10,742,911 |
2012-05-24T18:02:00.000
| 2 | 0 | 1 | 0 |
python
| 10,743,328 | 1 | true | 0 | 0 |
So it seems the answer is "threading". Seems promising.
| 1 | 0 | 0 |
I'm making a program that listens for commands both from a looping raw_input and repeatedly checking a file for updates. Is there a way to do this? Basically, there's a loop with a time.sleep(1) for checking the file, and a while loop with raw_input. Multiprocessing doesn't seem to be what I need.
|
loop and raw_input simultaneously?
| 1.2 | 0 | 0 | 176 |
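A sketch of the threading approach: the file check runs in a background thread while the main thread stays free for the input loop. The polling of file size (rather than content) and the names here are illustrative assumptions:

```python
import os
import threading
import time

def watch_file(path, stop_event, on_change, interval=1.0):
    """Poll `path` and report size changes until `stop_event` is set."""
    last_size = None
    while not stop_event.is_set():
        try:
            size = os.path.getsize(path)
        except OSError:
            size = None
        if size != last_size:
            last_size = size
            on_change(size)
        time.sleep(interval)

# In the real program the main thread keeps reading commands while the
# watcher runs in the background, e.g.:
#   stop = threading.Event()
#   t = threading.Thread(target=watch_file,
#                        args=('commands.txt', stop, handle_change))
#   t.daemon = True          # don't block interpreter exit
#   t.start()
#   while True:
#       command = raw_input('> ')   # input() on Python 3
```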
10,743,158 |
2012-05-24T18:21:00.000
| 6 | 0 | 0 | 1 |
python,winapi,google-app-engine,licensing
| 10,743,291 | 1 | true | 1 | 0 |
Nope, App Engine's python runtime only supports pure python modules. Wrapped native code modules won't work.
| 1 | 0 | 0 |
BACKGROUND:
I work on a small team in a large company where I'm currently revamping the licensing system for a suite of mixed .Net and Win32 products that I update annually. Each product references a win32 .dll for product validation. I only have the binary file and the header file for the licensing module (so no hash algorithm). Somehow customers are able to purchase software on our website and receive a disk in the mail with a serial key. Keys or product specific and so disks and keys can be easily shared.
GOALS:
Modify the hash input so keys are now based on major version number (done).
Implement a web service using App Engine (it's just me so I don't want to maintain any hardware) whereby a user can purchase a serial that is automatically generated and delivered via email.
Use the existing licensing module or replicate the hash/API (I would like whoever is sending out serial keys to continue to do so except for maybe a minor change to their work flow, like adding the version number).
QUESTIONS:
Is there any way to write wrap this win32 library in a python module and use it on Google's App Engine?
Are there any tools to discover the hashing algorithm being used? The library exports a generatekey function?
Any other comments or suggestions are greatly appreciated.
Cheers,
Tom
|
Access Win32 dll on Google App Engine?
| 1.2 | 0 | 0 | 191 |
10,743,884 |
2012-05-24T19:14:00.000
| 1 | 0 | 0 | 0 |
iphone,python,html,ios,ipad
| 10,743,917 | 1 | true | 1 | 0 |
Use Safari in developer mode, identifying as an iOS device, to determine the root cause. After looking at what is happening, I bet your social-loading code has changed something remotely, specifically the fb-root tag that is warned about in the error console. Start there by disabling the social network stuff and debugging.
Update: I just disabled JavaScript on my phone and got the page up, so it is definitely a JS bug somewhere.
| 1 | 0 | 0 |
After upgrading to ios 5.1.1 from 4.2.1, my page www.zolkan.com loads and immediately disappears showing a blank gray page. I did not change anything on my page for over a year.
Before upgrading my iPhone from 4.2.1 to 5.1.1, it loaded fine. Same was with my iPad running 5.0.1... After going to 5.1.1, the page loads and disappears.
It seems that only the dynamic page (generated by python CGI) is doing this... The rest of the static pages behave normally.
Any ideas?
|
My page loads fine on pc or Mac but totally disappears on iPad or iPhone with iOS 5.1.1
| 1.2 | 0 | 0 | 428 |
10,745,363 |
2012-05-24T21:13:00.000
| 2 | 1 | 0 | 1 |
python,linux,ubuntu,console,terminal
| 10,745,449 | 3 | false | 0 | 0 |
I'd also avoid doing this with a terminal, but to answer the question directly:
right click on the terminal window
profiles
profile preferences
scrolling
scrollback: unlimited
It's better though to redirect to a file, then access that file. "tail -f" is very helpful.
| 1 | 1 | 0 |
I have this python script that outputs the Twitter Stream to my terminal console. Now here is the interesting thing:
* On snowleopard I get all the data I want.
* On Ubuntu (my pc) this data is limited and older data is deleted.
Both terminal consoles operate in Bash, so it has to be an OS thing presumably.
My question is: how do I turn this off? I want to leave my computer on for a week to capture around 1 or 2 gigabytes of data, for my bachelor thesis!
|
Ubuntu Linux: terminal limits the output when I get the full Twitter Streaming API
| 0.132549 | 0 | 0 | 957 |
10,745,553 |
2012-05-24T21:29:00.000
| 0 | 0 | 1 | 0 |
python,virtualenv
| 10,796,908 | 2 | false | 0 | 0 |
You can run into trouble when running Python scripts of one virtualenv as subprocesses of another virtualenv. I've found it useful to remove PYTHONPATH and BUILDOUT_ORIGINAL_PYTHONPATH from the environment of the subprocess.
| 1 | 18 | 0 |
is it possible to nest 2 virtualenvs?
I would like to have a base virtualenv and then a more specific virtualenv that accesses all the packages from the base virtualenv and then has its own.
Any hint appreciated, thanks.
|
can I nest virtualenvs?
| 0 | 0 | 0 | 3,527 |
10,745,931 |
2012-05-24T22:03:00.000
| 1 | 0 | 1 | 0 |
python,performance,sockets,programming-languages,io
| 10,757,187 | 2 | true | 0 | 0 |
Your requirements are:
to work on windows;
the program has heavy threading and I/O
it heavily uses sockets in its I/O to send and receive data
it has some string manipulation using regular expressions.
The reason it is hard to say definitively which is the best language for this task is that almost all languages match your requirements.
Windows: all languages of notes
Heavy use of threads: C#, Java, C, C++, Haskell, Scala, Clojure, Erlang. Process-based threads or other workarounds: Ruby, Python, and other interpreted languages without true fine-grained concurrency.
Sockets: all languages of note
Regexes: all languages of note
The most interesting constraint is the need to do massive concurrent IO. This means your bottleneck is going to be in context switching, the cost of threads, and whether you can run thread pools on multiple cores. Depending on your scaling, you might want to use a compiled language, and one with lightweight threads, that can use multiple cores easily. That reduces the list to C++, Haskell, Erlang, Java, Scala, etc. You can probably work around the global interpreter lock in Python by using forked processes; it just won't be as fine-grained.
| 2 | 2 | 0 |
I'm writing a python program, to work on windows, the program has heavy threading and I/O, it heavily uses sockets in its I/O to send and receive data from remote locations, other than that, it has some string manipulation using regular expressions.
My question is: performance wise, is python the best programming language for such a program, compared to for example Java, or C#? Is there another language that would better fit the description above?
|
Python socket I/O performance compared to other languages
| 1.2 | 0 | 1 | 1,715 |
10,745,931 |
2012-05-24T22:03:00.000
| 2 | 0 | 1 | 0 |
python,performance,sockets,programming-languages,io
| 10,746,007 | 2 | false | 0 | 0 |
Interesting question. The python modules that deal with sockets wrap the underlying OS functionality directly. Therefore, in a given operation, you are not likely to see any speed difference depending on the wrapper language.
Where you will notice speed issues with python is if you are involved in really tight looping, like looking at every character in a stream.
You did not indicate how much data you are sending. Unless you are undertaking a solution that has to maintain a huge volume of I/O, then python will likely do just fine. Implementing nginx or memcached or redis in python... not as good of an idea.
And as always... benchmark. If it is fast enough, then why change?
P.S. You, the programmer, will likely get it done faster in Python!
| 2 | 2 | 0 |
I'm writing a python program, to work on windows, the program has heavy threading and I/O, it heavily uses sockets in its I/O to send and receive data from remote locations, other than that, it has some string manipulation using regular expressions.
My question is: performance wise, is python the best programming language for such a program, compared to for example Java, or C#? Is there another language that would better fit the description above?
|
Python socket I/O performance compared to other languages
| 0.197375 | 0 | 1 | 1,715 |
10,748,021 |
2012-05-25T03:12:00.000
| 1 | 1 | 1 | 0 |
java,python
| 10,748,050 | 3 | false | 1 | 0 |
Each byte is a number from 0 to 255. An array containing those numbers is, precisely, an array containing the contents of the file. I'm not at all clear on what you want to do with this array (or dictionary, etc) but making it is going to be easy.
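A sketch of that idea: collect the distinct byte values and number them in order of first appearance (the file name shown in the comment is the question's own example):

```python
def byte_value_map(data):
    """Assign each distinct byte value in *data* a sequential number,
    in order of first appearance."""
    mapping = {}
    for byte in data:               # iterating bytes in Python 3 yields ints 0-255
        if byte not in mapping:
            mapping[byte] = len(mapping)   # next unused index
    return mapping

# Reading the file itself is just:
# with open("blabla.mp3", "rb") as f:
#     mapping = byte_value_map(f.read())
```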
| 2 | 0 | 0 |
I basically want to read a file (could be an mp3 file or whatever). Scan the file for all the used ASCII characters of the file and put them into a dictionary, array or list. And then from there assign each character a number value.
For example:
Let's say I load in the file blabla.mp3
(Obviously this type of file is encoded so it won't be just plain english characters.)
This is its contents:
╤dìúúH»╓╒:φººMQ╤╤╤╤┤i↔↔←GGGΦ⌠i←E::2E┤tti←╙╤ΦΦ⌠·:::::%Fæ╤╤:6Å⌠tSN│èëåD¿╢ÄÄÄÄÄÄÄÄÅO^↔:::.ÄÄÄÄÄÄèHΦΦ■ï»ó⌐╙-↔→E┤tttttttt}▲î╤╤dì"Ü:::)ú$tm‼º╤╓q╤╙·:.ñǰ"V├╡ΦPa↨/úúúúúúΦ╞îHΦ║*ÄèúóΦΦΦΦ»DΦΦ·tΘ○_Nïúkî►"DëÜ)#ú»→·:4Äïúúúúúó¿║:( ·:ç↑PR"$RGH◄◘úúó¿ΦΦΦΦ┌&HΦΦ┌+⌠WºGG ╤m→GF╘±"¿ΦñïúúúóΦò↨FæTtt╓ìú⌠ΦΦΦ⌠z:::=:::::≥E╤╤╤╤╤╤╤Tm↔↔▬Hªèi⌠ztz:::tt
I want to figure out what characters are being used and assign each one a value from 0 - 255 and each value will be unique to that character.
So ╤ = 0; Φ = 56; ú = 25 etc etc etc
Now I've been searching the python and java docs and I'm not so sure I know what I'm searching for. And I don't know if I should be worrying about ASCII characters or HEX or the raw bytes of the file.
I just need someone to point me in the right direction. Any help?
|
How can I bring all the used ASCII characters of a file into a dictionary/array/list and assign each character a value?
| 0.066568 | 0 | 0 | 417 |
10,748,021 |
2012-05-25T03:12:00.000
| 0 | 1 | 1 | 0 |
java,python
| 10,748,060 | 3 | false | 1 | 0 |
Each byte you read in already is a value between 0 and 255 (thus a byte). Is there a reason you can't just use that?
| 2 | 0 | 0 |
I basically want to read a file (could be an mp3 file or whatever). Scan the file for all the used ASCII characters of the file and put them into a dictionary, array or list. And then from there assign each character a number value.
For example:
Let's say I load in the file blabla.mp3
(Obviously this type of file is encoded so it won't be just plain english characters.)
This is its contents:
╤dìúúH»╓╒:φººMQ╤╤╤╤┤i↔↔←GGGΦ⌠i←E::2E┤tti←╙╤ΦΦ⌠·:::::%Fæ╤╤:6Å⌠tSN│èëåD¿╢ÄÄÄÄÄÄÄÄÅO^↔:::.ÄÄÄÄÄÄèHΦΦ■ï»ó⌐╙-↔→E┤tttttttt}▲î╤╤dì"Ü:::)ú$tm‼º╤╓q╤╙·:.ñǰ"V├╡ΦPa↨/úúúúúúΦ╞îHΦ║*ÄèúóΦΦΦΦ»DΦΦ·tΘ○_Nïúkî►"DëÜ)#ú»→·:4Äïúúúúúó¿║:( ·:ç↑PR"$RGH◄◘úúó¿ΦΦΦΦ┌&HΦΦ┌+⌠WºGG ╤m→GF╘±"¿ΦñïúúúóΦò↨FæTtt╓ìú⌠ΦΦΦ⌠z:::=:::::≥E╤╤╤╤╤╤╤Tm↔↔▬Hªèi⌠ztz:::tt
I want to figure out what characters are being used and assign each one a value from 0 - 255 and each value will be unique to that character.
So ╤ = 0; Φ = 56; ú = 25 etc etc etc
Now I've been searching the python and java docs and I'm not so sure I know what I'm searching for. And I don't know if I should be worrying about ASCII characters or HEX or the raw bytes of the file.
I just need someone to point me in the right direction. Any help?
|
How can I bring all the used ASCII characters of a file into a dictionary/array/list and assign each character a value?
| 0 | 0 | 0 | 417 |
10,749,222 |
2012-05-25T06:01:00.000
| 0 | 0 | 1 | 0 |
python,events,data-structures,asynchronous
| 10,749,364 | 2 | false | 0 | 0 |
Your gevent and threading ideas are on the right track, because a function does what it is programmed to do: it accepts one set of arguments at a time and returns one result per call. The function has to be called to return a result, so any continuous stream of processing has to come from the surrounding loop that keeps calling it.
So the calling code which encapsulates your function is important. The function, any function, e.g. even a true/false boolean function, only executes until it is done with its vars, so there must be a calling function which listens indefinitely in your case. If it doesn't exist you should write one ;)
Calling code which encapsulates is certainly very important.
Folks aren't going to have enough info to help much, except in the super generic sense that we can tell you that you are or should be within in some framework's event loop, or other code's loop of some form already- and that is what you want to be listening to/ preparing data for.
I like functional programming's map function for this sort of thing. I think. I can't comment at my rep level or I would restrict my speculation to that. :)
To get a better answer from another person post some example code and reveal your API if possible.
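One more option worth a mention, not covered above: a Python generator lets a single call hand back a stream of results, yielding each one as soon as it is ready.

```python
def stream_results(tasks):
    """Yield each task's result as soon as it completes."""
    for task in tasks:
        yield task()   # control returns to the caller after every result

# The single call produces a stream the caller can iterate over:
results = list(stream_results([lambda: 1, lambda: 2, lambda: 3]))
```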
| 1 | 0 | 0 |
I have got stuck with a problem.
It goes like this,
A function returns a single result normally. What I want is it to return continuous streams of result for a certain time frame(optional).
Is it feasible for a function to repeatedly return results for a single function call?
While browsing through the net I did come across gevent and threading. Will it work if so any heads up how to solve it?
I just need to call the function carry out the work and return results immediately after every task is completed.
|
Return continuous result from a single function call
| 0 | 0 | 0 | 2,597 |
10,752,434 |
2012-05-25T10:06:00.000
| 11 | 0 | 1 | 0 |
python
| 10,752,532 | 3 | true | 0 | 0 |
sets are optimized for this creation. Unless you want to roll out your own decimal-to-string conversion (and that would take more than one line), it's the way to go.
range only allocates memory in Python 2.x. For small numbers like 623562, the memory shouldn't be a problem. For larger numbers, use xrange in Python 2.x or simply switch to Python 3.x, where range generates the numbers just in time.
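Both points in one short sketch (Python 3's range shown; substitute xrange in Python 2):

```python
def unique_digits(n):
    """Unique digit count via the question's own one-liner."""
    return len(set(str(n)))

# Iterate all 3-digit numbers lazily: Python 3's range() (or xrange in
# Python 2) yields numbers one at a time instead of building a list.
count = sum(1 for i in range(100, 1000) if unique_digits(i) == 3)
```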
| 1 | 0 | 0 |
I'm searching for a way to count unique digits efficiently with a one liner.
For example: given the integer 623562, return value would be 4.
My current way of doing it is, given integer i, I'm using len(set(str(i))).
Creating a set is time consuming. I'm going over a lot of numbers so I need an efficient way.
Also, if someone can find a way to go over all the numbers with x digits without using
range() (and in a one liner..), I'll be glad. Memory limits me when using range because a list is created (I assume).
|
Count unique digits one liner (efficiently)
| 1.2 | 0 | 0 | 2,448 |
10,754,496 |
2012-05-25T12:32:00.000
| 0 | 0 | 0 | 1 |
python,eclipse,pydev
| 10,803,047 | 2 | false | 1 | 0 |
The only one so far I found available is PyFlakes, it does some level of dependency check and import validations.
| 1 | 0 | 0 |
Is there any eclipse plugin for python dependency management? just like what M2Eclipse does for maven project? so I can resolve all the dependencies and get ride off all the errors when I develop python using pydev.
If there is no such plugin, how do I resolve the dependencies, do I have to install the dependency modules locally?
|
python eclipse dependency plugin - m2eclipse like
| 0 | 0 | 0 | 195 |
10,758,774 |
2012-05-25T17:21:00.000
| 2 | 0 | 1 | 0 |
python,regex,match
| 10,758,828 | 2 | false | 0 | 0 |
One, you are using match when it looks like you want findall. It won't grab the enclosing capital triplets, but re.findall('[A-Z]{3}([a-z])(?=[A-Z]{3})', search_string) will get you all single lower case characters surrounded on both sides by 3 caps.
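Running the answer's pattern on the question's own example string:

```python
import re

text = "AAAbAAAcAAA"
# Capture a lowercase letter preceded by three capitals and followed
# (via a lookahead, so it is not consumed) by three more capitals.
matches = re.findall(r"[A-Z]{3}([a-z])(?=[A-Z]{3})", text)
# matches == ['b', 'c']
```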
| 1 | 1 | 0 |
Given a text, I need to check for each char if it has exactly (edited) 3 capital letters on both sides and if there are, add it to a string of such characters that is returned.
I wrote the following: m = re.match("[A-Z]{3}.[A-Z]{3}", text)
(let's say text="AAAbAAAcAAA")
I expected to get two groups in the match object: "AAAbAAA" and "AAAcAAA"
Now, When i invoke m.group(0) I get "AAAbAAA" which is right. Yet, when invoking m.group(1), I find that there is no such group, meaning "AAAcAAA" wasn't a match. Why?
Also, when invoking m.groups(), I get an empty tuple although I should get a tuple of the matches, meaning that in my case I should have gotten a tuple with "AAAbAAA". Why doesn't that work?
|
Matching an object and a specific regex with Python
| 0.197375 | 0 | 0 | 946 |
10,759,765 |
2012-05-25T18:40:00.000
| 2 | 0 | 1 | 0 |
python,c,gcc,mingw,cython
| 10,760,117 | 2 | false | 0 | 1 |
The compiled file isn't an executable, it's a library (dll).
python modules on windows usually have a .pyd extension, so either rename your file to helloworld.pyd or use -o helloworld.pyd as argument for the compiler.
then you should be able to import helloworld from python.
| 1 | 1 | 0 |
I have a problem with python to C code translation and further compilation.
First, I installed MinGW, wrote a `setup.py` script, and translated Python code (the simplest hello world) to C with Cython:
python setup.py build_ext --inplace
Then I tried to compile generated .c file:
gcc.exe helloworld.c -mdll -IC:\Python27\include -IC:\Python27\PC -LC:\Python27\libs -LC:\Python27\PCbuild -lpython27 -lmsvcr90
No error occurred during compilation, but when I tried to launch generated a.exe file, I got the following error:
a.exe is not a valid Win32 application
I have no idea how to fix this problem.
I'm running 32-bit Vista.
P.S. Sorry for my poor English.
|
Cython and gcc: can't run compiled program
| 0.197375 | 0 | 0 | 1,230 |
10,760,968 |
2012-05-25T20:27:00.000
| 0 | 1 | 1 | 0 |
python,python-2.7,omniorb
| 10,761,033 | 1 | false | 0 | 0 |
you can't and shouldn't. it is compiled specifically for 2.7. that's why "2.7" appears in the download file name.
if you want to use a different python, download the source package and build it yourself.
| 1 | 0 | 0 |
I have downloaded the omniORB4.1.6 pre-compiled with msvc10. I have python 2.7 and everything seems to work fine. I want to know how i can tell my omniidl to use my python 2.6 installation instead of 2.7. Can anyone help me? Thanks.
|
Changing python from 2.7 to 2.6 for omniidl
| 0 | 0 | 0 | 123 |
10,761,413 |
2012-05-25T21:11:00.000
| 5 | 0 | 1 | 1 |
python,shell,ipython,interactive
| 10,761,540 | 2 | true | 0 | 0 |
cat is one of the pre-defined system command aliases. Type %alias to see the list of aliases in your current ipython session.
| 1 | 5 | 0 |
I noticed that using cat on a file works in ipython. It doesn't appear to be listed as a magic command... so I am confused how/why it works. What lets cat work in ipython interactive shell?
|
How does `cat` work in ipython interactive shell?
| 1.2 | 0 | 0 | 2,575 |
10,762,199 |
2012-05-25T22:53:00.000
| 0 | 0 | 1 | 0 |
python
| 10,762,273 | 9 | false | 0 | 0 |
the STRING.count method should work just fine for the first problem. If you look carefully, there actually aren't two non-overlapping 'sses' strings in assesses.
You either have a- sses -ses, or asse- sses. Do you see the issue? Calling "trans-Panamanian banana".count("an") produces the correct number.
I think using eval() is probably ok. Your other option is to split on the + and then iterate over the resulting list, doing type conversion and accumulation as you go. It sounds like you're doing a string module, so that might be the better solution for your GPA ;).
EDIT: F.G. beat me to posting essentially the same answer by mere seconds. Gah!
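Hedged sketches of both fixes: an overlap-aware counter (str.count only finds non-overlapping matches) and a parser that splits on the + instead of using eval():

```python
def count_overlapping(needle, haystack):
    """Count occurrences of needle in haystack, overlaps included."""
    count = start = 0
    while True:
        start = haystack.find(needle, start)
        if start == -1:
            return count
        count += 1
        start += 1   # step one char forward so overlapping hits are found

def add_expression(line):
    """Evaluate a line of the form '«number1»+«number2»' without eval()."""
    left, right = line.split("+")
    return int(left) + int(right)
```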
| 1 | 3 | 0 |
I'm following a python website for my schoolwork. It's really neat, it gives you tasks to complete and compiles the code in the browser. Anyway, I came to a challenge that I'm not really sure how to go about.
One of the questions was:
The same substring may occur several times inside the same string: for example "assesses" has the substring "sses" 2 times, and
"trans-Panamanian banana" has the substring "an" 6 times. Write a
program that takes two lines of input, we call the first needle and
the second haystack. Print the number of times that needle occurs as a
substring of haystack.
I'm not too sure how I should start this, I know I have to compare the two strings but how? I used the count method, but it didn't recognize the second occurrence of sses in assesses.
My second question is one I solved but I cheated a little.
The question was:
Write a program that takes a single input line of the form «number1»+«number2», where both of these represent positive integers,
and outputs the sum of the two numbers. For example on input 5+12 the
output should be 17.
I used the eval() method and it worked, I just think that this wasn't what the grader had in mind for this.
Any insight would be greatly appreciated.
EDIT: Second question was solved.
|
Substring Counting in Python and Adding 2 Numbers From One Line of Input
| 0 | 0 | 0 | 4,889 |
10,763,814 |
2012-05-26T05:24:00.000
| 1 | 0 | 0 | 0 |
python,user-interface
| 10,763,834 | 2 | false | 0 | 1 |
The easiest GUI to make without "module/library" is a web-based one. I.e. generate HTML with Javascript from your Python code, and let the Javascript interact via AJAX with your Python app. This can be implemented without too much effort with just the standard Python library (and some JS code, of course), or with modules that don't require "heavy" installation of platform-specific extensions.
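A minimal sketch of that approach using only the standard library (Python 3's http.server; the page content is a placeholder):

```python
import http.server
import threading
import urllib.request

PAGE = b"<html><body><h1>Hello from Python</h1></body></html>"

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # A real GUI would render app state here and handle AJAX requests too.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

    def log_message(self, *args):
        pass  # keep the console quiet

def start_gui(port=0):
    """Serve the page on a background thread; port 0 picks a free port."""
    server = http.server.HTTPServer(("127.0.0.1", port), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```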
| 1 | 1 | 0 |
I googled and search stackoverflow before asking this question
Answers that I don't expect:
wxWidgets is the best Python GUIUse TkInter (BIM) for GUI development.
Q. How to make a GUI without using any module/library? i.e make a GUI from scratch. Modules like tkinter not allowed.
|
Python custom GUI
| 0.099668 | 0 | 0 | 2,003 |
10,764,025 |
2012-05-26T06:03:00.000
| 2 | 0 | 1 | 0 |
python,python-idle
| 10,764,052 | 1 | false | 0 | 0 |
According to the doc,
On Windows, HOME and USERPROFILE will be used if set, otherwise a
combination of HOMEPATH and HOMEDRIVE will be used. An initial ~user
is handled by stripping the last directory component from the created
user path derived above.
You can try running 'set' in a command prompt to see whether these two environment variables are set. If they are, remove the setting.
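The mechanism is visible directly: expanduser just reads environment variables (HOME on POSIX; HOME/USERPROFILE/HOMEDRIVE+HOMEPATH on Windows). A POSIX-flavored sketch with a made-up path:

```python
import os
import os.path

previous = os.environ.get("HOME")
os.environ["HOME"] = "/tmp/demo-home"        # hypothetical directory; need not exist
assert os.path.expanduser("~") == "/tmp/demo-home"
if previous is not None:
    os.environ["HOME"] = previous            # restore the real value
```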
| 1 | 1 | 0 |
hi i am having trouble in running python IDLE.
once i have installed EMACS and uninstalled it, whenever i try to run python IDLE it gives me:
Warning: os.path.expanduser("~") points to
C:\Program Files\Emacs\,
but the path does not exist
the IDLE does work, but i can't launch IDLE by simply clicking on "open with IDLE".
i guess i need to change the path of os.path.expanduser to fix this error?
but i can't find it.
where should i look for and which path does it originally point?
thank you.
|
how do i change the value of os.path.expanduser("~") in python?
| 0.379949 | 0 | 0 | 858 |
10,766,318 |
2012-05-26T12:44:00.000
| 2 | 0 | 1 | 0 |
python,regex,string,search,match
| 10,766,341 | 3 | false | 0 | 0 |
In normal mode, you don't need ^ if you are using match.
But in multiline mode (re.MULTILINE), it can be useful because ^ can match not only the beginning of the whole string, but also beginning of every line.
| 1 | 2 | 0 |
From what I figured,
match: given a string str and a pattern pat, match checks if str matches the pattern from str's start.
search: given a string str and a pattern pat, search checks if str matches the pattern from every index of str.
If so, is there a meaning using '^' at the start of a regex with match?
From what I understood, since match already checks from the start, there isn't. I'm probably wrong; where is my mistake?
|
Python regex - understanding the difference between match and search
| 0.132549 | 0 | 0 | 3,894 |
10,768,584 |
2012-05-26T18:01:00.000
| 55 | 1 | 0 | 1 |
python
| 30,690,444 | 2 | false | 0 | 0 |
You can use -c to get Python to execute a string. For example:
python3 -c "print(5)"
However, there doesn't seem to be a way to use escape characters (e.g. \n). So, if you need them, use a pipe from echo -e or printf instead. For example:
$ printf "import sys\nprint(sys.path)" | python3
| 1 | 46 | 0 |
Is it possible to execute python commands passed as strings using python -c? can someone give an example.
|
Execute python commands passed as strings in command line using python -c
| 1 | 0 | 0 | 42,043 |
10,768,817 |
2012-05-26T18:36:00.000
| 0 | 0 | 0 | 0 |
python,math,numpy,scipy
| 10,768,872 | 2 | false | 0 | 0 |
If the sequence does not have a lot of noise, just use the latest point, and the point for 1/3 of the current, then estimate your line from that. Otherwise do something more complicated like a least squares fit for the latter half of the sequence.
If you search on Google, there are a number of code samples for doing the latter, and some modules that may help. (I'm not a Python programmer so I can't give a meaningful recommendation for the best one.)
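A sketch of the least-squares idea in pure Python, fitting only the later half of the sequence so that earlier points carry no weight (numpy.polyfit(xs, ys, 1) does the same fit if numpy is available):

```python
def fit_line(points):
    """Ordinary least-squares line through (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def estimate(seq, x_new):
    """Extrapolate at x_new from a line fitted to the later half of seq."""
    pts = seq[len(seq) // 2:]      # drop the older, less important points
    if len(pts) < 2:
        pts = seq[-2:]
    slope, intercept = fit_line(pts)
    return slope * x_new + intercept
```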
| 1 | 1 | 1 |
I have a monotonically growing sequence of integers. For example
seq=[(0, 0), (1, 5), (10, 20), (15, 24)].
And an integer value greater than the largest argument in the sequence (a > seq[-1][0]). I want to estimate the value corresponding to the given argument. The sequence grows nearly linearly, and earlier values are less important than later ones. Nevertheless I can't simply take the last 2 points and calculate the new value, because mistakes are very likely and the curve may change its angle.
Can anyone suggest a simple solution for this kind of task in Python?
|
How to calculate estimation for monotonically growing sequence in python?
| 0 | 0 | 0 | 254 |
10,771,973 |
2012-05-27T05:55:00.000
| 2 | 0 | 0 | 1 |
python,google-app-engine,webapp2
| 10,780,404 | 1 | true | 1 | 0 |
As you observe, the default User model doesn't provide any way to customize the hash function being used. You could subclass it and redefine the problematic methods to take a hash parameter, or file a feature request with the webapp2 project.
Webapp2's password hashing has much bigger issues, though, as it doesn't do password stretching. While it optionally(!) salts the hash, it doesn't iterate it, making brute force attacks more practical than they should be for an attacker. It should implement a proper password primitive such as PBKDF2, SCrypt, or BCrypt.
To answer your question about relative strengths of hash functions, while SHA1 is showing some weakness, nobody has successfully generated a collision, much less a preimage. Further, the HMAC construction can result in secure HMACs even with a hash function that's weak against collision attacks; arguably even MD5 would work here.
Of course, attacks only ever get better, never worse, so it's a good idea to prepare for the future. If you're concerned about security, though, you should be much more concerned about the lack of stretching than the choice of hash function. And if you're really concerned about security, you shouldn't be doing authentication yourself - you should be using the Users API or OAuth, so someone else can have the job of securely storing passwords.
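The stretching gap described above can be closed with the standard library's PBKDF2 primitive (hashlib.pbkdf2_hmac, available since Python 2.7.8/3.4). A hedged sketch, not webapp2's actual API:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100000):
    """Salted, iterated PBKDF2-HMAC-SHA512 hash of a password."""
    if salt is None:
        salt = os.urandom(16)          # random per-password salt
    digest = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, expected):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected)
```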
| 1 | 1 | 0 |
If you are using webapp2 with Google App Engine you can see there is only one way to create an user with the "create_user" method [auth/models.py line:364]
But that method call to "security.generate_password_hash" method where in not possible use SHA 512
Q1: I would like to know what is the best way to create a SHA 512 Password with webapp2 and App Engine Python?
Q2: Is good idea use SHA 512 instead of encryption offered by webapp2 (SHA1), or it's enough?
|
SHA 512 Password with webapp2 and App Engine?
| 1.2 | 0 | 0 | 827 |
10,772,132 |
2012-05-27T06:35:00.000
| 10 | 0 | 1 | 0 |
python,file,module,project
| 10,772,218 | 1 | true | 0 | 0 |
I usually do it under these circumstances:
You could run parts of the application on thir own and running them would be useful (so they could be reused)
A part of the application is abstract and the rest is concrete (The abstract parts could be reused)
I want to divide it into 'plugins'
A single script would get insanely large (then I divide e.g. by class or put the unittests into a separate file).
In general I try to go for reusability. If I cannot divide it into reusable parts I don't divide except it would get too large.
| 1 | 8 | 0 |
To date I had been developing only small Python scripts. They were not longer than 500 lines per each. Now I'm going to write something bigger - I think it will have about 1000 lines. Is it good idea to handle it in one file or is it good time to organize code in subdirectories? I found some advices on how to modularize code, but I can't find any information about when to do that (or rather when it isn't waste of time).
|
When to split code into files/modules?
| 1.2 | 0 | 0 | 1,134 |
10,773,301 |
2012-05-27T10:36:00.000
| 0 | 0 | 1 | 0 |
python,windows
| 10,773,380 | 1 | false | 0 | 0 |
Due to how the Windows loader works, this is impossible; DLLs loaded by the app (including .pyd files) do not have access to the symbol table used/provided by the executable by default, and so must read the symbols from the pythonX.Y.dll instead.
| 1 | 3 | 0 |
I need a python interpreter to be statically linked to my app (Windows app) that means, that I need no dlls with my app. My app will not use any third-party python modyles, only text scripts.
How can I do this? Or may be there are already compiled libs?
A need a 3.2 version of python
|
Statically linked python on Windows
| 0 | 0 | 0 | 255 |
10,773,667 |
2012-05-27T11:33:00.000
| 0 | 0 | 0 | 1 |
python,eclipse,google-app-engine,pydev
| 10,773,714 | 1 | true | 1 | 0 |
If you create a new project, you get all the new libs. Move your existing (imported) sources to this new project.
| 1 | 0 | 0 |
I have a project which I created 2 years ago. I need to work on it again, and didn't have it in my Eclipse Workspace so I downloaded it from git and did an import existing projects into workspace. All worked well, except I notice the External Libraries do not contain all the new libraries added to the SDK since I created the project (and there's loads now compared to then). It would be useful if I could select the GAE root dir and let Eclipse automatically pull in all the libs for me, as it does when you create a new project. I don't see a way of doing this other than adding them 1 by 1. Does anyone have any tips?!
|
PyDev for App Engine - re-import External Libs
| 1.2 | 0 | 0 | 149 |
10,775,351 |
2012-05-27T16:03:00.000
| 7 | 0 | 1 | 0 |
python,node.js,ipc
| 10,775,437 | 7 | false | 0 | 0 |
If you arrange to have your Python worker in a separate process (either long-running server-type process or a spawned child on demand), your communication with it will be asynchronous on the node.js side. UNIX/TCP sockets and stdin/out/err communication are inherently async in node.
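A sketch of the Python side of such a worker: it reads one line-delimited JSON request per line on stdin and writes one reply per line on stdout (the payload shape here is invented for illustration):

```python
import json
import sys

def handle(line):
    """Process one line-delimited JSON request of the assumed shape {'a': x, 'b': y}."""
    request = json.loads(line)
    return json.dumps({"result": request["a"] + request["b"]})

def main():
    for line in sys.stdin:
        sys.stdout.write(handle(line) + "\n")
        sys.stdout.flush()   # flush so the node.js parent sees each reply immediately

if __name__ == "__main__":
    main()
```

On the node.js side you would spawn this script as a child process and write/read line-delimited JSON on its stdin/stdout.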
| 1 | 134 | 0 |
Node.js is a perfect match for our web project, but there are a few computational tasks for which we would prefer Python. We also already have Python code for them.
We are highly concerned about speed, what is the most elegant way how to call a Python "worker" from node.js in an asynchronous non-blocking way?
|
Combining node.js and Python
| 1 | 0 | 1 | 116,462 |
10,779,244 |
2012-05-28T04:10:00.000
| 1 | 0 | 0 | 0 |
python,mysql,sql
| 10,779,681 | 3 | false | 0 | 0 |
Highcharts has awesome features and you can also build pivot charts with it, but they will charge you. You can also look at PyChart.
| 1 | 0 | 0 |
Let's say I get sales data every 15 minutes. The sales transactions are stored in a mysql database. I need to be able to graph this data, and allow the user to re-size the scale of time. The info would be graphed on a django website.
How would I go about doing this, and are there any open source tools that I could look into?
|
How to graph mysql data in python?
| 0.066568 | 1 | 0 | 3,292 |
10,780,165 |
2012-05-28T06:22:00.000
| 1 | 0 | 1 | 0 |
python
| 10,780,293 | 1 | true | 0 | 0 |
If you explicitly call your Python 2.6 binary when installing the package, it will install to that instance instead. So instead of python setup.py install you would do /path/to/python26 setup.py install.
| 1 | 1 | 0 |
I have Python 2.7 by default, and also 2.6, but I need some modules installed on Python 2.6. By default they install on 2.7. Any idea how to do it?
|
how to install library modules on python version which is not default
| 1.2 | 0 | 0 | 100 |
10,780,523 |
2012-05-28T06:53:00.000
| 2 | 0 | 0 | 0 |
python,wxpython
| 10,780,744 | 1 | false | 0 | 0 |
wx.lib.agw.persist is new in 2.8.12.1.
| 1 | 1 | 0 |
I am using Fedora and wxPython version 2.8.12. While trying to import wx.lib.agw.persist
I am getting an error saying
Import Error: No module persist.
Will the module not be there by default with wxPython? If not, how do I get this module installed? Please help me.
|
Error importing persist module
| 0.379949 | 0 | 0 | 58 |
10,781,201 |
2012-05-28T07:57:00.000
| 2 | 0 | 0 | 0 |
python,import,module
| 10,781,228 | 1 | true | 0 | 1 |
Why not have a list of all "system" modules that you need to have loaded which will be imported first, before then looking in sub-folders for all your mods and importing those ?
That way you still maintain your base system and only afterwards do you load up subsequent user mods.
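A sketch of that ordering; the priority-list name and mod names are made up:

```python
PRIORITY_MODS = ["gui_settings"]   # hypothetical "system" mods that must load first

def mod_load_order(discovered):
    """Return discovered mod names with priority mods first, the rest alphabetical."""
    rest = sorted(m for m in discovered if m not in PRIORITY_MODS)
    return [m for m in PRIORITY_MODS if m in discovered] + rest

# Feed the result to importlib.import_module(name) one name at a time.
```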
| 1 | 0 | 0 |
I've added support for mods for my game. Any Python module put in a specific folder is automatically imported on startup. All is fine, except now I've written an official mod which allows mod makers to easily add GUI settings of their mods to a single toggable GUI frame. They are not forced to use it, but it greatly simplifies adding GUI settings with helper functions and makes things more organized and simpler for the players.
The problem is since its a mod itself, its imported on startup with the rest of the mods (Python modules), so if there is another mod which has a name which comes before it, it cannot use my mod. I know I could add "0_" or something to my module's name, but that wouldn't be very clean and you can't be very sure someone won't name his own mod's modules like that.
So I'm wondering if there's any way to tell Python to import a module first, by specifying it inside the module itself? I'm pretty sure there isn't, but who knows?
One solution I thought of is to make a subfolder which would be searched for modules first.
Another one might be merging the mod's code with the game's. Don't want to do that as to not give the impression that it's the only way to add mod settings.
|
Telling a module to be imported first by the module itself?
| 1.2 | 0 | 0 | 68 |
10,781,998 |
2012-05-28T09:04:00.000
| 0 | 1 | 0 | 0 |
python-3.x
| 10,809,754 | 1 | false | 0 | 0 |
Ask your Web host:
1) for a simple "hello, world" script (in Python, I assume) that will work on their server
2) where you need to store the file (in the server file system)
3) what you need to name it (ex: index.py or .py or whatever)
4) what permissions the script file needs to have (ask them for a "setfacl" command you can use), and
5) what the URL will be that calls it
If any one of those things is even slightly wrong, your script will fail to run. If you get them all right, then your CGI script will return a "Hello, world" web page.
At that point, you modify your script a little bit at a time, testing it every step of the way, until it is doing what you want it to do.
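For step 1, a minimal "hello, world" CGI script is just a header line, a blank line, and the body; the shebang path and filename your host requires may differ:

```python
#!/usr/bin/env python
def render_page():
    """Return the full CGI response: headers, a blank line, then the HTML body."""
    headers = "Content-Type: text/html"
    body = "<html><body><h1>Hello, world</h1></body></html>"
    return headers + "\r\n\r\n" + body

if __name__ == "__main__":
    print(render_page())   # the web server captures stdout and relays it
```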
| 1 | 0 | 0 |
I have written a CGI script for the login page in Python, but I am not familiar with running CGI scripts. Please help me by giving the steps to run the script.
|
Running the CGI script
| 0 | 0 | 0 | 122 |
10,785,105 |
2012-05-28T13:00:00.000
| 5 | 1 | 1 | 0 |
python,bytecode
| 10,785,135 | 2 | false | 0 | 0 |
The bytecode is in memory while running the interpreter. The .pyc files are a cache for the next import of the code, so that python will not have to parse the code if it has not changed.
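That in-memory step can be seen directly with the built-in compile(): the bytecode object is created and executed without any .pyc file being written.

```python
source = "def func1():\n    return 42\n"
code = compile(source, "module1.py", "exec")   # bytecode object, in memory only
namespace = {}
exec(code, namespace)                          # run the bytecode, defining func1
result = namespace["func1"]()                  # → 42
```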
| 1 | 1 | 0 |
If I import a module1.py from the python command line in windows 7 I see the corresponding module1.pyc file appear in the Python32/pycache/ folder. My understanding was that it is this bytecode which is executed by the Python interpreter, however I can delete the module1.pyc file and my module functions (module1.func1() etc...) can still be called from the command line. What is running when the functions are called but the .pyc file is not there? When the bytecode is compiled is it also copied to runtime memory for the Python shell?
|
How does Python run module code when there's no matching .pyc file?
| 0.462117 | 0 | 0 | 905 |
10,788,211 |
2012-05-28T17:09:00.000
| 1 | 0 | 0 | 0 |
python,gtk3
| 11,008,840 | 2 | false | 0 | 1 |
Put the label in a container like GtkScrolledWindow.
| 2 | 0 | 0 |
I want to fit some text in a label. I almost managed to do it: when I enlarge the window, I add some <big> tags to the label and the text enlarges with the window. But if I try to shrink the window, the size-request of the label doesn't let me do it.
The label is updated many times a second, so if I try to set a custom size request every time I apply a new label, the window will shrink and enlarge if i try to shrink it.
What I want to do is removing the size request, without word wrapping and other things like this: I just want let the label go out of the window.
|
Remove the size request of a widget (Python, Gtk3)
| 0.099668 | 0 | 0 | 291 |
10,788,211 |
2012-05-28T17:09:00.000
| 1 | 0 | 0 | 0 |
python,gtk3
| 33,259,216 | 2 | true | 0 | 1 |
Set the "ellipsize" property of label to True.
| 2 | 0 | 0 |
I want to fit some text in a label. I almost managed to do it: when I enlarge the window, I add some <big> tags to the label and the text enlarges with the window. But if I try to shrink the window, the size-request of the label doesn't let me do it.
The label is updated many times a second, so if I try to set a custom size request every time I apply a new label, the window will shrink and enlarge if i try to shrink it.
What I want to do is removing the size request, without word wrapping and other things like this: I just want let the label go out of the window.
|
Remove the size request of a widget (Python, Gtk3)
| 1.2 | 0 | 0 | 291 |
10,789,834 |
2012-05-28T19:56:00.000
| 3 | 0 | 0 | 0 |
python,nlp,nltk
| 10,790,083 | 2 | false | 0 | 0 |
In this case, the word not modifies the meaning of the phrase expected to win, reversing it. To identify this, you would need to POS tag the sentence and apply the negative adverb not to the (I think) verb phrase as a negation. I don't know if there is a corpus that would tell you that not would be this type of modifier or not, however.
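A common workaround is to preprocess tokens so that words inside a negation's scope get a `_NEG` suffix, turning "win" and "not ... win" into distinct features for the classifier. A minimal sketch of the idea (nltk.sentiment.util ships a fuller mark_negation; the negation list and punctuation rule here are simplifying assumptions):

```python
import re

NEGATIONS = {"not", "no", "never", "n't"}

def mark_negation(tokens):
    # Append _NEG to every token following a negation word,
    # until sentence-ending punctuation closes the scope.
    out, negated = [], False
    for tok in tokens:
        if tok.lower() in NEGATIONS:
            negated = True
            out.append(tok)
        elif re.match(r"[.,!?;]", tok):   # punctuation ends the negation scope
            negated = False
            out.append(tok)
        else:
            out.append(tok + "_NEG" if negated else tok)
    return out

print(mark_negation("He is not expected to win .".split()))
# -> ['He', 'is', 'not', 'expected_NEG', 'to_NEG', 'win_NEG', '.']
```

With this preprocessing, a naive Bayes classifier sees "win_NEG" as a separate feature and can learn it is negative.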
| 1 | 5 | 1 |
Good day,
I'm attempting to write a sentiment analysis application in Python (using a naive Bayes classifier) with the aim of categorizing phrases from news as being positive or negative.
And I'm having a bit of trouble finding an appropriate corpus for that.
I tried using "General Inquirer" (http://www.wjh.harvard.edu/~inquirer/homecat.htm) which works OK but I have one big problem there.
Since it is a word list, not a phrase list I observe the following problem when trying to label the following sentence:
He is not expected to win.
This sentence is categorized as being positive, which is wrong. The reason for that is that "win" is positive, but "not" does not carry any meaning since "not win" is a phrase.
Can anyone suggest either a corpus or a work around for that issue?
Your help and insight is greatly appreciated.
|
Phrase corpus for sentiment analysis
| 0.291313 | 0 | 0 | 1,434 |
10,790,381 |
2012-05-28T21:01:00.000
| 2 | 0 | 0 | 1 |
python,google-app-engine
| 10,791,742 | 2 | false | 1 | 0 |
No. get_or_insert is syntactic sugar for a transactional function that fetches or inserts a record. You can implement it yourself trivially, but that will only work if the record you're operating on is in the same entity group as the rest of the entities in the current transaction, or if you have cross-group transactions enabled.
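For illustration, get_or_insert boils down to a "fetch, and insert if missing" performed inside one transaction. Here is the same pattern sketched with the stdlib sqlite3 module rather than the App Engine datastore (table and key names are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (key TEXT PRIMARY KEY, name TEXT)")

def get_or_insert(key, name):
    with conn:   # one transaction: commit on success, rollback on error
        row = conn.execute("SELECT name FROM users WHERE key = ?", (key,)).fetchone()
        if row is not None:
            return row[0]
        conn.execute("INSERT INTO users VALUES (?, ?)", (key, name))
        return name

print(get_or_insert("u1", "alice"))   # -> alice (inserted)
print(get_or_insert("u1", "bob"))     # -> alice (fetched, not overwritten)
```

In App Engine the extra twist is the entity-group restriction the answer describes: this transactional fetch-or-insert only composes with an enclosing transaction if both touch the same entity group (or cross-group transactions are enabled).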
| 1 | 2 | 0 |
In google app engine, can I call "get_or_insert" from inside a transaction?
The reason I ask is because I'm not sure if there is some conflict with having this run its own transaction inside an already running transaction.
Thanks!
|
In app engine, can I call "get_or_insert" from inside a transaction?
| 0.197375 | 1 | 0 | 308 |
10,791,157 |
2012-05-28T22:40:00.000
| 0 | 0 | 1 | 0 |
python,eclipse,pydev
| 10,791,205 | 2 | false | 0 | 0 |
You can press the Restart the current launch icon in the console
| 1 | 2 | 0 |
This is a simple one.
I'm using PyDev on OS X 10.6. I have a certain module that I run to start my application. Whenever I make changes in other modules I need to switch the view to the starter module to launch or select it from the Run drop down menu. I'm curious, is there a shortcut or setting to set the default module to run whenever you press Run/launch.
|
PyDev: Always run the same module
| 0 | 0 | 0 | 106 |
10,792,748 |
2012-05-29T03:30:00.000
| 2 | 0 | 0 | 1 |
python,fabric
| 10,807,419 | 2 | false | 0 | 0 |
It's just Python, so you can print whatever you'd like, as well as make your own decorator to wrap the task and spit that out. As it stands, though, there isn't anything in core nor contrib that does that.
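A sketch of that decorator idea, with a hypothetical check_security task and a plain dict standing in for per-host results (a real Fabric task would read the current host from env.host_string instead of taking it as an argument):

```python
import functools

SUMMARY = {}   # host -> result, filled in as tasks run

def summarize(task):
    # Hypothetical decorator collecting each task's return value per host.
    @functools.wraps(task)
    def wrapper(host, *args, **kwargs):
        result = task(host, *args, **kwargs)
        SUMMARY[host] = result
        return result
    return wrapper

@summarize
def check_security(host):
    # stand-in for the real checks run over SSH
    return "ok" if host != "web2" else "world-writable /etc found"

for h in ("web1", "web2"):
    check_security(h)

# print the collated report at the end of the run
for host, result in sorted(SUMMARY.items()):
    print(f"{host}: {result}")
```

The full per-host output still scrolls by as usual; only the return values are collated into the final report.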
| 1 | 5 | 0 |
When I'm administering dozens of servers with Fabric, I often don't care about the specifics of the commands being run on each server, instead I want to collate small bits of information from each host and present it in summary at the end.
Does Fabric support this functionality itself? (I've searched the documentation to no avail, but perhaps I missed something).
Otherwise I suppose one could aggregate this information manually and then add an exit handler, but this feels like something that could be a common use case.
As an example, I have a some scripts that do some basic security checks on a number of servers, and I'd like to create a report at the end instead of scrolling through the output for each server. I don't want to restrict Fabric's output, since if there is an issue I want to scroll back to pinpoint it.
|
Is there a way to make Fabric summarise results across a number of hosts?
| 0.197375 | 0 | 0 | 391 |
10,793,042 |
2012-05-29T04:23:00.000
| 4 | 0 | 0 | 0 |
python,sqlite,fastcgi
| 10,796,243 | 1 | true | 0 | 0 |
Finally I found the answer:
the sqlite3 library needs write permissions also on the directory that contains the database file, probably because it needs to create a lock file.
That is why inserting data directly with SQL causes no problem, but inserting through the web (CGI, FastCGI, etc.) raises an error.
Just add write permission to the directory.
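To see why the directory matters: during a write, SQLite creates a journal/lock file next to the database file, so the containing directory must be writable by the web server's user. A small self-contained demo, using a throwaway temp directory in place of the real data directory:

```python
import os
import sqlite3
import tempfile

# throwaway directory standing in for the web app's data dir (hypothetical path)
d = tempfile.mkdtemp()
db = os.path.join(d, "sqlite.db")

conn = sqlite3.connect(db)
conn.execute("CREATE TABLE items (name TEXT)")
conn.execute("INSERT INTO items VALUES ('pen')")
conn.commit()   # during the write, SQLite creates a journal file *next to* the db,
                # which is why the directory itself needs write permission
conn.close()

conn = sqlite3.connect(db)
print(conn.execute("SELECT name FROM items").fetchone()[0])  # -> pen
```

If the directory were read-only for the web server's user, the commit (not the connect) is the step that would fail.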
| 1 | 2 | 0 |
In db.py, I can use a function (func insert) to insert data into sqlite correctly.
Now I want to insert data into sqlite through python-fastcgi; in the
fastcgi script (named post.py) I can get the request data correctly, but
when I call db.insert it gives me an internal server error.
I already did chmod 777 sqlite.db. Anyone know what the problem is?
|
sqlite3 insert using python and python cgi
| 1.2 | 1 | 0 | 1,289 |
10,793,272 |
2012-05-29T04:59:00.000
| 1 | 0 | 0 | 0 |
python,ajax,django,web-applications
| 10,795,821 | 1 | true | 1 | 0 |
Simply return the rendered template fragment. You don't need to do anything special. Your Javascript can then just insert it into the DOM at the relevant point.
| 1 | 0 | 0 |
My website have submenus for sections. What I want to do is, when users click the submenu, the content changes accordingly. For example, if user clicks "Pen", the contents of the shall be list of pens, clicks "Eraser" , contents shall be eraser list.
How can I achieve this by using Django template and ajax? I know that I could retrieve the information as JSON data and parse it to update the div, but that requires a lot of work and I cannot use the Django template functionality.
I managed to pass the AJAX request to the server and process the list, but how can I return the rendered template as AJAX result?
|
Django & AJAX Changing Div Contents
| 1.2 | 0 | 0 | 522 |
10,795,095 |
2012-05-29T07:51:00.000
| 3 | 0 | 0 | 0 |
python,flask
| 10,798,159 | 1 | true | 1 | 0 |
The atexit module allows you to register program termination callbacks. Its callbacks won't be called however if the application is terminated by a signal. If you need to handle those cases, you can register the same callbacks with the signal module (for instance you might want to handle the SIGTERM signal).
I may have misunderstood what exactly you want to cleanup, but resources such as file handles or database connections will be closed anyway at interpreter shutdown, so you shouldn't have to worry about those.
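A minimal sketch of both registrations (the cleanup body is a placeholder for whatever resources you actually need to free):

```python
import atexit
import signal
import sys

def cleanup():
    # free resources here (close sockets, flush logs, ...)
    print("cleaning up")

atexit.register(cleanup)                                  # runs on normal interpreter exit
signal.signal(signal.SIGTERM, lambda s, f: sys.exit(0))   # make SIGTERM exit cleanly,
                                                          # so atexit callbacks still run
```

Without the signal handler, a SIGTERM (e.g. from a process manager stopping the app) would terminate the interpreter without running the atexit callbacks.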
| 1 | 8 | 0 |
I'm new to web development in Python and I've chosen Flask to start my web application. I have some resources to free before application shutdown, but I couldn't find where to put my cleanup code.
Flask provides some decorators like before_request and teardown_request to register callbacks before and after request processing. Is there something similar to register a callback to be called before the application stops?
Thanks.
|
Where do I put cleanup code in a Flask application?
| 1.2 | 0 | 0 | 4,055 |
10,795,682 |
2012-05-29T08:35:00.000
| 5 | 0 | 1 | 0 |
python
| 10,795,715 | 3 | true | 0 | 0 |
AFAIK, memory addresses in CPython are - by design - static. The memory address of an object can be seen with id(). The name of the function is a tell-tale sign that it doesn't change...
See however the comments below, where other SO users point out that id() returning the memory address is an implementation detail of CPython.
HTH!
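A quick check of that CPython behaviour (again, id() being the memory address is an implementation detail, so this only demonstrates CPython):

```python
class Obj:
    pass

o = Obj()
addr = id(o)   # CPython: the object's memory address

# the address stays fixed for the object's whole lifetime
for _ in range(1000):
    assert id(o) == addr

print("address stable:", id(o) == addr)  # -> address stable: True
```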
| 2 | 5 | 0 |
If the same object is invoked multiple times in python, will the memory location always be the same when I print 'self'?
|
Do python objects move in memory during execution?
| 1.2 | 0 | 0 | 664 |
10,795,682 |
2012-05-29T08:35:00.000
| 1 | 0 | 1 | 0 |
python
| 10,796,065 | 3 | false | 0 | 0 |
As mac noticed, memory addresses in CPython are - by design - static.
But even on CPython you can't rely on this if you are using some C extensions.
Some of them can move objects and drive the garbage collector manually.
And if you are using other Python implementations, such as PyPy, you are certainly not guaranteed that an object's memory location will always be the same; very probably it will move.
| 2 | 5 | 0 |
If the same object is invoked multiple times in python, will the memory location always be the same when I print 'self'?
|
Do python objects move in memory during execution?
| 0.066568 | 0 | 0 | 664 |
10,796,821 |
2012-05-29T09:56:00.000
| 21 | 1 | 1 | 0 |
python,exception-handling
| 10,796,924 | 2 | true | 0 | 0 |
sys.exit raises a SystemExit itself so from a purely technical point of view there's no difference between raising that exception yourself or using sys.exit. And yes you can catch SystemExit exceptions like any other exception and ignore it.
So it's just a matter of documenting your intent better.
PS: Note that this also means that sys.exit is actually a pretty bad misnomer - because if you use sys.exit in a thread only the thread is terminated and nothing else. That can be pretty annoying, yes.
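A small demonstration that sys.exit just raises SystemExit, which can be caught (or ignored) like any other exception:

```python
import sys

try:
    sys.exit("something is wrong")   # raises SystemExit, nothing more
except SystemExit as exc:
    caught = str(exc)

print("caught:", caught)  # -> caught: something is wrong
```

This is why a bare `except:` clause in library code can accidentally swallow an intended exit; catching `except Exception:` avoids that, since SystemExit derives from BaseException, not Exception.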
| 1 | 14 | 0 |
What is the difference between calling sys.exit() and throwing an exception in Python?
Let's say I have a Python script which does the following:
open a file
read lines
close it
If the file doesn't exist or an IOException gets thrown at runtime, which of the options below makes more sense?
no except/catch of the exception, if exception occurs, it fails out (which is expected behaviour anyway)
except/catch the exception, logs the error message, throw customized exception by myself, fails out.
in an except IOException block, exit with an error message e.g. sys.exit("something is wrong")
Does option 3 kill the process while 1 and 2 do not? What's the best way to handle the Python exceptions given that Python doesn't have a checked exception like Java (I am really a Java developer ^_^)?
|
Difference between calling sys.exit() and throwing exception
| 1.2 | 0 | 0 | 8,832 |
10,798,448 |
2012-05-29T11:42:00.000
| 0 | 0 | 1 | 0 |
python,zip
| 10,906,300 | 1 | false | 0 | 0 |
No. The python ZipFile module does not provide any way to do that.
You can extract the archive into memory, and then rewrite it with the new filename for the file in question, but that is memory intensive and not what you want to do.
You may be able to edit the zipfile's header and information fields, but the standard Python interface doesn't have an easy way to do so.
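If you do accept the rewrite approach, a sketch looks like this (the helper name and file names are made up for the demo; note it holds each member's data in memory while copying):

```python
import os
import tempfile
import zipfile

def rename_member(src, dst, old, new):
    # "Rename" a member by copying every entry into a new archive,
    # substituting the new name for the old one.
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for item in zin.infolist():
            data = zin.read(item.filename)
            name = new if item.filename == old else item.filename
            zout.writestr(name, data)

# demo with a throwaway archive
d = tempfile.mkdtemp()
src, dst = os.path.join(d, "a.zip"), os.path.join(d, "b.zip")
with zipfile.ZipFile(src, "w") as z:
    z.writestr("old.txt", "payload")

rename_member(src, dst, "old.txt", "new.txt")
print(zipfile.ZipFile(dst).namelist())  # -> ['new.txt']
```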
| 1 | 3 | 0 |
Is there a way to rename a file inside a compressed archive, without extracting it, in Python?
|
How can I rename the contents of a compressed file before extracting, in Python?
| 0 | 0 | 0 | 282 |
10,800,039 |
2012-05-29T13:20:00.000
| 0 | 0 | 0 | 0 |
python,database,arrays,save,hdf5
| 10,802,164 | 3 | false | 0 | 0 |
I would use a single file with fixed record length for this use case. No specialised DB solution (seems overkill to me in that case), just plain old struct (see the documentation for the struct module) and read()/write() on a file. If you have just millions of entries, everything should work nicely in a single file of some dozens or hundreds of MB (which is hardly too large for any file system). You also have random access to subsets in case you need that later.
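A minimal sketch of such a fixed-record file; the record layout here (a float64 timestamp plus a uint32 value) is just an assumption for the demo:

```python
import os
import struct
import tempfile

REC = struct.Struct("<dI")   # little-endian: one float64 timestamp + one uint32 value

path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    for i in range(5):
        f.write(REC.pack(1000.0 + i, i * i))

# fixed record length gives O(1) random access: seek straight to record 3
with open(path, "rb") as f:
    f.seek(3 * REC.size)
    ts, val = REC.unpack(f.read(REC.size))

print(ts, val)  # -> 1003.0 9
```

Appending new data is just opening the file in "ab" mode and writing more packed records; nothing already on disk needs reprocessing.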
| 2 | 3 | 1 |
I'm currently rewriting some python code to make it more efficient and I have a question about saving python arrays so that they can be re-used / manipulated later.
I have a large number of data, saved in CSV files. Each file contains time-stamped values of the data that I am interested in and I have reached the point where I have to deal with tens of millions of data points. The data has got so large now that the processing time is excessive and inefficient---the way the current code is written the entire data set has to be reprocessed every time some new data is added.
What I want to do is this:
Read in all of the existing data to python arrays
Save the variable arrays to some kind of database/file
Then, the next time more data is added I load my database, append the new data, and resave it. This way only a small number of data need to be processed at any one time.
I would like the saved data to be accessible to further python scripts but also to be fairly "human readable" so that it can be handled in programs like OriginPro or perhaps even Excel.
My question is: whats the best format to save the data in? HDF5 seems like it might have all the features I need---but would something like SQLite make more sense?
EDIT: My data is single dimensional. I essentially have 30 arrays which are (millions, 1) in size. If it wasn't for the fact that there are so many points then CSV would be an ideal format! I am unlikely to want to do lookups of single entries---more likely is that I might want to plot small subsets of data (eg the last 100 hours, or the last 1000 hours, etc).
|
Saving large Python arrays to disk for re-use later --- hdf5? Some other method?
| 0 | 0 | 0 | 1,123 |
10,800,039 |
2012-05-29T13:20:00.000
| 2 | 0 | 0 | 0 |
python,database,arrays,save,hdf5
| 10,817,026 | 3 | true | 0 | 0 |
HDF5 is an excellent choice! It has a nice interface, is widely used (in the scientific community at least), many programs have support for it (matlab for example), there are libraries for C,C++,fortran,python,... It has a complete toolset to display the contents of a HDF5 file. If you later want to do complex MPI calculation on your data, HDF5 has support for concurrently read/writes. It's very well suited to handle very large datasets.
| 2 | 3 | 1 |
I'm currently rewriting some python code to make it more efficient and I have a question about saving python arrays so that they can be re-used / manipulated later.
I have a large number of data, saved in CSV files. Each file contains time-stamped values of the data that I am interested in and I have reached the point where I have to deal with tens of millions of data points. The data has got so large now that the processing time is excessive and inefficient---the way the current code is written the entire data set has to be reprocessed every time some new data is added.
What I want to do is this:
Read in all of the existing data to python arrays
Save the variable arrays to some kind of database/file
Then, the next time more data is added I load my database, append the new data, and resave it. This way only a small number of data need to be processed at any one time.
I would like the saved data to be accessible to further python scripts but also to be fairly "human readable" so that it can be handled in programs like OriginPro or perhaps even Excel.
My question is: whats the best format to save the data in? HDF5 seems like it might have all the features I need---but would something like SQLite make more sense?
EDIT: My data is single dimensional. I essentially have 30 arrays which are (millions, 1) in size. If it wasn't for the fact that there are so many points then CSV would be an ideal format! I am unlikely to want to do lookups of single entries---more likely is that I might want to plot small subsets of data (eg the last 100 hours, or the last 1000 hours, etc).
|
Saving large Python arrays to disk for re-use later --- hdf5? Some other method?
| 1.2 | 0 | 0 | 1,123 |
10,801,911 |
2012-05-29T15:12:00.000
| 0 | 0 | 1 | 0 |
python,nosetests
| 10,804,538 | 1 | false | 0 | 0 |
Drop the nosetests, as it's too hard to make them work on different versions of Python, and switch to tox, eventually calling py.test from tox.
| 1 | 2 | 0 |
I do have several python packages that I do test using nosetest, and as expected one of the steps is to run the tests using several versions of Python.
The main problem is that most nose extensions are not compatible with all versions of python and not having them install will prevent you from running the tests (nose will stop if it finds any unknown option inside [nosetest] from setup.cfg.
Example of extensions: yanc, xtraceback,machineout,'nose_exclude`
I do have to run the tests with Python 2.5, 2.6, 2.7, 3.2
I do not need to run all of these for all versions of python, but still how should I reconfigure the execution of the tests in order not to loose them?
|
How to deal with nose extensions that are not available under certain versions of python?
| 0 | 0 | 0 | 102 |
10,803,012 |
2012-05-29T16:22:00.000
| 0 | 0 | 0 | 0 |
python,mysql,odbc,pyodbc
| 10,803,049 | 3 | false | 0 | 0 |
As long as you use the same connection, the database should show you a consistent view on the data, e.g. with all changes made so far in this transaction.
Once you commit, the changes will be written to disk and be visible to other (new) transactions and connections.
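The first point can be demonstrated with the stdlib sqlite3 module rather than MySQL/pyodbc (the principle is the same): a connection sees its own uncommitted changes, so your queries will not read stale data just because you deferred the commit.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")   # not committed yet

# the same connection already sees its own uncommitted insert
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # -> 1

conn.commit()   # now the change is durable and visible to other connections
```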
| 2 | 2 | 0 |
I have run a few trials and there seems to be some improvement in speed if I set autocommit to False.
However, I am worried that doing one commit at the end of my code, the database rows will not be updated. So, for example, I do several updates to the database, none are committed, does querying the database then give me the old data? Or, does it know it should commit first?
Or, am I completely mistaken as to what commit actually does?
Note: I'm using pyodbc and MySQL. Also, the table I'm using are InnoDB, does that make a difference?
|
Does setting autocommit to true take longer than batch committing?
| 0 | 1 | 0 | 1,170 |
10,803,012 |
2012-05-29T16:22:00.000
| 1 | 0 | 0 | 0 |
python,mysql,odbc,pyodbc
| 10,803,230 | 3 | false | 0 | 0 |
The default transaction mode for InnoDB is REPEATABLE READ, so all reads will be consistent within a transaction. If you insert rows and query them in the same transaction, you will not see the newly inserted rows, but they will be stored when you commit the transaction. If you want to see the newly inserted rows before you commit the transaction, you can set the isolation level to READ COMMITTED.
| 2 | 2 | 0 |
I have run a few trials and there seems to be some improvement in speed if I set autocommit to False.
However, I am worried that doing one commit at the end of my code, the database rows will not be updated. So, for example, I do several updates to the database, none are committed, does querying the database then give me the old data? Or, does it know it should commit first?
Or, am I completely mistaken as to what commit actually does?
Note: I'm using pyodbc and MySQL. Also, the table I'm using are InnoDB, does that make a difference?
|
Does setting autocommit to true take longer than batch committing?
| 0.066568 | 1 | 0 | 1,170 |
10,803,329 |
2012-05-29T16:44:00.000
| 3 | 0 | 1 | 0 |
javascript,python,oop,class
| 10,803,565 | 2 | false | 0 | 0 |
What is the best approach to this kind of problem?
Some game developers would say that OOP is not the type of programming you would use for games. They would have a global data store, and use procedural code to access the global data store.
You're the only one who can really answer this question. Did your classes help or hinder your game programming?
How specific should I be, and should I use something completely different?
As specific as you need to be to model the game. In my opinion, since you finished the games, you had a good model. As you get more development experience, you'll have seen more models that you can use.
| 1 | 2 | 0 |
I made two games so far, one was a simple 2D MMO, and another one was a 2D portal clone with some more features.
Anyway, I noticed that my class design in those two games varied a bit:
In the MMO, I'd create a class called "Enemy", and this class would take some arguments such as image and attack_power.
In the Portal-like game, I'd create classes for every kind of object specifically, for example "Box" or "Wall" or "Doors". Then, these would take only position as an argument, and inside I'd flag them for movable, physics with true/false, and then I would have an update function which would act upon these objects.
What is the best approach to this kind of problem? How specific should I be, and should I use something completely different?
I made those games in Python and Javascript.
|
How specific should my classes in a game be? (Or anywhere else for that matter)
| 0.291313 | 0 | 0 | 107 |