Columns (name: dtype, observed min–max or string-length range):
- Q_Id: int64, 337 – 49.3M
- CreationDate: string, length 23
- Users Score: int64, -42 – 1.15k
- Other: int64, 0 – 1
- Python Basics and Environment: int64, 0 – 1
- System Administration and DevOps: int64, 0 – 1
- Tags: string, length 6 – 105
- A_Id: int64, 518 – 72.5M
- AnswerCount: int64, 1 – 64
- is_accepted: bool, 2 classes
- Web Development: int64, 0 – 1
- GUI and Desktop Applications: int64, 0 – 1
- Answer: string, length 6 – 11.6k
- Available Count: int64, 1 – 31
- Q_Score: int64, 0 – 6.79k
- Data Science and Machine Learning: int64, 0 – 1
- Question: string, length 15 – 29k
- Title: string, length 11 – 150
- Score: float64, -1 – 1.2
- Database and SQL: int64, 0 – 1
- Networking and APIs: int64, 0 – 1
- ViewCount: int64, 8 – 6.81M

Each record below lists these fields in the order above, separated by | markers.
13,239,279 |
2012-11-05T19:49:00.000
| 5 | 0 | 1 | 0 |
python,dictionary
| 13,239,324 | 5 | false | 0 | 0 |
No. Check out OrderedDict from the collections module.
| 2 | 10 | 0 |
When I introduce a new pair it is inserted at the beginning of the dictionary. Is it possible to append it at the end?
|
Is it possible to add pair at the end of the dictionary in python
| 0.197375 | 0 | 0 | 20,298 |
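The OrderedDict suggestion above can be sketched as follows; note that in Python 3.7+, plain dicts also preserve insertion order, so OrderedDict is mainly needed on older versions.

```python
from collections import OrderedDict

d = OrderedDict()
d["first"] = 1
d["second"] = 2
d["third"] = 3  # a newly introduced pair is appended at the end

print(list(d.keys()))  # ['first', 'second', 'third']
```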
13,239,279 |
2012-11-05T19:49:00.000
| 7 | 0 | 1 | 0 |
python,dictionary
| 13,239,328 | 5 | false | 0 | 0 |
A dict in Python is not "ordered" - in Python 2.7+ there's collections.OrderedDict, but apart from that - no... The key point of a dictionary in Python is efficient key->value lookup... The order you're seeing them in is completely arbitrary, depending on the hash algorithm...
| 2 | 10 | 0 |
When I introduce a new pair it is inserted at the beginning of the dictionary. Is it possible to append it at the end?
|
Is it possible to add pair at the end of the dictionary in python
| 1 | 0 | 0 | 20,298 |
13,241,503 |
2012-11-05T22:34:00.000
| 1 | 0 | 0 | 1 |
python,google-app-engine,google-cloud-datastore
| 13,250,634 | 2 | false | 1 | 0 |
A repeated string property is your best option.
| 1 | 0 | 0 |
I want to have a property on a database model of mine in Google App Engine and I am not sure which category works the best. I need it to be a tag cloud similar to the Tags on SO. Would a text property be best, or should I use a string property and make it repeated=True?
The second seems best to me and then I can just divide the tags up with a comma as a delimiter. My goal is to be able to search through these tags and count the total number of each type of tag.
Does this seem like a reasonable solution?
|
Which GAE Database Property Fits a Tag Property?
| 0.099668 | 0 | 0 | 72 |
13,241,827 |
2012-11-05T23:05:00.000
| 5 | 1 | 1 | 0 |
python,python-import
| 13,241,883 | 2 | true | 0 | 0 |
Just import them where they're needed. After a module has been imported once, it is cached so that any subsequent imports will be quick. If you import the same module 20 times, only the first one will be slow.
| 1 | 1 | 0 |
I have a Python program that has several slow imports. I'd like to delay importing them until they are needed. For instance, if a user is just trying to print a help message, it is silly to import the slow modules. What's the most Pythonic way to do this?
I'll add a solution I was playing with as an answer. I know you all can do better, though.
|
What's the best way to do a just-in-time import of Python libraries?
| 1.2 | 0 | 0 | 421 |
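The advice above can be illustrated with a function-local import; json stands in here for a hypothetical slow-to-import module.

```python
import sys

def render_report(data):
    # Imported only when this code path actually runs; after the first
    # call the module is cached in sys.modules, so repeats are cheap.
    import json
    return json.dumps(data)

print(render_report({"ok": True}))  # {"ok": true}
print("json" in sys.modules)        # True (cached after first import)
```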
13,243,581 |
2012-11-06T02:43:00.000
| 1 | 1 | 0 | 0 |
java,python,ruby,email,clojure
| 13,243,729 | 1 | false | 0 | 0 |
Almost every mail server has some form of extensibility where you can insert logic in the mail-flow process; it's how some spam filters were implemented before they were built directly into the servers. Personally, I use Exchange server, which has a variety of extension points and APIs, such as SMTP Sinks.
However, this question is off-topic and shouldn't be on StackOverflow.
I suggest you build your own server - implementing a server-side version of SMTP and IMAP can be done by a single person, or use an existing library, it shouldn't take you more than a year if you put in a couple of hours each day.
| 1 | 0 | 0 |
I am interested in building a mail service that allows you to incorporate custom logic in your mail server.
For example, user A can reply to [email protected] once and subsequent emails from user A to [email protected] will not go through until certain actions are taken.
I am looking for something simple and customizable, preferably open-sourced. I am fluent in most modern languages.
What email servers do you guys recommend for this?
|
Customizable mail server - what are my options?
| 0.197375 | 0 | 0 | 92 |
13,245,772 |
2012-11-06T06:43:00.000
| 0 | 0 | 0 | 0 |
python,sockets
| 13,245,863 | 3 | false | 0 | 0 |
Sockets are created and managed by the OS; all a programming language does is put data into OS buffers. So to check the open sockets, you have to query the operating system itself.
| 1 | 0 | 0 |
In Python 2.7 is there a way to get information on all open sockets similar to what netstat/ss does in linux?
I am interested in writing a small program (similar to EtherApe) that tracks when my computer opens a connection to a server.
|
Show all open sockets in Python
| 0 | 0 | 1 | 3,146 |
13,248,020 |
2012-11-06T09:25:00.000
| 7 | 0 | 1 | 0 |
python
| 13,248,068 | 3 | false | 0 | 0 |
One difference: with r+, if the file does not exist it will not be created and the open fails. With a+, the file will be created if it does not exist.
| 1 | 46 | 0 |
I have tried r+ and a+ to open a file for reading and writing, but both 'r+' and 'a+' appended the str to the end of the file.
So, what's the difference between r+ and a+?
Add:
I have found the reason:
I had read from the file object and forgotten to seek(0) to set the location back to the beginning.
|
What's the difference between 'r+' and 'a+' when open file in python?
| 1 | 0 | 0 | 137,317 |
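A small demonstration of both behaviors discussed above, using a temporary file:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("hello")

# 'r+' writes wherever the file position is (here, the start).
with open(path, "r+") as f:
    f.write("J")           # overwrites 'h' in place
with open(path) as f:
    print(f.read())        # Jello

# 'a+' forces every write to the end, even after seek(0);
# it would also have created the file had it not existed.
with open(path, "a+") as f:
    f.seek(0)
    f.write("!")           # still lands at the end
with open(path) as f:
    print(f.read())        # Jello!
```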
13,248,848 |
2012-11-06T10:15:00.000
| 1 | 0 | 0 | 0 |
python-2.7,wxpython,wxwidgets
| 13,254,838 | 1 | true | 0 | 1 |
ObjectListView is just a nice wrapper around the ListCtrl. To my knowledge, the ListCtrl does not support the embedding of other widgets. You could create a popup dialog when double-clicking a cell and do it that way. Otherwise, you would have to use the UltimateListCtrl. That widget DOES allow widget embedding because it's a custom widget rather than a native one.
| 1 | 0 | 0 |
If this is possible, can you show some code example? Thanks in advance.
|
Can we embedd a ComboBox or a TextCtrl on the ObjectListView of wxPython?
| 1.2 | 0 | 0 | 203 |
13,249,147 |
2012-11-06T10:34:00.000
| 0 | 0 | 0 | 1 |
python,google-apps-script
| 13,249,247 | 1 | false | 0 | 0 |
Yes. You would need to authorize it the first time and implement OAuth from the script, though. I strongly suggest that you switch to the Google Drive API.
| 1 | 0 | 0 |
Would it be possible for some type of Python script to check services running on a Linux box and integrate with a Google Apps Script, so it would then populate a Google Docs spreadsheet stating whether a service is running or not?
|
Python/Google Apps Script integration
| 0 | 0 | 0 | 164 |
13,250,046 |
2012-11-06T11:27:00.000
| 3 | 0 | 0 | 0 |
python,pandas,csv,types
| 58,968,554 | 6 | false | 0 | 0 |
You can do this; it works on all versions of Pandas:
pd.read_csv('filename.csv', dtype={'zero_column_name': object})
| 1 | 76 | 1 |
I am importing study data into a Pandas data frame using read_csv.
My subject codes are 6 numbers coding, among others, the day of birth. For some of my subjects this results in a code with a leading zero (e.g. "010816").
When I import into Pandas, the leading zero is stripped off and the column is formatted as int64.
Is there a way to import this column unchanged maybe as a string?
I tried using a custom converter for the column, but it does not work - it seems as if the custom conversion takes place before Pandas converts to int.
|
How to keep leading zeros in a column when reading CSV with Pandas?
| 0.099668 | 0 | 0 | 62,070 |
13,251,192 |
2012-11-06T12:34:00.000
| 0 | 0 | 1 | 0 |
python,coding-style
| 13,251,219 | 3 | false | 0 | 0 |
If a function succeeds or fails, return a boolean True or False.
If it hits an error, raise an exception.
If it just mutates something, don't return anything.
| 2 | 0 | 0 |
Suppose I'm writing a function in python, this function can either be success or raise exception.
So which one shall I use:
return
return True
no return
Edit:
Thanks for response. In this case I mean to ask if return Value have no further meaning, shall I still do a return True or something.
The case I'm working on is to write thrift server-side function, where I hesitated whether to use void or boolean type in the service api.
|
About python function return value best practice
| 0 | 0 | 0 | 2,748 |
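The convention from the answer above, sketched (the save function and in-memory records list are hypothetical):

```python
records = []

def save(record):
    """Raise on failure; return nothing on success."""
    if record is None:
        raise ValueError("nothing to save")
    records.append(record)

save({"id": 1})            # success: nothing to check, no exception raised
try:
    save(None)
except ValueError as e:
    print("failed:", e)    # failed: nothing to save
```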
13,251,192 |
2012-11-06T12:34:00.000
| 2 | 0 | 1 | 0 |
python,coding-style
| 13,251,230 | 3 | false | 0 | 0 |
Don't bother returning anything, unless you explicitly specify that the function intends to return something useful.
This way you can assume that everything has succeeded if no exception was thrown.
| 2 | 0 | 0 |
Suppose I'm writing a function in python, this function can either be success or raise exception.
So which one shall I use:
return
return True
no return
Edit:
Thanks for response. In this case I mean to ask if return Value have no further meaning, shall I still do a return True or something.
The case I'm working on is to write thrift server-side function, where I hesitated whether to use void or boolean type in the service api.
|
About python function return value best practice
| 0.132549 | 0 | 0 | 2,748 |
13,252,683 |
2012-11-06T14:03:00.000
| 3 | 0 | 0 | 1 |
python,google-app-engine
| 13,252,901 | 2 | false | 1 | 0 |
There's still a cursor, even if the last result is retrieved. The query class doesn't know that, in any case: it knows what you've had already, but it doesn't know what else is still to come. The cursor doesn't represent any actual result, it's simply a way of resuming the query later. In fact, it's possible to use a cursor even in the case where you reach the end of the data set on your initial query, but later updates mean that new items are now found on a subsequent request: for example, if you're ordering by last update time.
(Good username, btw: gotta love some PKD.)
| 1 | 2 | 0 |
From the Google App Engine documentation:
"cursor() returns a base64-encoded cursor string denoting the position in the query's result set following the last result retrieved."
What does it return if the last result retrieved IS the last result in the query set? Wouldn't this mean that there is no position that can 'follow' the last result retrieved? Therefore, is 'None' returned?
|
what does the cursor() method of GAE Query class return if the last result was already retrieved?
| 0.291313 | 0 | 0 | 119 |
13,253,510 |
2012-11-06T14:50:00.000
| 0 | 0 | 1 | 1 |
python,django,windows,vim,ide
| 19,894,858 | 5 | false | 0 | 0 |
One possible compromise is to use your favorite IDE with a vim emulator plugin. For example, in Eclipse you can use Vrapper, PyCharm has IdeaVim and so forth. Lighttable also has vim key-bindings. The plug-ins (or key-binding options) give you some of the benefits of editing in Vim while still having the powerful debugging / navigation features, etc. of a full-blown IDE. BTW, Vrapper works with PyDev.
Using an emulator in an IDE allows you to gain the "muscle-memory" necessary for effective vim editing, without getting bogged down in "configuration hell" associated with turning an editor into an IDE (which auto-complete plugin do I use?..etc.?). Once you have mastered the vim keystrokes for normal and visual mode, used along with insert mode, you may decide to continue on into pure Vim and face those issues.
| 1 | 1 | 0 |
I am turning to Python from the .NET world, where Visual Studio was a great tool I used.
In the Python world we have the basic IDLE, and another option is Vim. I have seen that a lot of developers have configured Vim into a great IDE. Using basic Vim on Windows 7 seems of little use.
So I want to bring my Vim up to a level where it has a file explorer, syntax highlighting, search, error highlighting, etc., so that it gives the feel of Visual Studio and is more productive.
But most of the hacks/tips available are for Linux/Ubuntu users, which I may use later; as of now I need to make my Vim on Windows more productive and visual.
Please suggest some tips/hacks/resources to look at for Vim configuration.
Thanks
|
Using VIM for Python IDE in Windows?
| 0 | 0 | 0 | 4,165 |
13,254,044 |
2012-11-06T15:19:00.000
| 0 | 0 | 1 | 0 |
python,interactive-shell,python-interactive
| 13,254,202 | 2 | false | 0 | 0 |
I believe the pickle package should work for you. You can use pickle.dump or pickle.dumps to save the state of most objects. (then pickle.load or pickle.loads to get it back)
| 1 | 0 | 0 |
I am trying to build an online Python shell. I execute commands by creating an instance of InteractiveInterpreter and using the runcode method. For that I need to store the interpreter state in the database so that variables, functions, definitions and other values in the global and local namespaces can be used across commands. Is there a way to store the current state of the InteractiveInterpreter object so it can be retrieved later and passed as the locals argument to the InteractiveInterpreter constructor? If I can't do this, what alternatives do I have to achieve the mentioned functionality?
Below is the pseudo code of what I am trying to achieve
def fun(code, sessionID):
session = Session()
# get the latest state of the interpreter object corresponding to SessionID
vars = session.getvars(sessionID)
it = InteractiveInterpreter(vars)
it.runcode(code)
#save back the new state of the interpreter object
session.setvars(it.getState(),sessionID)
Here, session is an instance of table containing all the necessary information.
|
How to store the current state of InteractiveInterpreter Object in a database?
| 0 | 1 | 0 | 136 |
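pickle round-trips plain Python data easily, as sketched below; note, though, that a full interpreter namespace often contains unpicklable objects (modules, open files, generators), so per-session state usually has to be filtered down to plain data first.

```python
import pickle

# Hypothetical per-session namespace reduced to picklable values.
namespace = {"x": 42, "items": [1, 2, 3], "name": "session-1"}

blob = pickle.dumps(namespace)   # bytes, storable in a DB column
restored = pickle.loads(blob)    # later: rebuild the namespace

print(restored == namespace)  # True
```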
13,258,448 |
2012-11-06T19:59:00.000
| 0 | 0 | 0 | 0 |
python-3.x,clipboardmanager
| 13,261,057 | 1 | true | 0 | 1 |
Clipboards are framework specific, so a good place to start would be to select a GUI framework and look at its documentation for how to deal with cut and paste.
| 1 | 0 | 0 |
Where would be a good place to start learning to program a clipboard manager in Python? I want to be able to copy selected text with a keyboard macro and assign it to a slot identified by the key combo "ctrl+alt+c+1", "ctrl+alt+c+2", storing more than one thing; it should also be possible to concatenate onto what is already in a slot, if one wants. Anyway, that is the project I want to write. I write in Python 3.2, Python 2.7, and Java 7. What libraries should I start learning is, I guess, what I am asking.
|
Python clipboard manager?
| 1.2 | 0 | 0 | 964 |
13,261,858 |
2012-11-07T01:07:00.000
| 0 | 1 | 0 | 0 |
python,twitter,twython
| 16,578,360 | 2 | false | 0 | 0 |
Considering the case of similar tweets and retweets, I would recommend making a semantic note of the whole tweet: extract the text part of each tweet and do a dictionary lookup.
Using the tweet ID, as noted above, is simpler, though it loses those near-duplicates.
| 1 | 1 | 0 |
I'm working on a project which requires counting the number of tweets that meet the parameters of a query. I'm working in Python, using Twython as my interface to Twitter.
A few questions though, how do you record which tweets have already been accounted for? Would you simply make a note of the last tweet ID and ignore it plus all previous? --What is the easiest implementation of this?
As another optimizations question, I want to make sure that the amount of tweets missed by the counter is minimal, is there any way to make sure of this?
Thanks so much.
|
How to count tweets from query without double counting?
| 0 | 0 | 0 | 245 |
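The last-tweet-ID bookkeeping mentioned above might look like this (tweets modeled as plain dicts; Twitter IDs increase over time, so the max ID seen marks your position):

```python
def count_new(tweets, last_seen_id):
    """Count tweets newer than last_seen_id and return the new marker."""
    fresh = [t for t in tweets if t["id"] > last_seen_id]
    new_marker = max((t["id"] for t in fresh), default=last_seen_id)
    return len(fresh), new_marker

count, marker = count_new([{"id": 5}, {"id": 9}, {"id": 3}], last_seen_id=4)
print(count, marker)  # 2 9
```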
13,262,047 |
2012-11-07T01:34:00.000
| 3 | 0 | 1 | 0 |
java,python,security,encryption,aes
| 13,262,092 | 1 | true | 1 | 0 |
Typically, you'd generate the IV randomly, and send it along with the encrypted message. The IV doesn't need to be secret--it just needs to be different for every message you send.
There are a wide variety of concerns to worry about when implementing crypto. Your block cipher mode matters, for instance--if you're using an IV you probably aren't using ECB, but that leaves quite a few other options open. Padding attacks and other subtle things are also a concern.
Generally, you don't want to implement crypto yourself if you can possibly avoid it. It's much too easy to get wrong, and usually quite important to get right. You may want to ask for more help on the Security StackExchange.
| 1 | 1 | 0 |
I'm making a project in Java and Python that includes sending an encrypted string from one to the other. I can get the languages to understand each other and fully decrypt / encrypt strings. However, I was talking to somebody and was told that I am not being totally secure. I am using AES encryption for the project. Part of the problem is that I am distributing the software and need to come up with an effective and secure way of making sure both the server and client side know the IV and 'Secret Key'. Right now the same string will always encrypt to the same result. If I could change those two factors they would be different, so 2 users with the same password won't have the same encrypted password. Please do keep in mind that the server only needs to manage one account.
I appreciate your responses, and thank you very much ahead of time!
|
AES Encryption (Python and Java)
| 1.2 | 0 | 0 | 631 |
13,267,912 |
2012-11-07T10:36:00.000
| 1 | 1 | 0 | 0 |
c++,python,boost-python
| 13,363,934 | 2 | true | 0 | 1 |
The from-Python conversion is in fact done in builtin_converters.cpp and not in the header part of the library. I copied this file and deleted everything except the converter for long double, which I was then able to modify.
| 1 | 2 | 0 |
Where does boost python register from python converters for builtin types such as from PyLong_Type to double?
I want to define a converter that can take a numpy.float128 from python and returns a long double for functions in C++. I already did it the other way round, the to_python converter. For that I tweaked builtin_converters.hpp but I didn't find how boost python does the from python conversion.
|
From python converter for builtin types
| 1.2 | 0 | 0 | 300 |
13,274,197 |
2012-11-07T16:42:00.000
| 9 | 0 | 0 | 0 |
python,amazon-web-services,boto,amazon-glacier
| 13,275,014 | 1 | true | 1 | 0 |
The AWS Glacier service does not provide a way to delete a job. You can:
Initiate a job
Describe a job
Get the output of a job
List all of your jobs
The Glacier service manages the jobs associated with a vault.
| 1 | 7 | 0 |
I have started a retrieval job for an archive stored in one of my vaults on Glacier AWS.
It turns out that I do not need to resurrect and download that archive any more.
Is there a way to stop and/or delete my Glacier job?
I am using boto and I cannot seem to find a suitable function.
Thanks
|
AWS glacier delete job
| 1.2 | 1 | 0 | 1,164 |
13,280,680 |
2012-11-08T00:31:00.000
| 3 | 0 | 1 | 0 |
python,python-3.x,python-2.x
| 24,463,654 | 8 | false | 0 | 0 |
Well, not discounting the problems cautioned about at the start. But it can be useful in certain cases.
First of all, the reason I am looking this post up is because I did just this and __slots__ doesn't like it. (yes, my code is a valid use case for slots, this is pure memory optimization) and I was trying to get around a slots issue.
I first saw this in Alex Martelli's Python Cookbook (1st ed). In the 3rd ed, it's recipe 8.19 "Implementing Stateful Objects or State Machine Problems". A fairly knowledgeable source, Python-wise.
Suppose you have an ActiveEnemy object that has different behavior from an InactiveEnemy and you need to switch back and forth quickly between them. Maybe even a DeadEnemy.
If InactiveEnemy was a subclass or a sibling, you could switch class attributes. More exactly, the exact ancestry matters less than the methods and attributes being consistent to code calling it. Think Java interface or, as several people have mentioned, your classes need to be designed with this use in mind.
Now, you still have to manage state transition rules and all sorts of other things. And, yes, if your client code is not expecting this behavior and your instances switch behavior, things will hit the fan.
But I've used this quite successfully on Python 2.x and never had any unusual problems with it. Best done with a common parent and small behavioral differences on subclasses with the same method signatures.
No problems, until my __slots__ issue that's blocking it just now. But slots are a pain in the neck in general.
I would not do this to patch live code. I would also privilege using a factory method to create instances.
But to manage very specific conditions known in advance? Like a state machine that the clients are expected to understand thoroughly? Then it is pretty darn close to magic, with all the risk that comes with it. It's quite elegant.
Python 3 concerns? Test it to see if it works but the Cookbook uses Python 3 print(x) syntax in its example, FWIW.
| 5 | 35 | 0 |
Say I have a class, which has a number of subclasses.
I can instantiate the class. I can then set its __class__ attribute to one of the subclasses. I have effectively changed the class type to the type of its subclass, on a live object. I can call methods on it which invoke the subclass's version of those methods.
So, how dangerous is doing this? It seems weird, but is it wrong to do such a thing? Despite the ability to change type at run-time, is this a feature of the language that should completely be avoided? Why or why not?
(Depending on responses, I'll post a more-specific question about what I would like to do, and if there are better alternatives).
|
How dangerous is setting self.__class__ to something else?
| 0.07486 | 0 | 0 | 8,582 |
13,280,680 |
2012-11-08T00:31:00.000
| 30 | 0 | 1 | 0 |
python,python-3.x,python-2.x
| 13,280,789 | 8 | true | 0 | 0 |
Here's a list of things I can think of that make this dangerous, in rough order from worst to least bad:
It's likely to be confusing to someone reading or debugging your code.
You won't have gotten the right __init__ method, so you probably won't have all of the instance variables initialized properly (or even at all).
The differences between 2.x and 3.x are significant enough that it may be painful to port.
There are some edge cases with classmethods, hand-coded descriptors, hooks to the method resolution order, etc., and they're different between classic and new-style classes (and, again, between 2.x and 3.x).
If you use __slots__, all of the classes must have identical slots. (And if you have the compatible but different slots, it may appear to work at first but do horrible things…)
Special method definitions in new-style classes may not change. (In fact, this will work in practice with all current Python implementations, but it's not documented to work, so…)
If you use __new__, things will not work the way you naively expected.
If the classes have different metaclasses, things will get even more confusing.
Meanwhile, in many cases where you'd think this is necessary, there are better options:
Use a factory to create an instance of the appropriate class dynamically, instead of creating a base instance and then munging it into a derived one.
Use __new__ or other mechanisms to hook the construction.
Redesign things so you have a single class with some data-driven behavior, instead of abusing inheritance.
As a very most common specific case of the last one, just put all of the "variable methods" into classes whose instances are kept as a data member of the "parent", rather than into subclasses. Instead of changing self.__class__ = OtherSubclass, just do self.member = OtherSubclass(self). If you really need methods to magically change, automatic forwarding (e.g., via __getattr__) is a much more common and pythonic idiom than changing classes on the fly.
| 5 | 35 | 0 |
Say I have a class, which has a number of subclasses.
I can instantiate the class. I can then set its __class__ attribute to one of the subclasses. I have effectively changed the class type to the type of its subclass, on a live object. I can call methods on it which invoke the subclass's version of those methods.
So, how dangerous is doing this? It seems weird, but is it wrong to do such a thing? Despite the ability to change type at run-time, is this a feature of the language that should completely be avoided? Why or why not?
(Depending on responses, I'll post a more-specific question about what I would like to do, and if there are better alternatives).
|
How dangerous is setting self.__class__ to something else?
| 1.2 | 0 | 0 | 8,582 |
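A minimal illustration of the trade-off discussed above: sibling classes deliberately designed with identical state can have __class__ swapped on a live instance, and method dispatch changes immediately while the instance data survives. The Enemy classes here are hypothetical.

```python
class Enemy:
    def __init__(self, hp):
        self.hp = hp  # shared state, identical across all siblings

class ActiveEnemy(Enemy):
    def act(self):
        return "attacks"

class InactiveEnemy(Enemy):
    def act(self):
        return "sleeps"

e = ActiveEnemy(10)
print(e.act())        # attacks

e.__class__ = InactiveEnemy   # dispatch changes at once; hp is untouched
print(e.act(), e.hp)  # sleeps 10
```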
13,280,680 |
2012-11-08T00:31:00.000
| 5 | 0 | 1 | 0 |
python,python-3.x,python-2.x
| 13,281,013 | 8 | false | 0 | 0 |
On arbitrary classes, this is extremely unlikely to work, and is very fragile even if it does. It's basically the same thing as pulling the underlying function objects out of the methods of one class, and calling them on objects which are not instances of the original class. Whether or not that will work depends on internal implementation details, and is a form of very tight coupling.
That said, changing the __class__ of objects amongst a set of classes that were particularly designed to be used this way could be perfectly fine. I've been aware that you can do this for a long time, but I've never yet found a use for this technique where a better solution didn't spring to mind at the same time. So if you think you have a use case, go for it. Just be clear in your comments/documentation what is going on. In particular it means that the implementation of all the classes involved have to respect all of their invariants/assumptions/etc, rather than being able to consider each class in isolation, so you'd want to make sure that anyone who works on any of the code involved is aware of this!
| 5 | 35 | 0 |
Say I have a class, which has a number of subclasses.
I can instantiate the class. I can then set its __class__ attribute to one of the subclasses. I have effectively changed the class type to the type of its subclass, on a live object. I can call methods on it which invoke the subclass's version of those methods.
So, how dangerous is doing this? It seems weird, but is it wrong to do such a thing? Despite the ability to change type at run-time, is this a feature of the language that should completely be avoided? Why or why not?
(Depending on responses, I'll post a more-specific question about what I would like to do, and if there are better alternatives).
|
How dangerous is setting self.__class__ to something else?
| 0.124353 | 0 | 0 | 8,582 |
13,280,680 |
2012-11-08T00:31:00.000
| 17 | 0 | 1 | 0 |
python,python-3.x,python-2.x
| 13,281,122 | 8 | false | 0 | 0 |
Assigning the __class__ attribute is useful if you have a long time running application and you need to replace an old version of some object by a newer version of the same class without loss of data, e.g. after some reload(mymodule) and without reload of unchanged modules. Other example is if you implement persistency - something similar to pickle.load.
All other usage is discouraged, especially if you can write the complete code before starting the application.
| 5 | 35 | 0 |
Say I have a class, which has a number of subclasses.
I can instantiate the class. I can then set its __class__ attribute to one of the subclasses. I have effectively changed the class type to the type of its subclass, on a live object. I can call methods on it which invoke the subclass's version of those methods.
So, how dangerous is doing this? It seems weird, but is it wrong to do such a thing? Despite the ability to change type at run-time, is this a feature of the language that should completely be avoided? Why or why not?
(Depending on responses, I'll post a more-specific question about what I would like to do, and if there are better alternatives).
|
How dangerous is setting self.__class__ to something else?
| 1 | 0 | 0 | 8,582 |
13,280,680 |
2012-11-08T00:31:00.000
| 0 | 0 | 1 | 0 |
python,python-3.x,python-2.x
| 13,280,788 | 8 | false | 0 | 0 |
How "dangerous" it is depends primarily on what the subclass would have done when initializing the object. It's entirely possible that it would not be properly initialized, having only run the base class's __init__(), and something would fail later because of, say, an uninitialized instance attribute.
Even without that, it seems like bad practice for most use cases. Easier to just instantiate the desired class in the first place.
| 5 | 35 | 0 |
Say I have a class, which has a number of subclasses.
I can instantiate the class. I can then set its __class__ attribute to one of the subclasses. I have effectively changed the class type to the type of its subclass, on a live object. I can call methods on it which invoke the subclass's version of those methods.
So, how dangerous is doing this? It seems weird, but is it wrong to do such a thing? Despite the ability to change type at run-time, is this a feature of the language that should completely be avoided? Why or why not?
(Depending on responses, I'll post a more-specific question about what I would like to do, and if there are better alternatives).
|
How dangerous is setting self.__class__ to something else?
| 0 | 0 | 0 | 8,582 |
13,280,743 |
2012-11-08T00:40:00.000
| 3 | 0 | 1 | 0 |
python,time,precision
| 13,280,845 | 1 | true | 0 | 0 |
How much precision do you want? While it's true that there are finite decimal fractions that can't be represented as finite binary fractions, the nearest approximate value is going to round to the correct number of integer milliseconds as long as you aren't timing a program running for 143 millennia (2**52 milliseconds).
In short: I don't think you need to worry about floating-point precision for this. You might need to worry about system timer accuracy, precision, or monotonicity, though.
| 1 | 3 | 0 |
I want to capture timestamps with sub-second precision in python. It looks like the standard answer is int(time.time() * 1000)
However, if time.time() returns a float, won't you have precision problems? There will be some values that won't represent accurately as a float.
I'm worried about some fractional times that don't represent correctly as a float, and the timestamp jumping forward or backward in those cases.
Is that a valid concern?
If so, what's the work-around?
|
In Python, is time.time() * 1000 precise enough?
| 1.2 | 0 | 0 | 2,199 |
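A quick check of the claim above, with one caveat: round rather than truncate, since int() can land one millisecond low when the nearest double sits just under the true value.

```python
t = 1352336340.123  # a typical time.time() value, in seconds

# The double nearest to t is within ~2.4e-7 s (~2.4e-4 ms) of the true
# value, so rounding recovers the exact millisecond count.
ms = round(t * 1000)
print(ms)  # 1352336340123
```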
13,281,377 |
2012-11-08T01:59:00.000
| 6 | 0 | 1 | 0 |
python,dictionary
| 13,281,387 | 4 | false | 0 | 0 |
[d['key'][2]] should do the trick ...
Breaking it down:
d['key'] retrieves the tuple from the dictionary
[2] subscripts the list and gets the desired item out of it
The outer brackets put the final object into a list
| 1 | 0 | 0 |
If I have d = {"key": (5,4,"val1","val2",2)} How would I grab val1 out of the tuple and turn it into a list by itself?
|
Python: Converting a dictionary value to a list
| 1 | 0 | 0 | 79 |
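For completeness, the one-liner from the answer above, run end to end:

```python
d = {"key": (5, 4, "val1", "val2", 2)}

result = [d["key"][2]]  # tuple lookup, index 2, wrapped in a list
print(result)  # ['val1']
```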
13,282,190 |
2012-11-08T03:59:00.000
| 2 | 0 | 0 | 0 |
python
| 13,282,258 | 2 | true | 0 | 0 |
Calculate the angle from the door hinge to each of the points; whichever is closest to the current angle of the door itself (hinge to door edge) will be hit first when rotating.
If the cycling is giving you trouble: notice that for any given angle, you can subtract it from 360 to get its complement; whichever is the smaller of the two is the closer way to get to it. So:
Calculate all angles for the points a1 ... aN
Subtract them all from the door angle to get difference angles d1...dN
Replace each dN with min( dN, 360 - dN ) to get the "shorter" approach
Pick the minimum
| 1 | 0 | 0 |
I'm trying to write some code in Python. Basically what it does is simulate a door (viewed from above) on an (x,y) coordinate system. The task is, given a list of points, determine which the door will hit first, if any.
Determining if a point is within range to be hit by the door is simple enough; determining which point gets hit first is proving to be difficult, as the door can swing clockwise or counter-clockwise and has a rather large, variable range of swing (in terms of radians/degrees). The issue is mostly that I'm not sure what conditions need to be true for the point to be hit first.
Update:
I do have the angles calculated, but concerned about special cases such as when the door is at 1 degree, and swinging clockwise towards points at angles 180, 190, and 300 for example.
|
Point wrapping algorithm - A blocked Swinging door
| 1.2 | 0 | 0 | 382 |
13,283,451 |
2012-11-08T06:08:00.000
| 1 | 0 | 1 | 0 |
python,calculator,operation,operand
| 13,283,479 | 3 | false | 0 | 0 |
If you're allowed to, I'd recommend checking out the ast module. It's designed to do stuff like this for you using Python's own parser.
For an actual application, you'd probably use a parser generator like Ply.
For a simple homework assignment like this, you're probably expected to handcode a parser. First tokenize it (str.split), find the parenthesis, and then use precedence to group the other operations.
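A sketch of the `ast` approach (note `ast.Constant` needs Python 3.8+; older versions used `ast.Num`):

```python
import ast

expr = "1+2*9/2"
tree = ast.parse(expr, mode="eval")   # precedence (PEMDAS) is already encoded

# Operands are the literal numbers; operations are the BinOp nodes.
operands = [n.value for n in ast.walk(tree) if isinstance(n, ast.Constant)]
operations = [type(n.op).__name__ for n in ast.walk(tree) if isinstance(n, ast.BinOp)]

print(len(operands), sorted(operations))      # 4 ['Add', 'Div', 'Mult']
print(eval(compile(tree, "<expr>", "eval")))  # 10.0
```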
| 1 | 2 | 0 |
So I was trying to figure out how to detect the number of operands and operations in a mathematical expression
Ex: 1+2*9/2
I was trying to separate the operands and the operations into their own form using functions because we have to check how many operations must be done and in the correct order (PEDMAS) as well.
I've taken the equation and taken out all the spaces already; now I have to find the number of operands and operations in the equation and then later use return to find the answer of the mathematical expression given by the user.
Any tips?
|
How to find the operand and the operation in a string
| 0.066568 | 0 | 0 | 4,720 |
13,284,566 |
2012-11-08T07:38:00.000
| 0 | 0 | 1 | 0 |
javascript,python,json,storage,offlineapps
| 13,284,691 | 6 | false | 0 | 0 |
My suggestion would be something like WampServer (Windows, Apache, MySQL, PHP). I've seen a few tutorials about adding Python to that mix.
You would have access to reading and writing JSON data to the local storage or placing your data in a local database.
| 2 | 1 | 0 |
Problem
I need a way to store and collect JSON data in an entirely offline(!) web application, hosted on a local (shared) machine. Several people will access the app but it will never actually be online.
I'd like the app to:
Read and write JSON data continuously and programmatically (i.e. not using a file-upload type schema)
Preferably not require any software installation other than the browser, specifically I'd like not to use local server. (edit: I may be willing to learn a bit of Python if that helps)
The amount of data I need to store is small so it is very much overkill to use some sort of database.
Solution?
My first thought was to use the HTML5 File API to just read/parse and write my JSON object to a local txt file, but this appears not to be possible?!
Local storage is not applicable here, right, when several people - each with their own browser - need to access the html?
Any ideas?
note
I know this topic is not entirely novel, but I think my situation may be slightly different than in other threads. And I've spent the better part of the last couple hours googling this and I'm none the wiser..
|
How to read and write JSON offline on local machine?
| 0 | 0 | 1 | 2,518 |
13,284,566 |
2012-11-08T07:38:00.000
| 0 | 0 | 1 | 0 |
javascript,python,json,storage,offlineapps
| 13,285,068 | 6 | false | 0 | 0 |
I know you said you don't want to opt for a local server, but nodejs could be the solution. If you know JavaScript, then it's very simple to set up a server and let everybody access it from any browser. Since it's entirely JavaScript you don't even have conversion issues with the JSON format.
For storing the JSON you can use the built-in fs (filesystem) module of nodejs, which lets you read from and write to a file, so you don't even need a database.
| 2 | 1 | 0 |
Problem
I need a way to store and collect JSON data in an entirely offline(!) web application, hosted on a local (shared) machine. Several people will access the app but it will never actually be online.
I'd like the app to:
Read and write JSON data continuously and programmatically (i.e. not using a file-upload type schema)
Preferably not require any software installation other than the browser, specifically I'd like not to use local server. (edit: I may be willing to learn a bit of Python if that helps)
The amount of data I need to store is small so it is very much overkill to use some sort of database.
Solution?
My first thought was to use the HTML5 File API to just read/parse and write my JSON object to a local txt file, but this appears not to be possible?!
Local storage is not applicable here, right, when several people - each with their own browser - need to access the html?
Any ideas?
note
I know this topic is not entirely novel, but I think my situation may be slightly different than in other threads. And I've spent the better part of the last couple hours googling this and I'm none the wiser..
|
How to read and write JSON offline on local machine?
| 0 | 0 | 1 | 2,518 |
13,286,049 |
2012-11-08T09:22:00.000
| 1 | 0 | 0 | 0 |
python,scrapy,web-crawler
| 13,641,429 | 2 | true | 1 | 0 |
After some time we found the solution - response.meta['depth']
| 1 | 0 | 0 |
Subj.I want to get page (nested) level in scrapy on each page(url, request) in spider, is there any way to do that?
|
Get page (nested) level scrapy on each page(url, request) in spider
| 1.2 | 0 | 1 | 294 |
13,288,013 |
2012-11-08T11:17:00.000
| 1 | 0 | 1 | 0 |
python,virtualenv,mysql-python
| 43,866,023 | 3 | false | 0 | 0 |
source $ENV_PATH/bin/activate
pip uninstall MySQL-python
pip install MySQL-python
this worked for me.
| 2 | 9 | 0 |
I'm using the most recent versions of all software (Django, Python, virtualenv, MySQLdb) and I can't get this to work. When I run "import MySQLdb" in the python prompt from outside of the virtualenv, it works, inside it says "ImportError: No module named MySQLdb".
I'm trying to learn Python and Linux web development. I know that it's easiest to use SQLite, but I want to learn how to develop larger-scale applications comparable to what I can do in .NET. I've read every blog post on Google and every post here on StackOverflow and they all suggest that I run "sudo pip install mysql-python" but it just says "Requirement already satisfied: mysql-python in /usr/lib/pymodules/python2.7"
Any help would be appreciated! I'm stuck over here and don't want to throw in the towel and just go back to doing this on Microsoft technologies because I can't even get a basic dev environment up and running.
|
Have MySQLdb installed, works outside of virtualenv but inside it doesn't exist. How to resolve?
| 0.066568 | 1 | 0 | 6,817 |
13,288,013 |
2012-11-08T11:17:00.000
| 14 | 0 | 1 | 0 |
python,virtualenv,mysql-python
| 13,288,095 | 3 | true | 0 | 0 |
If you have created the virtualenv with the --no-site-packages switch (the default), then system-wide installed additions such as MySQLdb are not included in the virtual environment packages.
You need to install MySQLdb with the pip command installed with the virtualenv. Either activate the virtualenv with the bin/activate script, or use bin/pip from within the virtualenv to install the MySQLdb library locally as well.
Alternatively, create a new virtualenv with system site-packages included by using the --system-site-packages switch.
| 2 | 9 | 0 |
I'm using the most recent versions of all software (Django, Python, virtualenv, MySQLdb) and I can't get this to work. When I run "import MySQLdb" in the python prompt from outside of the virtualenv, it works, inside it says "ImportError: No module named MySQLdb".
I'm trying to learn Python and Linux web development. I know that it's easiest to use SQLite, but I want to learn how to develop larger-scale applications comparable to what I can do in .NET. I've read every blog post on Google and every post here on StackOverflow and they all suggest that I run "sudo pip install mysql-python" but it just says "Requirement already satisfied: mysql-python in /usr/lib/pymodules/python2.7"
Any help would be appreciated! I'm stuck over here and don't want to throw in the towel and just go back to doing this on Microsoft technologies because I can't even get a basic dev environment up and running.
|
Have MySQLdb installed, works outside of virtualenv but inside it doesn't exist. How to resolve?
| 1.2 | 1 | 0 | 6,817 |
13,290,514 |
2012-11-08T14:04:00.000
| 0 | 0 | 0 | 0 |
python,django
| 66,928,557 | 3 | false | 1 | 0 |
Sometimes this can happen if you haven't added your app to
INSTALLED_APPS = [ 'app', ]
in settings.py
| 1 | 2 | 0 |
I've create a command in app/management/commands and this command was working fine. I'm unable to run this command now. I'm getting the following error:
Unknown command: 'my_custom_command_name'
I'm using a virtual env. I don't see this in the list of commands when I type python manage.py. I have this app installed in my settings and it was working previously.
|
Unable to run django custom command
| 0 | 0 | 0 | 2,913 |
13,294,968 |
2012-11-08T18:00:00.000
| 1 | 0 | 1 | 0 |
python,large-data
| 13,295,051 | 4 | false | 0 | 0 |
Well, if you just need to store it, why keep it in memory? Use some kind of database.
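For example, the standard-library sqlite3 module keeps the strings on disk and answers exact membership queries via an index (the in-memory path here is just for illustration; use a file path for real data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # e.g. "strings.db" for on-disk persistence
conn.execute("CREATE TABLE IF NOT EXISTS items (s TEXT PRIMARY KEY)")

def add(s):
    # PRIMARY KEY builds an index for fast lookups; IGNORE skips duplicates.
    conn.execute("INSERT OR IGNORE INTO items VALUES (?)", (s,))

def contains(s):
    return conn.execute("SELECT 1 FROM items WHERE s = ?", (s,)).fetchone() is not None

add("hello")
print(contains("hello"), contains("world"))  # True False
```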
| 1 | 2 | 0 |
This is a task I could have used a dict for, if it weren't for the fact that I will need to store much more data than can fit in my 4 GBs of RAM. I'm also doing other memory-demanding stuff in the same program, so the lower mem-requirements, the better.
I just want to
store many strings
check whether a string is included or not in the collection
Is there a Python way of doing this? I'm using 3.3 so Berkeley DBs are out.
It also needs to give exact answers, so no Bloom-filters.
|
Memory efficient way of checking inclusion
| 0.049958 | 0 | 0 | 137 |
13,295,331 |
2012-11-08T18:25:00.000
| 1 | 0 | 1 | 0 |
python,google-maps,tkinter
| 13,311,950 | 2 | false | 0 | 1 |
Most GUI Frameworks have a way to embed a web browser frame, with a way to execute javascript from the python code. If this is available to you, you could use the Google Maps JavaScript API v3 to display a map.
If you let us know which GUI framework you're using, we might be able to help more.
| 1 | 5 | 0 |
I'm working on a project in Python using tkinter that will allow for geolocation of IP addresses. I have the raw conversions down, and I can take an IP address and know city, state, country, longitude, latitude, etc. I'm wondering if there's any way to embed Google Maps or something similar into my program to offer a visual representation.
|
How can I embed google maps into my Python program?
| 0.099668 | 0 | 0 | 7,845 |
13,296,320 |
2012-11-08T19:29:00.000
| 2 | 0 | 0 | 1 |
python,web-applications,deployment
| 13,296,458 | 1 | true | 0 | 0 |
It all depends on your application.
You can:
use Puppet to deploy servers,
use Fabric to remotely connect to the servers and execute specific tasks,
use pip for distributing Python modules (even non-public ones) and install dependencies,
use other tools for specific tasks (such as use boto to work with Amazon Web Services APIs, eg. to start new instance),
It is not always that simple and you will most likely need something customized. Just take a look at your system: it is not so "standard", so do not expect it to be handled in a "standard" way.
| 1 | 4 | 0 |
We are developing a distributed application in Python. Right now, we are about to re-organize some of our system components and deploy them on separate servers, so I'm looking to understand more about deployment for an application such as this. We will have several back-end code servers, several database servers (of different types) and possibly several front-end servers.
My question is this: what / which are good deployment patterns for distributed applications (in Python or in general)? How can I manage pushing code to several servers (whose IP's should be parameterized in the deployment system), static files to several front ends, starting / stopping processes in the servers, etc.? We are looking for possibly an easy-to-use solution, but mostly, something that once set-up will get out of our way and let us deploy as painlessly as possible.
To clarify: we are aware that there is no one standard solution for this particular application, but this question is rather more geared towards a guide of best practices for different types / parts of deployment than a single, unified solution.
Thanks so much! Any suggestions regarding this or other deployment / architecture pointers will be very appreciated.
|
Python deployment for distributed application
| 1.2 | 0 | 0 | 821 |
13,297,219 |
2012-11-08T20:26:00.000
| 4 | 0 | 1 | 0 |
python,ipython
| 13,297,236 | 3 | false | 0 | 0 |
To use ipython, just go to the command line, and run the command ipython.
| 1 | 5 | 0 |
I've installed ipython, but I don't know how to use it. Where could I find ipython shell?
|
Where to use ipython and where is the ipython shell?
| 0.26052 | 0 | 0 | 5,087 |
13,298,480 |
2012-11-08T21:54:00.000
| -1 | 0 | 1 | 0 |
python-2.7,pymongo,couchdbkit
| 13,641,512 | 1 | false | 0 | 0 |
bson
Try LogoDb from 1985 logo programming language for trs-80
| 1 | 0 | 0 |
We are developing an application for which we are going to use a NoSQL database. We have evaluated couchdb and mongodb. Our application is in Python, read-speed is the most critical factor for us, and the application reads a large number of documents.
I want to ask:
Is reading a large number of documents faster in BSON than in JSON?
Which is better when we want to read, say, 100 documents, parse them & print the result: python+mongodb+pymongo or python+couchdb+couchdbkit (the database is going to be on EC2 & accessible over the internet)?
|
CouchDB vs mongodb
| -0.197375 | 1 | 0 | 468 |
13,298,630 |
2012-11-08T22:04:00.000
| 1 | 1 | 1 | 0 |
python,eclipse
| 46,827,826 | 11 | false | 0 | 0 |
First of all make sure that you have the same Python interpreter configured as the project has. You can change it under:
Window > Preferences > PyDev > Interpreters > Python Interpreters
As long the project was created using Eclipse you can use import functionality.
Go to:
File > Import... > General > Existing Projects into Workspace
Choose Select root directory: and browse to your project location. Click Finish and you are done.
| 8 | 29 | 0 |
I am using eclipse for python.
How do I import an existing project into eclipse in the current workspace.
Thanks
|
How do I import a pre-existing python project into Eclipse?
| 0.01818 | 0 | 0 | 46,104 |
13,298,630 |
2012-11-08T22:04:00.000
| 10 | 1 | 1 | 0 |
python,eclipse
| 25,244,825 | 11 | false | 0 | 0 |
At time of writing none of the given answers worked.
This is how it's done:
Locate the directory containing the Pydev project
Delete the PyDev project files (important as Eclipse won't let you create a new project in the same location otherwise)
In Eclipse, File->New->Pydev Project
Name the project the same as your original project
For project contents, browse to location containing Pydev project
Select an interpreter
Follow rest of the menu through
Other answers using Eclipse project importing result in Pydev losing track of packages, turning them all into folders only.
This does lose any project settings previously set; please edit this answer if it can be avoided. Hopefully the Pydev devs will add project import functionality some time.
| 8 | 29 | 0 |
I am using eclipse for python.
How do I import an existing project into eclipse in the current workspace.
Thanks
|
How do I import a pre-existing python project into Eclipse?
| 1 | 0 | 0 | 46,104 |
13,298,630 |
2012-11-08T22:04:00.000
| 0 | 1 | 1 | 0 |
python,eclipse
| 16,728,953 | 11 | false | 0 | 0 |
I just suffered through this problem for a few hours. My issue may have been different than yours...Pydev did not show up as an import option (as opposed to C projects). My solution is to drag and drop. Just create a new project (name it the same as your old) and then drop your old project into the new project folder as displayed in eclipse...3 hours later and it's drag and drop...
| 8 | 29 | 0 |
I am using eclipse for python.
How do I import an existing project into eclipse in the current workspace.
Thanks
|
How do I import a pre-existing python project into Eclipse?
| 0 | 0 | 0 | 46,104 |
13,298,630 |
2012-11-08T22:04:00.000
| 3 | 1 | 1 | 0 |
python,eclipse
| 13,299,322 | 11 | true | 0 | 0 |
Following are the steps
Select pydev Perspective
right click on the project pan and click "import"
From the list select the existing project into workspace.
Select root directory by going next
Optionally you can select to copy the project into the workspace
thanks
| 8 | 29 | 0 |
I am using eclipse for python.
How do I import an existing project into eclipse in the current workspace.
Thanks
|
How do I import a pre-existing python project into Eclipse?
| 1.2 | 0 | 0 | 46,104 |
13,298,630 |
2012-11-08T22:04:00.000
| 9 | 1 | 1 | 0 |
python,eclipse
| 22,244,064 | 11 | false | 0 | 0 |
make sure pydev interpreter is added, add otherwise
windows->preferences->Pydev->Interpreter-Python
then create new pydev project,
give the same name
then don't use default location, browse to point the project location.
| 8 | 29 | 0 |
I am using eclipse for python.
How do I import an existing project into eclipse in the current workspace.
Thanks
|
How do I import a pre-existing python project into Eclipse?
| 1 | 0 | 0 | 46,104 |
13,298,630 |
2012-11-08T22:04:00.000
| 14 | 1 | 1 | 0 |
python,eclipse
| 31,423,129 | 11 | false | 0 | 0 |
In my case, when I was trying to import my existing Perforce project, Eclipse gave a "no projects found" error on my Windows machine. On Linux I was able to import the project without trouble.
For Eclipse Kepler, I did it like below.
Open eclipse in pydev perspective.
Create a new pydev project in your eclipse workspace with the same name as the project you want to import.
By now in your eclipse workspace project dir , you must be having .project and .pydevproject files.
Copy these two files and paste it to project dir which you want to import.
Now close and delete the pydev project you created and delete it from local disk as well.
Now you can use import utility to import project in eclipse.
| 8 | 29 | 0 |
I am using eclipse for python.
How do I import an existing project into eclipse in the current workspace.
Thanks
|
How do I import a pre-existing python project into Eclipse?
| 1 | 0 | 0 | 46,104 |
13,298,630 |
2012-11-08T22:04:00.000
| 15 | 1 | 1 | 0 |
python,eclipse
| 13,298,723 | 11 | false | 0 | 0 |
New Project
Don't use default location
Browse to existing project location ...
if it's an existing eclipse project with project files that have correct paths for your system you can just open the .project file ...
| 8 | 29 | 0 |
I am using eclipse for python.
How do I import an existing project into eclipse in the current workspace.
Thanks
|
How do I import a pre-existing python project into Eclipse?
| 1 | 0 | 0 | 46,104 |
13,298,630 |
2012-11-08T22:04:00.000
| 0 | 1 | 1 | 0 |
python,eclipse
| 28,258,101 | 11 | false | 0 | 0 |
After following steps outlined by @Shan, if the folders under the root folder are not shown as packages,
Right-click on the root folder in PyDev Package Explorer
Select PyDev > Set as source-folder
It will add the root folder to the PYTHONPATH and now the folders will appear as packages
| 8 | 29 | 0 |
I am using eclipse for python.
How do I import an existing project into eclipse in the current workspace.
Thanks
|
How do I import a pre-existing python project into Eclipse?
| 0 | 0 | 0 | 46,104 |
13,298,788 |
2012-11-08T22:16:00.000
| 2 | 0 | 0 | 0 |
python,web-crawler,scrapy
| 13,299,332 | 1 | false | 1 | 0 |
If the site you are scraping does IP based detection, your only option is going to be to change your IP somehow. This means either using a different server (I don't believe EC2 operates in India) or proxying your server requests. Perhaps you can find an Indian proxy service?
| 1 | 1 | 0 |
I am trying to scrape a website which serves different page depending upon the geolocation of the IP sending the request. I am using an amazon EC2 located in US(which means it serves up a page meant for US) but I want the page that will be served in India. Does scrapy provide a way to work around this somehow?
|
fake geolocation with scrapy crawler
| 0.379949 | 0 | 1 | 722 |
13,299,023 |
2012-11-08T22:31:00.000
| 6 | 1 | 0 | 1 |
php,python,django,nginx,tornado
| 13,304,821 | 1 | true | 1 | 0 |
I'll go point by point:
Yes. It's ok to run tornado and nginx on one server. You can use nginx as reverse proxy for tornado also.
Haproxy will give you benefit, if you have more than one server instances. Also it will allow you to proxy websockets directly to tornado.
Actually, nginx can be used for redirects, with no problems. I haven't heard about using redis for redirects - it's key/value storage... may be you mean something else?
Again, you can write blocking part in django and non-blocking part in tornado. Also tornado has some non-blocking libs for db queries. Not sure that you need powers of django here.
Yes, it's ok to run apache behind nginx. A lot of projects use nginx in front of apache for serving static files.
Actually question is very basic - answer also. I can be more detailed on any of the point if you wish.
| 1 | 5 | 0 |
Our website has developed a need for real-time updates, and we are considering various comet/long-polling solutions. After researching, we have settled on nginx as a reverse proxy to 4 tornado instances (hosted on Amazon EC2). We are currently using the traditional LAMP stack and have written a substantial amount of code in PHP. We are willing to convert our PHP code to Python to better support this solution. Here are my questions:
Assuming a quad-core processor, is it ok for nginx to be running on the same server as the 4 tornado instances, or is it recommended to run two separate servers: one for nginx and one for the 4 tornado processes?
Is there a benefit to using HAProxy in front of Nginx? Doesn't Nginx handle load-balancing very well by itself?
From my research, Nginx doesn't appear to have a great URL redirecting module. Is it preferred to use Redis for redirects? If so, should Redis be in front of Nginx, or behind?
A large portion of our application code will not be involved in real-time updates. This code contains several database queries and filesystem reads, so it clearly isn't suitable for a non-blocking app server. From my research, I've read that the blocking issue is mitigated simply by having multiple Tornado instances, while others suggest using a separate app server (ex. Gunicorn/Django/Flask) for blocking calls. What is the best way to handle blocking calls when using a non-blocking server?
Converting our code from PHP to Python will be a lengthy process. Is it acceptable to simultaneously run Apache/PHP and Tornado behind Nginx, or should we just stick to one language (either tornado with gunicorn/django/flask or tornado by itself)?
|
Apache/PHP to Nginx/Tornado/Python
| 1.2 | 0 | 0 | 2,544 |
13,301,469 |
2012-11-09T02:56:00.000
| 14 | 0 | 1 | 0 |
python,pycharm
| 16,985,124 | 2 | false | 0 | 0 |
I just came across the same problem. It was because it had a class called TestClass in the file. I changed the name of the class and then I was able to run the file as normal.
| 1 | 19 | 0 |
I'm doing small time project development using PyCharm. I use Pycharm for its intellisense features. As I develop each piece of code, I like to run it occasionally to test it. All I need at the point of development is to be able to run the file. However, when I right click and try to run a standalone file, PyCharm tries to be intelligent and shows me options to run my code with unit-tests and other fancy testing gimmicks. I don't want to deploy any testing framework at this point.
All I want is to be able to run any file as it is. But somehow, PyCharm is not allowing me to do that for every file.
I will appreciate it if someone can provide a workaround for this. I'm using Python 2.7.3.
|
How to run standalone files in PyCharm
| 1 | 0 | 0 | 18,880 |
13,303,464 |
2012-11-09T06:51:00.000
| 0 | 0 | 0 | 0 |
python,crystal-reports,flask,jinja2
| 68,141,752 | 3 | false | 1 | 0 |
In my case loaders.py had a hardcode "utf-8" in several places which I replaced with "windows-1251" and for me everything worked!
| 1 | 3 | 0 |
I write a simple frontend for pretty old reporting system, which uses Crystal Reports 8 Web Component Server.
And I need to make a 'POST' request to this Web Component. When I'm making request from page encoded using standard UTF-8, all form data is passed in UTF-8 too. And that's the problem, because CR8 Web Component Server doesn't understand UTF-8 (or does it and I'm wrong?).
I've tried to put accept-charset="ISO-8859-5" and accept-charset="windows-1251" in parameters and had no luck with it.
Here's more info, that can be usefull:
This frontend will be working on Windows Server 2003 with IIS6,
Only suitable browser is IE, because CR8 Web Component Server uses ActiveX component. (There's also a java plugin, but for some reason it doesn't work at all).
So I need flask (jinja2) to render templates using 'windows-1251' encoding, because parameter names and values can contain cyrillic characters. It there any way I can achieve this?
|
Can flask (using jinja2) render templates using 'windows-1251' encoding?
| 0 | 0 | 0 | 3,341 |
13,304,136 |
2012-11-09T07:49:00.000
| 1 | 0 | 0 | 0 |
python,qt4,python-2.7,pyside,qtreewidget
| 15,232,509 | 2 | false | 0 | 1 |
You need to override the click behaviour. Check the event if it's a double click or not and then you can redirect the event to the appropriate call. You should check the state if it's already clicked or not to prevent a second animation which might happen.
| 1 | 4 | 0 |
I have created a QTreeWidget and set animation to true (setAnimated(true)).
When I'm clicking on a mark (triangle) at the left of item it expands smoothly, but when I'm double clicking on the item it expands too fast (almost like there is no "animated" flag set).
I want smooth animation on double click too. How can I solve this problem?
QTreeView calls QTreeViewPrivate::expandOrCollapseItemAtPos on mark click and QTreeViewPrivate::expand on double click, so I have no access to these methods.
I'm using PySide for creating Qt application (but I've tried C++ and the problem is the same).
|
QTreeWidget expand animation on double click
| 0.099668 | 0 | 0 | 1,404 |
13,306,359 |
2012-11-09T10:32:00.000
| 4 | 0 | 0 | 1 |
python,linux,twisted,sigkill
| 13,306,625 | 2 | false | 0 | 0 |
From the signal(2) man page:
The signals SIGKILL and SIGSTOP cannot be caught or ignored.
So there is no way the process can run any cleanup code in response to that signal. Usually you only use SIGKILL to terminate a process that doesn't exit in response to SIGTERM (which can be caught).
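You can see this from Python itself (POSIX only; the exact exception type has varied across Python versions, hence the broad except clause):

```python
import signal

# SIGTERM can be handled, so cleanup code (e.g. removing a pidfile) can run:
signal.signal(signal.SIGTERM, lambda signum, frame: print("cleaning up"))

# SIGKILL cannot be caught or ignored -- the kernel rejects the attempt:
try:
    signal.signal(signal.SIGKILL, signal.SIG_IGN)
except (OSError, ValueError, RuntimeError) as exc:
    print("cannot trap SIGKILL:", exc)
```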
| 2 | 5 | 0 |
I have a python application that uses twisted framework.
I make use of value stored in the pidfile generated by twistd. A launcher script checks for it's presence and will not spawn a daemon process if the pidfile already exists.
However, twistd does not remove the .pidfile when it gets SIGKILL signal. That makes the launcher script think that the daemon is already running.
I realize the proper way to stop the daemon would be to use SIGTERM signal, but the problem is that when user who started the daemon logs out, the daemon never gets a SIGTERM signal, so apparently it's killed with SIGKILL. That means once a user logs out, he will never be able to start the daemon again, because the pidfile still exists.
Is there any way I could make that file disappear in such situations?
|
python-twisted and SIGKILL
| 0.379949 | 0 | 0 | 574 |
13,306,359 |
2012-11-09T10:32:00.000
| 0 | 0 | 0 | 1 |
python,linux,twisted,sigkill
| 13,310,880 | 2 | false | 0 | 0 |
You could change your launcher (or wrap it up in another launcher) and remove the pid file before trying to restart twistd.
| 2 | 5 | 0 |
I have a python application that uses twisted framework.
I make use of value stored in the pidfile generated by twistd. A launcher script checks for it's presence and will not spawn a daemon process if the pidfile already exists.
However, twistd does not remove the .pidfile when it gets SIGKILL signal. That makes the launcher script think that the daemon is already running.
I realize the proper way to stop the daemon would be to use SIGTERM signal, but the problem is that when user who started the daemon logs out, the daemon never gets a SIGTERM signal, so apparently it's killed with SIGKILL. That means once a user logs out, he will never be able to start the daemon again, because the pidfile still exists.
Is there any way I could make that file disappear in such situations?
|
python-twisted and SIGKILL
| 0 | 0 | 0 | 574 |
13,311,732 |
2012-11-09T16:12:00.000
| 1 | 1 | 0 | 0 |
python,c,performance
| 13,311,964 | 1 | false | 0 | 0 |
You'll want the python calls to your C function to be as little as possible. If you can call the C function once from python and get it to do most/all of the work, that would be better.
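A tiny ctypes sketch using the standard C math library (library lookup is platform-specific; the fallback name assumes glibc on Linux). The per-call FFI overhead is exactly why one Python-to-C call that computes a whole row of the Mandelbrot set beats thousands of tiny per-point calls:

```python
import ctypes
import ctypes.util

# Locate and load the C math library.
libm_name = ctypes.util.find_library("m") or "libm.so.6"  # fallback assumes Linux/glibc
libm = ctypes.CDLL(libm_name)

# Declare the signature so ctypes converts doubles correctly.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))  # 1.4142135623730951
```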
| 1 | 0 | 0 |
I wrote a python script to do some experiments with the Mandelbrot set. I used a simple function to find Mandelbrot set points. I was wondering how much efficiency I can gain by calling a simple C function to do this part of my code. Please consider that this function will be called many times from Python.
What is the effect on run time? And what other factors should I be aware of?
|
Efficiency of calling C function from Python
| 0.197375 | 0 | 0 | 117 |
13,312,043 |
2012-11-09T16:29:00.000
| 46 | 0 | 1 | 0 |
python,dictionary
| 17,347,421 | 7 | false | 0 | 0 |
any(d)
This will return True if the dict d contains at least one truthy key, False otherwise.
Example:
any({0:'test'}) == False
another (more general) way is to check the number of items:
len(d)
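One caveat: because any(d) iterates over the keys, a non-empty dict whose only keys are falsy still looks "empty" to it. A quick comparison:

```python
d = {0: 'test'}

print(any(d))   # False -- 0 is a falsy key, even though the dict is non-empty
print(len(d))   # 1     -- counts items, so it is reliable
print(bool(d))  # True  -- the idiomatic emptiness test: `if d: ...`
```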
| 2 | 41 | 0 |
How to check if a dictionary is empty or not? More specifically, my program starts with some key in a dictionary and I have a loop which iterates as long as there are keys in the dictionary. The overall algorithm is like this:
Start with some key in dict
while there is a key in dict
do some operation on first key in dict
remove first key
Please note that some operation in the above loop may add new keys to the dictionary.
I've tried
for key,value in d.iteritems()
but it fails because new keys are added during the while loop.
|
Python:Efficient way to check if dictionary is empty or not
| 1 | 0 | 0 | 145,967 |
13,312,043 |
2012-11-09T16:29:00.000
| 8 | 0 | 1 | 0 |
python,dictionary
| 13,315,913 | 7 | false | 0 | 0 |
I would say this way is more pythonic and fits on one line:
If you need to check values with your own predicate function:
if filter( your_function, dictionary.values() ): ...
When you only need to know whether your dict contains any keys:
if dictionary: ...
Anyway, using loops here is not the Python way.
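One caveat for Python 3 (the `v > 10` predicate below is just a made-up example): filter() there returns a lazy iterator, which is truthy even when it matches nothing, so the filter one-liner silently stops working; any() is the portable spelling:

```python
d = {"a": 1, "b": 2}

# Python 3 gotcha: a filter object is truthy even when it yields nothing.
print(bool(filter(lambda v: v > 10, d.values())))   # True -- misleading!

# any() consumes the iterator and gives the intended answer:
print(any(v > 10 for v in d.values()))              # False

# And for a bare emptiness check, the dict's own truth value suffices:
print(bool(d))                                      # True
```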
| 2 | 41 | 0 |
How to check if a dictionary is empty or not? More specifically, my program starts with some key in a dictionary and I have a loop which iterates as long as there are keys in the dictionary. The overall algorithm is like this:
Start with some key in dict
while there is a key in dict
do some operation on first key in dict
remove first key
Please note that some operation in the above loop may add new keys to the dictionary.
I've tried
for key,value in d.iteritems()
but it fails because new keys are added during the while loop.
|
Python:Efficient way to check if dictionary is empty or not
| 1 | 0 | 0 | 145,967 |
13,312,588 |
2012-11-09T17:01:00.000
| 2 | 0 | 1 | 1 |
python,eclipse,pydev
| 13,312,787 | 1 | false | 0 | 0 |
Python does not allow dashes in identifiers. Module names need to be valid identifiers, so any module file or package directory name with a dash in it is not importable.
On the other hand, script files (python files executed directly by Python, not imported) have no such restrictions. I'd say what you encountered is a bug in PyDev and you should report it as such.
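The distinction is easy to check in code (str.isidentifier is Python 3):

```python
# Module names must be valid identifiers; a dash rules a file out of `import`:
print("add_input".isidentifier())  # True  -> importable as a module
print("add-input".isidentifier())  # False -> runnable only as a script
```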
| 1 | 3 | 0 |
I had this problem for a while, and finally understanding what caused it was a relief.
Basically, Python files with a dash ('-') in their name are not fully analyzed by PyDev. I only get the errors but not the warnings (i.e. unused variables, unused imports, etc.).
Is this a feature? A known bug? Is there any workaround?
I know that dashes are not allowed in Python package folder names, but does this apply to Python files? (In my case, those are Python scripts, without the .py extension for convenience.)
For instance, in my bin project subfolder:
commit or release script files are analysed A-OK
add-input, select-files: warnings are not reported.
Thanks for any hint on that.
|
code analysis incomplete for filename with a dash
| 0.379949 | 0 | 0 | 566 |
13,313,118 |
2012-11-09T17:37:00.000
| 0 | 0 | 0 | 1 |
python,google-app-engine,full-text-search,gae-search
| 13,315,587 | 1 | false | 1 | 0 |
This depends on whether or not you have any globally consistent indexes. If you do, then you should migrate all of your data from those indexes to new, per-document-consistent (which is the default) indexes. To do this:
Loop through the documents you have stored in the global index and reindex them in the new index.
Change references from the global index to the new per-document index.
Ensure everything works, then delete the documents from your global index (not necessary to complete the migration, but still a good idea).
You should then remove any mention of consistency from your code; the default is per-document consistent, and eventually we will remove the ability to specify a consistency at all.
If you don't have any data in a globally consistent index, you're probably getting the warning because you're specifying a consistency. If you stop specifying the consistency it should go away.
Note that there is a known issue with the Python API that causes a lot of erroneous deprecation warnings about consistency, so you could be seeing that as well. That issue will be fixed in the next release.
| 1 | 0 | 0 |
I've been using the App Engine Python experimental search API, and it works great. With release 1.7.3 I updated all of the deprecated methods. However, I am now getting this warning:
DeprecationWarning: consistency is deprecated. GLOBALLY_CONSIST
However, I'm not sure how to address it in my code. Can anyone point me in the right direction?
|
Appengine Search API - Globally Consistent
| 0 | 0 | 0 | 174 |
13,313,609 |
2012-11-09T18:09:00.000
| 0 | 0 | 0 | 0 |
python,django
| 13,314,802 | 2 | false | 1 | 0 |
This looks like it's caused by files collected by collectstatic having wildly inaccurate last-modified timestamps (for example, before 1970). Try searching for tools that let you modify your files' last-modified dates, and change them to something reasonable.
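One hedged way to do that in bulk, without a third-party tool, is a short script that walks the static tree and resets any suspicious timestamp with os.utime; the cutoff and demo paths below are illustrative:

```python
import os
import tempfile
import time

def fix_bad_mtimes(root, cutoff=0.0):
    """Reset any mtime below `cutoff` (seconds since the epoch) to now."""
    now = time.time()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:   # out-of-range timestamp
                os.utime(path, (now, now))        # reset atime and mtime

# demo on a throwaway file with a bogus early timestamp
root = tempfile.mkdtemp()
bad = os.path.join(root, 'style.css')
open(bad, 'w').close()
os.utime(bad, (100, 100))            # pretend the file is from early 1970
fix_bad_mtimes(root, cutoff=1000)
print(os.path.getmtime(bad) > 1000)  # -> True
```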
| 2 | 0 | 0 |
Error:
This will overwrite existing files!
Are you sure you want to do this?
Type 'yes' to continue, or 'no' to cancel: yes
Traceback (most recent call last):
File "manage.py", line 14, in <module>
execute_manager(settings)
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\core\management\__init__.py", line 459, in execute_manager
utility.execute()
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\core\management\__init__.py", line 382, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\core\management\base.py", line 196, in run_from_argv
self.execute(*args, **options.__dict__)
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\core\management\base.py", line 232, in execute
output = self.handle(*args, **options)
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\core\management\base.py", line 371, in handle
return self.handle_noargs(**options)
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\contrib\staticfiles\management\commands\collectstatic.py", line 163, in handle_noargs
collected = self.collect()
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\contrib\staticfiles\management\commands\collectstatic.py", line 113, in collect
handler(path, prefixed_path, storage)
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\contrib\staticfiles\management\commands\collectstatic.py", line 287, in copy_file
if not self.delete_file(path, prefixed_path, source_storage):
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\contrib\staticfiles\management\commands\collectstatic.py", line 219, in delete_file
self.storage.modified_time(prefixed_path)
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\core\files\storage.py", line 264, in modified_time
return datetime.fromtimestamp(os.path.getmtime(self.path(name)))
ValueError: timestamp out of range for platform localtime()/gmtime() function
(env) D:\CODE\wamp\www\lezcheung\lezcms>
Does anyone know how to help me?
|
Django collectstatic error
| 0 | 0 | 0 | 1,413 |
13,313,609 |
2012-11-09T18:09:00.000
| 0 | 0 | 0 | 0 |
python,django
| 14,785,611 | 2 | false | 1 | 0 |
I discovered the cause.
It was simply because I put some fonts in /static/fonts/, and Django didn't accept fonts in the static folder. So I moved those files to /media/fonts/.
Worked! :D
| 2 | 0 | 0 |
Error:
This will overwrite existing files!
Are you sure you want to do this?
Type 'yes' to continue, or 'no' to cancel: yes
Traceback (most recent call last):
File "manage.py", line 14, in <module>
execute_manager(settings)
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\core\management\__init__.py", line 459, in execute_manager
utility.execute()
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\core\management\__init__.py", line 382, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\core\management\base.py", line 196, in run_from_argv
self.execute(*args, **options.__dict__)
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\core\management\base.py", line 232, in execute
output = self.handle(*args, **options)
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\core\management\base.py", line 371, in handle
return self.handle_noargs(**options)
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\contrib\staticfiles\management\commands\collectstatic.py", line 163, in handle_noargs
collected = self.collect()
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\contrib\staticfiles\management\commands\collectstatic.py", line 113, in collect
handler(path, prefixed_path, storage)
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\contrib\staticfiles\management\commands\collectstatic.py", line 287, in copy_file
if not self.delete_file(path, prefixed_path, source_storage):
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\contrib\staticfiles\management\commands\collectstatic.py", line 219, in delete_file
self.storage.modified_time(prefixed_path)
File "D:\CODE\wamp\www\AMBIENTES\env\Lib\site-packages\django\core\files\storage.py", line 264, in modified_time
return datetime.fromtimestamp(os.path.getmtime(self.path(name)))
ValueError: timestamp out of range for platform localtime()/gmtime() function
(env) D:\CODE\wamp\www\lezcheung\lezcms>
Does anyone know how to help me?
|
Django collectstatic error
| 0 | 0 | 0 | 1,413 |
13,313,730 |
2012-11-09T18:18:00.000
| 2 | 0 | 0 | 0 |
python,quickfix
| 13,313,820 | 2 | true | 0 | 0 |
QuickFix will connect and send a Logon automatically when you invoke start from your initiator. As for not being able to get through to your broker, ask them to confirm that they can see your Logon request. Also, make sure they don't require extra fields, like a password or a SubID.
| 1 | 3 | 0 |
When I want to send a QuickFIX message (Logon, for example), do I need to fill in every field manually, or will data from the settings file get added automatically as necessary?
Currently, I can connect but not log into my broker's FIX server and I'm having trouble getting any idea of what I'm doing wrong.
|
Does Quickfix automatically fill in header, body and trailer fields?
| 1.2 | 0 | 0 | 1,837 |
13,317,087 |
2012-11-09T22:41:00.000
| 0 | 0 | 1 | 0 |
python,python-3.x
| 13,317,159 | 2 | false | 0 | 0 |
You need some logic that states, in words: "if not yet seen, add to the list; otherwise increment". Or just be Pythonic and follow @Tim's suggestion.
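Spelled out, that logic looks like this with a plain dict (the names are illustrative):

```python
# "If not yet seen, add it; otherwise increment" over matching letters.
def tally(letters, matches):
    counts = {}
    for letter in letters:
        if letter in matches:
            if letter not in counts:
                counts[letter] = 1    # first time seen: append/add
            else:
                counts[letter] += 1   # seen before: just update
    return counts

print(tally(['a', 'b', 'a', 'c', 'a'], {'a', 'b'}))  # -> {'a': 3, 'b': 1}
```

collections.Counter collapses the same idea to one expression: `Counter(l for l in letters if l in matches)`.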
| 1 | 0 | 0 |
Let's say each letter in one of the lists is called 'letter'. The problem I am having: when the string == letter for the first time in a particular list, I have to append a value to the new list. After that, if the string == letter again in that particular list, I only need to update the value. It would be really great if any of you experienced folks could help me out.
Thanks
|
list string comparison
| 0 | 0 | 0 | 99 |
13,317,536 |
2012-11-09T23:31:00.000
| 4 | 0 | 0 | 0 |
python,flask
| 57,108,419 | 11 | false | 1 | 0 |
You can view all the routes via the Flask shell by running the following commands after exporting or setting the FLASK_APP environment variable.
flask shell
app.url_map
| 1 | 179 | 0 |
I have a complex Flask-based web app. There are lots of separate files with view functions. Their URLs are defined with the @app.route('/...') decorator. Is there a way to get a list of all the routes that have been declared throughout my app? Perhaps there is some method I can call on the app object?
|
Get list of all routes defined in the Flask app
| 0.072599 | 0 | 0 | 103,592 |
13,318,291 |
2012-11-10T01:19:00.000
| 0 | 0 | 0 | 0 |
python,scrapyd
| 13,344,717 | 2 | true | 1 | 0 |
I found the answer: add mylibs to Python's site-packages by using a setup.py inside the mylibs folder. That way I could import everything inside mylibs in my projects. In fact, mylibs lives well outside the location of my deployable project's setup.py; setup.py only looks for packages at the same level and inside the folders where it is located.
| 1 | 3 | 0 |
Scrapyd is a service to which we can deploy our projects as eggs. However, I am facing a problem. I have a project named MyScrapers whose spider classes use an import statement as follows:
from mylibs.common.my_base_spider import MyBaseSpider
The path to my_base_spider is /home/myprojectset/mylibs/common/my_base_spider
After setting the environment variable PYTHONPATH=$HOME/myprojectset/, I am able to run MyScrapers using the scrapy command: scrapy crawl MyScrapers.
But when I use scrapyd for deploying MyScrapers by following command: scrapy deploy scrapyd2 -p MyScrapers, I get the following error:
Server response (200):
{"status": "error", "message": "ImportError: No module named mylibs.common.my_base_spider"}
Please tell how to make deployed project to use these libs?
|
Scrapyd: How to specify libs and common folders that deployed projects can use?
| 1.2 | 0 | 0 | 973 |
13,318,611 |
2012-11-10T02:17:00.000
| 2 | 0 | 1 | 0 |
sorting,python-2.7,absolute-value
| 29,600,848 | 2 | false | 0 | 0 |
I had the same problem. The answer: Python sorts strings lexicographically, not numerically, so numbers stored as strings come out in a misleading order. As your key, make sure to include an int() or float() conversion. My working syntax was
data = sorted(data, key = lambda x: float(x[0]))
...the lambda x part just gives a function which outputs the thing you want to sort by. Here it takes a row of my list, converts the 0th element to a float, and sorts by that.
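A quick illustration of the difference the cast makes:

```python
# Strings sort lexicographically; a float key sorts them numerically.
values = ['2', '-1', '1.0', '10']
print(sorted(values))             # -> ['-1', '1.0', '10', '2']  (lexicographic)
print(sorted(values, key=float))  # -> ['-1', '1.0', '2', '10']  (numeric)
```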
| 2 | 2 | 1 |
I'm trying to sort a list of unknown values, either ints or floats or both, in ascending order. i.e, [2,-1,1.0] would become [-1,1.0,2]. Unfortunately, the sorted() function doesn't seem to work as it seems to sort in descending order by absolute value. Any ideas?
|
Sort a list of ints and floats with negative and positive values?
| 0.197375 | 0 | 0 | 5,933 |
13,318,611 |
2012-11-10T02:17:00.000
| 0 | 0 | 1 | 0 |
sorting,python-2.7,absolute-value
| 65,985,248 | 2 | false | 0 | 0 |
In addition to doublefelix's answer, the code below gives me ordering by absolute value when the values come from strings:
siparis=sorted(siparis, key=lambda sublist:abs(float(sublist[1])))
| 2 | 2 | 1 |
I'm trying to sort a list of unknown values, either ints or floats or both, in ascending order. i.e, [2,-1,1.0] would become [-1,1.0,2]. Unfortunately, the sorted() function doesn't seem to work as it seems to sort in descending order by absolute value. Any ideas?
|
Sort a list of ints and floats with negative and positive values?
| 0 | 0 | 0 | 5,933 |
13,321,042 |
2012-11-10T10:01:00.000
| 0 | 0 | 0 | 0 |
python,numpy
| 13,345,287 | 1 | false | 0 | 0 |
If I understand correctly, every pixel in the gray image is mapped to a single pixel in each of N other images. In that case, the map array is numpy.zeros((i.shape[0], i.shape[1], N, 2), dtype=numpy.int32), since you only need to store one x and one y coordinate for each of the N images, not a full LxM array every time. Using integer indices will further reduce memory use.
Then result[y,x,N,0] and result[y,x,N,1] are the y and x mappings into the Nth image.
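A minimal sketch, assuming numpy is available and using made-up sizes:

```python
# For each pixel (y, x) of the gray image, store N (row, col) pairs,
# one per target image, instead of N full LxM arrays.
import numpy as np

H, W, N = 480, 640, 8                 # illustrative image size and image count
mapping = np.zeros((H, W, N, 2), dtype=np.int32)

# e.g. pixel (10, 20) maps into target image 3 at row 5, col 7:
mapping[10, 20, 3] = (5, 7)
print(mapping.shape)                  # -> (480, 640, 8, 2)
print(mapping.nbytes)                 # ~19.7 MB instead of a 5-D monster
```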
| 1 | 1 | 1 |
I have a gray image in which I want to map every pixel to N other matrices of size LxM. How do I initialize such a matrix? I tried
result=numpy.zeros(shape=(i_size[0],i_size[1],N,L,M)), for which I get the ValueError 'array is too big'. Can anyone suggest an alternative method?
|
Creating a 5D array in Python
| 0 | 0 | 0 | 3,179 |
13,331,665 |
2012-11-11T13:44:00.000
| 0 | 0 | 1 | 0 |
python,language-agnostic
| 13,441,013 | 2 | false | 0 | 0 |
I would suggest merging the 3 lists of data into one dictionary which maps names to country names, e.g., it maps "England" -> "England", "English" -> "England", "London" -> "England". It can easily be stored in a database or a file and retrieved.
Then I would search for the keys in the dictionary, and label the item with the value from the dictionary.
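A minimal sketch of that merged dictionary and the lookup (the country data here is just a sample):

```python
# Nouns, adjectives and cities all map to one canonical country name.
countries = {
    'England': 'England', 'English': 'England', 'London': 'England',
    'Norway': 'Norway', 'Norwegian': 'Norway', 'Oslo': 'Norway',
}

def label(text):
    """Return the sorted set of countries referenced in the text."""
    words = text.replace(',', ' ').split()
    return sorted({countries[w] for w in words if w in countries})

print(label('The Norwegian minister visited London'))  # -> ['England', 'Norway']
```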
| 2 | 0 | 0 |
I am playing around with parsing RSS feeds looking for references to countries. At the moment I am using Python, but I think this question is fairly language agnostic (in theory).
Let's say I have three lists (all related)
Countries - Nouns (i.e. England, Norway, France )
Countries - Adjectives (i.e. English, Norwegian, French)
Cities (i.e. London, Newcastle, Birmingham)
My aim is to begin by parsing the feeds for these strings.
So for example if 'London' was found, the country would be 'England', if 'Norwegian' was found it would be 'Norway' etc.
What would be the optimal method for working with this data? Would it be JSON, pulling it all in to create nested dictionaries? Sets? Or some type of database?
At the moment this is only intended to be used on a local machine.
|
Data storage for query
| 0 | 0 | 1 | 55 |
13,331,665 |
2012-11-11T13:44:00.000
| 0 | 0 | 1 | 0 |
python,language-agnostic
| 13,331,819 | 2 | true | 0 | 0 |
It is a very debatable question, and there can be multiple solutions. If I were you, I would simply create a small DB in MongoDB with three collections like these:
Country:
Columns: id, name
Country-adj:
Columns: id, name, country_id
Cities:
Columns: id, name, country_id
then simple queries would give your desired results.
| 2 | 0 | 0 |
I am playing around with parsing RSS feeds looking for references to countries. At the moment I am using Python, but I think this question is fairly language agnostic (in theory).
Let's say I have three lists (all related)
Countries - Nouns (i.e. England, Norway, France )
Countries - Adjectives (i.e. English, Norwegian, French)
Cities (i.e. London, Newcastle, Birmingham)
My aim is to begin by parsing the feeds for these strings.
So for example if 'London' was found, the country would be 'England', if 'Norwegian' was found it would be 'Norway' etc.
What would be the optimal method for working with this data? Would it be JSON, pulling it all in to create nested dictionaries? Sets? Or some type of database?
At the moment this is only intended to be used on a local machine.
|
Data storage for query
| 1.2 | 0 | 1 | 55 |
13,332,268 |
2012-11-11T14:55:00.000
| 4 | 0 | 0 | 1 |
python,linux,subprocess,pipe
| 13,359,172 | 9 | false | 0 | 0 |
Also, try using the 'pgrep process_name' command instead of 'ps -A | grep process_name'.
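For the pipe itself, the standard wiring is two stages connected by subprocess.PIPE. To keep this demo runnable anywhere, a small Python one-liner stands in for `ps -A`; substitute `['ps', '-A']` for the real command:

```python
# Replicate `producer | grep pattern` with two subprocess stages.
import subprocess
import sys

producer = subprocess.Popen(
    [sys.executable, '-c', "print('init'); print('python'); print('cron')"],
    stdout=subprocess.PIPE)                 # stand-in for ['ps', '-A']
output = subprocess.check_output(['grep', 'python'], stdin=producer.stdout)
producer.stdout.close()   # let the producer see SIGPIPE if grep exits early
producer.wait()
print(output)             # -> b'python\n'
```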
| 1 | 326 | 0 |
I want to use subprocess.check_output() with ps -A | grep 'process_name'.
I tried various solutions but so far nothing worked. Can someone guide me how to do it?
|
How to use `subprocess` command with pipes
| 0.088656 | 0 | 0 | 305,963 |
13,334,722 |
2012-11-11T19:43:00.000
| 2 | 0 | 1 | 0 |
python,list,methods
| 13,334,734 | 6 | false | 0 | 0 |
sum(map(sum, my_list))
This runs sum on every inner list, then feeds those results into sum again.
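For example:

```python
# Total a list of lists of ints: inner sums first, then the outer sum.
my_list = [[1, 2, 3], [4, 5], [6]]
print(sum(map(sum, my_list)))                 # -> 21
print(sum(sum(inner) for inner in my_list))   # -> 21  (equivalent spelling)
```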
| 1 | 2 | 0 |
I'm looking for a method in Python to sum a list of lists that contain only integers.
I saw that sum() works only for a flat list, not for a list of lists.
Is there anything that fits my case?
thank you
|
sum for list of lists
| 0.066568 | 0 | 0 | 9,434 |
13,336,852 |
2012-11-12T00:04:00.000
| 3 | 0 | 1 | 1 |
python,macos
| 53,291,437 | 3 | false | 0 | 0 |
If you have Python 2 and 3 installed via brew, the following worked for me:
brew unlink python@2
brew link python@3 (if not yet linked)
| 2 | 8 | 0 |
Currently running Mac OS X Lion 10.7.5, which has python2.7 as the default. In the terminal, I type 'python' and it automatically pulls up python2.7. I don't want that.
From the terminal I instead have to type 'python3.2' if I want to use python3.2.
How do I change that?
|
Setting python3.2 as default instead of python2.7 on Mac OSX Lion 10.7.5
| 0.197375 | 0 | 0 | 16,097 |
13,336,852 |
2012-11-12T00:04:00.000
| 4 | 0 | 1 | 1 |
python,macos
| 13,336,983 | 3 | false | 0 | 0 |
You could edit the default python path and point it to python3.2
Open up ~/.bash_profile in an editor and edit it so it looks like
PATH="/Library/Frameworks/Python.framework/Versions/3.2/bin:${PATH}"
export PATH
| 2 | 8 | 0 |
Currently running Mac OS X Lion 10.7.5, which has python2.7 as the default. In the terminal, I type 'python' and it automatically pulls up python2.7. I don't want that.
From the terminal I instead have to type 'python3.2' if I want to use python3.2.
How do I change that?
|
Setting python3.2 as default instead of python2.7 on Mac OSX Lion 10.7.5
| 0.26052 | 0 | 0 | 16,097 |
13,337,870 |
2012-11-12T03:06:00.000
| 4 | 0 | 0 | 1 |
python,fabric,backslash
| 13,338,597 | 2 | false | 0 | 0 |
OK, finally worked this out. RocketDonkey was correct: I needed to prefix the string with "r" (a raw string), but also needed to set "shell=False". This allowed whatever worked directly in the bash terminal to work when called from fabric.api.
Thanks RocketDonkey!!
| 1 | 7 | 0 |
I'm new to Python and the Fabric API. I'm trying to use the sudo functionality to run a sed command in a bash terminal which inserts some text after a particular line of text is found. Some of the text I'm trying to insert into the file contains backslashes, which seem to either be ignored by Fabric or cause syntax errors. I've tried the "shell=True" and "shell=False" options but still no luck. How can I escape the backslash? It seems "shell=True" only escapes $ and ". My code is below.
sudo (' sed -i "/sometext/a textwith\backslash" /home/me/somefile.txt',shell=True)
|
Python fabric.api backslash hell
| 0.379949 | 0 | 0 | 676 |
13,337,924 |
2012-11-12T03:15:00.000
| 3 | 0 | 0 | 0 |
python,web,flask
| 13,338,019 | 3 | false | 1 | 0 |
Short answer: you can't.
Longer answer: once you have "sent the page" (that is, you have completed a HTTP response) there is no way for you to change what was sent. You can, however, use JavaScript to make additional HTTP requests to the server, and use the HTTP responses to modify the DOM which will change the page that the person is looking at. There are many ways to make a live chat feed, all of which are too complicated to put in a single Stack Overflow answer, but you can be sure that they all use JavaScript.
| 1 | 1 | 0 |
I have a question about using Flask with Python.
Let's say I want to make a website for a mod I'm making for a game, and I want to put in a live chat feed. How would I go about modifying the contents of the page after the page has been sent to the person?
|
Python Flask Modifying Page after loaded
| 0.197375 | 0 | 0 | 1,583 |
13,340,080 |
2012-11-12T07:59:00.000
| 1 | 1 | 0 | 0 |
python
| 13,340,136 | 4 | false | 0 | 0 |
I would highly recommend using a 3rd party HTTP server to serve static files.
Servers like nginx are heavily optimized for the task at hand, parallelized and written in fast languages.
Python is tied to one processor and interpreted.
| 2 | 4 | 0 |
What's the fastest way to serve static files in Python? I'm looking for something equal or close enough to Nginx's static file serving.
I know of SimpleHTTPServer but not sure if it can handle serving multiple files efficiently and reliably.
Also, I don't mind it being a part of a lib/framework of some sort as long as its lib/framework is lightweight.
|
Python fast static file serving
| 0.049958 | 0 | 0 | 3,272 |
13,340,080 |
2012-11-12T07:59:00.000
| 1 | 1 | 0 | 0 |
python
| 13,340,760 | 4 | false | 0 | 0 |
If you're looking for a one-liner you can do the following:
$> python -m SimpleHTTPServer
This will not fulfil all the tasks required, but it's worth mentioning as the simplest way :-)
| 2 | 4 | 0 |
What's the fastest way to serve static files in Python? I'm looking for something equal or close enough to Nginx's static file serving.
I know of SimpleHTTPServer but not sure if it can handle serving multiple files efficiently and reliably.
Also, I don't mind it being a part of a lib/framework of some sort as long as its lib/framework is lightweight.
|
Python fast static file serving
| 0.049958 | 0 | 0 | 3,272 |
13,341,780 |
2012-11-12T10:12:00.000
| 1 | 0 | 1 | 0 |
python,multiprocessing
| 13,729,930 | 1 | false | 0 | 0 |
I ended up creating a Pipe for every Process. Then when the main Process shuts down it can send a message to all the children Processes that they should shut down too.
In order to make that work right you've got to put a periodic check into the children Processes' "do loop" to see if there are messages in the pipe, and if so, check them to see if it's a "quit now" message.
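A minimal sketch of that scheme (the 'fork' start method is requested explicitly, so this is POSIX-only; on Windows you would use 'spawn' and a `__main__` guard):

```python
# One Pipe per child: the child's do-loop polls its end of the pipe
# and exits cleanly when the main process sends "quit".
import multiprocessing as mp

def worker(conn):
    while True:
        if conn.poll(0.05):          # the periodic check inside the do-loop
            if conn.recv() == 'quit':
                break                # shut down instead of being orphaned
        # ... real work would happen here ...

ctx = mp.get_context('fork')
parent_end, child_end = ctx.Pipe()
p = ctx.Process(target=worker, args=(child_end,))
p.start()
parent_end.send('quit')              # the main process is shutting down
p.join(timeout=5)
print(p.is_alive())                  # -> False
```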
| 1 | 4 | 0 |
I noticed that os._exit(<num>) ::
Exit the process with status n, without calling cleanup handlers,
flushing stdio buffers, etc.
and that sys.exit() ::
“only” raises an exception, it will only exit the process when called
from the main thread
I need a solution to close a multi-processed application that will ensure all processes are closed (none left orphaned) and that it exits in the best state possible.
Extras:
I am creating the processes using the python multiprocessing library, by creating classes which inherit from multiprocessing.Process
|
Exiting a multiprocessed python application safely
| 0.197375 | 0 | 0 | 1,183 |
13,345,239 |
2012-11-12T14:11:00.000
| 1 | 1 | 0 | 1 |
python,usb,debian
| 13,345,336 | 2 | true | 0 | 0 |
cat /etc/mtab | awk '{ print $2 }'
Will give you a list of mountpoints. You can also read /etc/mtab yourself and just check whether anything is mounted under /media/usb0 (file format: whitespace-divided, most likely single spaces). The second column is the mount destination; the first is the source.
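In Python that check can be a couple of small functions (mtab escape sequences such as \040 for spaces are ignored in this sketch):

```python
# Parse /etc/mtab (on modern Linux usually a symlink to /proc/self/mounts)
# and check whether anything is mounted at a given path before writing.
def mount_points(mtab='/etc/mtab'):
    with open(mtab) as f:
        # each line: source mountpoint fstype options dump pass
        return [line.split()[1] for line in f if line.strip()]

def usb_available(path='/media/usb0'):
    return path in mount_points()

print(usb_available('/'))   # '/' is always mounted on Linux, so this is True
```

Only call os.mkdir('/media/usb0/Test_Folder') when usb_available() returns True; otherwise you would create the folder on the local filesystem.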
| 1 | 1 | 0 |
I'm using debian with usbmount. I want to check if a USB memory stick is available to write to.
Currently I check if a specific dir exists on the USB drive. If this is True I can then write the rest of my files - os.path.isdir('/media/usb0/Test_Folder')
I would like to create Test_Folder if it doesn't exist. However, /media/usb0/ exists even if no USB device is there, so I can't just os.mkdir('/media/usb0/Test_Folder'), as that creates the folder locally.
I need a check that there is a usb drive available on /media/usb0/ to write to before creating the file. Is there a quick way of doing this?
|
Python usbmount checking for device before writing
| 1.2 | 0 | 0 | 905 |
13,346,470 |
2012-11-12T15:25:00.000
| 0 | 0 | 0 | 0 |
python,django,django-models,django-forms
| 13,399,089 | 2 | false | 1 | 0 |
I now have a partial solution. I override the Manager, and in particular its all() and get() functions (because those are the only ones I need for now). all() returns a queryset to which I add the results of some logic that builds objects from external data (fetched through XML-RPC in my case). I add those objects to the queryset through the _result_cache attribute.
I think it's not clean, and in fact my model is now a custom model with no database fields. I may use it to fill database models... However, I can use it the same way as classic models: MyModel.objects.all(), for example.
If anyone has another idea I'd really appreciate it.
Regards
| 1 | 3 | 0 |
Can anyone tell me if it's possible to create a Model class with some model fields and some other fields taking their data from external data sources? The point is that I would like this model to be usable the same way as any other model, by ModelForm for instance. I mean, if I redefine the "objects" Manager of the model by specifying the actions to get the data for the special fields (those which may not be linked to data from the database), would the ModelForm link the input with the fields not attached to the database? Similar question about related objects: if I have a Model that has a relation with that special Model, can I get this Model's instances through the classic way of getting related objects (with both the classic model fields and the non-database fields)?
Please tell me if I'm not clear, I'll reformulate.
Thanks.
EDIT: I tried to make a Model with custom fields, and then overrode the default Manager and its functions (all, get, ...) to get objects as one would with a classical Model and Manager; it works. However, I don't use QuerySet, and it seems that the only way to get ModelForm, related objects and the admin functionality working with it is to build the QuerySet properly and let it be returned by the manager. That's why I'm now wondering if it's possible to properly and manually build a QuerySet with data fetched from external sources, or to tell django-admin, model forms and related objects to use another class than QuerySet for this Model.
Thanks
|
How to define a Model with fields filled by other data sources than database in django?
| 0 | 0 | 0 | 1,001 |
13,346,698 |
2012-11-12T15:38:00.000
| 1 | 0 | 0 | 1 |
python,openerp
| 13,358,175 | 2 | false | 1 | 0 |
Good question...
OpenERP on Windows uses a DLL for Python (python26.dll in /Server/server of the OpenERP folder in Program Files). It looks like all the extra libraries are in the same folder, so you should be able to download the extra libraries to that folder and restart the service. (I usually stop the service and run it manually from the command line - it's easier to see if there are any errors while debugging.)
Let us know if you get it working!
| 1 | 6 | 0 |
I installed OpenERP 6.1 on windows using the AllInOne package. I did NOT install Python separately. Apparently OpenERP folders already contain the required python executables.
Now when I try to install certain addons, I usually come across requirements to install certain python modules. E.g. to install Jasper_Server, I need to install http2, pypdf and python-dime.
As there is no separate Python installation, there is no C:\Python or anything like that. Where and how do I install these python packages so that I am able to install the addon?
Thanks
|
Installing Python modules for OpenERP 6.1 in Windows
| 0.099668 | 0 | 0 | 3,249 |
13,347,378 |
2012-11-12T16:22:00.000
| 1 | 0 | 0 | 0 |
wpf,xaml,ironpython,sharpdevelop
| 13,352,605 | 1 | true | 1 | 1 |
You could put in some code to catch the error and log it to a file.
Something possibly simpler is to compile your application as a Console Application. This can be done via Project Options - Application - Output type. Then you will get a console window when you run your WPF application and any exception that happens at startup will be logged to this window.
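The first suggestion can be as small as a try/except around the application entry point; the failing run_app and the log file name below are illustrative stand-ins:

```python
# Wrap startup so any exception lands in a log file instead of
# vanishing when the app runs without a console.
import os
import tempfile
import traceback

LOG = os.path.join(tempfile.gettempdir(), 'startup-error.log')

def run_app():
    # stand-in for the real WPF/XAML startup; here it just fails
    raise RuntimeError('XAML failed to load')

try:
    run_app()
except Exception:
    with open(LOG, 'w') as f:
        traceback.print_exc(file=f)   # full traceback goes to the log file

print(open(LOG).read().splitlines()[-1])  # -> RuntimeError: XAML failed to load
```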
| 1 | 0 | 0 |
I am building an application with a GUI using WPF/XAML with IronPython and SharpDevelop. So far it works fine: while I'm in the development environment I can see the errors in the console and know what is wrong.
But when I build and deploy the app for use on another system, or run it outside of the development environment, there is no longer a console when an error or crash occurs; it fails silently, and I cannot know what went wrong.
How can I alert or log to see what fails?
|
Ironpython: How to see when a WPF application fails?
| 1.2 | 0 | 0 | 240 |
13,351,608 |
2012-11-12T21:10:00.000
| 1 | 0 | 0 | 0 |
python,quickfix
| 13,368,991 | 2 | true | 1 | 0 |
Solved! I think there was something wrong with my data dictionary (FIX44.xml) file. I had seen a problem in it before, but thought I'd fixed it. I got a new copy online, dropped it in, and now everything seems to be working. Maybe the bad dictionary was preventing FIX from accepting the logon response?
| 2 | 1 | 0 |
QuickFIX logon trouble: (using QuickFIX, with FIX 4.4 in Python 2.7)
Once I do initiator.start() a connection is made, and logon message is sent. However, I don't ever see the ACK and session status message that the broker is sending back (all the overloaded Application methods are just supposed to print out what they receive).
QuickFIX immediately re-tries the logon (according to the broker log files), and the same thing happens, but according to the server, I am already logged in.
QuickFIX then issues a Logout command, which the server complies with.
I have tried entering timeout values in the settings file, but to no avail. (Do I need to explicitly reference these values in the code to have them utilized, or will the engine see them and act accordingly automatically?)
Any ideas what is going on here?
|
QuickFIX logon trouble: multiple rapid fire logon attempts being sent
| 1.2 | 0 | 0 | 1,050 |
13,351,608 |
2012-11-12T21:10:00.000
| 2 | 0 | 0 | 0 |
python,quickfix
| 13,368,881 | 2 | false | 1 | 0 |
Sounds like you do not have message logs enabled. If a message is rejected below the application level (such as when the seq no is wrong, or the message is malformed), it is rejected before your custom message handlers even see it.
If you are starting your Initiator with a ScreenLogStore, change it to a FileLogStore. This will create a log file that will contain every message sent and received on the session, valid or not. Dollars to donuts you'll see your Logon acks in there as well as some Transport-layer rejections.
| 2 | 1 | 0 |
QuickFIX logon trouble: (using QuickFIX, with FIX 4.4 in Python 2.7)
Once I do initiator.start() a connection is made, and logon message is sent. However, I don't ever see the ACK and session status message that the broker is sending back (all the overloaded Application methods are just supposed to print out what they receive).
QuickFIX immediately re-tries the logon (according to the broker log files), and the same thing happens, but according to the server, I am already logged in.
QuickFIX then issues a Logout command, which the server complies with.
I have tried entering timeout values in the settings file, but to no avail. (Do I need to explicitly reference these values in the code to have them utilized, or will the engine see them and act accordingly automatically?)
Any ideas what is going on here?
|
QuickFIX logon trouble: multiple rapid fire logon attempts being sent
| 0.197375 | 0 | 0 | 1,050 |
13,351,694 |
2012-11-12T21:16:00.000
| 0 | 0 | 0 | 0 |
python,django,django-views
| 13,397,523 | 2 | false | 1 | 0 |
It is an interesting problem. Ideally you should pull all the components from the database before rendering, but looking at the hierarchy, making template tags makes sense. These template tags will pull the appropriate data. Assume, for the purpose of this problem, that the database queries get cached due to locality of access.
| 1 | 8 | 0 |
I'm working on a big social networking app in Django where I expect to use certain front-end components many times, and often with functionality designed in such a way that custom components contain other custom components, which might contain yet smaller subcomponents (ad infinitum). All of these components are typically dynamically generated. I'm trying to figure out the best way to architect this in the Django framework, such that my components are easy to maintain and have clear programming interfaces. Relying heavily on global context would seem to be the opposite of this; however, I can see advantages in avoiding redundant queries by doing them all at once in the view.
Custom inclusion template tags seem like a good fit for implementing components, but I'm wondering: can highly nested template tags create performance issues, or does the parsing architecture prevent this? What is the best way of making it self-documenting at the view level what context is needed to render the main page template, custom tags and all? I imagine it would be a minor nightmare to try to properly maintain the code that sets up the template context. Lastly, what is the best way to maintain the CSS for these components?
Feel free to suggest other recommended approaches for creating a nested-component design.
|
Django: Implementing a nested, reusable component design
| 0 | 0 | 0 | 1,068 |
13,352,796 |
2012-11-12T22:42:00.000
| 0 | 0 | 0 | 0 |
python,mongodb,twitter,tweepy
| 22,388,827 | 1 | false | 0 | 0 |
Unfortunately, the Twitter API doesn't provide a way to do this. You can try searching through the received tweets for the keywords you specified, but the matches might not be exact.
| 1 | 1 | 0 |
I'm filtering the twitter streaming API by tracking for several keywords.
If for example I only want to query and return from my database tweet information that was filtered by tracking for the keyword = 'BBC' how could this be done?
Do the tweet information collected have a key:value relating to that keyword by which it was filtered?
I'm using python, tweepy and MongoDB.
Would an option be to search for the keyword in the returned json 'text' field? Thus generate a query where it searches for that keyword = 'BBC' in the text field of the returned json data?
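If you go the route suggested in the last paragraph — matching the keyword against the stored text field — the filter could look like this (a sketch; the collection name and the "text" field layout are assumptions based on the question):

```python
import re

# Hypothetical pymongo-style filter: case-insensitive match of the
# tracked keyword against the tweet's "text" field.
keyword = "BBC"
query = {"text": {"$regex": re.compile(re.escape(keyword), re.IGNORECASE)}}
# With pymongo this would be: db.tweets.find(query)

# The same predicate applied in plain Python, for illustration:
tweets = [{"text": "BBC News update"}, {"text": "weather"}]
matches = [t for t in tweets if query["text"]["$regex"].search(t["text"])]
print(len(matches))
```

Note that this finds the keyword anywhere in the text, which is broader than Twitter's own track matching (which tokenizes on word boundaries).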
|
Querying twitter streaming api keywords from a database
| 0 | 1 | 0 | 374 |
13,353,113 |
2012-11-12T23:08:00.000
| 1 | 0 | 0 | 1 |
python,django,ftp,virtualenv,virtualbox
| 13,353,762 | 1 | true | 1 | 0 |
The reason the client reported back "Connection refused by server" is that the server returned a TCP packet with the reset bit set, in response to the client trying to connect to a port that no application is listening on, or that is blocked by a firewall.
I think the FTP service is not running, or is running on an alternate port. Take a look at the output of netstat -nltp (on Linux) or netstat -anb (on Windows). You should see a program waiting for requests on TCP port 21. If you don't see the program listed at all, or it is not on the port your client is going to connect to, then modify the FTP server's configuration file.
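An alternative check from the client side: try opening TCP port 21 yourself — an immediate refusal means nothing is listening there. A small Python sketch (the host is a placeholder for the VM's IP):

```python
import socket

# Attempt a TCP connection; an OSError (e.g. ECONNREFUSED or a timeout)
# means the port is not reachable from this machine.
def port_open(host, port, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("127.0.0.1", 21))
```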
| 1 | 0 | 0 |
I've recently started learning Django and have set up a virtual machine running a Django server on VirtualEnv. I can use the runserver command to run the basic Django installation server and view it on another computer with the local IP address.
However, I can't figure out how to connect to my virtual machine with my FTP client so that I can edit files on my host machine (Windows). I've tried using the IP address of the virtual machine with an FTP client but it says "Connection refused by server".
Any help would be appreciated, thanks!
|
How to FTP into a virtual machine?
| 1.2 | 0 | 0 | 5,276 |
13,353,978 |
2012-11-13T00:37:00.000
| 1 | 0 | 1 | 1 |
python,debian,fedora
| 13,425,528 | 2 | false | 0 | 0 |
Create an rpm package, give it to your Debian users and tell them to convert the rpm to a Debian package using alien on their Debian box.
| 1 | 1 | 0 |
I have created a small python application to be used internally in my organization. I wrote the code on my primary development machine running Fedora 17 and I would like to create a .deb in order to make it easy for my colleagues to install my program.
Is it possible to create debian packages for python application from a system running fedora? If yes, how?
|
Creating a debian package for my python application from a system running fedora
| 0.099668 | 0 | 0 | 110 |
13,354,317 |
2012-11-13T01:22:00.000
| 2 | 0 | 0 | 0 |
macos,installation,wxpython
| 13,357,833 | 2 | true | 0 | 1 |
At least for development, I would suggest installing (Python and) wx using Homebrew. It will install version 2.9, and you're assured that the Apple-provided system libraries remain untouched.
| 1 | 0 | 0 |
I am new to Mac, have always used windows and I am confused on how to install wxPython. I downloaded the .dmg file from the website, and it contained three files:
a pkg file, a readme, and an uninstall.py
I opened the pkg file, went through the steps, and I'm not sure where it installed after it said "Installation Complete".
Also, I tried import wx in IDLE, which caused a stack trace error.
Thanks.
|
Install wxpython on mac
| 1.2 | 0 | 0 | 363 |
13,355,358 |
2012-11-13T03:53:00.000
| 5 | 1 | 0 | 0 |
python,publishing,bioinformatics,biopython
| 13,355,383 | 4 | false | 0 | 0 |
While there are many approaches to this, one of the customary solutions would be to indeed publish it on github and then link to it from your research institution's website.
| 1 | 4 | 0 |
I've written an analytical pipeline in Python that I think will be useful to other people. I'm wondering whether it is customary to publish such scripts in GitHub, whether there's a specific place to do this for Python scripts, or even if there's a more specific place for biology-related Python scripts.
|
Where to deposit a Python script that performs bioinformatics analyses?
| 0.244919 | 0 | 0 | 521 |
13,355,370 |
2012-11-13T03:55:00.000
| 8 | 0 | 1 | 1 |
python-2.7
| 16,665,606 | 1 | false | 0 | 0 |
python.org
The installer from python.org installs to /Library/Frameworks/Python.framework/, and only that Python executable looks in the contained site-packages dir for packages.
/Library/Python
In contrast, the dir /Library/Python/2.7/site-packages/ is a global place where you can put Python packages that every Python 2.7 interpreter will search (for example, the Python 2.7 that comes with OS X).
~/Library/Python
The dir ~/Library/Python/2.7/site-packages, if it exists, is also used but for your user only.
sys.path
From within Python, you can check which directories are currently used with import sys; print(sys.path)
homebrew
Note: a Python installed via Homebrew will put its site-packages in $(brew --prefix)/lib/python2.7/site-packages, but will also be able to import packages from /Library/Python/2.7/site-packages and ~/Library/Python/2.7/site-packages.
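A quick way to list, from within any of these interpreters, the package directories it will actually search (a small sketch):

```python
import sys

# Every directory this interpreter searches on import is in sys.path,
# in order; a package must live in one of them to be importable.
# (Debian-based systems use "dist-packages" instead of "site-packages".)
pkg_dirs = [p for p in sys.path
            if p.endswith(("site-packages", "dist-packages"))]
for p in pkg_dirs:
    print(p)
```

Running this under each interpreter makes it obvious which of the two locations that particular Python consults.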
| 1 | 4 | 0 |
I am working on a mac, a quick question, could someone told me the difference of these two directories?
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/
/Library/Python/2.7/site-packages/
|
what is the difference between "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/" and "/Library/Python/2.7/"
| 1 | 0 | 0 | 2,452 |
13,356,024 |
2012-11-13T05:33:00.000
| 0 | 1 | 0 | 1 |
php,python,caching,egg
| 13,356,068 | 1 | false | 0 | 0 |
Make sure whatever user php is running under has appropriate permissions. You can try opening a pipe and changing users, or just use apache's suexec.
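Besides fixing permissions, the error message itself points at a workaround: set PYTHON_EGG_CACHE to a directory the web-server user can write to, before any egg-based package (here, MySQLdb) is imported. A hedged sketch — the /tmp-based path is just an example:

```python
import os
import tempfile

# Must run before importing any egg-based package; pkg_resources reads
# PYTHON_EGG_CACHE to decide where to unpack eggs.
cache = os.path.join(tempfile.gettempdir(), "python-eggs")
os.makedirs(cache, exist_ok=True)
os.environ["PYTHON_EGG_CACHE"] = cache
```

The same effect can be had by exporting the variable in the environment PHP uses for exec().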
| 1 | 0 | 0 |
I have a python script that runs as a daemon process. I want to be able to stop and start the process via a web page. I made a PHP script that runs exec() on the Python daemon, but I get the following error. Any ideas?
Traceback (most recent call last):
  File "/home/app/public_html/daemon/daemon.py", line 6, in <module>
    from socketServer import ExternalSocketServer, InternalSocketServer
  File "/home/app/public_html/daemon/socketServer.py", line 3, in <module>
    import json, asyncore, socket, MySQLdb, hashlib, urllib, urllib2, logging, traceback, sys
  File "build/bdist.linux-x86_64/egg/MySQLdb/__init__.py", line 19, in <module>
  File "build/bdist.linux-x86_64/egg/_mysql.py", line 7, in <module>
  File "build/bdist.linux-x86_64/egg/_mysql.py", line 4, in __bootstrap__
  File "build/bdist.linux-i686/egg/pkg_resources.py", line 882, in resource_filename
  File "build/bdist.linux-i686/egg/pkg_resources.py", line 1351, in get_resource_filename
  File "build/bdist.linux-i686/egg/pkg_resources.py", line 1373, in _extract_resource
  File "build/bdist.linux-i686/egg/pkg_resources.py", line 962, in get_cache_path
  File "build/bdist.linux-i686/egg/pkg_resources.py", line 928, in extraction_error
pkg_resources.ExtractionError: Can't extract file(s) to egg cache
The following error occurred while trying to extract file(s) to the Python egg cache:
  [Errno 13] Permission denied: '//.python-eggs'
The Python egg cache directory is currently set to:
  //.python-eggs
Perhaps your account does not have write access to this directory? You can change the cache directory by setting the PYTHON_EGG_CACHE environment variable to point to an accessible directory.
|
Running Python from PHP
| 0 | 0 | 0 | 189 |
13,356,348 |
2012-11-13T06:20:00.000
| 2 | 0 | 0 | 0 |
python,nltk,smoothing
| 13,397,869 | 1 | false | 0 | 0 |
I'd suggest replacing all words with low frequency (especially frequency 1) with <unseen>, then training the classifier on this data.
When classifying, you should query the model with <unseen> for any word that is not in the training data.
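A sketch of the suggested preprocessing step — the <unseen> token and the frequency threshold are choices for this technique, not NLTK API:

```python
from collections import Counter

# Map words whose training-data frequency falls below min_count to a
# shared "<unseen>" token, so the model has a probability estimate to
# fall back on for out-of-vocabulary words at classification time.
def replace_rare(tokens, min_count=2):
    counts = Counter(tokens)
    return [t if counts[t] >= min_count else "<unseen>" for t in tokens]

print(replace_rare(["a", "b", "a", "c"]))
```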
| 1 | 4 | 1 |
I am using Naive Bayes classifier in python for text classification. Is there any smoothing methods to avoid zero probability for unseen words in python NLTK? Thanks in advance!
|
Smoothing in python NLTK
| 0.379949 | 0 | 0 | 1,430 |
13,357,227 |
2012-11-13T07:58:00.000
| 0 | 1 | 0 | 0 |
java,python,robotframework
| 13,602,048 | 2 | false | 1 | 0 |
Try to put your Library into this folder:
...YourPythonFolder\Lib\site-packages\
or, if this doesn't work, create a folder named "MyLibrary" inside "site-packages" and put your library there.
This should work.
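Note also that a Java .jar can only be imported when Robot Framework runs under Jython; with the standard Python interpreter the library itself must be Python. A minimal sketch of what MyLibrary.py could look like (the class name must match the Library import, and the method name matches the "hello" keyword):

```python
# MyLibrary.py -- sketch of a pure-Python Robot Framework library.
# Robot Framework maps the keyword "hello" in the test case to this
# method by name (case- and space-insensitively).
class MyLibrary(object):
    def hello(self, name):
        message = "Hello, %s!" % name
        print(message)
        return message

MyLibrary().hello("World")
```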
| 1 | 2 | 0 |
I am facing difficulty when trying to run my tests. Here is what I did:
Created a Java project with one class which has one method called hello(String name).
Exported this as a jar and kept it in the same directory where I keep my test case file.
my Test case looks like this.
*Setting*      *Value*
Library        MyLibrary

*Variable*     *Value*

*Test Case*    *Action*    *Argument*
MyTest         hello       World

*Keyword*      *Action*    *Argument*
I always get the following error :
Error in file 'C:\Users\yahiya\Desktop\robot-practice\testcase_template.tsv' in table 'Setting': Importing test library 'MyLibrary' failed: ImportError: No module named MyLibrary
I have configured PYTHONPATH in the system variables on my Windows machine.
Please let me know what am i doing wrong here.
Thanks
|
Robot Framework - using User Libraries
| 0 | 0 | 0 | 2,367 |
13,358,729 |
2012-11-13T10:03:00.000
| 3 | 0 | 0 | 0 |
python,html,output,tabular
| 13,358,764 | 2 | false | 1 | 0 |
Why not do both? Make your data available as CSV (for simple export to scripts etc.) and provide a decorated HTML version.
At some stage you may want (say) a proper Excel sheet, a PDF etc. So I would enforce a separation of the data generation from the rendering. Make your generator return a structure that can be consumed by an abstract renderer, and your concrete implementations would present CSV, PDF, HTML etc.
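A minimal sketch of that separation — one generator, several renderers consuming the same rows (names and row layout are illustrative):

```python
# The generator produces a plain data structure; each renderer turns
# the same rows into one output format. Adding PDF or Excel later only
# means adding another render_* function.
def generate():
    return [{"word": "the", "count": 4201}, {"word": "of", "count": 2993}]

def render_csv(rows):
    lines = ["word,count"] + ["%s,%d" % (r["word"], r["count"]) for r in rows]
    return "\n".join(lines)

def render_html(rows):
    cells = "".join("<tr><td>%s</td><td>%d</td></tr>" % (r["word"], r["count"])
                    for r in rows)
    return "<table>%s</table>" % cells

rows = generate()
print(render_csv(rows))
print(render_html(rows))
```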
| 2 | 0 | 0 |
I am using Python 3 to calculate some statistics from language corpora. Until now I was exporting the results to a CSV file or directly to the shell. A few days ago I started learning how to output the data to HTML tables. I must say I really like it: it handles cell height/width and Unicode well, and you can apply color to different values, although I think there are some problems when dealing with large data or tables.
Anyway, my question is, I'm not sure if I should continue in this direction and output the results to HTML. Can someone with experience in this field help me with some pros and cons of using HTML as output?
|
Pros and Cons of html-output for statistical data
| 0.291313 | 0 | 0 | 196 |
13,358,729 |
2012-11-13T10:03:00.000
| 1 | 0 | 0 | 0 |
python,html,output,tabular
| 13,359,571 | 2 | true | 1 | 0 |
The question lists some benefits of HTML format. These alone are sufficient for using it as one of output formats. Used that way, it does not really matter much what you cannot easily do with the HTML format, as you can use other formats as needed.
Benefits include reasonable default rendering, which can be fine-tuned in many ways using CSS, possibly with alternate style sheets (now supported even by IE). You can also include links.
What you cannot do in HTML without scripting is computation, sorting, reordering, that kind of stuff. But they can be added with JavaScript – not trivial, but doable.
There’s a technical difficulty with large tables: by default, a browser will start showing any content in the table only after having got, parsed, and processed the entire table. This may cause a delay of several seconds. A way to deal with this is to use fixed layout (table-layout: fixed) with specific widths set on table columns (they need not be fixed in physical units; the great em unit works OK, and on modern browsers you can use ch too).
Another difficulty is bad line breaks. It’s easy fixable with CSS (or HTML), but authors often miss the issue, causing e.g. cell contents like “10 m” to be split into two lines.
Other common problems with formatting statistical data in HTML include:
Not aligning numeric fields to the right.
Using serif fonts.
Using fonts where not all digits have equal width.
Using the inconspicuous hyphen "-" instead of the proper Unicode minus "−" (U+2212).
Not indicating missing values in some reasonable way, leaving some cells empty. (Browsers may treat empty cells in odd ways.)
Insufficient horizontal padding, making cell contents (almost) hit cell border or cell background edge.
There are good and fairly easy solutions to such problems, so this is just something to be noted when using HTML as output format, not an argument against it.
| 2 | 0 | 0 |
I am using Python 3 to calculate some statistics from language corpora. Until now I was exporting the results to a CSV file or directly to the shell. A few days ago I started learning how to output the data to HTML tables. I must say I really like it: it handles cell height/width and Unicode well, and you can apply color to different values, although I think there are some problems when dealing with large data or tables.
Anyway, my question is, I'm not sure if I should continue in this direction and output the results to HTML. Can someone with experience in this field help me with some pros and cons of using HTML as output?
|
Pros and Cons of html-output for statistical data
| 1.2 | 0 | 0 | 196 |
13,358,955 |
2012-11-13T10:20:00.000
| 4 | 0 | 1 | 0 |
python,memory-management,set,itertools
| 13,358,975 | 1 | true | 0 | 0 |
Sets are just like dict and list; on creation they copy the references from the seeding iterable.
Iterators cannot be sets, because you cannot enforce the uniqueness requirement of a set. You cannot know if a future value yielded by an iterator has already been seen before.
Moreover, in order for you to determine what the intersection is between two iterables, you have to load all data from at least one of these iterables to see if there are any matches. For each item in the second iterable, you need to test if that item has been seen in the first iterable. To do so efficiently, you need to have loaded all the items from the first iterable into a set. The alternative would be to loop through the first iterable from start to finish for each item from the second iterable, leading to exponential performance degradation.
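As a consequence, the closest you can get to a "lazy" set operation is to materialize only one side and stream the other — a sketch of the idea (not an existing module, just the few lines combining itertools-style iteration with a set that the question asks about):

```python
# Lazy-ish intersection: only the first iterable is loaded into a set;
# the second is streamed, so at most one side is fully in memory, and
# results are yielded as they are found.
def lazy_intersection(small_iterable, large_iterable):
    seen = set(small_iterable)
    for item in large_iterable:
        if item in seen:
            yield item

print(list(lazy_intersection([1, 2, 3], iter([2, 3, 4]))))
```

This degrades gracefully for large data as long as the smaller side fits in memory — which is why loading only document IDs (rather than full records) into sets works well in practice.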
| 1 | 2 | 0 |
so I discovered Sets in Python a few days ago and am surprised that they never crossed my mind before even though they make a lot of things really simple. I give an example later.
Some things are still unclear to me. The docs say that sets can be created from iterables and that the operators always return new sets, but do they always copy all the data from one set to another and from the iterable? I work with a lot of data and would love to have sets and set operators that behave much like itertools. So set(iterable) would be more like a wrapper, and the operators union, intersection and so on would return "iSets" and would not copy any data. They would all evaluate only once I iterate over my final set. In the end, what I would really like are "iSet" operators.
Purpose:
I work with MongoDB using mongoengine. I have articles saved. Some are associated with a user, some are marked as read others were shown to the user and so on. Wrapping them in Sets that do not load all data would be a great way to combine, intersect etc. them. Obviously I could make special queries but not always since MongoDB does not support joins. So I end up doing joins in Python. I know I could use a relational database then, however, I don't need joins that often and the advantages of MongoDB outweigh them in my case.
So what do you think? Is there already a third party module? Would a few lines combining itertools and Sets do?
EDIT:
I accepted the answer by Martijn Pieters because it is obviously right. I ended up loading only IDs into sets to work with them. Also, the sets in Python have a pretty good running time.
|
Python: Combining itertools and sets to save memory
| 1.2 | 1 | 0 | 707 |
13,360,145 |
2012-11-13T11:42:00.000
| 1 | 0 | 0 | 1 |
python,django,asynchronous,celery,gevent
| 13,429,864 | 2 | false | 1 | 0 |
Have you tried to use Celery + eventlet? It works well in our project
| 1 | 4 | 0 |
We're using Celery for background tasks in our Django project.
Unfortunately, our tasks contain many blocking sockets that can stay open for a long time, so Celery becomes fully loaded and stops responding.
Gevent could help me with the sockets, but Celery has only experimental support for gevent (and, as I found in practice, it doesn't work well).
So I considered to switch to another task queue system.
I can choose between two different ways:
Write my own task system. This is the least preferred choice, because it requires much time.
Find good and well-tried replacement for Celery that will work after monkey patching.
Is there any analogue of Celery, that will guarantee me execution of my tasks even after sudden exit?
|
Asynchronous replacement for Celery
| 0.099668 | 0 | 0 | 1,476 |