[Dataset viewer header. Columns: Available Count (int64), AnswerCount (int64), Users Score (int64), Q_Score (int64), Score (float64), Question (string), Tags (string), CreationDate (string), Q_Id (int64), Answer (string), ViewCount (int64), is_accepted (bool), Title (string), A_Id (int64), plus binary topic flags: GUI and Desktop Applications, Python Basics and Environment, Networking and APIs, Database and SQL, System Administration and DevOps, Data Science and Machine Learning, Web Development, Other.]
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 1 | 0 | 1 | 2 | 0 | 1.2 | 0 |
I have a typical producer, consumer pattern. If the producer sends an object over a channel, the producer is blocked until the consumer accepts the object. After the consumer accepts the object, the producer alters the object in some way. Does the consumer see the object get altered? Or was there an implicit copy when sending the data over the channel?
| 0 |
python,immutability,stackless,python-stackless
|
2010-10-28T19:09:00.000
| 0 | 4,046,351 |
Stackless sends a reference to the python object over the channel, so any changes the producer makes to the object will be "seen" by the consumer. No copying going on.
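As an illustration of that reference semantics, here is a sketch that is not Stackless code: `queue.Queue` stands in for a Stackless channel, since both pass object references rather than copies.

```python
import queue

# queue.Queue stands in for a Stackless channel here; both hand the
# consumer a reference to the same object, not a copy.
ch = queue.Queue()

payload = {"state": "initial"}
ch.put(payload)               # "send" the object
received = ch.get()           # "receive" it on the consumer side

payload["state"] = "altered"  # producer mutates the object afterwards
print(received["state"])      # the consumer sees the change
```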
| 0 | 135 | true | 0 | 1 |
In stackless python, is data sent over a channel immutable?
| 9,828,193 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 |
Hey,
I am helping to set up a regression testing suite for our python web application. Many of our tests are scheduling style tests where the current date is important. For example: create a recurring event that runs every week for a month starting on Feb 1.
In order to test this, what I really want to do is override the current date so I can move back and forward in time to check the state of the app. For example, I may add a test-only page that lets me set the 'current' date, which gets passed to the Python back end and is used for date calculations.
In the past when I have done this, I engineered it into the application architecture from day 1. Unfortunately, I am coming in late to this project and there is no application support for this.
So here's my question, is there any way I can override the current date on a web service call? For example, can I intercept calls for the current date (monkey patching possibly?). I would rather not have to do a whole IOC thing as it would mean changing hundreds of methods.
- dave
| 0 |
python,date,automated-tests,regression-testing
|
2010-10-28T20:46:00.000
| 0 | 4,047,049 |
Answering my own question:
Our current plan is to test on virtual machines and change the date/time on the VMs. This is a more complete solution as it gets the database and worker threads all at once.
I'll post an update with some real world experience when we come to actually do it.
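For anyone who does want the monkey-patching route the question asks about, here is a minimal sketch. The `Clock` seam and function names are invented for illustration; the point is that the app fetches dates through one place that tests can patch.

```python
import datetime
from unittest import mock

# Route every "what is today?" query through one seam (Clock is an
# invented name), then patch that single attribute in tests.
class Clock:
    @staticmethod
    def today():
        return datetime.date.today()

def starts_this_month(event_start):
    return event_start.month == Clock.today().month

# "Travel" to 2010-02-01 for the duration of a test:
with mock.patch.object(Clock, "today",
                       return_value=datetime.date(2010, 2, 1)):
    frozen = Clock.today()
    in_month = starts_this_month(datetime.date(2010, 2, 15))
```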
| 0 | 317 | false | 1 | 1 |
Date sensitive regression testing using python
| 4,121,205 |
1 | 3 | 0 | 9 | 20 | 0 | 1 | 1 |
I'm writing a Python client+server that uses gevent.socket for communication. Are there any good ways of testing the socket-level operation of the code (for example, verifying that SSL connections with an invalid certificate will be rejected)? Or is it simplest to just spawn a real server?
Edit: I don't believe that "naive" mocking will be sufficient to test the SSL components because of the complex interactions involved. Am I wrong in that? Or is there a better way to test SSL'd stuff?
| 0 |
python,sockets,testing,gevent
|
2010-10-28T23:16:00.000
| 0 | 4,047,897 |
Mocking and stubbing are great, but sometimes you need to take it up to the next level of integration. Since spawning a server, even a fakeish one, can take some time, a separate test suite (call them integration tests) might be in order.
"Test it like you are going to use it" is my guideline, and if you mock and stub so much that your test becomes trivial it's not that useful (though almost any test is better than none). If you are concerned about handling bad SSL certs, by all means make some bad ones and write a test fixture you can feed them to. If that means spawning a server, so be it. Maybe if that bugs you enough it will lead to a refactoring that will make it testable another way.
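A minimal sketch of such a spawn-a-real-server fixture follows. It is plain TCP only, to stay short and self-contained; the SSL wrapping (with a deliberately bad certificate) would be layered on top of these sockets.

```python
import socket
import threading

# Serve a single connection, then exit; a test fixture would wrap this
# in setup/teardown.
def serve_once(server_sock):
    conn, _ = server_sock.accept()
    conn.sendall(b"hello")
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,))
t.start()

client = socket.create_connection(("127.0.0.1", port))
reply = b""
while len(reply) < 5:                  # recv may return partial data
    chunk = client.recv(5 - len(reply))
    if not chunk:
        break
    reply += chunk
client.close()
t.join()
server.close()
```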
| 0 | 14,758 | false | 0 | 1 |
Python: unit testing socket-based code?
| 4,048,286 |
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 |
I'm developing a Java program through Eclipse locally, and debugging on a remote machine. Whenever I make a change to my program, I copy the corresponding class file to the bin directory on the remote machine. I run my program (a simulator) through a Python script via the os.system command.
The problem is that my program sometimes does not use the updated class files after they have been moved over.
The problem persists even if I log out and back into the remote machine. What's really strange is that, as a test, I deleted the bin directory entirely on the remote machine, and was still able to run my program.
Can anyone explain this?
| 0 |
java,python
|
2010-10-29T03:42:00.000
| 1 | 4,048,821 |
I would bet dollars for donuts that under some conditions you are not restarting the JVM between tests.
The other obvious thought is that the class is not being copied to the target system as expected, or not to the correct location. Or, of course, the program is not being run from where you expect (i.e. there is another copy of the class files, perhaps in a JAR, which is actually being run).
Explicitly recheck all your assumptions.
| 0 | 88 | false | 0 | 1 |
Java: updated class files not used
| 4,048,850 |
3 | 4 | 0 | 1 | 0 | 0 | 0.049958 | 0 |
I want to perform a comparison of multiple implementations of basically the same algorithm, written in Java, C++ and Python, the latter executed using Pypy, Jython and CPython on a Mac OS X 10.6.4 Macbook Pro with normal (non-SSD) HDD.
It's a "decode a stream of data from a file" type of algorithm, where the relevant measurement is total execution time, and I want to prevent bias through e.g. OS and HDD caches, other programs running simultaneously, too large/small a sample file, etc. What do I need to pay attention to to create a fair comparison?
| 0 |
java,c++,python,jython,performance
|
2010-10-29T14:19:00.000
| 1 | 4,052,691 |
These are difficult to do well.
In many cases the operating system will cache files so the second time they are executed they suddenly perform much better.
The other problem is you're comparing interpreted languages against compiled. The interpreted languages require an interpreter loaded into memory somewhere or they can't run. To be scrupulously fair you really should consider if memory usage and load time for the interpreter should be part of the test. If you're looking for performance in an environment where you can assume the interpreter is always preloaded then you can ignore that. Many setups for web servers will be able to keep an interpreter preloaded. If you're doing ad hoc client applications on a desktop then the start up can be very slow while the interpreter is loaded.
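A rough sketch of measuring total wall-clock time including interpreter start-up, so a cold first run can be compared with warm-cache runs. The `-c pass` payload is just a stand-in for the real decoder program.

```python
import subprocess
import sys
import time

# Time one full program run, interpreter start-up included.
def time_run(argv):
    start = time.perf_counter()
    subprocess.run(argv, check=True)
    return time.perf_counter() - start

# First run is "cold"; later runs benefit from OS caches.
timings = [time_run([sys.executable, "-c", "pass"]) for _ in range(5)]
print("cold:", timings[0], "warm min:", min(timings[1:]))
```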
| 0 | 183 | false | 0 | 1 |
Performing unbiased program/script performance comparison
| 4,053,156 |
3 | 4 | 0 | 0 | 0 | 0 | 1.2 | 0 |
I want to perform a comparison of multiple implementations of basically the same algorithm, written in Java, C++ and Python, the latter executed using Pypy, Jython and CPython on a Mac OS X 10.6.4 Macbook Pro with normal (non-SSD) HDD.
It's a "decode a stream of data from a file" type of algorithm, where the relevant measurement is total execution time, and I want to prevent bias through e.g. OS and HDD caches, other programs running simultaneously, too large/small a sample file, etc. What do I need to pay attention to to create a fair comparison?
| 0 |
java,c++,python,jython,performance
|
2010-10-29T14:19:00.000
| 1 | 4,052,691 |
I would recommend that you simply run each program many times (like 20 or so) and take the lowest measurement of each set. This will make it so it is highly likely that the program will use the HD cache and other things like that. If they all do that, then it isn't biased.
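The take-the-lowest approach can be sketched with the standard timeit module, which already reports per-run timings so min() is easy to apply. The statement being timed here is a placeholder.

```python
import timeit

# 20 repeats of 100 executions each; each repeat yields one total time.
runs = timeit.repeat("sum(range(1000))", repeat=20, number=100)
best = min(runs)   # lowest measurement: least interference from the OS
print(best)
```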
| 0 | 183 | true | 0 | 1 |
Performing unbiased program/script performance comparison
| 4,053,208 |
3 | 4 | 0 | 0 | 0 | 0 | 0 | 0 |
I want to perform a comparison of multiple implementations of basically the same algorithm, written in Java, C++ and Python, the latter executed using Pypy, Jython and CPython on a Mac OS X 10.6.4 Macbook Pro with normal (non-SSD) HDD.
It's a "decode a stream of data from a file" type of algorithm, where the relevant measurement is total execution time, and I want to prevent bias through e.g. OS and HDD caches, other programs running simultaneously, too large/small a sample file, etc. What do I need to pay attention to to create a fair comparison?
| 0 |
java,c++,python,jython,performance
|
2010-10-29T14:19:00.000
| 1 | 4,052,691 |
Getting a totally unbiased comparison is impossible. You can do various things, like running a minimum of other processes, but IMO the best way is to run the scripts in random order over a long period of time, across different days, and take the average, which will be as near to unbiased as possible.
Ultimately the code will run in such an environment, in random order, and you are interested in average behavior, not a few individual numbers.
| 0 | 183 | false | 0 | 1 |
Performing unbiased program/script performance comparison
| 4,053,230 |
2 | 3 | 0 | 1 | 0 | 1 | 0.066568 | 0 |
I'm trying to figure out how I can best save the map data for a 2D ORPG engine I am developing. The file would contain tile data (is it blocked, what actual graphics it would use, and various other properties).
I am currently using a binary format, but I think this might be a bit too limited and hard to debug. What alternatives are there? I was thinking about perhaps JSON or XML, but I don't know if there are any other better options.
It has to work with C++ and C#, and preferably also with Python.
| 0 |
c#,python,c++
|
2010-10-29T14:53:00.000
| 0 | 4,052,990 |
XML is well supported across basically every language. It may become verbose for large maps, however, depending on how you encode the map data in XML.
JSON might not be a good choice, simply because I don't think it supports multiline strings, which would be helpful (although not really necessary).
YAML is another alternative, though it's not as well-known.
You could just stick to binary - most maps would be a pain to edit by hand, no matter what format you pick (though I've heard of Starcraft maps being edited with hex editors...) Just use whatever seems easiest for you.
Additionally, check out the Tiled map editor (http://www.mapeditor.org/), which lets you edit maps (with custom tile properties, I think) and save it in an XML based format, including optional GZip for compression.
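For illustration, here is a tile map round-tripped through JSON; the field names are invented, not taken from any particular engine.

```python
import json

# A toy tile map: per-tile blocked flag, graphic name, and coordinates.
tiles = [
    {"x": 0, "y": 0, "blocked": False, "graphic": "grass"},
    {"x": 1, "y": 0, "blocked": True,  "graphic": "wall"},
]

text = json.dumps({"width": 2, "height": 1, "tiles": tiles}, indent=2)
loaded = json.loads(text)   # identical parsing exists for C++/C# JSON libs
```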
| 0 | 446 | false | 0 | 1 |
Saving map data in a 2d ORPG
| 4,056,557 |
2 | 3 | 0 | 1 | 0 | 1 | 1.2 | 0 |
I'm trying to figure out how I can best save the map data for a 2D ORPG engine I am developing. The file would contain tile data (is it blocked, what actual graphics it would use, and various other properties).
I am currently using a binary format, but I think this might be a bit too limited and hard to debug. What alternatives are there? I was thinking about perhaps JSON or XML, but I don't know if there are any other better options.
It has to work with C++ and C#, and preferably also with Python.
| 0 |
c#,python,c++
|
2010-10-29T14:53:00.000
| 0 | 4,052,990 |
Personally, I would stick with a binary format. Whatever method you choose, it's going to be a pain in the ass to edit by hand anyway, so you may as well stick to binary which gives you a size and speed advantage.
You're also going to want a map editor anyway so that you do not have to edit it by hand.
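A sketch of what a compact binary tile record could look like, using Python's struct module; the field layout is invented for illustration, and the same fixed layout is easy to mirror in C++ or C#.

```python
import struct

# x, y as unsigned shorts, a blocked flag, and a graphic id, little-endian.
TILE = struct.Struct("<HHBB")

packed = TILE.pack(1, 0, 1, 17)        # x=1, y=0, blocked=1, graphic=17
x, y, blocked, graphic = TILE.unpack(packed)
print(len(packed))  # 6 bytes per tile
```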
| 0 | 446 | true | 0 | 1 |
Saving map data in a 2d ORPG
| 4,062,198 |
2 | 4 | 0 | 2 | 1 | 1 | 0.099668 | 0 |
I just encountered a problem when subclassing the dict type. I overrode the __iter__ method and expected it to affect other methods like iterkeys, keys, etc., because I believed they call __iter__ to get their values, but it seems they are implemented independently, so I have to override all of them.
Is this a bug, or is it intentional that they don't make use of each other and retrieve values separately?
I didn't find, in the standard Python documentation, a description of the call dependencies between methods of the standard classes. It would be handy for subclassing work, and for knowing which methods are required to be overridden for proper behaviour. Is there some supplemental documentation about the internals of Python's base types/classes?
| 0 |
python,python-datamodel
|
2010-10-29T16:02:00.000
| 0 | 4,053,662 |
If not specified in the documentation, it is implementation specific. Implementations other than CPython might re-use the __iter__ method to implement iterkeys and the others. I would not consider this to be a bug, but simply a bit of freedom for the implementors.
I suspect there is a performance factor in implementing the methods independently, especially as dictionaries are so widely used in Python.
So basically, you should implement them.
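A small demonstration of the behavior in question (Python 3 shown, where keys() plays the role iterkeys() did in Python 2): dict's other methods do not route through __iter__, so each must be overridden separately.

```python
# A dict subclass that only overrides __iter__.
class UpperKeys(dict):
    def __iter__(self):
        return (k.upper() for k in dict.__iter__(self))

d = UpperKeys(a=1, b=2)
via_iter = sorted(d)          # goes through __iter__
via_keys = sorted(d.keys())   # bypasses __iter__ in CPython
```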
| 0 | 142 | false | 0 | 1 |
Python std methods hierarchy calls documented?
| 4,053,781 |
2 | 4 | 0 | 1 | 1 | 1 | 0.049958 | 0 |
I just encountered a problem when subclassing the dict type. I overrode the __iter__ method and expected it to affect other methods like iterkeys, keys, etc., because I believed they call __iter__ to get their values, but it seems they are implemented independently, so I have to override all of them.
Is this a bug, or is it intentional that they don't make use of each other and retrieve values separately?
I didn't find, in the standard Python documentation, a description of the call dependencies between methods of the standard classes. It would be handy for subclassing work, and for knowing which methods are required to be overridden for proper behaviour. Is there some supplemental documentation about the internals of Python's base types/classes?
| 0 |
python,python-datamodel
|
2010-10-29T16:02:00.000
| 0 | 4,053,662 |
You know the saying: "You know what happens when you assume." :-)
They don't officially document that stuff because they may decide to change it in the future. Any unofficial documentation you may find would simply document the current behavior of one Python implementation, and relying on it would result in your code being very, very fragile.
When there is official documentation of special methods, it tends to describe the behavior of the interpreter with respect to your own classes, such as using __len__() when __nonzero__() isn't implemented, or only needing __lt__() for sorting.
Since Python uses duck typing, you usually don't actually need to inherit from a built-in class to make your own class act like one. So you might reconsider whether subclassing dict is really what you want to do. You might choose a different class, such as something from the collections module, or to encapsulate rather than inheriting. (The UserString class uses encapsulation.) Or just start from scratch.
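For example, collections.UserDict (the Python 3 descendant of the old UserDict module) keeps its mapping in self.data and routes many of its other operations through the methods you override, unlike a direct dict subclass. The key-uppercasing below is just a toy behavior.

```python
from collections import UserDict

# Override one method; operations such as update() route through it.
class UpperDict(UserDict):
    def __setitem__(self, key, value):
        self.data[key.upper()] = value

u = UpperDict()
u["spam"] = 1       # stored under "SPAM" via the override
```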
| 0 | 142 | false | 0 | 1 |
Python std methods hierarchy calls documented?
| 4,053,834 |
3 | 4 | 1 | 6 | 4 | 0 | 1.2 | 0 |
I just downloaded the original Python interpreter from Python's site. I just want to learn this language but to start with, I want to write Windows-based standalone applications that are powered by any RDBMS. I want to bundle it like any typical Windows setup.
I searched old posts on SO and found guys suggesting wxPython and py2exe. Apart from that few suggested IronPython since it is powered by .NET.
I want to know whether IronPython is a pure variant of Python or a modified variant. Secondly, what is the actual use of Python? Is it for PHP like thing or like C# (you can either program Windows-based app. or Web.).
| 0 |
python,wxpython,ironpython
|
2010-10-30T14:49:00.000
| 0 | 4,059,201 |
IronPython isn't a variant of Python, it is Python. It's an implementation of the Python language based on the .NET framework. So, yes, it is pure Python.
IronPython is caught up to CPython (the implementation you're probably used to) 2.6, so some of the features/changes seen in Python 2.7 or 3.x will not be present in IronPython. Also, the standard library is a bit different (but what you lose is replaced by all that .NET has to offer).
The primary application of IronPython is to script .NET applications written in C# etc., but it can also be used as a standalone. IronPython can also be used to write web applications using the SilverLight framework.
If you need access to .NET features, use IronPython. If you're just trying to make a Windows executable, use py2exe.
Update
For writing basic RDBMS apps, just use CPython (original Python), it's more extensible and faster. Then, you can use a number of tools to make it stand alone on a Windows PC. For now, though, just worry about learning Python (those skills will mostly carry over to IronPython if you choose to switch) and writing your application.
| 0 | 1,297 | true | 0 | 1 |
Is IronPython a 100% pure Python variant?
| 4,059,227 |
3 | 4 | 1 | 1 | 4 | 0 | 0.049958 | 0 |
I just downloaded the original Python interpreter from Python's site. I just want to learn this language but to start with, I want to write Windows-based standalone applications that are powered by any RDBMS. I want to bundle it like any typical Windows setup.
I searched old posts on SO and found guys suggesting wxPython and py2exe. Apart from that few suggested IronPython since it is powered by .NET.
I want to know whether IronPython is a pure variant of Python or a modified variant. Secondly, what is the actual use of Python? Is it for PHP like thing or like C# (you can either program Windows-based app. or Web.).
| 0 |
python,wxpython,ironpython
|
2010-10-30T14:49:00.000
| 0 | 4,059,201 |
IronPython is an implementation of Python in C#, just as Jython is an implementation of Python in Java. You might want to note that IronPython and Jython will always lag a little behind in development. However, you do get the benefit of having some libraries that are not available in the standard Python library. In IronPython, you get access to some of the .NET facilities, like System.Drawing and such, though by using these non-standard libraries it will be harder to port your code to other platforms. For example, you will have to install Mono to run apps written in IronPython on Linux (on Windows you will need the .NET Framework).
| 0 | 1,297 | false | 0 | 1 |
Is IronPython a 100% pure Python variant?
| 4,059,540 |
3 | 4 | 1 | 1 | 4 | 0 | 0.049958 | 0 |
I just downloaded the original Python interpreter from Python's site. I just want to learn this language but to start with, I want to write Windows-based standalone applications that are powered by any RDBMS. I want to bundle it like any typical Windows setup.
I searched old posts on SO and found guys suggesting wxPython and py2exe. Apart from that few suggested IronPython since it is powered by .NET.
I want to know whether IronPython is a pure variant of Python or a modified variant. Secondly, what is the actual use of Python? Is it for PHP like thing or like C# (you can either program Windows-based app. or Web.).
| 0 |
python,wxpython,ironpython
|
2010-10-30T14:49:00.000
| 0 | 4,059,201 |
What does "pure Python" mean? If you're talking about implemented in Python, in the same sense that a module may be pure Python, then no, and no Python implementation is. If you mean "compatible with CPython", then yes: code written for CPython will work in IronPython, with a few caveats. The one that's likely to matter most is that the libraries are different; for instance, code depending on ctypes or Tkinter won't work. Another difference is that IronPython lags behind CPython by a bit: the latest version as of this writing is 2.6.1, with an alpha version supporting a few of the 2.7 language features available too.
What do you really need? If you want to learn to program with Python, and also want to produce code for Windows, you can use IronPython for that, but you can also use CPython and py2exe; both will work equally well for this, with only differences in the libraries.
| 0 | 1,297 | false | 0 | 1 |
Is IronPython a 100% pure Python variant?
| 4,059,291 |
1 | 3 | 0 | 0 | 0 | 0 | 0 | 0 |
I am using sticky notes in Ubuntu, and was wondering if it would be possible to read the text written in the sticky notes using any scripting language.
| 0 |
python,ruby,perl
|
2010-10-30T17:35:00.000
| 1 | 4,059,851 |
The Tomboy notes are saved as XML files, so you could write an XML parser.
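A hedged sketch with xml.etree.ElementTree follows; the element names below are only an approximation of a Tomboy note file, so inspect a real ~/.tomboy/*.note to get the exact tags and namespaces.

```python
import xml.etree.ElementTree as ET

# Approximate (invented) note structure standing in for a real .note file.
sample = """<note><title>Shopping</title>
<text><note-content>milk and eggs</note-content></text></note>"""

root = ET.fromstring(sample)
title = root.findtext("title")
content = root.findtext("text/note-content")
```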
| 0 | 536 | false | 0 | 1 |
is it possible to read the text written in a sticky note using a script in linux?
| 4,060,827 |
1 | 6 | 0 | 2 | 5 | 1 | 0.066568 | 0 |
I'm doing a few projects in python right now, and I'm trying to figure out how to work with my own versions of existing open source packages.
For instance, I'm using tipfy with zc.buildout, and I've added in the 'paypal' package. Unfortunately it doesn't have a feature I need, so I've forked it on github and added the feature. I will send the original package maintainers a pull request, but whether they accept my additions or not, I'd like to use my version of the package and keep the convenience of having zc.buildout manage my dependencies. How do I do this?
Do I upload my own take on the library to PyPI and prefix it with my name? Wouldn't that unnecessarily pollute the index?
Or should I make and maintain my own index and package repo? Where do I find the format for this? And is it against to terms of the OSS licenses to host my own repo with modified packages with the same names? (I'd rather not modify every file in the project with new namespaces)
I'm sure this problem comes up quite a lot, and not just with python. I can see this happening with Maven and SBT, too... what do people usually do when they want to use their own versions of popular packages?
| 0 |
python,buildout,pypi,tipfy
|
2010-11-01T04:06:00.000
| 0 | 4,066,571 |
To keep your headache in check, I would really recommend just bundling all such custom code with your package. Say you made a fork of some package. As long as its license allows it, I would just bundle the modified version of the package with my code, as if it were just another directory. You can place it locally under your package so it will be easily found. Once the developers of the package fix what you need, just remove this directory and make it a dependency on an online package once again.
An added bonus of this approach is making distribution to users/customers easier.
| 0 | 1,349 | false | 0 | 1 |
Using custom packages on my python project
| 4,066,739 |
1 | 2 | 0 | 1 | 1 | 1 | 1.2 | 0 |
When I use Google IMAP and try to delete a message, the message is removed from the folder but does not go to the Trash folder. Must I copy the message before deleting it?
| 0 |
python,gmail,imap,imaplib,gmail-imap
|
2010-11-01T08:34:00.000
| 0 | 4,067,499 |
Simple answer: yes.
There is no concept of Deleted Items, Trash, etc. in IMAP. If you want to have a message in one of those folders after deletion, you have to copy it.
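A sketch of that copy-then-delete sequence with imaplib follows. The mailbox name "[Gmail]/Trash" is locale-dependent, so treat it as an assumption; a stub object stands in for an imaplib connection so the sketch runs offline.

```python
# Gmail keeps deleted mail only if you copy it to Trash first.
def delete_to_trash(conn, msg_num, trash="[Gmail]/Trash"):
    conn.copy(msg_num, trash)                     # keep a copy in Trash
    conn.store(msg_num, "+FLAGS", r"(\Deleted)")  # then flag the original
    conn.expunge()                                # and purge it

# Stub standing in for imaplib.IMAP4_SSL, recording the calls made.
class StubIMAP:
    def __init__(self):
        self.calls = []
    def copy(self, *a):
        self.calls.append(("copy",) + a)
    def store(self, *a):
        self.calls.append(("store",) + a)
    def expunge(self):
        self.calls.append(("expunge",))

stub = StubIMAP()
delete_to_trash(stub, b"1")
```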
| 0 | 896 | true | 0 | 1 |
Deleting using google IMAP (python, imaplib)
| 4,069,524 |
2 | 3 | 0 | 0 | 0 | 0 | 1.2 | 1 |
Our client wants us to implement change history for website articles. What is the best way to do it?
| 0 |
php,.net,python,ruby
|
2010-11-02T06:11:00.000
| 0 | 4,075,309 |
I presume you're using a CMS. If not, use one. WordPress is a good start.
If you're developing from scratch, the usual method is to have two tables: one for page information (so title, menu position etc.) and then a page_content table, which has columns for page_id, content, and timestamp.
As you save a page, instead of updating a database table you instead write a new record to the page_content table with the page's ID and the time of the save. That way, when displaying pages on your front-end you just select the latest record for that particular page ID, but you also have a history of that page by querying for all records by page_id, sorted by timestamp.
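The two-table scheme above can be sketched with SQLite; the table and column names are invented for illustration.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE page (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE page_content (
        page_id  INTEGER REFERENCES page(id),
        content  TEXT,
        saved_at TEXT DEFAULT CURRENT_TIMESTAMP
    );
""")
db.execute("INSERT INTO page (id, title) VALUES (1, 'About')")
# Every save inserts a new row instead of updating in place:
db.execute("INSERT INTO page_content (page_id, content, saved_at) "
           "VALUES (1, 'v1', '2010-11-01')")
db.execute("INSERT INTO page_content (page_id, content, saved_at) "
           "VALUES (1, 'v2', '2010-11-02')")

# The front-end shows only the newest revision of each page:
latest = db.execute(
    "SELECT content FROM page_content WHERE page_id = 1 "
    "ORDER BY saved_at DESC LIMIT 1").fetchone()[0]
```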
| 0 | 289 | true | 1 | 1 |
What is the best way to store change history of website articles?
| 4,076,484 |
2 | 3 | 0 | -1 | 0 | 0 | -0.066568 | 1 |
Our client wants us to implement change history for website articles. What is the best way to do it?
| 0 |
php,.net,python,ruby
|
2010-11-02T06:11:00.000
| 0 | 4,075,309 |
There is a wide variety of ways to do this, as you alluded to by tagging php, .net, python, and ruby. You missed a few; off the top of my head, Perl and JSP. Each of these has its pluses and minuses, and it's really a question of what best suits your needs.
PHP is probably the fastest reward for time spent.
Ruby, I'm assuming Ruby on Rails, is the automatic Buzz Word Bingo for the day.
.NET: are you all-Microsoft everywhere and want easy integration with your Exchange server and a nice Outlook API?
Python: do you like scripting languages but feel you're too good for PHP and Ruby?
Each of these languages has its strong points and its drawbacks, and it's really a matter of what you know, how much you have to spend, and what your timeframe is.
| 0 | 289 | false | 1 | 1 |
What is the best way to store change history of website articles?
| 4,075,372 |
1 | 2 | 0 | 0 | 3 | 0 | 0 | 0 |
I need to extend Python code that has plenty of hard-coded paths.
In order not to mess everything up, I want to create unit tests for the code before my modifications: they will serve as non-regression tests against my new code (which will not have hard-coded paths).
But because of the hard-coded system paths, I have to run my tests inside a chroot tree (I don't want to pollute my system directories).
My problem is that I want to set up the chroot only for tests, and with os.chroot this can be done only with root privileges (and I don't want to run the test scripts as root).
In fact, I just need a fake directory tree so that code calling open('/etc/resolv.conf') retrieves a fake resolv.conf and not my system one.
I obviously don't want to replace the hard-coded paths in the code myself, because then it would not be a real regression test.
Do you have any idea how to achieve this?
Thanks
Note that all the paths accessed are readable with a user account.
| 0 |
python,linux,unit-testing,regression
|
2010-11-02T11:52:00.000
| 1 | 4,077,338 |
You could use a helper application that is setuid root to run the chroot; that would avoid the need to run the tests as root. Of course, that would probably still open up a local root exploit, so should only be done with appropriate precautions (e.g. in a VM image).
At any rate, any solution with chroot is inherently platform-dependent, so it's rather awkward. I actually like the idea of Dave Webb (override open) better, I must admit...
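A minimal sketch of that override-open approach: redirect hard-coded absolute paths to fake content without chroot or root privileges. The fake file table is invented for illustration.

```python
import builtins
import io
from unittest import mock

# Paths the code under test opens, mapped to fake contents.
FAKE_FILES = {"/etc/resolv.conf": "nameserver 10.0.0.1\n"}

real_open = open  # keep a handle so other files still open normally

def fake_open(path, *args, **kwargs):
    if path in FAKE_FILES:
        return io.StringIO(FAKE_FILES[path])
    return real_open(path, *args, **kwargs)

# Any open('/etc/resolv.conf') inside the block sees the fake file.
with mock.patch.object(builtins, "open", fake_open):
    with open("/etc/resolv.conf") as fh:
        first_line = fh.readline()
```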
| 0 | 291 | false | 0 | 1 |
regression test dealing with hard coded path
| 4,078,021 |
2 | 6 | 0 | 1 | 8 | 0 | 1.2 | 0 |
I've done some experiments using Apache Bench to profile my code response times, and it doesn't quite generate the right kind of data for me. I hope the good people here have ideas.
Specifically, I need a tool that
Does HTTP requests over the network (it doesn't need to do anything very fancy)
Records response times as accurately as possible (at least to a few milliseconds)
Writes the response time data to a file without further processing (or provides it to my code, if a library)
I know about ab -e, which prints data to a file. The problem is that this prints only the quantile data, which is useful, but not what I need. The ab -g option would work, except that it doesn't print sub-second data, meaning I don't have the resolution I need.
I wrote a few lines of Python to do it, but the httplib is horribly inefficient and so the results were useless. In general, I need better precision than pure Python is likely to provide. If anyone has suggestions for a library usable from Python, I'm all ears.
I need something that is high performance, repeatable, and reliable.
I know that half my responses are going to be along the lines of "internet latency makes that kind of detailed measurement meaningless." In my particular use case, this is not true. I need high resolution timing details. Something that actually used my HPET hardware would be awesome.
Throwing a bounty on here because of the low number of answers and views.
| 0 |
python,profiling,benchmarking,latency,apachebench
|
2010-11-03T01:25:00.000
| 1 | 4,083,523 |
I have done this in two ways.
With "LoadRunner", which is a wonderful but pretty expensive product (from, I think, HP these days).
With a combination of Perl/PHP and the cURL package. I found the cURL API slightly easier to use from PHP. It's pretty easy to roll your own GET and PUT requests. I would also recommend manually running through some sample requests with Firefox and the LiveHttpHeaders add-on to capture the exact format of the HTTP requests you need.
| 0 | 6,473 | true | 0 | 1 |
Alternatives to ApacheBench for profiling my code speed
| 4,083,570 |
2 | 6 | 0 | 0 | 8 | 0 | 0 | 0 |
I've done some experiments using Apache Bench to profile my code response times, and it doesn't quite generate the right kind of data for me. I hope the good people here have ideas.
Specifically, I need a tool that
Does HTTP requests over the network (it doesn't need to do anything very fancy)
Records response times as accurately as possible (at least to a few milliseconds)
Writes the response time data to a file without further processing (or provides it to my code, if a library)
I know about ab -e, which prints data to a file. The problem is that this prints only the quantile data, which is useful, but not what I need. The ab -g option would work, except that it doesn't print sub-second data, meaning I don't have the resolution I need.
I wrote a few lines of Python to do it, but the httplib is horribly inefficient and so the results were useless. In general, I need better precision than pure Python is likely to provide. If anyone has suggestions for a library usable from Python, I'm all ears.
I need something that is high performance, repeatable, and reliable.
I know that half my responses are going to be along the lines of "internet latency makes that kind of detailed measurements meaningless." In my particular use case, this is not true. I need high resolution timing details. Something that actually used my HPET hardware would be awesome.
Throwing a bounty on here because of the low number of answers and views.
| 0 |
python,profiling,benchmarking,latency,apachebench
|
2010-11-03T01:25:00.000
| 1 | 4,083,523 |
I've used a script to drive 10 boxes on the same switch to generate load by "replaying" requests to 1 server. I had my web app log response time (server only) to the granularity I needed, but I didn't care about the response time to the client. I'm not sure you care to include the trip to and from the client in your calculations, but if you did it shouldn't be too difficult to code up. I then processed my log with a script which extracted the times per URL and produced scatter plot graphs, and trend graphs based on load.
This satisfied my requirements which were:
Real world distribution of calls to different urls.
Trending performance based on load.
Not influencing the web app by running other intensive ops on the same box.
I wrote the controller as a shell script that, for each server, started a background process looping over all the URLs in a file, calling curl on each one. I wrote the log processor in Perl since I was doing more Perl at that time.
| 0 | 6,473 | false | 0 | 1 |
Alternatives to ApacheBench for profiling my code speed
| 4,162,131 |
1 | 3 | 0 | 0 | 8 | 0 | 0 | 0 |
I'm using PyDev ( with Aptana ) to write and debug a Python Pylons app, and I'd like to step through the tests in the debugger.
Is it possible to launch nosetests through PyDev and stop at breakpoints?
| 0 |
python,debugging,pylons,pydev,nose
|
2010-11-03T13:38:00.000
| 1 | 4,087,582 |
Try import pydevd; pydevd.settrace() where you would like a breakpoint.
| 0 | 4,516 | false | 0 | 1 |
Interactive debugging with nosetests in PyDev
| 4,087,763 |
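A sketch of how this answer's suggestion can be made safe to leave in a test suite (the `PYDEV_DEBUG` switch and the helper name are my own invention; `pydevd` itself ships with the PyDev plugin and a debug server must be listening for `settrace()` to connect):

```python
import os

def maybe_break():
    """Suspend under the PyDev remote debugger if asked to; no-op otherwise."""
    if os.environ.get("PYDEV_DEBUG") != "1":
        return False  # normal nosetests run: keep the tests green
    import pydevd  # bundled with the PyDev plugin (assumed on sys.path)
    pydevd.settrace()  # execution stops here in the PyDev debugger
    return True

def test_something():
    value = 2 + 2
    maybe_break()  # acts like a breakpoint only when PYDEV_DEBUG=1
    assert value == 4

test_something()
```

Run `PYDEV_DEBUG=1 nosetests` to stop at the marked line; without the variable the helper returns immediately.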
1 | 2 | 0 | 5 | 4 | 1 | 0.462117 | 0 |
I wrote a nice little script to do some lightweight work. I set it to run all night, and when I eagerly checked it this morning, I found that I had left a module name prefix out of one of its variables. Is there any way to check for this kind of chicanery statically? The trouble is that this thing sleeps a lot, so running it isn't the best way to find out.
| 0 |
python
|
2010-11-04T12:44:00.000
| 0 | 4,096,751 |
The three most popular tools are pylint, pyflakes and pychecker.
Pyflakes will show you unused imports, unused variables, variable usage before assignment, syntax errors and things like that. Pychecker, AFAIK, is similar to pyflakes.
Pylint, on the other hand, is a much more comprehensive tool: apart from the checks listed above, it also checks for PEP8 compliance, variable names, docstrings, proper indentation, maximum line and module length, number of local variables and class methods and so on. It gives a more or less complete report with a universal score for your code. However, because of the sheer number of warnings it reports, it is quite tedious to use without proper configuration.
| 0 | 108 | false | 0 | 1 |
Python syntax and other things checking?
| 4,096,865 |
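The kind of check these linters perform can be approximated with the stdlib alone. This toy version only handles module-level names and ignores real scoping rules, which is exactly why pyflakes/pylint are the better answer, but it does catch the asker's bug (a bare name used where `module.name` was intended):

```python
import ast
import builtins

def undefined_names(source):
    """Toy, module-level-only check: names read but never bound anywhere."""
    tree = ast.parse(source)  # raises SyntaxError for malformed code
    bound = set(dir(builtins))
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and isinstance(node.ctx, (ast.Store, ast.Del)):
            bound.add(node.id)
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            for alias in node.names:
                bound.add((alias.asname or alias.name).split(".")[0])
        elif isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            bound.add(node.name)
    used = {n.id for n in ast.walk(tree)
            if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}
    return used - bound

code = "import time\ntime.sleep(1)\nresult = sleep(2)  # forgot the prefix\n"
print(undefined_names(code))  # {'sleep'}
```

Because everything here is static, the script never has to run (or sleep) to reveal the mistake.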
4 | 6 | 0 | 1 | 2 | 0 | 0.033321 | 0 |
I am about to start collecting large amounts of numeric data in real-time (for those interested, the bid/ask/last or 'tape' for various stocks and futures). The data will later be retrieved for analysis and simulation. That's not hard at all, but I would like to do it efficiently and that brings up a lot of questions. I don't need the best solution (and there are probably many 'bests' depending on the metric, anyway). I would just like a solution that a computer scientist would approve of. (Or not laugh at?)
(1) Optimize for disk space, I/O speed, or memory?
For simulation, the overall speed is important. We want the I/O (really, I) speed of the data just faster than the computational engine, so we are not I/O limited.
(2) Store text, or something else (binary numeric)?
(3) Given a set of choices from (1)-(2), are there any standout language/library combinations to do the job-- Java, Python, C++, or something else?
I would classify this code as "write and forget", so more points for efficiency over clarity/compactness of code. I would very, very much like to stick with Python for the simulation code (because the sims do change a lot and need to be clear). So bonus points for good Pythonic solutions.
Edit: this is for a Linux system (Ubuntu)
Thanks
| 0 |
java,c++,python,storage,simulation
|
2010-11-04T15:51:00.000
| 0 | 4,098,509 |
Using D-Bus format to send the information may be to your advantage. The format is standard, binary, and D-Bus is implemented in multiple languages, and can be used to send both over the network and inter-process on the same machine.
| 1 | 2,032 | false | 0 | 1 |
Collecting, storing, and retrieving large amounts of numeric data
| 4,098,941 |
4 | 6 | 0 | 0 | 2 | 0 | 0 | 0 |
I am about to start collecting large amounts of numeric data in real-time (for those interested, the bid/ask/last or 'tape' for various stocks and futures). The data will later be retrieved for analysis and simulation. That's not hard at all, but I would like to do it efficiently and that brings up a lot of questions. I don't need the best solution (and there are probably many 'bests' depending on the metric, anyway). I would just like a solution that a computer scientist would approve of. (Or not laugh at?)
(1) Optimize for disk space, I/O speed, or memory?
For simulation, the overall speed is important. We want the I/O (really, I) speed of the data just faster than the computational engine, so we are not I/O limited.
(2) Store text, or something else (binary numeric)?
(3) Given a set of choices from (1)-(2), are there any standout language/library combinations to do the job-- Java, Python, C++, or something else?
I would classify this code as "write and forget", so more points for efficiency over clarity/compactness of code. I would very, very much like to stick with Python for the simulation code (because the sims do change a lot and need to be clear). So bonus points for good Pythonic solutions.
Edit: this is for a Linux system (Ubuntu)
Thanks
| 0 |
java,c++,python,storage,simulation
|
2010-11-04T15:51:00.000
| 0 | 4,098,509 |
If you are just storing, then use system tools. Don't write your own. If you need to do some real-time processing of the data before it is stored, then that's something completely different.
| 1 | 2,032 | false | 0 | 1 |
Collecting, storing, and retrieving large amounts of numeric data
| 4,098,550 |
4 | 6 | 0 | 1 | 2 | 0 | 0.033321 | 0 |
I am about to start collecting large amounts of numeric data in real-time (for those interested, the bid/ask/last or 'tape' for various stocks and futures). The data will later be retrieved for analysis and simulation. That's not hard at all, but I would like to do it efficiently and that brings up a lot of questions. I don't need the best solution (and there are probably many 'bests' depending on the metric, anyway). I would just like a solution that a computer scientist would approve of. (Or not laugh at?)
(1) Optimize for disk space, I/O speed, or memory?
For simulation, the overall speed is important. We want the I/O (really, I) speed of the data just faster than the computational engine, so we are not I/O limited.
(2) Store text, or something else (binary numeric)?
(3) Given a set of choices from (1)-(2), are there any standout language/library combinations to do the job-- Java, Python, C++, or something else?
I would classify this code as "write and forget", so more points for efficiency over clarity/compactness of code. I would very, very much like to stick with Python for the simulation code (because the sims do change a lot and need to be clear). So bonus points for good Pythonic solutions.
Edit: this is for a Linux system (Ubuntu)
Thanks
| 0 |
java,c++,python,storage,simulation
|
2010-11-04T15:51:00.000
| 0 | 4,098,509 |
Actually, this is quite similar to what I'm doing, which is monitoring changes players make to the world in a game. I'm currently using an sqlite database with python.
At the start of the program, I load the disk database into memory for fast writes. Each change is put into two lists: one for the memory database and one for the disk database. Every x or so updates, the memory database is updated and a counter is incremented. This is repeated, and when the counter equals 5, it's reset, the list of changes for the disk is flushed to the disk database, and the list is cleared. I have found this works well if I also set the journal mode to WAL (Write-Ahead Logging). This method can sustain about 100-300 updates a second if I update memory every 100 updates and flush the disk every 5 memory updates. You should probably choose binary, since, unless you have faults in your data sources, it would be the most logical choice.
| 1 | 2,032 | false | 0 | 1 |
Collecting, storing, and retrieving large amounts of numeric data
| 4,098,613 |
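A minimal sketch of the memory-plus-disk batching described above, using stdlib `sqlite3` (the table layout, batch size of 5, and tick data are illustrative, not from the original game):

```python
import os
import sqlite3
import tempfile

disk_path = os.path.join(tempfile.mkdtemp(), "changes.db")

mem = sqlite3.connect(":memory:")   # fast writes go here first
disk = sqlite3.connect(disk_path)   # durable copy, updated in batches
for conn in (mem, disk):
    conn.execute("CREATE TABLE changes (what TEXT, value REAL)")
disk.execute("PRAGMA journal_mode=WAL")  # write-ahead logging, as above

pending = []      # changes not yet flushed to disk
FLUSH_EVERY = 5   # illustrative batch size

def record(what, value):
    mem.execute("INSERT INTO changes VALUES (?, ?)", (what, value))
    pending.append((what, value))
    if len(pending) >= FLUSH_EVERY:
        disk.executemany("INSERT INTO changes VALUES (?, ?)", pending)
        disk.commit()
        del pending[:]

for i in range(12):
    record("block_%d" % i, float(i))

# 10 rows flushed in two batches of 5; the last 2 are only in memory so far
print(disk.execute("SELECT COUNT(*) FROM changes").fetchone()[0])  # 10
print(mem.execute("SELECT COUNT(*) FROM changes").fetchone()[0])   # 12
```

A crash loses at most `FLUSH_EVERY - 1` changes, which is the trade-off the answer is describing.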
4 | 6 | 0 | 3 | 2 | 0 | 0.099668 | 0 |
I am about to start collecting large amounts of numeric data in real-time (for those interested, the bid/ask/last or 'tape' for various stocks and futures). The data will later be retrieved for analysis and simulation. That's not hard at all, but I would like to do it efficiently and that brings up a lot of questions. I don't need the best solution (and there are probably many 'bests' depending on the metric, anyway). I would just like a solution that a computer scientist would approve of. (Or not laugh at?)
(1) Optimize for disk space, I/O speed, or memory?
For simulation, the overall speed is important. We want the I/O (really, I) speed of the data just faster than the computational engine, so we are not I/O limited.
(2) Store text, or something else (binary numeric)?
(3) Given a set of choices from (1)-(2), are there any standout language/library combinations to do the job-- Java, Python, C++, or something else?
I would classify this code as "write and forget", so more points for efficiency over clarity/compactness of code. I would very, very much like to stick with Python for the simulation code (because the sims do change a lot and need to be clear). So bonus points for good Pythonic solutions.
Edit: this is for a Linux system (Ubuntu)
Thanks
| 0 |
java,c++,python,storage,simulation
|
2010-11-04T15:51:00.000
| 0 | 4,098,509 |
Optimizing for disk space and IO speed is the same thing - these days, CPUs are so fast compared to IO that it's often overall faster to compress data before storing it (you may actually want to do that). I don't really see memory playing a big role (though you should probably use a reasonably-sized buffer to ensure you're doing sequential writes).
Binary is more compact (and thus faster). Given the amount of data, I doubt whether being human-readable has any value. The only advantage of a text format would be that it's easier to figure out and correct if it gets corrupted or you lose the parsing code.
| 1 | 2,032 | false | 0 | 1 |
Collecting, storing, and retrieving large amounts of numeric data
| 4,098,582 |
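A sketch of the binary option using Python's `struct` module (the record layout here is an assumption for illustration; a real tape format would be chosen to match the feed):

```python
import io
import struct

# One fixed-width record per tick: timestamp, bid, ask, last (doubles)
# and size (unsigned int). '<' means little-endian with no padding.
TICK = struct.Struct("<ddddI")
print(TICK.size)  # 36 bytes per record

ticks = [(1.0, 99.5, 99.75, 99.5, 10),
         (2.0, 99.5, 99.80, 99.8, 3)]

buf = io.BytesIO()  # stands in for a file opened in 'wb' mode
for t in ticks:
    buf.write(TICK.pack(*t))

# Reading back is sequential and fixed-stride, so it is very I/O friendly
buf.seek(0)
records = [TICK.unpack(chunk)
           for chunk in iter(lambda: buf.read(TICK.size), b"")]
print(records == ticks)  # True: doubles round-trip exactly
```

Fixed-width binary records also allow random access by record index (`offset = i * TICK.size`), which text formats cannot offer.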
1 | 2 | 0 | 1 | 3 | 0 | 0.099668 | 0 |
We have an application based on Excel 2003 and Python 2.4 on Windows XP 32bit. The application consists of a large collection of Python functions which can be called from a number of excel worksheets.
We've noticed an anomalous behavior, which is that sometimes in the middle of one of these calls the Python interpreter will start hunting around for modules which almost certainly are already loaded and in memory.
We know this because we were able to hook-up Sysinternal's Process Monitor to the process and observe that from time to time the process (when called) starts hunting around a bunch of directories and eggs for certain .py files.
The obvious thing to try is to see if the python search-path had become modified, however we found this not to be the case. It's exactly what we'd expect. The odd thing is that:
The occasions on which this searching behavior was triggered appear to be random, i.e. it did not happen every time or with any noticeable pattern.
The behavior did not affect the result of the function. It returned the same value irrespective of whether this file searching behavior was triggered.
The folders that were being scanned were non-existent (e.g. J:/python-eggs) on a machine where the J drive contained no such folder. Naturally procmon reports that this generated a file-not-found error.
It's all very mysterious, so I don't expect anybody to be able to provide a definitive answer as to what might be going wrong. I would appreciate any suggestions about how this problem might be debugged.
Thanks!
Answers to comments
All the things that are being searched for are actual, known Python files which exist in the main project .egg file. The odd thing is that at the time they are being searched for, those particular modules have already been imported. They must be in memory in order for the process to work.
Yes, this affects performance because sometimes this searching behavior tries to hit network drives. Also, by searching eggs which couldn't possibly contain certain modules, the process gets interrupted by the corporate-mandated virus scanner. That slows down what would normally be a harmless and instant interruption.
This is stock python 2.4.4. No modifications.
| 0 |
python,windows,excel
|
2010-11-04T18:14:00.000
| 0 | 4,099,817 |
"Python functions which can be called from a number of excel worksheets"
And you're not blaming Excel for randomly running Python modules? Why not? How have you proven that Excel is behaving properly?
| 0 | 111 | false | 0 | 1 |
Odd python search-path behavior, what's going wrong here?
| 4,103,210 |
1 | 2 | 0 | 15 | 6 | 0 | 1.2 | 0 |
Using Windows for the first time in quite a while and have picked up Notepad++ and am using the NppExec plugin to run Python scripts. However, I noticed that Notepad++ doesn't pick up the directory that my script is saved in. For example, I place "script.py" in 'My Documents', however os.getcwd() prints "Program Files \ Notepad++"
Does anyone know how to change this behavior? Not exactly used to it in Mac.
| 0 |
python,notepad++,nppexec
|
2010-11-05T02:15:00.000
| 1 | 4,103,085 |
In Notepad++: Plugins > NppExec > Follow $(CURRENT_DIRECTORY)
| 0 | 4,494 | true | 0 | 1 |
Getting NppExec to understand path of the current file in Notepad++ (for Python scripts)
| 4,106,339 |
1 | 1 | 0 | 8 | 4 | 1 | 1 | 0 |
I am going to build vim and see that it supports the pythoninterp feature by
--enable-pythoninterp. What is it? Since I am a big Python fan, I'd like to know more about it.
And also, what's the --with-python-config-dir=PATH for?
| 0 |
python,vim
|
2010-11-05T07:44:00.000
| 0 | 4,104,202 |
vim supports scripting in various languages, Python being one of them. See :h python for more details.
| 0 | 3,212 | false | 0 | 1 |
What is the vim feature: --enable-pythoninterp
| 4,104,215 |
4 | 7 | 0 | 2 | 4 | 1 | 0.057081 | 0 |
What is the most compact way to write 1,000,000 ints (0, 1, 2...) to file using Python without zipping etc? My answer is: 1,000,000 * 3 bytes using struct module, but it seems like interviewer expected another answer...
Edit. Numbers from 1 to 1,000,000 in random order (so transform like 5, 6, 7 -> 5-7 can be applied in rare case). You can use any writing method you know, but the resulting file should have minimum size.
| 0 |
python
|
2010-11-05T09:59:00.000
| 0 | 4,104,898 |
Assuming you do have to remember their order and that the numbers are in the range of 1 to 1,000,000, it would only take 20 bits or 2½ bytes to write each one since 1,000,000 is 0xF4240 in hexadecimal. You'd have to pack them together to not waste any space with this approach, but by doing so it would only take 2.5 * 1,000,000 bytes.
| 0 | 1,746 | false | 0 | 1 |
Python: Write 1,000,000 ints to file
| 4,105,223 |
4 | 7 | 0 | 0 | 4 | 1 | 0 | 0 |
What is the most compact way to write 1,000,000 ints (0, 1, 2...) to file using Python without zipping etc? My answer is: 1,000,000 * 3 bytes using struct module, but it seems like interviewer expected another answer...
Edit. Numbers from 1 to 1,000,000 in random order (so transform like 5, 6, 7 -> 5-7 can be applied in rare case). You can use any writing method you know, but the resulting file should have minimum size.
| 0 |
python
|
2010-11-05T09:59:00.000
| 0 | 4,104,898 |
I would only write the start and end of the given range, in this case 1 and 1,000,000, because nowhere has the interviewer mentioned order is important.
| 0 | 1,746 | false | 0 | 1 |
Python: Write 1,000,000 ints to file
| 4,105,172 |
4 | 7 | 0 | 2 | 4 | 1 | 1.2 | 0 |
What is the most compact way to write 1,000,000 ints (0, 1, 2...) to file using Python without zipping etc? My answer is: 1,000,000 * 3 bytes using struct module, but it seems like interviewer expected another answer...
Edit. Numbers from 1 to 1,000,000 in random order (so transform like 5, 6, 7 -> 5-7 can be applied in rare case). You can use any writing method you know, but the resulting file should have minimum size.
| 0 |
python
|
2010-11-05T09:59:00.000
| 0 | 4,104,898 |
Well, your solution takes three bytes (= 24 bits) per integer. Theoretically, 20 bits are enough (since 2^19 < 1,000,000 < 2^20).
EDIT: Oops, just noticed Neil’s comment stating the same. I’m making this answer CW since it really belongs to him.
| 0 | 1,746 | true | 0 | 1 |
Python: Write 1,000,000 ints to file
| 4,105,217 |
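The 20-bits-per-integer idea from this answer can be sketched in pure Python (`pack20`/`unpack20` are invented names; a real implementation would likely batch the bit twiddling for speed):

```python
def pack20(numbers):
    """Pack each number into 20 bits (enough for 1..1,000,000)."""
    acc = bits = 0
    out = bytearray()
    for n in numbers:
        acc = (acc << 20) | n
        bits += 20
        while bits >= 8:       # emit full bytes as they become available
            bits -= 8
            out.append((acc >> bits) & 0xFF)
    if bits:                   # flush the final partial byte, zero-padded
        out.append((acc << (8 - bits)) & 0xFF)
    return bytes(out)

def unpack20(data, count):
    acc = bits = 0
    out = []
    for byte in data:
        acc = (acc << 8) | byte
        bits += 8
        if bits >= 20:         # a full 20-bit value is now available
            bits -= 20
            out.append((acc >> bits) & 0xFFFFF)
            if len(out) == count:
                break
    return out

nums = [1, 1000000, 42]
packed = pack20(nums)
print(len(packed))          # 8 bytes instead of 9 with 3-byte ints
print(unpack20(packed, 3))  # [1, 1000000, 42]
```

For a million values this is 2,500,000 bytes instead of 3,000,000, matching the 20-bit bound discussed above.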
4 | 7 | 0 | 0 | 4 | 1 | 0 | 0 |
What is the most compact way to write 1,000,000 ints (0, 1, 2...) to file using Python without zipping etc? My answer is: 1,000,000 * 3 bytes using struct module, but it seems like interviewer expected another answer...
Edit. Numbers from 1 to 1,000,000 in random order (so transform like 5, 6, 7 -> 5-7 can be applied in rare case). You can use any writing method you know, but the resulting file should have minimum size.
| 0 |
python
|
2010-11-05T09:59:00.000
| 0 | 4,104,898 |
What is the most compact way to write 1,000,000 ints (0, 1, 2...) to file using Python without zipping etc
If you interpret the 1,000,000 ints as "I didn't specify that they have to be different", you can just use a for loop to write 0 one million times.
| 0 | 1,746 | false | 0 | 1 |
Python: Write 1,000,000 ints to file
| 4,105,238 |
1 | 4 | 1 | 1 | 9 | 0 | 0.049958 | 0 |
I am loading an IronPython script from a database and executing it. This works fine for simple scripts, but imports are a problem. How can I intercept these import calls and then load the appropriate scripts from the database?
EDIT: My main application is written in C# and I'd like to intercept the calls on the C# side without editing the Python scripts.
EDIT: From the research I've done, it looks like creating your own PlatformAdaptationLayer is the way you're supposed to implement this, but it doesn't work in this case. I've created my own PAL and in my testing, my FileExists method gets called for every import in the script. But for some reason it never calls any overload of the OpenInputFileStream method. Digging through the IronPython source, once FileExists returns true, it tries to locate the file itself on the path. So this looks like a dead end.
| 0 |
import,ironpython
|
2010-11-05T12:25:00.000
| 0 | 4,105,804 |
You can re-direct all I/O to the database using the PlatformAdaptationLayer. To do this you'll need to implement a ScriptHost which provides the PAL. Then when you create the ScriptRuntime you set the HostType to your host type and it'll be used for the runtime. On the PAL you then override OpenInputFileStream and return a stream object which has the content from the database (you could just use a MemoryStream here after reading from the DB).
If you want to still provide access to file I/O you can always fall back to FileStream's for "files" you can't find.
| 0 | 2,738 | false | 0 | 1 |
Custom IronPython import resolution
| 4,111,764 |
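The question asks for interception on the C# side via the PAL, but the same idea can be sketched on the CPython side with a meta path finder (the module name and the in-memory "database" below are stand-ins, not the asker's schema):

```python
import importlib.abc
import importlib.util
import sys

# Stand-in for the database: module name -> source code
FAKE_DB = {"dbmod": "def greet():\n    return 'loaded from the database'\n"}

class DatabaseFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    def find_spec(self, name, path, target=None):
        if name in FAKE_DB:
            return importlib.util.spec_from_loader(name, self)
        return None  # fall through to the normal import machinery

    def create_module(self, spec):
        return None  # use default module creation

    def exec_module(self, module):
        # Execute the stored source in the fresh module's namespace
        exec(FAKE_DB[module.__name__], module.__dict__)

sys.meta_path.insert(0, DatabaseFinder())

import dbmod  # resolved by DatabaseFinder, not the filesystem
print(dbmod.greet())  # loaded from the database
```

The .NET hosting API's `PlatformAdaptationLayer` plays the analogous role for IronPython: it is asked both whether a "file" exists and for a stream of its contents.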
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
Can someone recommend a library for calculating SHA1WithRSAEncryption in Python?
Context: I'm trying to do some message authentication. I've looked at PyXMLDSig, but it seemed to expect the certificates as separate files. As a first step to better understanding the problem space, I wanted to calculate the digest values "by hand".
I've looked around and seen Java implementations, but not Python ones. (Jython isn't really an option for my environment.)
Thanks in advance.
| 0 |
python,xml,sha1
|
2010-11-05T16:27:00.000
| 0 | 4,107,888 |
Take a look at M2Crypto, it's probably the best and most complete crypto library for Python.
| 0 | 378 | false | 0 | 1 |
sha1WithRSAEncryption in Python
| 4,233,195 |
2 | 2 | 0 | 0 | 1 | 1 | 1.2 | 0 |
I got a VS10 project. I want to build some C++ code so I can use it in Python. I followed the Boost tutorial and got it working. However, VS keeps linking boost-python-vc100-mt-gd-1_44.lib, but it's just a wrapper which calls boost-python-vc100-mt-gd-1_44.dll. That's why I need to copy the .dll along with my .dll (.pyd) file. So I want to link boost::python statically into that .dll (.pyd) file. But I just can't find any configuration option in VS or in the compiler and linker manual. The weirdest thing is I've got one older project using boost::filesystem with the very same config, but that project links against libboost-filesystem-*.lib, which is a static lib, so it's OK. I've been googling for a couple of hours without any success and it drives me crazy.
Thanks for any help or suggestion.
| 0 |
c++,visual-studio-2010,static,linker,boost-python
|
2010-11-07T22:58:00.000
| 0 | 4,120,169 |
Which libraries are linked depends on the settings of your project. There are two possibilities: you can build against statically or dynamically linked versions of the C runtime libs. Depending on which option is selected, Boost sends the proper #pragma to the linker. These options need to be set consistently in all projects which constitute your program. So go to "Properties -> C/C++ -> Code Generation" (or similar, I am just guessing, I don't have VS up and running right now) and be sure that the right option is set (consistently). Of course, you must have compiled the Boost libraries in the required format before...
| 0 | 4,303 | true | 0 | 1 |
MSVC - boost::python static linking to .dll (.pyd)
| 4,121,910 |
2 | 2 | 0 | 1 | 1 | 1 | 0.099668 | 0 |
I got a VS10 project. I want to build some C++ code so I can use it in python. I followed the boost tutorial and got it working. However VS keeps to link boost-python-vc100-mt-gd-1_44.lib but it's just a wrapper which calls boost-python-vc100-mt-gd-1_44.dll. That's why I need to copy the .dll with my .dll(.pyd) file. So I want to link boost:python statically to that .dll(.pyd) file. But I just can't find any configuration option in VS or in the compiler and linker manual. The weirdest thing is I've got one older project using boost::filesystem with the very same config but that project links against libboost-filesystem-*.lib which is static lib so it's ok. I've been googling for couple of hours without any success and it drivers me crazy.
Thanks for any help or suggestion.
| 0 |
c++,visual-studio-2010,static,linker,boost-python
|
2010-11-07T22:58:00.000
| 0 | 4,120,169 |
You probably don't want to do that. Statically linked Boost Python has a number of problems and quirks when there is more than one Boost Python-based library imported. "But I only have one" you say. Can you guarantee that your users won't have another? That you might want to use another in the future? Stick with the DLL. Distributing another DLL is really not that big a deal. Just put it side-by-side in the same directory.
| 0 | 4,303 | false | 0 | 1 |
MSVC - boost::python static linking to .dll (.pyd)
| 4,146,530 |
1 | 2 | 0 | 0 | 2 | 1 | 0 | 0 |
Let's say I have a bunch of small targets in different programming languages (C++, Java, Python, etc.), with inter-language dependencies (the Java project depends on a C++ one, the Python project depends on C++). How can one build/compile them?
I tried scons and more recently gyp. I don't remember what issues I had with scons. Gyp has a very ugly language definition plus I had to hack ant scripts in order to build my java targets.
| 0 |
java,c++,python,build,scons
|
2010-11-08T22:07:00.000
| 0 | 4,128,555 |
I once checked out CMake (for C++) and liked it very much. It's easy to use yet powerful, and quite similar to Make syntax. It also has Java support.
| 0 | 664 | false | 0 | 1 |
How to build/compile C++, Java and Python projects?
| 4,129,038 |
2 | 3 | 0 | 3 | 0 | 0 | 1.2 | 0 |
What would be this encoding's name?
smb://nas/music/_lib/v/voivod/voivod-rrr%C3%B6%C3%B6%C3%B6aaarrr/01%20-%20voivod%20-%20rrr%C3%B6%C3%B6%C3%B6aaarrr%20-%20korg%C3%BCll_the_exterminator.mp3
I would like to convert such string to unicode using Python. How would I do that?
| 0 |
python,unicode
|
2010-11-09T01:19:00.000
| 0 | 4,129,658 |
That's URL-encoded UTF-8. URL-decode it, then decode it as UTF-8.
| 0 | 791 | true | 0 | 1 |
unknown encoding to unicode
| 4,129,665 |
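In Python 3 this is effectively a one-liner, since `urllib.parse.unquote()` percent-decodes and interprets the bytes as UTF-8 by default (in the Python 2 of this question's era you would call `urllib.unquote()` and then `.decode('utf-8')`):

```python
from urllib.parse import unquote

# A fragment of the path from the question: %C3%B6 is the UTF-8
# encoding of 'ö', and %20 is a space.
encoded = "voivod-rrr%C3%B6%C3%B6%C3%B6aaarrr"
print(unquote(encoded))  # voivod-rrröööaaarrr
print(unquote("01%20-%20voivod"))  # 01 - voivod
```

If the decoded bytes were not valid UTF-8, `unquote()` would substitute replacement characters; pass `errors="strict"` to get an exception instead.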
2 | 3 | 0 | 0 | 0 | 0 | 0 | 0 |
What would be this encoding's name?
smb://nas/music/_lib/v/voivod/voivod-rrr%C3%B6%C3%B6%C3%B6aaarrr/01%20-%20voivod%20-%20rrr%C3%B6%C3%B6%C3%B6aaarrr%20-%20korg%C3%BCll_the_exterminator.mp3
I would like to convert such string to unicode using Python. How would I do that?
| 0 |
python,unicode
|
2010-11-09T01:19:00.000
| 0 | 4,129,658 |
Try urllib.unquote().
| 0 | 791 | false | 0 | 1 |
unknown encoding to unicode
| 4,129,716 |
1 | 2 | 0 | 0 | 7 | 0 | 1.2 | 0 |
What should I do for fast, full-text searching on App Engine with as little work as possible (and as little Java — I’m doing Python.)?
| 0 |
python,google-app-engine,search,full-text-search,full-text-indexing
|
2010-11-09T05:36:00.000
| 1 | 4,130,813 |
GAE has announced plans to offer full-text searching natively in the Datastore soon.
| 0 | 1,246 | true | 1 | 1 |
How should I do full-text searching on App Engine?
| 5,072,790 |
2 | 4 | 0 | 3 | 2 | 1 | 0.148885 | 0 |
There was the Unladen Swallow project that aimed to produce a faster Python, but it seems to have stopped.
Is there a way to get a faster python, I mean faster than C-Python, without the use of psyco ?
| 0 |
python,performance
|
2010-11-09T10:09:00.000
| 1 | 4,132,493 |
Sure. Use one of the variants that uses a JITer, such as IronPython, Jython, or PyPy.
| 0 | 312 | false | 0 | 1 |
When a faster python?
| 4,132,499 |
2 | 4 | 0 | 1 | 2 | 1 | 0.049958 | 0 |
There was the Unladen Swallow project that aimed to produce a faster Python, but it seems to have stopped.
Is there a way to get a faster python, I mean faster than C-Python, without the use of psyco ?
| 0 |
python,performance
|
2010-11-09T10:09:00.000
| 1 | 4,132,493 |
I saw PyPy be very fast on some tests: have a look.
| 0 | 312 | false | 0 | 1 |
When a faster python?
| 4,132,523 |
1 | 1 | 0 | 1 | 0 | 1 | 1.2 | 0 |
I needed to make a change to a 3rd party library, so I edited the files in the egg (which is not zipped). The egg lives in site-packages in a virtualenv. Everything works fine on my dev machine, but when I copied the egg to another machine, the module can longer be found to import.
I'm sure I went about this the wrong way, but I'm hoping there's a way to fix it.
| 0 |
python
|
2010-11-09T19:57:00.000
| 0 | 4,137,917 |
A quick fix to your problem should be adding the full path of the egg to a .pth file, which should exist in a directory on sys.path (in your case site-packages).
| 0 | 89 | true | 0 | 1 |
Copied python egg no longer works
| 4,138,407 |
1 | 4 | 0 | 0 | 0 | 1 | 0 | 0 |
I'm going through the Python tutorial, and I got to the section on modules.
I created a fibo.py file in Users/Me/code/Python (s
Now I'm back in the interpreter and I can't seem to import the module, because I don't understand how to import a relative (or absolute) path.
I'm also thoroughly confused by how and if to modify PYTHONPATH and/or sys.path.
All the other 'import module' questions on here seem to be
| 0 |
python,module
|
2010-11-09T22:06:00.000
| 0 | 4,139,167 |
Before importing any user-defined module, specify the path to the directory containing that module:
sys.path.append("path to your directory")
| 0 | 6,865 | false | 0 | 1 |
Importing a module in Python
| 30,371,608 |
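A self-contained sketch of the sys.path.append approach, matching the tutorial's fibo module (a throwaway temp directory stands in for Users/Me/code/Python):

```python
import os
import sys
import tempfile
import textwrap

# Create a throwaway directory containing fibo.py, like the tutorial module
moddir = tempfile.mkdtemp()
with open(os.path.join(moddir, "fibo.py"), "w") as f:
    f.write(textwrap.dedent("""
        def fib(n):
            result, a, b = [], 0, 1
            while a < n:
                result.append(a)
                a, b = b, a + b
            return result
    """))

sys.path.append(moddir)  # now the interpreter can find it
import fibo
print(fibo.fib(10))  # [0, 1, 1, 2, 3, 5, 8]
```

Setting the PYTHONPATH environment variable before launching the interpreter has the same effect as the sys.path.append call, but persists across sessions.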
2 | 2 | 0 | 1 | 9 | 1 | 1.2 | 0 |
I am writing now writing some evented code (In python using gevent) and I use the nginx as a web server and I feel both are great. I was told that there is a trade off with events but was unable to see it. Can someone please shed some light?
James
| 0 |
python,asynchronous,libevent,gevent
|
2010-11-10T02:13:00.000
| 1 | 4,140,656 |
Biggest issue is that without threads, a block for one client will cause a block for all clients. For example, if one client requests a resource (a file on disk, paged-out memory, etc.) that requires the OS to block the requesting process, then all clients will have to wait. A multithreaded server can block just the one client and continue to serve others.
That said, if the above scenario is unlikely (that is, all clients will request the same resources), then event-driven is the way to go.
| 0 | 1,703 | true | 0 | 1 |
Why shouldn't I use async (evented) IO
| 4,140,840 |
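The blocking hazard this answer describes can be shown with a toy round-robin scheduler; this is a deliberately naive stand-in for a real event loop like gevent's, and the step counts and sleep durations are invented for the demo:

```python
import time

def client(name, work_seconds):
    """A cooperative task: yields control between small units of work."""
    for step in range(3):
        if work_seconds:
            time.sleep(work_seconds)  # a *blocking* call stalls everyone
        yield "%s step %d" % (name, step)

def run_round_robin(tasks):
    """Single-threaded scheduler: run each task until its next yield."""
    log = []
    while tasks:
        task = tasks.pop(0)
        try:
            log.append(next(task))
            tasks.append(task)  # reschedule at the back of the queue
        except StopIteration:
            pass
    return log

start = time.perf_counter()
# One slow client blocks the single event loop for everyone:
log = run_round_robin([client("slow", 0.05), client("fast", 0)])
elapsed = time.perf_counter() - start
print(log)
print("total: %.2fs (the fast client waited on the slow one)" % elapsed)
```

With real threads the fast client would finish almost instantly; in the single-threaded loop its steps are interleaved behind every blocking sleep of the slow one.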
2 | 2 | 0 | 9 | 9 | 1 | 1 | 0 |
I am writing now writing some evented code (In python using gevent) and I use the nginx as a web server and I feel both are great. I was told that there is a trade off with events but was unable to see it. Can someone please shed some light?
James
| 0 |
python,asynchronous,libevent,gevent
|
2010-11-10T02:13:00.000
| 1 | 4,140,656 |
The only difficulty of evented programming is that you mustn't block, ever. This can be hard to achieve if you use some libraries that were designed with threads in mind. If you don't control these libraries, a fork() + message ipc is the way to go.
| 0 | 1,703 | false | 0 | 1 |
Why shouldn't I use async (evented) IO
| 4,291,204 |
2 | 2 | 0 | 0 | 1 | 1 | 0 | 0 |
how do I change the version of Python that emacs uses in the python-mode to the latest version that I just installed ?
I tried setting the PATH in my init.el file to the path where the latest version of Python resides, but it's not working.
| 0 |
python,emacs
|
2010-11-10T03:20:00.000
| 0 | 4,140,943 |
PATH is only searched when a program is launched via the shell.
For programs that are launched directly by Emacs (for example, via call-process), it's the exec-path variable that is searched.
| 0 | 811 | false | 0 | 1 |
Emacs on Mac for Python - python-mode keeps using the default Python version
| 4,143,655 |
2 | 2 | 0 | 1 | 1 | 1 | 0.099668 | 0 |
how do I change the version of Python that emacs uses in the python-mode to the latest version that I just installed ?
I tried setting the PATH in my init.el file to the path where the latest version of python resides but its not working.
| 0 |
python,emacs
|
2010-11-10T03:20:00.000
| 0 | 4,140,943 |
Set the variable python-python-command. This can be done via customize:
M-x customize-option RET python-python-command RET
Change the value to point to the appropriate binary.
| 0 | 811 | false | 0 | 1 |
Emacs on Mac for Python - python-mode keeps using the default Python version
| 4,141,003 |
4 | 5 | 1 | 4 | 6 | 0 | 0.158649 | 0 |
I have projects in C++, Java and Python. Projects in C++ export SWIG interfaces so they can be used by Java and Python projects.
My question is: what building mechanism can I use to manage dependencies and build these projects?
I have used SCons and GYP. They are fairly easy to use and allow plugins (code-generators, compilers, packers). I'd like to know whether there are alternatives, in particular with native support for C++, Java and Python.
I develop in Linux platform, but I'd like to be able to build in mac and win platforms as well.
| 0 |
java,c++,python,scons,gyp
|
2010-11-10T05:32:00.000
| 1 | 4,141,511 |
I tried to do a Java / C++ / C++-to-Java SWIG (+ Protocol Buffers) project in CMake and it was horrible! In such a case the problem with CMake is that the scripting language is extremely limited. I switched to SCons and everything got much easier.
| 0 | 2,318 | false | 0 | 1 |
What are the SCons alternatives?
| 7,201,456 |
4 | 5 | 1 | 1 | 6 | 0 | 0.039979 | 0 |
I have projects in C++, Java and Python. Projects in C++ export SWIG interfaces so they can be used by Java and Python projects.
My question is: what building mechanism can I use to manage dependencies and build these projects?
I have used SCons and GYP. They are fairly easy to use and allow plugins (code-generators, compilers, packers). I'd like to know whether there are alternatives, in particular with native support for C++, Java and Python.
I develop in Linux platform, but I'd like to be able to build in mac and win platforms as well.
| 0 |
java,c++,python,scons,gyp
|
2010-11-10T05:32:00.000
| 1 | 4,141,511 |
For Java and C++ projects you can take a look at Maven + the maven-nar-plugin, but for Python I really don't know the best option. Maybe other tools like CMake would fit better.
| 0 | 2,318 | false | 0 | 1 |
What are the SCons alternatives?
| 4,142,509 |
4 | 5 | 1 | 9 | 6 | 0 | 1 | 0 |
I have projects in C++, Java and Python. Projects in C++ export SWIG interfaces so they can be used by Java and Python projects.
My question is: what building mechanism can I use to manage dependencies and build these projects?
I have used SCons and GYP. They are fairly easy to use and allow plugins (code-generators, compilers, packers). I'd like to know whether there are alternatives, in particular with native support for C++, Java and Python.
I develop in Linux platform, but I'd like to be able to build in mac and win platforms as well.
| 0 |
java,c++,python,scons,gyp
|
2010-11-10T05:32:00.000
| 1 | 4,141,511 |
CMake
I use and prefer it for my projects.
There's also Rake (comes with Ruby, but can be used for anything), which I regard rather highly.
| 0 | 2,318 | false | 0 | 1 |
What are the SCons alternatives?
| 4,141,589 |
4 | 5 | 1 | 1 | 6 | 0 | 0.039979 | 0 |
I have projects in C++, Java and Python. Projects in C++ export SWIG interfaces so they can be used by Java and Python projects.
My question is: what building mechanism can I use to manage dependencies and build these projects?
I have used SCons and GYP. They are fairly easy to use and allow plugins (code-generators, compilers, packers). I'd like to know whether there are alternatives, in particular with native support for C++, Java and Python.
I develop on Linux, but I'd like to be able to build on Mac and Windows platforms as well.
| 0 |
java,c++,python,scons,gyp
|
2010-11-10T05:32:00.000
| 1 | 4,141,511 |
In the Java world, Ant is the "lingua franca" of build systems.
Ant supports a C++ task via ant-contrib, so you can compile your C++ code.
With Ant's exec task you can still run SWIG on the C++ code to generate the wrappers.
Then the standard javac/jar tasks can be used to build the Java application.
| 0 | 2,318 | false | 0 | 1 |
What are the SCons alternatives?
| 4,143,403 |
4 | 5 | 0 | 3 | 11 | 0 | 0.119427 | 0 |
Does anyone know of a "language level" facility for pickling in C++? I don't want something like Boost serialization, or Google Protocol Buffers. Instead, something that could automatically serialize all the members of a class (with an option to exclude some members, either because they're not serializable, or else because I just don't care to save them for later). This could be accomplished with an extra action at parse time, that would generate code to handle the automatic serialization. Has anyone heard of anything like that?
| 0 |
c++,python,serialization,boost,pickle
|
2010-11-10T21:07:00.000
| 0 | 4,149,086 |
something that could automatically
serialize all the members of a class
This is not possible in C++. Python, C#, Java et al. use run-time introspection to achieve this. You can't do that in C++; RTTI is not powerful enough.
In essence, there is nothing in the C++ language that would enable someone to discover the member variables of an object at run-time. Without that, you can't automatically serialize them.
| 0 | 3,405 | false | 0 | 1 |
Python-style pickling for C++?
| 4,149,474 |
4 | 5 | 0 | 0 | 11 | 0 | 0 | 0 |
Does anyone know of a "language level" facility for pickling in C++? I don't want something like Boost serialization, or Google Protocol Buffers. Instead, something that could automatically serialize all the members of a class (with an option to exclude some members, either because they're not serializable, or else because I just don't care to save them for later). This could be accomplished with an extra action at parse time, that would generate code to handle the automatic serialization. Has anyone heard of anything like that?
| 0 |
c++,python,serialization,boost,pickle
|
2010-11-10T21:07:00.000
| 0 | 4,149,086 |
One quick way to do this, which I got working once when I needed to save a struct to a file, was to cast my struct to a char array and write it out to a file. Then when I wanted to load my struct back in, I would read the entire file (in binary mode) and cast the whole thing to my struct's type. Easy enough, and it exploits the fact that structs are stored as a contiguous block in memory. I wouldn't expect this to work with convoluted data structures or pointers, though, but it's food for thought.
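Since the question contrasts this with Python's pickling, here is what the same fixed-layout dump looks like sketched with Python's standard struct module, purely for comparison; the C++ version is the raw byte copy described above, with the same caveats about pointers and padding:

```python
import struct

# A fixed binary layout: one int and one double, like a simple POD struct.
RECORD = struct.Struct("id")

# "Serialize": flatten the fields into one contiguous block of bytes.
blob = RECORD.pack(42, 3.5)

# "Deserialize": reinterpret the same bytes as the original fields.
count, ratio = RECORD.unpack(blob)
```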
| 0 | 3,405 | false | 0 | 1 |
Python-style pickling for C++?
| 4,149,445 |
4 | 5 | 0 | 7 | 11 | 0 | 1.2 | 0 |
Does anyone know of a "language level" facility for pickling in C++? I don't want something like Boost serialization, or Google Protocol Buffers. Instead, something that could automatically serialize all the members of a class (with an option to exclude some members, either because they're not serializable, or else because I just don't care to save them for later). This could be accomplished with an extra action at parse time, that would generate code to handle the automatic serialization. Has anyone heard of anything like that?
| 0 |
c++,python,serialization,boost,pickle
|
2010-11-10T21:07:00.000
| 0 | 4,149,086 |
I don't believe there's any way to do this in a language with no run-time introspection capabilities.
| 0 | 3,405 | true | 0 | 1 |
Python-style pickling for C++?
| 4,149,141 |
4 | 5 | 0 | 1 | 11 | 0 | 0.039979 | 0 |
Does anyone know of a "language level" facility for pickling in C++? I don't want something like Boost serialization, or Google Protocol Buffers. Instead, something that could automatically serialize all the members of a class (with an option to exclude some members, either because they're not serializable, or else because I just don't care to save them for later). This could be accomplished with an extra action at parse time, that would generate code to handle the automatic serialization. Has anyone heard of anything like that?
| 0 |
c++,python,serialization,boost,pickle
|
2010-11-10T21:07:00.000
| 0 | 4,149,086 |
There's the standard C++ serialization with the << and >> operators, although you'll have to implement these for each of your classes (which it sounds like you don't want to do). Some practitioners say you should always implement these operators, although of course, most of us rarely do.
| 0 | 3,405 | false | 0 | 1 |
Python-style pickling for C++?
| 4,149,128 |
2 | 4 | 0 | 1 | 10 | 1 | 0.049958 | 0 |
I'm a 5 years Python programmer, but shortly I'll be also working with PHP. Could you recommend me some readings to getting in touch with this language having in mind my Python skills?
| 0 |
php,python
|
2010-11-10T22:46:00.000
| 0 | 4,149,861 |
If you are familiar with MVC, start learning with Zend Framework; I think it will be easier for you to understand PHP that way and to start off on the right foot with proper PHP development.
Object-oriented business logic is the same in any language.
I really want to get into Python, so we can exchange knowledge ;)
| 0 | 3,823 | false | 0 | 1 |
PHP for Python programmers?
| 4,149,942 |
2 | 4 | 0 | 0 | 10 | 1 | 0 | 0 |
I'm a 5 years Python programmer, but shortly I'll be also working with PHP. Could you recommend me some readings to getting in touch with this language having in mind my Python skills?
| 0 |
php,python
|
2010-11-10T22:46:00.000
| 0 | 4,149,861 |
The best resource IMHO is still php.net. There are a ton of decent books out there, but I still prefer to rely on php.net for the latest and greatest.
| 0 | 3,823 | false | 0 | 1 |
PHP for Python programmers?
| 4,149,875 |
1 | 2 | 0 | 0 | 10 | 0 | 0 | 1 |
Using Python, how might one read a file's path from a remote server?
This is a bit more clear to me on my local PC.
| 0 |
python
|
2010-11-12T09:58:00.000
| 0 | 4,163,456 |
Use the os.path module to manipulate path strings (you need to import os).
The current directory is os.path.abspath(os.curdir).
Join two parts of a path with os.path.join(dirname, filename): this will take care of inserting the right path separator ('\' or '/', depending on the operating system) when building the path.
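A quick sketch of those calls (the directory and file names here are made up):

```python
import os

# Absolute path of the current working directory.
here = os.path.abspath(os.curdir)

# Join path parts with the right separator for the operating system.
config_path = os.path.join(here, "settings", "app.conf")

# Split a path back into (directory, file name).
directory, filename = os.path.split(config_path)
```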
| 0 | 36,179 | false | 0 | 1 |
Python - how to read path file/folder from server
| 4,164,507 |
1 | 1 | 0 | 0 | 1 | 0 | 1.2 | 0 |
I have my own unit testing suite based on the unittest library. I would like to track the history of each test case being run. I would also like to identify after each run tests which flipped from PASS to FAIL or vice versa.
I have very little knowledge about databases, but it seems that I could utilize sqlite3 for this task.
Are there any existing solutions which integrate unittest and a database?
| 1 |
python,unit-testing,sqlite
|
2010-11-13T01:33:00.000
| 0 | 4,170,442 |
Technically, yes. The only thing that you need is some kind of scripting language or shell script that can talk to SQLite.
You should think of a database like a file in a file system where you don't have to care about the file format. You just say: here are tables of data, with columns, and each row is one record. Much like an Excel table.
So if you are familiar with shell scripts or calling command-line tools, you can install SQLite and use the sqlite3 command-line tool to interact with the database.
Although I think the first thing you should do is to learn basic SQL. There are a lot of SQL tutorials out there.
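Since the suite is Python-based, you can also skip the shell entirely and use Python's built-in sqlite3 module directly. A minimal sketch of a results table and the "flipped tests" query from the question (table and column names are just suggestions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path to keep history on disk
conn.execute(
    "CREATE TABLE IF NOT EXISTS results (run_id INTEGER, test_name TEXT, outcome TEXT)"
)

# Record the outcome of each test per run (here: run 1 passed, run 2 failed).
conn.executemany(
    "INSERT INTO results VALUES (?, ?, ?)",
    [(1, "test_login", "PASS"), (2, "test_login", "FAIL")],
)

# Tests whose outcome flipped between run 1 and run 2.
flips = conn.execute("""
    SELECT a.test_name FROM results a
    JOIN results b ON a.test_name = b.test_name
    WHERE a.run_id = 1 AND b.run_id = 2 AND a.outcome <> b.outcome
""").fetchall()
```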
| 0 | 280 | true | 0 | 1 |
Using sqlite3 to track unit test results
| 4,170,458 |
1 | 1 | 0 | 3 | 1 | 1 | 1.2 | 0 |
About the only reason I can think of to distribute a python package as an egg is so that you can not include the .py files with your package (and only include .pyc files, which is a dubious way to protect your code anyway). Aside from that, I can't really think of any reason to upload a package as an egg rather than an sdist. In fact, pip doesn't even support eggs.
Is there any real reason to use an egg rather than an sdist?
| 0 |
python,packaging,egg,sdist
|
2010-11-13T01:42:00.000
| 0 | 4,170,477 |
One reason: eggs can include compiled C extension modules so that the end user does not need to have the necessary build tools and possible additional headers and libraries to build the extension module from scratch. The drawback to that is that the packager may need to supply multiple eggs to match each targeted platform and Python configuration. If there are many supported configurations, that can prove to be a daunting task but it can be effective for more homogenous environments.
| 0 | 251 | true | 0 | 1 |
Why would one use an egg over an sdist?
| 4,170,898 |
1 | 3 | 0 | 0 | 3 | 1 | 0 | 0 |
I'm looking at writing a mashup app that will take submission titles from a subreddit and attempt to plot them on a map based on where they are likely to be relevant. I'd also like to add on things like Twitter later on.
What I'm having difficulty planning is how to detect the country most likely to be relevant from the title. My first guess is to have a list of countries, along with their matching permutations (e.g. "English" matches "England", etc.), and check for occurrences of those items in the text. However, this is probably going to be quite slow and will require me to list the possessive* name for each country.
I'm planning on doing this in Python (so as to learn to use it), so I'm wondering: is there a) a library that does this (that I can learn from), or b) a more obvious way to do it?
To give an idea of the types of input I'm working with here are some samples and what I'm trying to get out of them:
"Well they can't arrest all of us - Giving the middle finger to the British legal system (pic)"
Keyword: British (Great Britain)
"Poll: Wikileaks Assange leading Time 'Person of the Year' - Assange, an Australian who has become a thorn in the side of the Pentagon with his releases of secret US military documents about the wars in Iraq and Afghanistan, had received 21,736 votes as of Friday."
Keywords: Afghanistan, Iraq, [Australian] (Afghanistan, Iraq, [Australia]) - Australia would be difficult to catch out as mainly irrelevant but this is acceptable for my purposes
"Cyber attack on Nobel peace prize website launched. Stay classy, China."
Keyword: China (China)
"A Jewish surgeon refuses to operate on a patient and walks out of the operating room after discovering a nazi tattoo on the patient's arm."
Keywords: none - acceptable for my purposes
* This is probably the wrong word to use
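A naive version of the lookup described above might look like this (the country/demonym table is obviously just a tiny sample, and real text would need smarter matching):

```python
import re

# Tiny sample mapping of country names/demonyms to a canonical country.
COUNTRY_WORDS = {
    "british": "Great Britain",
    "england": "Great Britain",
    "china": "China",
    "iraq": "Iraq",
    "afghanistan": "Afghanistan",
    "australian": "Australia",
}

def detect_countries(title):
    """Return the countries whose name or demonym appears in the title."""
    words = re.findall(r"[a-z]+", title.lower())
    return sorted({COUNTRY_WORDS[w] for w in words if w in COUNTRY_WORDS})
```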
| 0 |
python,categorization
|
2010-11-13T01:50:00.000
| 0 | 4,170,503 |
Use a full-text search index in MySQL, then use AJAX calls to query against your database.
| 0 | 2,432 | false | 0 | 1 |
Extracting a country name from a text string
| 4,170,680 |
1 | 1 | 0 | 5 | 3 | 0 | 1.2 | 1 |
I am looking for tutorials and/or examples of certain components of a social network web app that may include Python code examples of:
user account auto-gen function(database)
friend/follow function (Twitter/Facebook style)
messaging/reply function (Twitter style)
live chat function (Facebook style)
blog function
public forums (like Get Satisfaction or Stack Overflow)
profile page template auto-gen function
I just want to start getting my head around how Python can be used to make these features. I am not looking for a solution like Pinax since it is built upon Django and I will be ultimately using Pylons or just straight up Python.
| 0 |
python,social-networking,pylons,get-satisfaction
|
2010-11-13T17:49:00.000
| 0 | 4,173,883 |
So you're not interested in a fixed solution but want to program it yourself, do I get that correctly? If not: go with a fixed solution. This will be a lot of programming effort, and whatever you want to do afterwards, doing it in another framework than you intended will be a much smaller problem.
But if you're actually interested in the programming experience, and you haven't found any tutorials googling for, say, "messaging python tutorial", then that's because these are large-scale projects; if you describe a project of this size, you're so many miles above the actual lines of code that the concrete programming language almost doesn't matter (or at least you don't get stuck with the details). So you need to break these things down into smaller components.
For example, the friend/follow function: how to insert stuff into a table with a user id, how to keep a table of follow relations, how to query all texts from the people a user is following (of course there are also infrastructural issues once you hit >100,000 people, but you get the idea ;). Then you can ask yourself: which part of this do I not know how to do in Python? If your problem, on the other hand, is breaking the problems down into these subproblems, you need to start looking for help on that, but that's probably not language specific (so you might just want to start googling for "architecture friend feed" or whatever). Also, you could ask that here (beware, each bullet point makes for a huge question in itself ;). Finally, you could dig into the Pinax code (I don't know it, but I assume it's open source) and see how they're doing it. You could try porting some of their stuff to Pylons, for example, so you don't have to reinvent their wheel, learn how they do it, end up in the framework you wanted, and maybe even create something reusable by others.
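To make the friend/follow example concrete, this is roughly the schema and query it boils down to, sketched with the standard library's sqlite3 (all names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts (user_id INTEGER, body TEXT);
    CREATE TABLE follows (follower_id INTEGER, followed_id INTEGER);
""")

# User 1 follows user 2; user 2 posts something.
conn.execute("INSERT INTO follows VALUES (1, 2)")
conn.execute("INSERT INTO posts VALUES (2, 'hello from user 2')")

# User 1's feed: all texts from the people she's following.
feed = conn.execute("""
    SELECT p.body FROM posts p
    JOIN follows f ON p.user_id = f.followed_id
    WHERE f.follower_id = 1
""").fetchall()
```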
Sorry for the tl;dr; that's because I don't have a concrete URL to point you to!
| 0 | 2,075 | true | 0 | 1 |
Where can I find Python code examples, or tutorials, of social networking style functions/components?
| 4,174,212 |
1 | 4 | 0 | 4 | 2 | 1 | 0.197375 | 0 |
I'm using Emacs for Python, and I'd like to have a nice, usable shell in Emacs to run an interpreter alongside my editing.
Is there a better Emacs shell package out there? The default shell is awful.
| 0 |
python,emacs
|
2010-11-13T20:51:00.000
| 0 | 4,174,633 |
That depends on what shell you are using; in GNU Emacs 23 there are at least 3 built in:
shell - ugly, tab completion not working
eshell - not ugly, but tab completion not working
term - not ugly, and ipython seems to work with all its goodies in it
So you might want to try term mode.
| 0 | 2,955 | false | 0 | 1 |
Is there any way to get a better terminal in emacs?
| 4,174,664 |
1 | 1 | 0 | 10 | 4 | 0 | 1.2 | 0 |
I have some Python scripts that I run on my desktop now for cutting up files. I want to put them on the web and write a simple front-end in PHP where a user uploads a file and it is passed as an argument to a python script on the web server and it is written out in chunks and the user can re-download the chunks.
I know a decent amount of PHP, but I do not know:
How to mix PHP and Python programmatically
Is it possible to have a webpage in Python that can just call the Python script? Can one have a page like zzz.com/text.py, for example?
| 0 |
php,python,apache
|
2010-11-14T00:04:00.000
| 0 | 4,175,419 |
For HTTP requests, you need to set up your web server to hand certain requests over to PHP and others to Python. From within PHP scripts, if you need to call some Python executable scripts, use one of PHP's shell functions, e.g. exec().
Yes, it is possible. The djangobook is a nice tutorial that covers this in one of the earlier chapters. It shows you how to run Python as a CGI or with Apache.
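For reference, a plain CGI script in Python is nothing more than a program that writes an HTTP header block, a blank line, and then the body to standard output. A minimal sketch (contents are just an illustration):

```python
#!/usr/bin/env python
import sys

def response():
    # A CGI response is just headers, a blank line, then the body.
    return "Content-Type: text/html\r\n\r\n<h1>Hello from Python</h1>\n"

if __name__ == "__main__":
    sys.stdout.write(response())
```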
On a personal note, if you have time to dig deeper into Python, I'd strongly encourage you to do the whole thing in it, rather than mix things with PHP. My experience tells me that there are probably more cases where a PHP app needs some Python support rather than the reverse.
If the supporting language can do everything that the main language does, what's the point of using the main language?
| 0 | 4,155 | true | 1 | 1 |
Mixing Python and PHP?
| 4,175,473 |
1 | 2 | 0 | 1 | 4 | 0 | 0.099668 | 1 |
I want to do a load test for a web page. I want to do it in Python with multiple threads.
The first POST request would log the user in (set cookies).
Then I need to know how many users doing the same POST request simultaneously the server can handle.
So I'm thinking about spawning threads in which the requests would be made in a loop.
I have a couple of questions:
1. Is it possible to run 1000-1500 requests at the same time, CPU-wise? I mean, wouldn't it slow down the system so much that it's not reliable anymore?
2. What about bandwidth limitations? How good does the channel need to be for this test to be reliable?
The server on which the test site is hosted is Amazon EC2; the script would be run from another server (also Amazon).
Thanks!
| 0 |
python,multithreading,load-testing
|
2010-11-14T21:46:00.000
| 0 | 4,179,879 |
Too many variables. 1000 at the same time... no. In the same second... possibly. Bandwidth may well be the bottleneck. This is something best solved by experimentation.
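If you do experiment, the skeleton is simple: N threads each calling a request function in a loop and recording failures. A sketch with the actual HTTP call left as a parameter (swap in your urllib2/httplib login POST; the function and parameter names below are made up):

```python
import threading

def run_load_test(request_fn, num_threads=10, requests_per_thread=5):
    """Fire num_threads workers, each calling request_fn in a loop."""
    errors = []
    lock = threading.Lock()

    def worker():
        for _ in range(requests_per_thread):
            try:
                request_fn()
            except Exception as exc:
                with lock:  # guard the shared error list
                    errors.append(exc)

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return errors
```

Usage would be something like `run_load_test(lambda: opener.open(url, post_data).read(), 100, 10)`, timing each call if you also want latency numbers.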
| 0 | 8,721 | false | 0 | 1 |
Python script load testing web page
| 4,180,003 |
1 | 4 | 0 | 4 | 23 | 0 | 0.197375 | 0 |
I want to execute a Python script on several (15+) remote machines using SSH. After invoking the script/command, I need to disconnect the SSH session and keep the processes running in the background for as long as they are required.
I have used Paramiko and PySSH in the past, so I have no problem using them again. The only thing I need to know is how to disconnect an SSH session in Python (since normally the local script would wait for each remote machine to finish processing before moving on).
| 0 |
python,ssh
|
2010-11-14T23:40:00.000
| 1 | 4,180,390 |
On Linux machines, you can run the script with 'at'.
echo "python scriptname.py" | at now
| 0 | 51,995 | false | 0 | 1 |
Execute remote python script via SSH
| 4,180,771 |
2 | 3 | 0 | 1 | 5 | 1 | 0.066568 | 0 |
I have a system currently written in Python that can be separated into backend and frontend layers. Python is too slow, so I want to rewrite the backend in a fast compiled language while keeping the frontend in Python, in a way that lets the backend functionality be called from Python. What are the best choices to do so?
I've considered cython but it's very limited and cumbersome to write, and not that much faster. From what I remember of Boost Python for C++, it's very annoying to maintain the bridge between languages. Are there better choices?
My main factors are:
speed of execution
speed of compilation
language is declarative
| 0 |
java,c++,python,boost-python
|
2010-11-15T19:58:00.000
| 0 | 4,188,273 |
I would disagree about Boost.Python. It can get cumbersome when wrapping an existing C++-centric library while trying not to change the interface. But that is not what you are looking to do.
You are looking to push the heavy lifting of an existing Python solution into a faster language. That means that you can control the interface.
If you are in control of the interface, you can keep it Python-friendly and Boost.Python-friendly (i.e., avoid problematic things like pointers and immutable types as l-values).
In that case, Boost.Python can be as simple as telling it which functions you want to call from Python.
| 0 | 247 | false | 0 | 1 |
Language choices for writing very fast abstractions interfacing with Python?
| 4,227,430 |
2 | 3 | 0 | 2 | 5 | 1 | 0.132549 | 0 |
I have a system currently written in Python that can be separated into backend and frontend layers. Python is too slow, so I want to rewrite the backend in a fast compiled language while keeping the frontend in Python, in a way that lets the backend functionality be called from Python. What are the best choices to do so?
I've considered cython but it's very limited and cumbersome to write, and not that much faster. From what I remember of Boost Python for C++, it's very annoying to maintain the bridge between languages. Are there better choices?
My main factors are:
speed of execution
speed of compilation
language is declarative
| 0 |
java,c++,python,boost-python
|
2010-11-15T19:58:00.000
| 0 | 4,188,273 |
If you used Jython you could call into Java back-end routines easily (trivially). Java is about twice as slow as C and 10x faster than Python, last time I checked.
| 0 | 247 | false | 0 | 1 |
Language choices for writing very fast abstractions interfacing with Python?
| 4,188,359 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 |
I would like to be able to access all the components of say a Flash image gallery on someone else's site. I want to be able to find the images, image coordinates, action script code, audio files, video, etc. I do not want to manipulate these elements, I just want to view them and their related information.
Is this possible via scripting languages like Ruby, Python or Javascript?
| 0 |
javascript,python,flash
|
2010-11-16T19:20:00.000
| 0 | 4,198,069 |
No, not really. Not like you can examine the DOM of a webpage. You can download and decompile the swf, but you may or may not be able to get all the info you want out.
| 0 | 110 | false | 1 | 1 |
Is it possible to access the internal elements of an embedded Flash object via a scripting language?
| 4,198,087 |
2 | 2 | 0 | 0 | 0 | 0 | 1.2 | 0 |
I would like to be able to access all the components of say a Flash image gallery on someone else's site. I want to be able to find the images, image coordinates, action script code, audio files, video, etc. I do not want to manipulate these elements, I just want to view them and their related information.
Is this possible via scripting languages like Ruby, Python or Javascript?
| 0 |
javascript,python,flash
|
2010-11-16T19:20:00.000
| 0 | 4,198,069 |
You can if (and only if) your application domain is the same.
| 0 | 110 | true | 1 | 1 |
Is it possible to access the internal elements of an embedded Flash object via a scripting language?
| 4,198,271 |
1 | 2 | 0 | 3 | 0 | 0 | 0.291313 | 0 |
I am completely new to Python -- never used it before today. I am interested in developing Python applications for the web. I would like to check whether my web server supports WSGI or running Python apps in some way.
Let's say I have a .py file that prints "Hello world!". How can I test to see if my server supports processing this file?
FYI, this is a Mac OS X server 10.5. So I know Python is installed (It's installed on Mac OS X by default), but I don't know if it's set up to process .py files server-side and return the results.
BTW, I'm coming from a PHP background, so this is a bit foreign to me. I've looked at the Python docs re: WSGI, CGI, etc., but since I haven't done anything concrete yet, it's not quite making sense.
| 0 |
python,wsgi
|
2010-11-16T21:57:00.000
| 0 | 4,199,442 |
If you are new to Python and Python web application development, then ignore all the hosting issues to begin with and don't start from scratch. Simply go get a full-featured Python web framework such as Django or web2py and learn how to write Python web applications using their built-in development web server. You will only cause yourself much pain by trying to solve the distinct problem of production web hosting first.
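That said, if all you want is the "Hello world!" check from the question, the smallest possible WSGI app needs nothing beyond the standard library's wsgiref:

```python
from wsgiref.simple_server import make_server

def application(environ, start_response):
    """The whole WSGI protocol in one function: status, headers, body."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello world!\n"]

# To try it locally, uncomment and visit http://localhost:8000/ :
#     make_server("", 8000, application).serve_forever()
```

Any WSGI-capable server (mod_wsgi under Apache, for example) can host that same `application` callable.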
| 0 | 2,547 | false | 0 | 1 |
Beginner Python question about making a web app
| 4,200,386 |
4 | 6 | 0 | 1 | 45 | 1 | 0.033321 | 0 |
Domain-driven design has become my architecture of choice. I've been able to find an abundance of books & tutorials on applying DDD principles within the ASP.NET framework. It mostly seems inspired by what Java developers have been doing for a good while now.
For my personal projects, I'm starting to lean more towards Python, even though I'm finding it difficult to abandon static typing. I was hoping to find lots of help with applying DDD using a dynamic language. There doesn't seem to be anything out there about Python & DDD. Why is that? Obviously DDD can apply quite well to Python. Do people not take on projects as large in Python? Or is applying DDD simply easier in Python given the dynamic typing, therefore reducing the amount of required learning?
Perhaps my questioning is due to my lack of experience with Python. Any advice you might have for me will be appreciated.
| 0 |
python,domain-driven-design
|
2010-11-17T05:47:00.000
| 0 | 4,201,846 |
If Domain Driven Design is an effectively defined design pattern, why does it matter what language you're using? Advice for design philosophies and the like should be largely language agnostic. They're higher level than the language, so to speak.
| 0 | 14,335 | false | 1 | 1 |
Why does domain driven design seem only popular with static languages like C# & Java?
| 4,205,497 |
4 | 6 | 0 | 2 | 45 | 1 | 0.066568 | 0 |
Domain-driven design has become my architecture of choice. I've been able to find an abundance of books & tutorials on applying DDD principles within the ASP.NET framework. It mostly seems inspired by what Java developers have been doing for a good while now.
For my personal projects, I'm starting to lean more towards Python, even though I'm finding it difficult to abandon static typing. I was hoping to find lots of help with applying DDD using a dynamic language. There doesn't seem to be anything out there about Python & DDD. Why is that? Obviously DDD can apply quite well to Python. Do people not take on projects as large in Python? Or is applying DDD simply easier in Python given the dynamic typing, therefore reducing the amount of required learning?
Perhaps my questioning is due to my lack of experience with Python. Any advice you might have for me will be appreciated.
| 0 |
python,domain-driven-design
|
2010-11-17T05:47:00.000
| 0 | 4,201,846 |
Most books on design/coding techniques such as TDD and design patterns are written in Java or C#, since those are currently the lowest-common-denominator languages with the widest user base, or at least the largest base of people who can read and understand the language. This is done largely for marketing reasons so that they appeal to the largest demographic.
That does not mean the techniques are not applicable to or used in other languages. From what I know of DDD, most of the principles are language independent, and AFAICR the original DDD book had almost no code samples in it (but it is a couple of years since I read it, so I may be mistaken).
| 0 | 14,335 | false | 1 | 1 |
Why does domain driven design seem only popular with static languages like C# & Java?
| 4,224,643 |
4 | 6 | 0 | 5 | 45 | 1 | 0.16514 | 0 |
Domain-driven design has become my architecture of choice. I've been able to find an abundance of books & tutorials on applying DDD principles within the ASP.NET framework. It mostly seems inspired by what Java developers have been doing for a good while now.
For my personal projects, I'm starting to lean more towards Python, even though I'm finding it difficult to abandon static typing. I was hoping to find lots of help with applying DDD using a dynamic language. There doesn't seem to be anything out there about Python & DDD. Why is that? Obviously DDD can apply quite well to Python. Do people not take on projects as large in Python? Or is applying DDD simply easier in Python given the dynamic typing, therefore reducing the amount of required learning?
Perhaps my questioning is due to my lack of experience with Python. Any advice you might have for me will be appreciated.
| 0 |
python,domain-driven-design
|
2010-11-17T05:47:00.000
| 0 | 4,201,846 |
Python seems not to have been too popular in enterprises so far compared to Java (but I believe the wind is blowing in that direction; an example is Django, which was created by a newspaper company). Most programmers working with Python are likely either into scientific computing or into web applications. Both of these fields relate to (computer) science, not domain-specific businesses, whereas DDD is most applicable within domain-specific businesses.
So I would argue that it is mostly a matter of legacy. C# and Java were targeted towards enterprise applications from the start.
| 0 | 14,335 | false | 1 | 1 |
Why does domain driven design seem only popular with static languages like C# & Java?
| 12,297,993 |
4 | 6 | 0 | 20 | 45 | 1 | 1.2 | 0 |
Domain-driven design has become my architecture of choice. I've been able to find an abundance of books & tutorials on applying DDD principles within the ASP.NET framework. It mostly seems inspired by what Java developers have been doing for a good while now.
For my personal projects, I'm starting to lean more towards Python, even though I'm finding it difficult to abandon static typing. I was hoping to find lots of help with applying DDD using a dynamic language. There doesn't seem to be anything out there about Python & DDD. Why is that? Obviously DDD can apply quite well to Python. Do people not take on projects as large in Python? Or is applying DDD simply easier in Python given the dynamic typing, therefore reducing the amount of required learning?
Perhaps my questioning is due to my lack of experience with Python. Any advice you might have for me will be appreciated.
| 0 |
python,domain-driven-design
|
2010-11-17T05:47:00.000
| 0 | 4,201,846 |
I think it is definitely popular elsewhere, especially in functional languages. However, certain patterns associated with the Big Blue Book are not as applicable in dynamic languages, and frameworks like Rails tend to lead people away from ideas of bounded context.
However, the true thrust of DDD, the ubiquitous language, is certainly prevalent in dynamic languages. Rubyists especially take a great deal of joy in constructing domain-specific languages; think of how Cucumber features end up looking: that's as DDD as it gets!
Keep in mind, DDD is not a new idea at all; it was just repackaged in a way that got good uptake from the C# and Java crowds. Those same ideas are around elsewhere under different banners.
Why does domain driven design seem only popular with static languages like C# & Java?
| 4,208,311 |
2 | 3 | 0 | -1 | 5 | 0 | 1.2 | 0 |
Is there a way to run syncdb without loading fixtures?
xo
| 0 |
python,django,django-models
|
2010-11-17T07:34:00.000
| 0 | 4,202,358 |
Rename the fixture to something other than initial_data.
| 0 | 1,286 | true | 0 | 1 |
how do run syncdb without loading fixtures?
| 4,203,124 |
2 | 3 | 0 | 0 | 5 | 0 | 0 | 0 |
is there a way to run syncdb without loading fixtures?
xo
| 0 |
python,django,django-models
|
2010-11-17T07:34:00.000
| 0 | 4,202,358 |
Best to name your fixtures something_else.json, then run syncdb (and migrate if needed), followed by manage.py loaddata something_else.json.
| 0 | 1,286 | false | 0 | 1 |
how do run syncdb without loading fixtures?
| 15,206,734 |
3 | 4 | 0 | 4 | 3 | 1 | 0.197375 | 0 |
I want to learn python, but I feel I should learn C or C++ to get a solid base to build on. I already know some C/C++ as well as other programming languages, which does help. So, should I master C/C++ first?
| 0 |
c++,python,c
|
2010-11-17T07:54:00.000
| 0 | 4,202,455 |
I would say it depends on what you want to achieve (cheesy answer...)
The truth is, learning language is a long process. If you plan on learning a language as a step toward learning another language, you're probably wasting your time.
It takes a good year to be proficient with C++, and that is with basic knowledge of algorithms and object concepts. And I only mean proficient, meaning you can get things done, but certainly not expert or anything.
So the real question is, do you want to spend a year learning C++ before beginning to learn Python ?
If the ultimate goal is to program in Python... it doesn't seem worth it.
| 0 | 12,034 | false | 0 | 1 |
Is it worth learning C/C++ before learning Python?
| 4,202,951 |
3 | 4 | 0 | 1 | 3 | 1 | 0.049958 | 0 |
I want to learn python, but I feel I should learn C or C++ to get a solid base to build on. I already know some C/C++ as well as other programming languages, which does help. So, should I master C/C++ first?
| 0 |
c++,python,c
|
2010-11-17T07:54:00.000
| 0 | 4,202,455 |
In my opinion you should definitely learn Python before attempting to learn C or C++, as you will get a better understanding of the core concepts. C++ is much lower level than Python, so you will need to write more code to do something that you can do in one line in Python.
| 0 | 12,034 | false | 0 | 1 |
Is it worth learning C/C++ before learning Python?
| 4,202,502 |
3 | 4 | 0 | 2 | 3 | 1 | 0.099668 | 0 |
I want to learn python, but I feel I should learn C or C++ to get a solid base to build on. I already know some C/C++ as well as other programming languages, which does help. So, should I master C/C++ first?
| 0 |
c++,python,c
|
2010-11-17T07:54:00.000
| 0 | 4,202,455 |
Real mastery of a language takes time and lots of practice; it's analogous to learning a natural language like French. You have to practice it a lot, but different languages teach you different programming methodologies.
Python and C++ are both object-oriented languages, so you will be learning the same programming methodology.
The order in which you learn languages doesn't really matter, but starting from a lower abstraction and moving to a higher one makes understanding some things easier.
| 0 | 12,034 | false | 0 | 1 |
Is it worth learning C/C++ before learning Python?
| 4,202,571 |
1 | 3 | 0 | 3 | 15 | 1 | 0.197375 | 0 |
Imagine a script is running in these 2 sets of "conditions":
live action, set up in sudo crontab
debug, when I run it from console ./my-script.py
What I'd like to achieve is an automatic detection of "debug mode", without me specifying an argument (e.g. --debug) for the script.
Is there a convention about how to do this? Is there a variable that can tell me who the script owner is? Whether script has a console at stdout? Run a ps | grep to determine that?
Thank you for your time.
| 0 |
python,bash,environment-variables,crontab
|
2010-11-18T09:01:00.000
| 1 | 4,213,091 |
Use a command line option that only cron will use.
Or a symlink to give the script a different name when called by cron. You can then use sys.argv[0] to distinguish between the two ways to call the script.
| 0 | 4,781 | false | 0 | 1 |
Detect if python script is run from console or by crontab
| 4,213,327 |
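A minimal sketch of the dispatch the answer above describes; the symlink name my-script-cron.py is hypothetical, and the isatty check is an additional common heuristic (cron runs jobs without a controlling terminal), not part of the answer:

```python
import os
import sys

def detect_mode():
    """Guess how the script was invoked: 'cron' or 'console'."""
    invoked_as = os.path.basename(sys.argv[0])
    if invoked_as == "my-script-cron.py":  # hypothetical symlink name cron calls
        return "cron"
    # Fallback heuristic: cron jobs have no controlling terminal on stdout.
    if not sys.stdout.isatty():
        return "cron"
    return "console"

print(detect_mode())
```

To use the symlink variant, you would `ln -s my-script.py my-script-cron.py` and point the crontab entry at the symlink.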
2 | 5 | 0 | 7 | 5 | 0 | 1 | 0 |
I'm trying to get the CPU serial or motherboard serial using C or Python for licensing purposes. Is it possible?
I'm using Linux.
| 0 |
python,c,licensing,cpu,motherboard
|
2010-11-18T14:49:00.000
| 1 | 4,216,009 |
Under Linux, you could use "lshw -quiet -xml" and parse its output. You'll find plenty of system information here: cpuid, motherboard id and much more.
| 0 | 7,942 | false | 0 | 1 |
Getting CPU or motherboard serial number?
| 4,216,127 |
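A sketch of parsing that output in Python, assuming lshw nests <serial> elements directly inside <node> elements (verify against your lshw version's XML); the sample XML below is synthetic:

```python
import subprocess
import xml.etree.ElementTree as ET

def serials_from_lshw_xml(xml_text):
    """Collect every <serial> value found in lshw-style XML output,
    keyed by the id attribute of the enclosing <node>."""
    root = ET.fromstring(xml_text)
    out = {}
    for node in root.iter("node"):
        serial = node.find("serial")  # direct child only
        if serial is not None and serial.text:
            out[node.get("id")] = serial.text.strip()
    return out

# On a real system (typically needs root, and lshw installed):
#   xml_text = subprocess.check_output(["lshw", "-quiet", "-xml"]).decode()

sample = """<list><node id="core">
  <node id="cpu"><serial>0000-1111</serial></node>
  <node id="memory"/></node></list>"""
print(serials_from_lshw_xml(sample))  # {'cpu': '0000-1111'}
```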
2 | 5 | 0 | 0 | 5 | 0 | 0 | 0 |
I'm trying to get the CPU serial or motherboard serial using C or Python for licensing purposes. Is it possible?
I'm using Linux.
| 0 |
python,c,licensing,cpu,motherboard
|
2010-11-18T14:49:00.000
| 1 | 4,216,009 |
CPUs no longer carry a serial number, and it's been like that for a while now. As for the CPUID, it's unique per CPU model, so it doesn't help with licensing.
| 0 | 7,942 | false | 0 | 1 |
Getting CPU or motherboard serial number?
| 4,223,022 |
2 | 3 | 0 | 4 | 0 | 1 | 1.2 | 1 |
I am working on a project that requires me to create multiple threads to download a large remote file. I have done this already, but I cannot understand why it takes longer to download the file with multiple threads compared to using just a single thread. I used my XAMPP localhost to carry out the elapsed-time test. I would like to know whether this is normal behaviour, or whether it is because I have not tried downloading from a real server.
Thanks
Kennedy
| 0 |
python,multithreading,download,urllib2
|
2010-11-18T20:15:00.000
| 0 | 4,219,134 |
9 women can't combine to make a baby in one month. If you have 10 threads, they each have only 10% the bandwidth of a single thread, and there is the additional overhead for context switching, etc.
| 0 | 1,871 | true | 0 | 1 |
Python/Urllib2/Threading: Single download thread faster than multiple download threads. Why?
| 4,219,434 |
2 | 3 | 0 | 1 | 0 | 1 | 0.066568 | 1 |
I am working on a project that requires me to create multiple threads to download a large remote file. I have done this already, but I cannot understand why it takes longer to download the file with multiple threads compared to using just a single thread. I used my XAMPP localhost to carry out the elapsed-time test. I would like to know whether this is normal behaviour, or whether it is because I have not tried downloading from a real server.
Thanks
Kennedy
| 0 |
python,multithreading,download,urllib2
|
2010-11-18T20:15:00.000
| 0 | 4,219,134 |
Twisted uses non-blocking I/O; that means if data is not available on a socket right now, it doesn't block the entire thread, so you can handle many socket connections waiting for I/O in one thread simultaneously. But if you're doing something other than I/O (parsing large amounts of data), you still block the thread.
When you're using the stdlib's socket module it does blocking I/O; that means when you call socket.recv and data is not available at the moment, it will block the entire thread, so you need one thread per connection to handle concurrent downloads.
These are two approaches to concurrency:
Fork a new thread for each new connection (threading + socket from stdlib).
Multiplex I/O and handle many connections in one thread (Twisted).
| 0 | 1,871 | false | 0 | 1 |
Python/Urllib2/Threading: Single download thread faster than multiple download threads. Why?
| 4,222,497 |
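The second approach, multiplexing many sockets in one thread, can be sketched with the stdlib selectors module; this is a minimal stand-in for what Twisted's reactor does, with socketpairs simulating two remote connections:

```python
import selectors
import socket

# Two socketpairs stand in for two remote connections.
a1, b1 = socket.socketpair()
a2, b2 = socket.socketpair()

sel = selectors.DefaultSelector()
for sock, name in ((b1, "conn-1"), (b2, "conn-2")):
    sock.setblocking(False)
    sel.register(sock, selectors.EVENT_READ, data=name)

# Simulate data arriving on both "connections".
a1.sendall(b"hello")
a2.sendall(b"world")

# One thread services whichever socket is ready: no thread per connection.
received = {}
while len(received) < 2:
    for key, _ in sel.select(timeout=1):
        received[key.data] = key.fileobj.recv(1024)

sel.close()
for s in (a1, b1, a2, b2):
    s.close()

print(sorted(received.items()))  # [('conn-1', b'hello'), ('conn-2', b'world')]
```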
1 | 6 | 0 | 0 | 13 | 0 | 0 | 0 |
Can someone recommend a project skeleton for python tornado? I suppose it's easy enough to roll my own but I'm curious what else is out there since (obviously) others have been down this road before.
| 0 |
python,tornado
|
2010-11-18T22:24:00.000
| 0 | 4,220,244 |
Tornado comes with a good number of examples, and the source is well documented with a few code snippets as well.
| 0 | 3,298 | false | 1 | 1 |
Defacto Project Template for Python Tornado
| 4,220,282 |
1 | 1 | 0 | 4 | 0 | 0 | 1.2 | 0 |
Just wondering, as I think about learning either PHP or Django (I have previous Python knowledge), what advantages do Python and Django have over PHP, what disadvantages etc.
I don't want to know which one is better, surely neither is better, both have their good sides as well as bad sides and I will probably learn both at some point. I don't want to start a flame war or anything, but please tell me some advantages and disadvantages for both to help me choose which one to learn first.
Thanks in advance!
| 0 |
php,python,django
|
2010-11-19T21:10:00.000
| 0 | 4,229,394 |
PHP is a popular language for web development with tons of libraries and examples online.
Python is a modern, well-designed programming language where everything is an object. It works well in many environments, including web programming, although it wasn't originally designed for that environment.
If you want a general purpose scripting language that can also be used for web development then learning Python would be a good idea. If you only plan to do web development and your main concern is to get a job, experience in PHP will make you attractive to a large number of potential employers who are already using this technology.
| 0 | 1,897 | true | 1 | 1 |
Python (with Django) and PHP
| 4,229,417 |
1 | 2 | 0 | 1 | 0 | 1 | 0.099668 | 0 |
I'm using simplejson to get data from the New York Time API. It works when I run the file through the terminal with the command "python test.py" but not when I run through TextMate using command + R. I'm running the exact same file. Why is this?
I am running Snow Leopard 10.6.4, TextMate 1.5.10, and Python 2.6.4.
Edit: Sorry for forgetting to include this: by "doesn't work," I mean it says "No module named simplejson". I also noticed that this happens for PyMongo as well ("No module named pymongo").
| 0 |
python,textmate,simplejson
|
2010-11-21T00:39:00.000
| 0 | 4,235,821 |
What doesn't work? You should provide more information like error messages and what-not. However, I assume that the version of python is different, and simplejson isn't on your PYTHONPATH when launched from textmate.
| 0 | 300 | false | 0 | 1 |
Why does simplejson work in Terminal and not TextMate?
| 4,235,847 |
1 | 2 | 0 | 3 | 10 | 0 | 0.291313 | 0 |
is there any way how to use Mechanize with Python 3.x?
Or is there any substitute which works in Python 3.x?
I've been searching for hours, but I didn't find anything :(
I'm looking for way how to login to the site with Python, but the site uses javascript.
Thanks in advance,
Adam.
| 0 |
python,login,screen-scraping,screen,mechanize
|
2010-11-21T09:18:00.000
| 0 | 4,237,164 |
lxml.html provides form handling facilities and supports Python 3.
| 0 | 3,776 | false | 0 | 1 |
Mechanize for Python 3.x
| 4,238,162 |
1 | 1 | 0 | 2 | 0 | 0 | 1.2 | 0 |
I am developing an email parsing application using python POP3 library on a linux server using Dovecot email server. I have parsed the emails to get the contents and the attachments etc. using POP3 library.
Now the issue is how to notify a user or actually the application that a new email has arrived? I guess there should be some notification system on email server itself which I am missing or something on linux which we can use to implement the same.
Please suggest.
Thanks in advance.
| 0 |
python,linux,email
|
2010-11-22T05:28:00.000
| 0 | 4,242,540 |
POP3 does not have push ability. Like a regular ol' post office you need to actually go to check your e-mail. IMAP does have functionality similar to (but not exactly the same as) mail pushing. I'd suggest taking a look at it.
| 0 | 240 | true | 0 | 1 |
Linux email server, how to know a new email has arrived
| 4,242,804 |
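Since POP3 can only be polled, a common pattern is to diff UIDL listings between polls to detect new mail; a minimal sketch (the poplib lines are illustrative, with hypothetical host and credentials):

```python
def new_message_uids(previous_uidls, current_uidls):
    """POP3 has no push, so a client polls and diffs UIDL listings
    to spot messages that arrived since the last check."""
    seen = set(previous_uidls)
    return [uid for uid in current_uidls if uid not in seen]

# Against a live server one would fetch the listing with poplib, e.g.:
#   import poplib
#   conn = poplib.POP3("mail.example.com")   # hypothetical host
#   conn.user("me"); conn.pass_("secret")    # hypothetical credentials
#   uidls = [line.split()[1] for line in conn.uidl()[1]]

print(new_message_uids([b"u1", b"u2"], [b"u1", b"u2", b"u3"]))  # [b'u3']
```

In a cron-style loop you would persist the previous listing between runs and feed it back in on the next poll.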
1 | 6 | 0 | 3 | 12 | 0 | 0.099668 | 1 |
I am looking for a python snippet to read an internet radio stream(.asx, .pls etc) and save it to a file.
The final project is cron'ed script that will record an hour or two of internet radio and then transfer it to my phone for playback during my commute. (3g is kind of spotty along my commute)
any snippits or pointers are welcome.
| 0 |
python,stream,audio-streaming,radio
|
2010-11-22T15:47:00.000
| 0 | 4,247,248 |
I am aware this is a year old, but this is still a viable question, which I have recently been fiddling with.
Most internet radio stations will give you an option of type of download, I choose the MP3 version, then read the info from a raw socket and write it to a file. The trick is figuring out how fast your download is compared to playing the song so you can create a balance on the read/write size. This would be in your buffer def.
Now that you have the file, it is fine to simply leave it on your drive (record), but most players will delete from file the already played chunk and clear the file out off the drive and ram when streaming is stopped.
I have used some code snippets from a file-archiving app (no compression) to handle a lot of the file handling, playing, and buffering magic. It's very similar in how the process flows. If you write up some pseudo-code (which I highly recommend) you can see the similarities.
| 0 | 16,163 | false | 0 | 1 |
Record streaming and saving internet radio in python
| 13,279,976 |
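The read/write balancing described above can be sketched as a chunked copy with a byte cap; the urlopen line is illustrative and the URL hypothetical:

```python
import io

def record_stream(src, dst, chunk_size=16 * 1024, max_bytes=None):
    """Copy an MP3 stream to a file in fixed-size chunks. chunk_size is
    the read/write balance knob; max_bytes caps the recording (e.g. one
    hour's worth at the stream's bitrate)."""
    written = 0
    while max_bytes is None or written < max_bytes:
        to_read = chunk_size if max_bytes is None else min(chunk_size, max_bytes - written)
        chunk = src.read(to_read)
        if not chunk:  # stream ended
            break
        dst.write(chunk)
        written += len(chunk)
    return written

# Against a real station one would open the URL, e.g.:
#   from urllib.request import urlopen
#   src = urlopen("http://example.com/stream.mp3")  # hypothetical URL

src = io.BytesIO(b"x" * 100_000)
dst = io.BytesIO()
print(record_stream(src, dst, max_bytes=50_000))  # 50000
```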
1 | 3 | 0 | 1 | 5 | 0 | 0.066568 | 1 |
For a research project, I am collecting tweets using Python-Twitter. However, when running our program nonstop on a single computer for a week we manage to collect about only 20 MB of data per week. I am only running this program on one machine so that we do not collect the same tweets twice.
Our program runs a loop that calls getPublicTimeline() every 60 seconds. I tried to improve this by calling getUserTimeline() on some of the users that appeared in the public timeline. However, this consistently got me banned from collecting tweets at all for about half an hour each time. Even without the ban, it seemed that there was very little speed-up by adding this code.
I know about Twitter's "whitelisting" that allows a user to submit more requests per hour. I applied for this about three weeks ago, and have not hear back since, so I am looking for alternatives that will allow our program to collect tweets more efficiently without going over the standard rate limit. Does anyone know of a faster way to collect public tweets from Twitter? We'd like to get about 100 MB per week.
Thanks.
| 0 |
python,twitter,python-twitter
|
2010-11-22T20:02:00.000
| 0 | 4,249,684 |
I did a similar project analyzing data from tweets. If you're just going at this from a pure data collection/analysis angle, you can just scrape any of the better sites that collect these tweets for various reasons. Many sites allow you to search by hashtag, so throw in a popular enough hashtag and you've got thousands of results. I just scraped a few of these sites for popular hashtags, collected these into a large list, queried that list against the site, and scraped all of the usable information from the results. Some sites also allow you to export the data directly, making this task even easier. You'll get a lot of garbage results that you'll probably need to filter (spam, foreign language, etc), but this was the quickest way that worked for our project. Twitter will probably not grant you whitelisted status, so I definitely wouldn't count on that.
| 0 | 5,951 | false | 0 | 1 |
How to Collect Tweets More Quickly Using Twitter API in Python?
| 4,250,479 |
1 | 1 | 0 | 0 | 2 | 0 | 0 | 1 |
I'm using Python with urllib2 & cookielib and such to open a URL. This URL sets one cookie in its header and two more in the page with some JavaScript. It then redirects to a different page.
I can parse out all the relevant info for the cookies being set with the javascript, but I can't for the life of me figure out how to get them into the cookie-jar as cookies.
Essentially, when I follow to the site being redirected too, those two cookies have to be accessible by that site.
To be very specific, I'm trying to log in to gomtv.net by using their "log in with a Twitter account" feature in Python.
Anyone?
| 0 |
python,authentication,cookies,cookielib
|
2010-11-23T16:25:00.000
| 0 | 4,258,278 |
You can't set cookies for another domain - browsers will not allow it.
| 0 | 255 | false | 0 | 1 |
How do I manually put cookies in a jar?
| 4,258,354 |
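The browser restriction aside, nothing stops you from inserting hand-built cookies into your own client-side jar; a sketch using Python 3's http.cookiejar (the cookielib of Python 2), with hypothetical cookie values:

```python
from http.cookiejar import Cookie, CookieJar

def put_cookie(jar, name, value, domain, path="/"):
    """Manually construct a Cookie (the same object the library builds
    from Set-Cookie headers) and drop it into the jar."""
    cookie = Cookie(
        version=0, name=name, value=value,
        port=None, port_specified=False,
        domain=domain, domain_specified=True, domain_initial_dot=False,
        path=path, path_specified=True,
        secure=False, expires=None, discard=True,
        comment=None, comment_url=None, rest={},
    )
    jar.set_cookie(cookie)

jar = CookieJar()
put_cookie(jar, "auth_token", "abc123", "example.com")  # hypothetical values
print([c.name for c in jar])  # ['auth_token']
```

A jar populated this way can be handed to urllib's HTTPCookieProcessor so the cookies are sent on subsequent requests to that domain.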
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 |
Can you guys please tell me if building my own birtviewer-like reporting tool in Python is a crazy idea. At the company I'm working for now, we use birtviewer to generate reports for clients, but I'm already getting frustrated tweaking the code to suit our clients' needs, and it's a massive Java codebase which I don't have any experience with at all. They also don't want to mavenize birtviewer, so with every new release I have to manually update my local copy and mavenize it. And the fact that it is really owned by a private company worries me about the future of birtviewer. What do you guys think?
| 0 |
python,reporting
|
2010-11-23T16:56:00.000
| 0 | 4,258,624 |
Sure. Write it. Make it open source and give us a git repo to have a little look... Honestly if the problem exists solve it.
| 0 | 280 | false | 1 | 1 |
python reporting tool, similar to birtviewer
| 4,258,672 |
2 | 5 | 0 | 0 | 12 | 0 | 0 | 0 |
Does anyone have any experience using r/python with data stored in Solid State Drives. If you are doing mostly reads, in theory this should significantly improve the load times of large datasets. I want to find out if this is true and if it is worth investing in SSDs for improving the IO rates in data intensive applications.
| 0 |
python,r,data-analysis,solid-state-drive
|
2010-11-24T02:31:00.000
| 0 | 4,262,984 |
The read and write speeds for SSDs are significantly higher than those of standard 7200 RPM disks (it's still worth it with a 10k RPM disk; not sure how much of an improvement it is over a 15k). So, yes, you'd get much faster data access times.
The performance improvement is undeniable. Then, it's a question of economics. 2TB 7200 RPM disks are $170 a piece, and 100GB SSDs cost $210. So if you have a lot of data, you may run into a problem.
If you read/write a lot of data, get an SSD. If the application is CPU intensive, however, you'd benefit much more from getting a better processor.
| 1 | 4,485 | false | 0 | 1 |
Data analysis using R/python and SSDs
| 4,263,022 |
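A quick check of the arithmetic behind that economics point; per gigabyte, the SSD in these 2010 figures costs roughly 25 times as much:

```python
# Cost per gigabyte from the answer's figures (2010 prices).
hdd_cost, hdd_gb = 170, 2000   # 2 TB 7200 RPM disk
ssd_cost, ssd_gb = 210, 100    # 100 GB SSD

hdd_per_gb = hdd_cost / hdd_gb   # 0.085 -> about $0.09/GB
ssd_per_gb = ssd_cost / ssd_gb   # 2.1   -> $2.10/GB
print(f"SSD is {ssd_per_gb / hdd_per_gb:.0f}x the price per GB")
```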
2 | 5 | 0 | 2 | 12 | 0 | 0.07983 | 0 |
Does anyone have any experience using r/python with data stored in Solid State Drives. If you are doing mostly reads, in theory this should significantly improve the load times of large datasets. I want to find out if this is true and if it is worth investing in SSDs for improving the IO rates in data intensive applications.
| 0 |
python,r,data-analysis,solid-state-drive
|
2010-11-24T02:31:00.000
| 0 | 4,262,984 |
I have to second John's suggestion to profile your application. My experience is that it isn't the actual data reads that are the slow part, it's the overhead of creating the programming objects to contain the data, casting from strings, memory allocation, etc.
I would strongly suggest you profile your code first, and consider using alternative libraries (like numpy) to see what improvements you can get before you invest in hardware.
| 1 | 4,485 | false | 0 | 1 |
Data analysis using R/python and SSDs
| 4,264,161 |