Dataset schema (column, type, and observed range):

| Column | Type | Range |
|---|---|---|
| Available Count | int64 | 1 to 31 |
| AnswerCount | int64 | 1 to 35 |
| GUI and Desktop Applications | int64 | 0 to 1 |
| Users Score | int64 | -17 to 588 |
| Q_Score | int64 | 0 to 6.79k |
| Python Basics and Environment | int64 | 0 to 1 |
| Score | float64 | -1 to 1.2 |
| Networking and APIs | int64 | 0 to 1 |
| Question | stringlengths | 15 to 7.24k |
| Database and SQL | int64 | 0 to 1 |
| Tags | stringlengths | 6 to 76 |
| CreationDate | stringlengths | 23 to 23 |
| System Administration and DevOps | int64 | 0 to 1 |
| Q_Id | int64 | 469 to 38.2M |
| Answer | stringlengths | 15 to 7k |
| Data Science and Machine Learning | int64 | 0 to 1 |
| ViewCount | int64 | 13 to 1.88M |
| is_accepted | bool | 2 classes |
| Web Development | int64 | 0 to 1 |
| Other | int64 | 1 to 1 |
| Title | stringlengths | 15 to 142 |
| A_Id | int64 | 518 to 72.2M |

The rows below list each record's values in this column order, pipe-separated.
1 | 3 | 0 | 4 | 6 | 0 | 0.26052 | 0 | We are using a basic python log server based on BaseHTTPServer to aggregate our python logs on an ubuntu server. This solution has fulfilled our needs... until now. The number of programs dumping to this log server has grown and now the logger is crippling the system.
Now that we are back to the drawing board, we are considering using syslog.
Would it be advantageous to use syslog over other logging facilities?
Thanks for the help | 0 | python,logging,syslog | 2011-10-13T07:07:00.000 | 1 | 7,750,560 | The advantages of using syslog where available (all modern *nix systems, including Linux, FreeBSD, OS-X etc.) are numerous:
Performance is better: syslog is compiled C and most importantly it works as a separate process so all your logging operations become non-blocking to the applications, processes, and threads that make them
You can log from multiple processes/threads concurrently without worrying about locking. All logging is safely serialized for you so you don't lose data
You get standard sortable time-stamps on all logged lines for free
You get log rotation for free
You get severity level support for free (see man syslog)
You can call logging from any language with a C binding, which is virtually any language
You can trivially log from shell scripts or command line (via logger)
You don't need to reinvent the (how to log) wheel
The only disadvantage I can think of is that syslog is non portable (to non *nix systems), but if you're on any modern *nix, any alternative is more complicated and likely less reliable.
The concern of losing packets because syslog is using UDP may be valid, but in practice on a LAN, I've never found it to be an issue. | 0 | 3,342 | false | 0 | 1 | What are the advantages of using syslog over other logging facilites? | 31,257,970 |
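To make the answer concrete, here is a minimal sketch of pointing Python's standard logging module at syslog. The address is an assumption: on Linux the local socket is usually /dev/log, while a remote collector is reached with a (host, port) pair over UDP as below.

```python
import logging
import logging.handlers

logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)

# On Linux the local syslog socket is usually address="/dev/log";
# a remote collector is reached with a (host, port) pair over UDP.
handler = logging.handlers.SysLogHandler(address=("localhost", 514))
handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("aggregated via syslog, not a homegrown HTTP log server")
```

With this in place every process gets non-blocking, serialized, timestamped logging for free, as the answer describes.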
1 | 3 | 0 | 0 | 1 | 0 | 0 | 1 | I'm using Python to transfer (via scp) and database a large number of files. One of the servers I transfer files to has odd ssh config rules to stop too many ssh requests from a single location. The upshot of this is that my python script, currently looping through files and copying via os.system, hangs after a few files have been transferred.
Is there a way in which Python could open up an ssh or other connection to the server, so that each file being transferred does not require an instance of ssh login?
Thanks, | 0 | python,sockets,ssh,scp | 2011-10-13T16:07:00.000 | 1 | 7,757,059 | This is not really python specific, but it probably depends on what libraries you can use.
What you need is a way to send files through a single connection.
(This is probably better suited to superuser or serverfault.com though.)
Create tarfile locally, upload it and unpack at target?
Maybe you could even run 'tar xz' remotely and upload the file on stdin over SSH? (As MichaelDillon says in the comment, Python can create the tarfile on the fly...)
Is SFTP an option?
Rsync over SSH?
Twisted is an async library that can handle many sockets/connections at once. It is probably overkill for your solution though.
Hope it helps. | 0 | 696 | false | 0 | 1 | open (and maintain) remote connection with python | 7,757,147 |
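The "tar on the fly, upload over one SSH connection" idea can be sketched with the standard library alone; the host and destination path in the comment are placeholders:

```python
import io
import tarfile

def make_tarball(paths):
    """Pack the given local files into one in-memory gzipped tar archive."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for path in paths:
            tar.add(path)
    return buf.getvalue()

# The upload then becomes a single ssh process instead of N scp logins, e.g.:
# subprocess.Popen(["ssh", "user@host", "tar xzf - -C /remote/dir"],
#                  stdin=subprocess.PIPE).communicate(make_tarball(files))
```

This keeps exactly one SSH login per batch, which sidesteps the server's rate-limiting rules.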
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | It seems that it is not possible any more to use the PyDev test runner for the latest version of Aptana studio (3.0.5) (containing Pydev v 2.2.2)
When running the unit tests, the following exception is thrown by the pydev-plugin: AttributeError: 'module' object has no attribute 'exc_clear' (Occurs in 'Aptana Studio 3\plugins\org.python.pydev.debug_2.2.2.2011100512\pysrc\runfiles.py", line 72, in main' : 'sys.exc_clear()')
I figured out that sys.exc_clear() is a Python-2 method that isn't supported any more by Python-3 ...
I don't know if pydev 2.2.3 fixes this problem ... but it is not available for Aptana yet ... | 0 | eclipse,unit-testing,python-3.x,aptana,pydev | 2011-10-14T13:21:00.000 | 0 | 7,768,237 | The question has been answered.
-> Waiting for fix in next stable release
-> Using nightly until fix available (thanks to Fabio for the hint) | 0 | 402 | true | 0 | 1 | Aptana 3.0.5 (or Eclipse) using Pydev 2.2.2: Unit-tests not working any more for Python 3? | 7,790,327
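The eventual fix amounts to a version-tolerant guard around the offending call, since sys.exc_clear() only exists on Python 2; a sketch:

```python
import sys

def clear_exception_state():
    # Python 2 exposes sys.exc_clear(); Python 3 removed it because the
    # exception state is cleared automatically when a handler exits.
    if hasattr(sys, "exc_clear"):
        sys.exc_clear()

clear_exception_state()  # safe on both major versions
```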
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I still haven't gotten an answer that I'm happy with. Please do submit some answers if you have a nice system for your Python or PHP projects.
I'm having a management issue with my PHP and Python projects. They both have two kinds of code files: files that should be run in the console or web browser, and files that should be included from other files to extend functionality. After my projects grow into large namespace or module trees, it starts getting disorienting that "executable" files and library files lay side by side with the same file extensions in all my folders. If PHP and Python were pre-compile languages, this would be the files with the main function.
For example, picture I have the namespace com.mycompany.map.address which contained multiple .py or .php files depending on the project. It would contain models for different kinds of addresses and tons of functions for working with addresses. But in addition, it would contain some executable files that runs from the terminal, providing a user tools for searching for addresses, and perhaps adding and removing addresses from a database or such.
I want a way of distinguishing such executable files from the tons and tons of code files in my namespace trees.
If the files had separate file extensions this wouldn't be a problem. But since they don't, I'm thinking I should separate folders or something, but I don't know what to name them. In PHP I could perform the hack solution of configuring PHP to parse different file extensions, so my project could contain phps or phpx files, for instance.
If anyone has some language-independent advice on how to handle this issue, I'd appreciate it. This could also apply to languages such as C, where one project might compile into many executable files. How should one separate the source files containing main functions from the rest of them? | 0 | php,python,project-management | 2011-10-14T14:11:00.000 | 1 | 7,768,881 | For my PHP projects, I follow a very Java-esque naming convention (my background):
index.php
/classes/{organization type: net, org, com}/{organization name}/{component}
/includes/ <- general configuration, etc; all non-executable;
/lib/ <- third party libraries that are tied to specific releases; not modified, patched/upgraded as needed;
/modules/ <- place for extensions written within the scope of the project | 0 | 629 | false | 0 | 1 | How do I separate my executable files from my library files? | 7,786,309
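On the Python side of the question, one common convention makes the split unnecessary at the file level: a module guards its executable entry point, so the same file works both as a library and as a console tool. The module and function names below are made up for illustration:

```python
# address_tool.py (hypothetical): importable library *and* console tool
def normalize(address):
    """Library function: tidy up an address string."""
    return address.strip().title()

def main():
    # Console behaviour lives behind the guard below.
    print(normalize("  123 main st  "))

if __name__ == "__main__":
    # Runs only when the file is executed directly, never on import.
    main()
```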
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 1 | I'm trying to run some automated functional tests using python and Twill. The tests verify that my application's OAuth login and connection endpoints work properly.
Luckily Twitter doesn't mind that Twill/Mechanize is accessing twitter.com. However, Facebook does not like the fact that I'm using Twill to access facebook.com. I get their 'Incompatible Browser' response. I simply want to access their OAuth dialog page and either allow or deny the application I'm testing. Is there a way to configure Twill/Mechanize so that Facebook will think its a standard browser? | 0 | python,facebook,browser,twill | 2011-10-14T19:06:00.000 | 0 | 7,772,387 | Try to send user agent header w/ mechanize. | 0 | 946 | false | 0 | 1 | How to configure the python Twill/Mechanize library to acces Facebook | 7,773,528 |
2 | 2 | 1 | 2 | 1 | 1 | 0.197375 | 0 | I've decided to try and create a game before I finish studies. Searching around the net, I decided to create the basic game logic in python (for simplicity and quicker development time), and the actual I/O engine in c# (for better performance. specifically, I'm using Mono with the SFML library).
After coming to grips with both languages and IDEs, I've gotten stuck on integrating the two, which leads me to three questions (the most important one is the second):
a. which module should encapsulate the other? should the python game logic call the c# I/O for input and then update it for output, or should it be the other way around?
b. whatever the answer is, how can I do it? I haven't found any specific instructions on porting or integrating scripts or binaries in either language.
c. Will the calls between modules be significantly harmful for performance? If they will, should I just develop everything in in one language?
Thanks! | 0 | c#,python,mono | 2011-10-17T09:48:00.000 | 0 | 7,792,013 | Have you considered IronPython? It's trivial to integrate and since it's working directly with .net the integration works very well. | 0 | 814 | false | 0 | 1 | Integrating python and c# | 7,792,073 |
2 | 2 | 1 | 2 | 1 | 1 | 1.2 | 0 | I've decided to try and create a game before I finish studies. Searching around the net, I decided to create the basic game logic in python (for simplicity and quicker development time), and the actual I/O engine in c# (for better performance. specifically, I'm using Mono with the SFML library).
After coming to grips with both languages and IDEs, I've gotten stuck on integrating the two, which leads me to three questions (the most important one is the second):
a. which module should encapsulate the other? should the python game logic call the c# I/O for input and then update it for output, or should it be the other way around?
b. whatever the answer is, how can I do it? I haven't found any specific instructions on porting or integrating scripts or binaries in either language.
c. Will the calls between modules be significantly harmful for performance? If they will, should I just develop everything in in one language?
Thanks! | 0 | c#,python,mono | 2011-10-17T09:48:00.000 | 0 | 7,792,013 | Honestly, I would say that C# today gives you a lot of the good parts of Python. To quote Jon Skeet:
Do you know what I really like about dynamic languages such as Python, Ruby, and Groovy? They suck away fluff from your code, leaving just the essence of it—the bits that really do something. Tedious formality gives way to features such as generators, lambda expressions, and list comprehensions.
The interesting thing is that few of the features that tend to give dynamic languages their lightweight feel have anything to do with being dynamic. Some do, of course—duck typing, and some of the magic used in Active Record, for example—but statically typed languages don't have to be clumsy and heavyweight.
And you can have dynamic typing too. For a new project, I would just use C# here. | 0 | 814 | true | 0 | 1 | Integrating python and c# | 7,792,064
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 1 | I can't import WebOb 1.1 with the Python 2.7 runtime, as WebOb imports io, io imports _io, which is blocked by the SDK. Is there a way to whitelist _io? It is obviously not supposed to be blacklisted. | 0 | python,google-app-engine,webob | 2011-10-18T01:06:00.000 | 0 | 7,801,387 | From context, it sounds like you're trying to run your app on the dev_appserver. The dev_appserver does not yet support the Python 2.7 runtime; for now you'll have to do your development and testing on appspot. | 0 | 201 | true | 0 | 1 | GAE Python 2.7, no _io module? | 7,815,687 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | I'm running an embedded Python interpreter in Obj-C. I can run Python scripts just fine, but when I try to import certain standard modules, I get ImportError: No module named random, for instance.
However, I can import certain other modules. My investigations has given me this list so far:
Can:
import sys
import math
import datetime
import time
Can't:
import random
import re
import cmath
import numbers
import string
This is from a python file enclosed in a package, imported via PyImport_Import ('package.module'). There is one extension module loaded via Py_InitModule.
This is on Python 2.7.0 - is there any reason some of these modules are available and others not? | 0 | python,objective-c,import | 2011-10-18T10:47:00.000 | 0 | 7,806,031 | Thomas K set me on the right track, even though the problem was the completely opposite.
My Python setup was lacking the standard Python library - the part written in Python (usually distributed through /Library, /Lib or /pylib in the distribution. Once those files were added to my application, all of it imported fine.
So, the link between the importable and non-importable modules above were that the importable were written as Python extensions in C, whereas the non-importable are written in pure Python. | 0 | 771 | false | 0 | 1 | Can't import specific modules in embedded Python | 7,852,425 |
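A quick way to see that split from inside an embedded interpreter: built-in (C) modules import regardless of sys.path, while the pure-Python half of the standard library needs its Lib directory on the path. This diagnostic sketch assumes nothing beyond the standard library:

```python
import sys

# sys is compiled into the interpreter; random ships as a .py file,
# so it needs the stdlib directory on sys.path to be importable.
print("sys built in:   ", "sys" in sys.builtin_module_names)
print("random built in:", "random" in sys.builtin_module_names)
print("search path:", sys.path)

import random  # succeeds only once the pure-Python stdlib is on the path
print("random loaded from:", random.__file__)
```

If the second import fails in the embedded build, the pure-Python library files are missing or off the path, exactly as described above.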
1 | 6 | 0 | 57 | 43 | 1 | 1 | 0 | Is there any way to know by which Python version the .pyc file was compiled? | 0 | python,compilation | 2011-10-18T12:57:00.000 | 0 | 7,807,541 | The first two bytes of the .pyc file are the magic number that tells the version of the bytecodes. The word is stored in little-endian format, and the known values are:
| Python version | Decimal | Hexadecimal | Comment |
|---|---|---|---|
| Python 1.5 | 20121 | 0x994e | |
| Python 1.5.1 | 20121 | 0x994e | |
| Python 1.5.2 | 20121 | 0x994e | |
| Python 1.6 | 50428 | 0x4cc4 | |
| Python 2.0 | 50823 | 0x87c6 | |
| Python 2.0.1 | 50823 | 0x87c6 | |
| Python 2.1 | 60202 | 0x2aeb | |
| Python 2.1.1 | 60202 | 0x2aeb | |
| Python 2.1.2 | 60202 | 0x2aeb | |
| Python 2.2 | 60717 | 0x2ded | |
| Python 2.3a0 | 62011 | 0x3bf2 | |
| Python 2.3a0 | 62021 | 0x45f2 | |
| Python 2.3a0 | 62011 | 0x3bf2 | ! |
| Python 2.4a0 | 62041 | 0x59f2 | |
| Python 2.4a3 | 62051 | 0x63f2 | |
| Python 2.4b1 | 62061 | 0x6df2 | |
| Python 2.5a0 | 62071 | 0x77f2 | |
| Python 2.5a0 | 62081 | 0x81f2 | ast-branch |
| Python 2.5a0 | 62091 | 0x8bf2 | with |
| Python 2.5a0 | 62092 | 0x8cf2 | changed WITH_CLEANUP opcode |
| Python 2.5b3 | 62101 | 0x95f2 | fix wrong code: for x, in ... |
| Python 2.5b3 | 62111 | 0x9ff2 | fix wrong code: x += yield |
| Python 2.5c1 | 62121 | 0xa9f2 | fix wrong lnotab with for loops and storing constants that should have been removed |
| Python 2.5c2 | 62131 | 0xb3f2 | fix wrong code: for x, in ... in listcomp/genexp |
| Python 2.6a0 | 62151 | 0xc7f2 | peephole optimizations and STORE_MAP opcode |
| Python 2.6a1 | 62161 | 0xd1f2 | WITH_CLEANUP optimization |
| Python 2.7a0 | 62171 | 0xdbf2 | optimize list comprehensions/change LIST_APPEND |
| Python 2.7a0 | 62181 | 0xe5f2 | optimize conditional branches: introduce POP_JUMP_IF_FALSE and POP_JUMP_IF_TRUE |
| Python 2.7a0 | 62191 | 0xeff2 | introduce SETUP_WITH |
| Python 2.7a0 | 62201 | 0xf9f2 | introduce BUILD_SET |
| Python 2.7a0 | 62211 | 0x03f3 | introduce MAP_ADD and SET_ADD |
| Python 3000 | 3000 | 0xb80b | |
| Python 3000 | 3010 | 0xc20b | removed UNARY_CONVERT |
| Python 3000 | 3020 | 0xcc0b | added BUILD_SET |
| Python 3000 | 3030 | 0xd60b | added keyword-only parameters |
| Python 3000 | 3040 | 0xe00b | added signature annotations |
| Python 3000 | 3050 | 0xea0b | print becomes a function |
| Python 3000 | 3060 | 0xf40b | PEP 3115 metaclass syntax |
| Python 3000 | 3061 | 0xf50b | string literals become unicode |
| Python 3000 | 3071 | 0xff0b | PEP 3109 raise changes |
| Python 3000 | 3081 | 0x090c | PEP 3137 make __file__ and __name__ unicode |
| Python 3000 | 3091 | 0x130c | kill str8 interning |
| Python 3000 | 3101 | 0x1d0c | merge from 2.6a0, see 62151 |
| Python 3000 | 3103 | 0x1f0c | __file__ points to source file |
| Python 3.0a4 | 3111 | 0x270c | WITH_CLEANUP optimization |
| Python 3.0a5 | 3131 | 0x3b0c | lexical exception stacking, including POP_EXCEPT |
| Python 3.1a0 | 3141 | 0x450c | optimize list, set and dict comprehensions: change LIST_APPEND and SET_ADD, add MAP_ADD |
| Python 3.1a0 | 3151 | 0x4f0c | optimize conditional branches: introduce POP_JUMP_IF_FALSE and POP_JUMP_IF_TRUE |
| Python 3.2a0 | 3160 | 0x580c | add SETUP_WITH, tag: cpython-32 |
| Python 3.2a1 | 3170 | 0x620c | add DUP_TOP_TWO, remove DUP_TOPX and ROT_FOUR, tag: cpython-32 |
| Python 3.2a2 | 3180 | 0x6c0c | add DELETE_DEREF |
Sources:
Python/import.c - merged by aix from Python 2.7.2 and Python 3.2.2
Little-endian hex values (for comparison with the first two bytes, per Igor Popov's method) added by jimbob
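A sketch for reading the word back out of a file, to compare against the table above:

```python
import struct

def pyc_magic(path):
    """Return the magic word stored little-endian in a .pyc's first two bytes."""
    with open(path, "rb") as f:
        return struct.unpack("<H", f.read(2))[0]
```

For example, a .pyc beginning with the bytes 03 f3 0d 0a decodes to 62211, which the table identifies as Python 2.7a0.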
2 | 2 | 0 | 3 | 1 | 0 | 1.2 | 0 | I'm considering writing a program in C++ which will embed a Python interpreter for executing macros created by users. Suppose that I'm intending to release my program under the GPLv3. What is not clear to me is what impact this has on the macros and data created by the users of my program.
Do any of the requirements of the GPL carry over from my program to the macros written by users of my program? What about to other user data which is stored with the macros, such as images?
Programs like The GIMP and Gnumeric have macro languages, but are released under the GPL. I've never seen a discussion of whether user-created GIMP Script-Fu or Gnumeric spreadsheets must also be distributed under the GPL. I suspect that this is not the case, but I haven't been able to turn up any evidence for it. In the case of Gnumeric in particular, macros seem more like data than like plugins. (I'm aware that there's extensive discussion about the GPL and plugins, but I've found no discussion about what qualifies something as a plugin.)
A confounding issue is that some core functions of may also be implemented as macros which are copied into user data. In this case, the user may be completely unaware that he is even distributing these macros when he distributes his data.
I'd like not to put my users in a situation which encourages them to violate the GPL, unknowingly or otherwise. | 0 | c++,python,macros,licensing,gpl | 2011-10-19T20:21:00.000 | 0 | 7,827,628 | No the macros don't become GPL derived works anymore than a 'C' program written on a GPL linux system or with a GPL gcc compiler become GPL.
Including library functions is a little more complicated. If you wrote those library functions then you can let the user do what they want with them. If they are part of the GIMP/Gnumeric runtime then that's presumably also not a problem.
But if you pull the source for the library/plugin from a GPL (rather than LGPL) app and wrapped it in a library to be called standalone from the user macro then the macro may well be a derived work. | 0 | 186 | true | 0 | 1 | How does the GPL affect macros stored in user data? | 7,854,444 |
2 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I'm considering writing a program in C++ which will embed a Python interpreter for executing macros created by users. Suppose that I'm intending to release my program under the GPLv3. What is not clear to me is what impact this has on the macros and data created by the users of my program.
Do any of the requirements of the GPL carry over from my program to the macros written by users of my program? What about to other user data which is stored with the macros, such as images?
Programs like The GIMP and Gnumeric have macro languages, but are released under the GPL. I've never seen a discussion of whether user-created GIMP Script-Fu or Gnumeric spreadsheets must also be distributed under the GPL. I suspect that this is not the case, but I haven't been able to turn up any evidence for it. In the case of Gnumeric in particular, macros seem more like data than like plugins. (I'm aware that there's extensive discussion about the GPL and plugins, but I've found no discussion about what qualifies something as a plugin.)
A confounding issue is that some core functions of may also be implemented as macros which are copied into user data. In this case, the user may be completely unaware that he is even distributing these macros when he distributes his data.
I'd like not to put my users in a situation which encourages them to violate the GPL, unknowingly or otherwise. | 0 | c++,python,macros,licensing,gpl | 2011-10-19T20:21:00.000 | 0 | 7,827,628 | You may want to consider using the LGPL for this reason. I am curious what the FSF has to say about such things though. | 0 | 186 | false | 0 | 1 | How does the GPL affect macros stored in user data? | 7,854,671 |
3 | 3 | 0 | 2 | 4 | 0 | 0.132549 | 0 | Suppose I have a script written in Python or Ruby, or a program written in C. How do I ensure that the script has no access to network capabilities? | 0 | python,c,sandbox | 2011-10-20T22:55:00.000 | 1 | 7,843,238 | Unless you're using a sandboxed version of Python (using PyPy for example), there is no reliable way to switch-off network access from within the script itself. Of course, you could run under a VM with the network access shut off. | 0 | 291 | false | 0 | 1 | How do I block all network access for a script? | 7,843,516 |
3 | 3 | 0 | 5 | 4 | 0 | 1.2 | 0 | Suppose I have a script written in Python or Ruby, or a program written in C. How do I ensure that the script has no access to network capabilities? | 0 | python,c,sandbox | 2011-10-20T22:55:00.000 | 1 | 7,843,238 | You more or less gave a generic answer yourself by tagging it with "sandbox" because that's what you need, some kind of sandbox. Things that come to mind are: using JPython or JRuby that run on the JVM. Within the JVM you can create a sandbox using a policy file so no code in the JVM can do thing you don't allow.
For C code, it's more difficult. The brute force answer could be to run your C code in a virtual machine with no networking capabilities. I really don't have a more elegant answer right now for that one. :) | 0 | 291 | true | 0 | 1 | How do I block all network access for a script? | 7,843,275 |
3 | 3 | 0 | 0 | 4 | 0 | 0 | 0 | Suppose I have a script written in Python or Ruby, or a program written in C. How do I ensure that the script has no access to network capabilities? | 0 | python,c,sandbox | 2011-10-20T22:55:00.000 | 1 | 7,843,238 | Firewalls can block specific applications or processes from accessing the network. ZoneAlarms is a good one that I have used to do exactly what you want in the past. So it can be done programatically, but I don't know near enough about OS programming to offer any advice on how to go about doing it. | 0 | 291 | false | 0 | 1 | How do I block all network access for a script? | 7,843,519 |
1 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 0 | I need to write a bunch of scripts to grab diagnostic data from customer environments where our product is installed. Troubleshooting data I need comes various sources including Oracle database, PowerShell commands, WMI, Registry, Windows commands such as netstat, our own property files, log files etc. The results will go into HTML pages, which in turn will be zipped emailed out/FTPed by the script.
Which language is the most suited for this purpose? Python, Ruby or something else? I am new to both Python and Ruby. | 0 | python,ruby,scripting | 2011-10-22T08:59:00.000 | 0 | 7,858,436 | I started with python and ruby both few months ago. Ruby has loads and loads of issues and I find python really cool. Dint try with scripting. I am working on web apps.
Used python script to to interact with database and inserting raw fields. Works really fine. | 0 | 1,651 | false | 0 | 1 | Ruby vs Python: Which is better suited for scripting utilities on Windows? And why? | 7,858,486 |
1 | 1 | 0 | 1 | 1 | 0 | 0.197375 | 0 | I'm trying to interact with a HTML 4.0 website which uses heavily obfuscated javascript to hide the regular HTML elements. What I want to do is to fill out a form and read the returned results, and this is proving harder to do than expected.
When I read the page using Firebug, it gave me the deobfuscated source code, and I can then use this to do what I want to accomplish. The Firebug output showed all the regular elements of a website, which were hidden in the original source.
I've written the rest of my application in Python, using mechanize to interact with other web services, so I'd rather use an existing Python module to do this if that's possible. The problem is not only how to read the source code in a way mechanize can understand, but also how to generate the response which the web server can interpret. Could I use regular mechanize controls even though the html code is obfuscated?
In the beginning of my project I used pywebkitgtk instead of mechanize, but ditched it because it wasn't really implemented that well in python. Most functions are missing. Would that be a sensible method perhaps, to start up a webkit-browser which I read the HTML from, and use that with mechanize?
Any help would be greatly appreciated, I'm really in a bind here. Thanks!
Edit: I tried dumping the HTML fetched from mechanize and opening that with pywebkitgtk, using load_html_string, and then evaluating the html that way. Unfortunately, since the document I'm trying to parse loads more resources dynamically, the script just stops, waiting for resources to load. Note that I can't use webkit to load the document itself since I use mechanize's CookieJar function to allow me to log in first.
I also tried dumping the HTML from webkit, which for some reason dumped the obfuscated javascript only, while displaying the website perfectly fine. If webkit could dump the deobfuscated javascript the way Firebug does, I could work with that and form a request according to the clean code.. | 0 | python,screen-scraping,mechanize,web-scraping,deobfuscation | 2011-10-22T09:25:00.000 | 0 | 7,858,552 | Rather than trying to process the page, how about just use Firebug to figure out the names of the form fields, and then use httplib or whatever to send a request with the necessary fields and settings?
If it's sent using ajax, you should be able to determine the values being sent to the server in Firebug as well. | 0 | 754 | false | 1 | 1 | Parse and interact with obfuscated javascript | 7,860,906 |
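Once Firebug has revealed the real field names, replaying the form is a small request; the field names and URL below are hypothetical:

```python
import urllib.parse

# Field names as discovered in Firebug's Net panel (hypothetical):
form = {"street": "123 Main St", "city": "Springfield"}
body = urllib.parse.urlencode(form).encode("ascii")
# urllib.request.urlopen("https://example.com/search", data=body) would POST it,
# and the response is the same payload the page's JavaScript consumes.
```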
2 | 2 | 0 | 3 | 3 | 1 | 1.2 | 0 | I'm trying to emulate code.InteractiveInterpreter from the embedded Python C API. I'm using PyEval_Evalcode to evaluate the user input. I am trying to evaluate user input in the interpreter and return the output as a string (just like the interpreter would). However, PyEval_Evalcode returns a multitude of datatypes wrapped in PyObject*. Is there any way to do what I am trying to do?
Constraints: It needs to be done using the embedding api. Cannot be done using PyRun_RunSimpleString() and laying down a code.InteractiveInterpreter. | 0 | python,python-c-api | 2011-10-22T16:35:00.000 | 0 | 7,860,958 | The object returned by PyEval_Evalcode() can be transformed to a Python string using PyObject_Repr() or PyObject_Str(). The resultant python string can be turned into a regular C string with PyString_AsString(). | 0 | 270 | true | 0 | 1 | Evaluating Python Code From the CAPI and getting Output | 7,861,240 |
2 | 2 | 0 | 0 | 3 | 1 | 0 | 0 | I'm trying to emulate code.InteractiveInterpreter from the embedded Python C API. I'm using PyEval_Evalcode to evaluate the user input. I am trying to evaluate user input in the interpreter and return the output as a string (just like the interpreter would). However, PyEval_Evalcode returns a multitude of datatypes wrapped in PyObject*. Is there any way to do what I am trying to do?
Constraints: It needs to be done using the embedding api. Cannot be done using PyRun_RunSimpleString() and laying down a code.InteractiveInterpreter. | 0 | python,python-c-api | 2011-10-22T16:35:00.000 | 0 | 7,860,958 | I have binary string and cannot return it as string because of null terminated string.
if(PyString_Check(pValue))
{
const char* s=/*PyBytes_AsString*/PyString_AsString(PyObject_Repr(pValue)); //return hex representation in ascii
int sz=PyString_Size(pValue);//size is valid
const char* s= PyString_AsString(pValue);//return only below null terminated string
} | 0 | 270 | false | 0 | 1 | Evaluating Python Code From the CAPI and getting Output | 11,220,657 |
1 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | Python 2.4.x (cannot install any non-stock modules).
Question for you all. (assuming use of subprocess.popen)
Say you had 20 - 30 machines - each with 6 - 10 files on them that you needed to read into a variable.
Would you prefer to scp into each machine, once for each file (120 - 300 SCP commands total), reading each file after it's SCP'd down into a variable - then discarding the file.
Or - SSH into each machine, once for each file - reading the file into memory. (120 - 300 ssh commands total).
?
Unless there's some other way I haven't thought of to grab all desired files in one shot per machine and read them into memory (files are named YYYYMMDD.HH.blah; the range would be 20111023.00 to 20111023.23)?
Copy entire directories using the -r flag: scp -r g0:labgroup/ .
Specify a glob pattern: scp 'g0:labgroup/assignment*.hs' .
Specify multiple source files: scp 'g0:labgroup/assignment1*' 'g0:labgroup/assignment2*' .
Not sure what sort of globbing is supported, odds are it just uses the shell for this. I'm also not sure if it's smart enough to merge copies from the same server into one connection. | 0 | 1,890 | false | 0 | 1 | Python - SCP vs SSH for multiple computers and multiple files | 7,869,588 |
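Driven from Python's subprocess module, the one-connection-per-machine idea reduces to building a single scp command per host; the host name and glob below are placeholders:

```python
def scp_batch_cmd(host, remote_glob, dest):
    """One scp invocation fetching every matching file from one host."""
    return ["scp", "%s:%s" % (host, remote_glob), dest]

# subprocess.call(scp_batch_cmd("server01", "logs/20111023.*", "./incoming"))
# would run once per machine (20-30 calls) instead of once per file (120-300).
```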
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I need to send an SMS to each customer registered online, across different timezones globally.
The messages are scheduled to be sent each month.
I have the country details of each customer. I will write a Python script and add it to cron jobs for sending the SMS.
The issue I am facing is knowing a safe time (sometime during the day), based on the country, for sending the SMS.
My server runs in the Australia/Melbourne timezone. What is the best way for me to find a safe time to send an SMS to a customer from another country?
Now deciding on a good time to SMS them is entirely up to you. | 0 | 125 | false | 1 | 1 | Dealing with timezones for sending sms - python/django | 7,871,122
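A sketch of the astimezone conversion with fixed offsets; real code should use a proper tz database, and the offsets below (AEDT +11, US Eastern -4) are assumptions for late October 2011:

```python
from datetime import datetime, timedelta, timezone

melbourne = timezone(timedelta(hours=11), "AEDT")   # server's zone (assumed)
new_york = timezone(timedelta(hours=-4), "EDT")     # one customer's zone

server_time = datetime(2011, 10, 24, 9, 0, tzinfo=melbourne)
customer_local = server_time.astimezone(new_york)
print(customer_local)  # the customer's local wall-clock time
```

The cron job can then check whether customer_local.hour falls in an acceptable window (say 9 to 20) before sending.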
1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | When developing a Python plug-in (in C++), how does one go about setting
the documentation for __new__? In particular, given a new type
defined by a PyTypeObject structure in the C++, how does one document
the arguments which can be passed to the constructor. | 0 | python | 2011-10-24T13:20:00.000 | 0 | 7,876,250 | Constructor arguments are usually documented in the type docstring, i.e. via the tp_doc slot, so you can do help(type) (or type? in IPython) instead of help(type.__new__) or help(type.__init__). | 0 | 119 | false | 0 | 1 | Generating constructor documentation for class defined in C++ | 7,876,369 |
3 | 3 | 0 | 2 | 7 | 0 | 1.2 | 0 | I like Lettuce, and the feel of testing with it. Could I replace all the tests ( doctests/unit tests ) in a project with Lettuce features? | 0 | python,testing,bdd | 2011-10-24T14:59:00.000 | 0 | 7,877,638 | I have to disagree with Andy.
I do agree that the appropriate testing should be done at the appropriate time, and that unit tests (i.e. ones that do not interact with anything outside their unit) do not replace all other forms of testing.
But that doesn't necessarily mean that with the proper separation you cannot use a BDD framework (I too haven't used Lettuce) as the runner for your tests.
I too really like the fact that Gherkin syntax can be pushed back towards business experts, testers and sponsors as a means of capturing the process to follow, so I see no reason why one set of specifications can't be aimed at the Unit level, but another could be aimed at the system and regression levels.
Consider this (contrived and obviously nowhere near granular enough) example:
Given my test server is updated with last night's build
When I run my regressionTestPack1
Then my regression results should match the known results for regressionTestPack1
I am also not saying that this is appropriate in every case. You should evaluate what benefit you get from this approach over leaving all your tests in different test-running systems. In particular, consider the experience base of the people performing the testing.
So if you are writing a small technical project as the only developer and you prefer this syntax, there's no reason not to. Just be very careful that you still isolate your unit tests from your system tests and your regression tests.
If, however, you are part of a large team of devs, testers and business analysts, then your case will need to be much stronger, and is unlikely to actually be valid. | 0 | 1,121 | true | 0 | 1 | Can BDD testing with Lettuce replace all other forms of testing in a project? | 7,994,429
3 | 3 | 0 | 8 | 7 | 0 | 1 | 0 | I like Lettuce, and the feel of testing with it. Could I replace all the tests ( doctests/unit tests ) in a project with Lettuce features? | 0 | python,testing,bdd | 2011-10-24T14:59:00.000 | 0 | 7,877,638 | In short, no.
I haven't used Lettuce, but your question applies equally to other BDD frameworks such as Cucumber.
This approach is considered bad practice since integration tests are slower to run and more work to maintain than unit tests.
Also, a big advantage of Gherkin syntax is that it's readable by non-technical stakeholders and it can focus on business rules, whereas unit tests generally deal with detailed implementation specifics at the class/function level not of particular interest to business-focused stakeholders.
There's sometimes an overlap between unit tests and integration/acceptance tests but in general you should aim to find an appropriate balance. | 0 | 1,121 | false | 0 | 1 | Can BDD testing with Lettuce replace all other forms of testing in a project? | 7,878,932 |
3 | 3 | 0 | 1 | 7 | 0 | 0.066568 | 0 | I like Lettuce, and the feel of testing with it. Could I replace all the tests ( doctests/unit tests ) in a project with Lettuce features? | 0 | python,testing,bdd | 2011-10-24T14:59:00.000 | 0 | 7,877,638 | It is a poor idea to use Gherkin/Lettuce for everything.
1) You should never do away with manual testing entirely. You might replace repetitive scripted testing, but you need to run the software past someone who can misunderstand or misuse it. Creative, destructive, human testing is important -- but the heavy lifting (90%+ of all testing) should be automated.
2) Another reason is covered already: it runs slowly compared to unit tests. I find that the longer it takes to run a test, the less likely people are to run it frequently. You want it to be a non-decision to run the tests after each change, maybe 2 or 3 times every 5 minutes (yes, that fast!).
3) Personally, I think that writing unittests with sniffer or autonose in a different window gives me the very best environment for test-driving code. I don't know how to do that with lettuce.
4) Why switch out languages if you don't have to? Unittest is in python, and there are no fixtures or thunks of any sort to get to the code you're interested in testing. It works well with mocks and fakes. Gherkin is fun, but it's got more plumbing involved. The extra plumbing is great if you have non-programmers writing tests, but otherwise is just overhead. | 0 | 1,121 | false | 0 | 1 | Can BDD testing with Lettuce replace all other forms of testing in a project? | 17,815,773 |
1 | 4 | 0 | 1 | 5 | 0 | 0.049958 | 0 | I wrote a WSGI application which I need to deploy to a server, however I've been given a server that already has mod_python installed.
I am not allowed to remove mod_python since there are some mod_python applications running on it already.
One option I considered was installing mod_wsgi alongside mod_python, however I went through sources and found that was a bad idea. Apparently mod_wsgi and mod_python don't mix well.
Another option I considered was installing mod_fastcgi and deploying it using fastcgi.
I would love to hear if someone has a better idea which doesn't break the current mod_python applications running on the server. | 0 | python,mod-wsgi,mod-python,mod-fastcgi | 2011-10-24T20:19:00.000 | 0 | 7,881,474 | The best solution might be to use mod_proxy and run the Python web app in a different webserver. | 0 | 1,117 | false | 1 | 1 | deploying a WSGI application on mod_python | 7,881,775 |
I will admit that starting programming on your own as a newbie can seem a bit daunting. However, after toying around very basically in both Python and, currently, C++, I'm wondering if C may be more suitable for a hobbyist. By hobbyist I mean someone who foresees no real future in actually programming for a living but rather sees it (at least currently) as an interesting exercise. So while I would like to be able to do things, I don't really see myself, y'know, making a 3D game engine.
I know that I don't NEED to learn C to learn C++. But from what I've read, a couple of people have said that C is easier to learn because it's a smaller language. It seems like it would be more suitable to me given that, and I know that C is certainly fine for anything I'd want to do with it, so I wouldn't really need to learn or use it as a stepping stone for C++. From what I can see C would be a) easier to program with, meaning easier to get in and make things and stay interested; b) lower level, meaning more flexibility, whereas Python would perhaps be hindered by its high-level nature; c) still widely used (though perhaps not to the extent of C++).
A lot of people ask about learning C to get to C++ but I'm wondering more about C's own merits in and of them self. I wonder if what I'm thinking is true or if I've been filled with misconceptions. Thanks for any help :) | 0 | c++,python,c | 2011-10-26T22:23:00.000 | 0 | 7,909,666 | You're looking at this wrong. What's your goal? If your goal is to "learn a language" then you are wasting your time. That is like investing your time into learning to use photoshop with no ambition to ever create any neato graphics.
Instead of focusing on the tool, focus on what you want to do with it. If I learn how to use a power saw it's probably because I want to build something out of wood, not because I think power saws are just really awesome.
Ask yourself; what do I want to build? Once you answer that then you set forth finding out which tools would be most appropriate. | 0 | 784 | false | 0 | 1 | Learning programming as a hobbyist... the merits of C vs C++ | 7,909,723 |
1 | 5 | 0 | 0 | 4 | 1 | 0 | 0 | When I import a module that has a class, what code is executed when that class is first read and the class object created? Is there any way I can influence what happens?
Edit: I realize my question might be a bit too general...
I'm looking for something more low-level which will allow me to do introspection from C++. I extend my C++ application with Python. I have some classes that are defined in C++ and exposed in Python. The user can inherit from these classes in the scripts and I want to be able to grab details about them when they are first defined. | 0 | c++,python,class,metaprogramming,introspection | 2011-10-29T20:57:00.000 | 0 | 7,941,660 | Python is interpreted, so when a Python module is imported any class code at the module level is run, along with those classes' meta-classes -- this is so the classes will exist.
C++ is compiled: the classes already exist when they are imported; there is no way to control how they are created as they are already created. | 0 | 144 | false | 0 | 1 | What code is executed when a class is being defined? | 7,944,715 |
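For the edited question (grabbing details about user-defined subclasses the moment they are defined), the pure-Python hook point is a metaclass; a C++-exposed base class could install one in the same way. A minimal sketch with illustrative names:

```python
# A metaclass's __init__ runs each time a "class ..." statement finishes
# executing, which is exactly when you can introspect the new class.
class Registry(type):
    seen = []

    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        Registry.seen.append(name)  # could also inspect bases/namespace here

class Base(metaclass=Registry):
    pass

class UserScript(Base):  # e.g. defined later in a user's script
    def run(self):
        return 42

print(Registry.seen)  # both definitions were observed as they executed
```

Subclasses inherit the metaclass, so every class a user derives from Base is captured automatically.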
1 | 7 | 0 | 7 | 1,528 | 0 | 1 | 1 | What is the Python 3 equivalent of python -m SimpleHTTPServer? | 0 | python,python-3.x,httpserver,simplehttpserver | 2011-10-30T07:22:00.000 | 0 | 7,943,751 | Just wanted to add what worked for me:
python3 -m http.server 8000 (you can use any port number here that isn't already in use)
Is there a way to send commands to another interactive shell? Let's take the example of the meterpreter shell used in Metasploit. Is there a way to send commands to this shell from Python code, as soon as I get control of a computer and have a meterpreter shell to play with?
I mean all of this from Python code. | 0 | python,shell,interactive | 2011-10-31T12:54:00.000 | 1 | 7,953,996 | It will not be easy at all.
You will have to know if meterpreter has any means for other programs to communicate with it.
If it doesn't, you might have to hack your way through it, e.g. using OS pipes, etc., to be able to get it to work.
In any case, the code needed for such communication might be beyond Python's power. | 0 | 3,763 | false | 0 | 1 | Send commands to an interactive shell from Python | 7,954,357 |
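That said, if the target shell can be launched as a child process, its stdin/stdout can be driven from the standard library. A sketch using another Python interpreter as a stand-in for the interactive shell (whether this works for meterpreter specifically depends on how it is launched):

```python
import subprocess
import sys

# Start an interactive interpreter as a child process (a stand-in for any
# interactive shell) and drive it through its pipes.
proc = subprocess.Popen(
    [sys.executable, "-i"],       # -i forces interactive mode
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,     # fold the banner/prompts into one stream
)
# Write a command, close stdin (EOF ends the session), collect the output.
out, _ = proc.communicate(b"print(6 * 7)\n")
print(out)
```

For a longer back-and-forth conversation (send, read, send again) you would read/write the pipes incrementally, or use a library like pexpect on Unix.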
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 1 | I'm writing a Python library to access Ubuntu One's REST API. (Yes, I know one already exists; this is a scratch-my-itch-and-learn-while-doing-it project.)
The library will be a relatively thin wrapper around the REST calls. I would like to be able to unit-test my library, without hitting U1 at all. What's the best-practice standard for making this possible?
At the moment each REST call is an explicit http request. I can't see how to mock that out, but if I create a (mockable) UbuntuOneRESTAPI class hiding those http calls I suspect it will end up including most of the functionality of the wrapper library, which sort of defeats the purpose. | 0 | python,unit-testing,mocking | 2011-10-31T15:18:00.000 | 0 | 7,955,695 | Your cutting point is the HTTP requests.
Write a mock library which intercepts the sending of the HTTP requests. Instead of sending them, convert them into a String and analyze them to test sending code.
For receiving code, mock the response handler. Save a good response from the REST server in a String and create the HTTP response object from it to test your receiver.
Write a few test cases which create these requests against the real thing so you can quickly verify that the requests/responses are good. | 0 | 604 | true | 0 | 1 | How to test python library wrapping an external REST service (without hitting the service) | 7,956,472 |
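A sketch of that cutting point with the standard library (the get_quota wrapper, the endpoint and the JSON fields are invented for illustration; unittest.mock is stdlib from Python 3.3, while the 2011-era equivalents were urllib2 plus a mocking library):

```python
import json
import unittest
from unittest import mock
import urllib.request

# Hypothetical thin wrapper around one REST call.
def get_quota(token):
    req = urllib.request.Request(
        "https://one.ubuntu.com/api/quota/",            # assumed endpoint
        headers={"Authorization": "Bearer " + token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode())["total"]

class QuotaTest(unittest.TestCase):
    @mock.patch("urllib.request.urlopen")
    def test_parses_total(self, urlopen):
        # Canned response: no network traffic, only the receiver logic runs.
        canned = mock.MagicMock()
        canned.read.return_value = b'{"total": 2048, "used": 100}'
        urlopen.return_value.__enter__.return_value = canned
        self.assertEqual(get_quota("t0k3n"), 2048)

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(QuotaTest).run(result)
print(result.wasSuccessful())
```

Because only urlopen is patched, the wrapper's request construction and response parsing are still exercised, so the test stays thin without duplicating the library.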
1 | 4 | 0 | 1 | 5 | 0 | 0.049958 | 0 | Is there any easy way to debug cgi python programs apart from looking at the log file each time the browser generates an error? | 0 | python,linux,cgi | 2011-10-31T16:37:00.000 | 0 | 7,956,696 | You could capture (or form by hand) the data and env variables which the CGI script receives, then plainly run the script under your favorite debugger and feed the data to it.
In order to capture the incoming data you can just dump it from the script in CGI mode to some log file, then re-use under debugger in standalone mode. | 0 | 3,944 | false | 0 | 1 | Debugging CGI python | 7,957,775 |
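A sketch of the capture half. The file location, JSON format and function name are arbitrary choices; in the real script you would pass sys.stdin, while here a fake body stands in so the sketch runs anywhere:

```python
import io
import json
import os
import tempfile

def capture_request(stdin, path=None):
    """Save the CGI environment and request body so the run can be replayed."""
    path = path or os.path.join(tempfile.gettempdir(), "cgi_capture.json")
    with open(path, "w") as f:
        json.dump({"environ": dict(os.environ), "stdin": stdin.read()}, f)
    return path

# In the real CGI script: capture_request(sys.stdin), before any handling.
# Demo with a fake POST body:
saved = capture_request(io.StringIO("name=alice&age=30"))
print(saved)
```

To replay, load the file, restore os.environ from "environ", feed "stdin" back to the script, and step through it under pdb.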
A Scrapy crawler is called through a shell script, which is used as the command line in a crontab entry. The shell script looks like:
scrapy crawl targethost.com
When the time came, it did execute, but it seems only the constructor was called (I verified this with debug output). The problem was solved by rewriting the shell script as:
scrapy crawl targethost.com &> cronlog.log
I just don't know why. | 0 | python,crontab,scrapy | 2011-11-02T17:29:00.000 | 0 | 7,984,714 | Scrapy executes correctly, but doesn't send all of its messages to stdout, so a plain > redirect doesn't capture everything into your file, only the part that goes to stdout (which, as you say, seems to be the constructor only).
With &> (a bash shorthand for > file 2>&1), both stdout and stderr are redirected, so everything Scrapy logs ends up in your file. | 0 | 359 | false | 1 | 1 | scrapy script called from cron only constructor called | 7,984,842
1 | 5 | 0 | 2 | 14 | 1 | 0.07983 | 0 | I'm in the process of working on programming project that involves some pretty extensive Monte Carlo simulation in Python, and as such the generation of a tremendous number of random numbers. Very nearly all of them, if not all of them, will be able to be generated by Python's built in random module.
I'm something of a coding newbie, and unfamiliar with efficient and inefficient ways to do things. Is it faster to generate say, all the random numbers as a list, and then iterate through that list, or generate a new random number each time a function is called, which will be in a very large loop?
Or some other, undoubtedly more clever method? | 0 | python,random | 2011-11-02T23:14:00.000 | 0 | 7,988,494 | Code to generate 10M random numbers:
import random

l = 10000000
listrandom = []
for i in range(l):
    value = random.randint(0, l)
    listrandom.append(value)
print listrandom
Time taken, including the I/O time spent printing to the screen:
real 0m27.116s
user 0m24.391s
sys 0m0.819s | 1 | 12,900 | false | 0 | 1 | Efficient way to generate and use millions of random numbers in Python | 36,474,703 |
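For comparison, the usual stdlib idiom for bulk generation is a list comprehension, which avoids the per-iteration method lookup of append. A sketch with a smaller n so it runs quickly (scale n up to 10_000_000 to reproduce the benchmark above):

```python
import random

n = 100_000  # smaller than the 10M above so this runs in well under a second
nums = [random.randint(0, n) for _ in range(n)]
print(len(nums), min(nums) >= 0, max(nums) <= n)
```

If floats in [0, 1) suffice, random.random() is much faster than randint; and for truly large batches, numpy's generators produce whole arrays at once.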
2 | 4 | 0 | 0 | 0 | 1 | 0 | 0 | Would it be better to use c or c++ for handling computationally intensive tasks in a python program, where speed matters above all. Is there much of a difference between the two ? Which is more cleaner? | 0 | c++,python,c | 2011-11-03T00:36:00.000 | 0 | 7,989,036 | Both are compiled to native code and usually with the same compiler and thus the same compiler optimizations available. The performance difference you pay for supporting C++ language constructs should be negligible. Choose the one you prefer / the one that integrates better with Python / the one that integrates better with other native libraries you want to use. | 0 | 146 | false | 0 | 1 | Which is a better extension language for speed optimization for python c or c++ | 7,989,058 |
2 | 4 | 0 | 0 | 0 | 1 | 0 | 0 | Would it be better to use c or c++ for handling computationally intensive tasks in a python program, where speed matters above all. Is there much of a difference between the two ? Which is more cleaner? | 0 | c++,python,c | 2011-11-03T00:36:00.000 | 0 | 7,989,036 | The runtime performance you gain from using C++ versus C is negligible. In terms of integrating the code with your Python program (and most other languages) it is almost always easier to do with C. In fact if you're using ctypes to load and run the code (which I'd recommend), you still need to write a C interface around the C++ library. | 0 | 146 | false | 0 | 1 | Which is a better extension language for speed optimization for python c or c++ | 7,989,440 |
1 | 1 | 0 | 3 | 2 | 0 | 1.2 | 0 | In mechanize, we have :
method : set_cookiejar()
But why do we need a cookie jar anyway, when mechanize is said to have automatic cookie handling?
Please help ! | 0 | python,browser,mechanize,web-scraping | 2011-11-03T04:36:00.000 | 0 | 7,990,321 | You don't need one -- if you don't specify one, Mechanize will just handle it. You might want to use cookies you have already stored in a jar, or save cookies in a jar for use with other scripts, so Mechanize lets you specify one. | 0 | 486 | true | 0 | 1 | Python - Mechanize : Why does it need CookieJar? | 7,990,338 |
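The jar mechanize accepts is just one from cookielib (http.cookiejar on Python 3), and persistence is the main reason to pass your own. A stdlib sketch of the save/reload round trip, runnable without mechanize; in mechanize itself you would wire the jar in with br.set_cookiejar(jar):

```python
import http.cookiejar  # "cookielib" on Python 2, which mechanize builds on
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "cookies.lwp")

# Script 1: browse with this jar (e.g. br.set_cookiejar(jar)), then persist it.
jar = http.cookiejar.LWPCookieJar(path)
jar.save()          # here the jar is empty, so only the file header is written

# Script 2 (or a later run): reload the same cookies and keep the session.
jar2 = http.cookiejar.LWPCookieJar(path)
jar2.load()
print(os.path.exists(path), len(jar2))
```

That round trip is what lets a login from one script be reused by another.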
I want to connect via PuTTY and do a few steps:
Log in through PuTTY
Type a few commands to bring down the server
Traverse to the particular path
Remove the file from the directory
Start the server again
I need to write the code on Windows, but my server is on Linux.
How shall I proceed?
Thanks in advance | 0 | python,putty | 2011-11-03T07:36:00.000 | 1 | 7,991,529 | you can use code similar to:
command = "plink.exe -ssh username@" + hostname + " -pw password -batch \"export DISPLAY='" + hostname + "/unix:0.0' ; "
which will open an SSH connection to the desired hostname using the given username and password
shutdown:
command += "sudo /sbin/halt\""
reboot:
command += "sudo /sbin/reboot\""
add your other commands using the same method as above,
run the command with:
import subprocess
pid = subprocess.Popen(command).pid
As pointed out by Tadeck, this will only work on a Windows machine attempting to connect to a Linux machine. | 0 | 59,795 | false | 0 | 1 | Connect to putty and type few command | 7,991,590
2 | 4 | 0 | 0 | 0 | 1 | 0 | 0 | I have a program in python that includes a class that takes a function as an argument to the __init__ method. This function is stored as an attribute and used in various places within the class. The functions passed in can be quite varied, and passing in a key and then selecting from a set of predefined functions would not give the same degree of flexibility.
Now, apologies if a long list of questions like this is not cool, but...
Is there a standard way to achieve this in a language where functions aren't first-class objects?
Do blocks, like in smalltalk or objective-C, count as functions in this respect?
Would blocks be the best way to do this in those languages?
What if there are no blocks?
Could you add a new method at runtime?
In which languages would this be possible (and easy)?
Or would it be better to create an object with a single method that performs the desired operation?
What if I wanted to pass lots of functions, would I create lots of singleton objects?
Would this be considered a more object oriented approach?
Would anyone consider doing this in python, where functions are first class objects? | 0 | python,oop,first-class-functions | 2011-11-03T12:22:00.000 | 0 | 7,994,777 | In Smalltalk you'd mostly be using blocks. You can also create classes and instances at runtime. | 0 | 163 | false | 0 | 1 | What is the equivalent of passing functions as arguments using an object oriented approach | 8,043,889 |
2 | 4 | 0 | 1 | 0 | 1 | 1.2 | 0 | I have a program in python that includes a class that takes a function as an argument to the __init__ method. This function is stored as an attribute and used in various places within the class. The functions passed in can be quite varied, and passing in a key and then selecting from a set of predefined functions would not give the same degree of flexibility.
Now, apologies if a long list of questions like this is not cool, but...
Is there a standard way to achieve this in a language where functions aren't first-class objects?
Do blocks, like in smalltalk or objective-C, count as functions in this respect?
Would blocks be the best way to do this in those languages?
What if there are no blocks?
Could you add a new method at runtime?
In which languages would this be possible (and easy)?
Or would it be better to create an object with a single method that performs the desired operation?
What if I wanted to pass lots of functions, would I create lots of singleton objects?
Would this be considered a more object oriented approach?
Would anyone consider doing this in python, where functions are first class objects? | 0 | python,oop,first-class-functions | 2011-11-03T12:22:00.000 | 0 | 7,994,777 | I don't understand what you mean by "equivalent... using an object oriented approach". In Python, since functions are (as you say) first-class objects, how is it not "object-oriented" to pass functions as arguments?
a standard way to achieve this in a language where functions aren't first class objects?
Only to the extent that there is a standard way of functions failing to be first-class objects, I would say.
In C++, it is common to create another class, often called a functor or functionoid, which defines an overload for operator(), allowing instances to be used like functions syntactically. However, it's also often possible to get by with plain old function-pointers. Neither the pointer nor the pointed-at function is a first-class object, but the interface is rich enough.
This meshes well with "ad-hoc polymorphism" achieved through templates; you can write functions that don't actually care whether you pass an instance of a class or a function pointer.
Similarly, in Python, you can make objects register as callable by defining a __call__ method for the class.
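A sketch of that last point, the Python analogue of a C++ functor:

```python
class Scaler:
    """A 'functor': C++'s operator() overload becomes __call__ in Python."""

    def __init__(self, factor):
        self.factor = factor

    def __call__(self, x):
        return x * self.factor

double = Scaler(2)     # an object carrying state...
print(double(21))      # ...invoked with plain call syntax, like a function
print(callable(double))
```

Anything expecting "a function" can accept such an instance unchanged, which is the single-method-object approach from the question with nicer syntax.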
Do blocks, like in smalltalk or objective-C, count as functions in this respect?
I would say they do. At least as much as lambdas count as functions in Python, and actually more so because they aren't crippled the way Python's lambdas are.
Would blocks be the best way to do this in those languages?
It depends on what you need.
Could you add a new method at runtime? In which languages would this be possible (and easy)?
Languages that offer introspection and runtime access to their own compiler. Python qualifies.
However, there is nothing about the problem, as presented so far, which suggests a need to jump through such hoops. Of course, some languages have more required boilerplate than others for a new class.
Or would it be better to create an object with a single method that performs the desired operation?
That is pretty standard.
What if I wanted to pass lots of functions, would I create lots of singleton objects?
You say this as if you might somehow accidentally create more than one instance of the class if you don't write tons of boilerplate in an attempt to prevent yourself from doing so.
Would this be considered a more object oriented approach?
Again, I can't fathom your understanding of the term "object-oriented". It doesn't mean "creating lots of objects".
Would anyone consider doing this in python, where functions are first class objects?
Not without a need for the extra things that a class can do and a function can't. With duck typing, why on earth would you bother? | 0 | 163 | true | 0 | 1 | What is the equivalent of passing functions as arguments using an object oriented approach | 7,995,586 |
I would like to write a hook for Mercurial to do the following, and am struggling to get going:
Run on central repo, and execute when changeset(s) are pushed (I think I should use the "input" or "changegroup" hook)
Search each commit message for a string with the format "issue:[0-9]*"
If the string is found, call a web service, and provide the issue number, commit message, and a list of files that were changed
So, just for starters, how can I get the commit message for each commit from the "input" or "changegroup" hook? Any advice beyond this on how to achieve the other points would also be appreciated.
Thanks for any help. | 0 | python,mercurial,hook,mercurial-hook | 2011-11-03T18:47:00.000 | 0 | 8,000,280 | changegroup hook is called once per push. If you want to analyse each changeset, then you want incoming hook (there's no input hook AFAIK) — it'll be called for each changeset, with ID in HG_NODE environment variable. You can get the commit message with e.g. hg log -r $HG_NODE --template '{desc}' or via the API. | 0 | 817 | false | 0 | 1 | How to access commit message from Mercurial Input or Changeset hook | 8,000,347 |
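A sketch of an in-process hook along those lines. The hgrc wiring shown in the comment and the ctx attribute names follow Mercurial's internal API as the answer describes, but only the message-parsing part is exercised here; everything else is an assumption:

```python
import re

# Hypothetical wiring in hgrc:
#   [hooks]
#   incoming.issues = python:myhooks.find_issue
ISSUE_RE = re.compile(r"issue:(\d+)")

def extract_issue(message):
    """Return the issue number referenced in a commit message, or None."""
    m = ISSUE_RE.search(message)
    return m.group(1) if m else None

def find_issue(ui, repo, node, **kwargs):
    ctx = repo[node]                     # the changeset named by $HG_NODE
    issue = extract_issue(ctx.description())
    if issue:
        # call the web service with issue, ctx.description(), ctx.files()
        pass

print(extract_issue("Fix login crash, issue:4321"))
```

The external-script alternative from the answer would instead read HG_NODE from the environment and shell out to hg log -r $HG_NODE --template '{desc}'.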
I've been an Eclipse user for the last 3 years or more. I do Java EE (and Spring) development in it and so far I've done 90% of my tasks without having to touch the mouse. Typically my Eclipse setup is as follows:
Subclipse (or alternatively I use command line)
m2clipse (Maven Eclipse plugin)
Data Source Explorer (dealing with SQL)
The typical Eclipse activities I do (and would like to transfer that to Vim/Emacs) are (this is for multi-module/multi-projects/multi-folder source code):
Refactor (rename method throughout the whole "open project")
Jump to class implementation
Search for all usage of a particular class or method
Updating dependencies (3rd party JARs) via maven pom.xml
Jump to the 3rd party library implementation (maven can download the source.jar if local repository does not have it, eclipse will bring me to the actual Java code for let say, Hibernate entity manager implementation).
Write and run unit-test
All of the above activities would not require me to use the mouse. There are a few activities where I would need to use the mouse a little, such as a global file search.
Lately I've been wanting to try development using VMs. The idea here is to create a barebone VM (let's say to use Ubuntu Server) and start coding there or use Putty/SSH.
I have a MacBook Pro 13" which would benefit from using Vim/Emacs or any lightweight editor.
There are 2 major goals:
Mobility (as in, travelling and coding)
VM as development environment
Tools I'd like to use are as follow:
Linux
Ruby, Python, PHP (and occasionally maybe even Java but definitely not Microsoft .NET)
Any RDBMS
Any build/dependency system
Unit-testing framework
What would you recommend: VIM? Emacs? Others? What about other tools? Gnu Screen, ctags, etc.
Help me build my dream environment: lightweight, productive, easily replicable :)
Thanks! | 0 | python,ruby,linux,vim,emacs | 2011-11-03T20:24:00.000 | 1 | 8,001,384 | I am an Emacs guy (using vi only to edit configuration files under /etc). I think that with Emacs, you should start it at most daily (and it is very different with vim), and you should configure it in your .emacs file. For example, I compile using the F12 key, with (global-set-key [f12] 'recompile) in my .emacs. | 0 | 1,355 | false | 1 | 1 | How to become productive using Vim/Emacs | 8,001,512 |
I've been an Eclipse user for the last 3 years or more. I do Java EE (and Spring) development in it and so far I've done 90% of my tasks without having to touch the mouse. Typically my Eclipse setup is as follows:
Subclipse (or alternatively I use command line)
m2clipse (Maven Eclipse plugin)
Data Source Explorer (dealing with SQL)
The typical Eclipse activities I do (and would like to transfer that to Vim/Emacs) are (this is for multi-module/multi-projects/multi-folder source code):
Refactor (rename method throughout the whole "open project")
Jump to class implementation
Search for all usage of a particular class or method
Updating dependencies (3rd party JARs) via maven pom.xml
Jump to the 3rd party library implementation (maven can download the source.jar if local repository does not have it, eclipse will bring me to the actual Java code for let say, Hibernate entity manager implementation).
Write and run unit-test
All of the above activities would not require me to use the mouse. There are a few activities where I would need to use the mouse a little, such as a global file search.
Lately I've been wanting to try development using VMs. The idea here is to create a barebone VM (let's say to use Ubuntu Server) and start coding there or use Putty/SSH.
I have a MacBook Pro 13" which would benefit from using Vim/Emacs or any lightweight editor.
There are 2 major goals:
Mobility (as in, travelling and coding)
VM as development environment
Tools I'd like to use are as follow:
Linux
Ruby, Python, PHP (and occasionally maybe even Java but definitely not Microsoft .NET)
Any RDBMS
Any build/dependency system
Unit-testing framework
What would you recommend: VIM? Emacs? Others? What about other tools? Gnu Screen, ctags, etc.
Help me build my dream environment: lightweight, productive, easily replicable :)
Thanks! | 0 | python,ruby,linux,vim,emacs | 2011-11-03T20:24:00.000 | 1 | 8,001,384 | If you ask a question which involves "vim OR emacs" you will never get an useful answer. It's a religious question, which does not have a correct answer! That said, you should clearly use Vim! ;-)
But seriously: Vim is much more lightweight, so it might better suit the scenario you are describing. Vim can be scripted in different languages and you can find many useful scripts at www.vim.org.
Emacs is "heavier", but Lisp is a very powerful scripting language. So Emacs is much more of a general tool than just a text editor. IDE functionality (like project management) is something I miss from time to time in Vim. There are some scripts to do that, but I don't like them. If you need that, I would go for Emacs. | 0 | 1,355 | false | 1 | 1 | How to become productive using Vim/Emacs | 8,001,471
I've been an Eclipse user for the last 3 years or more. I do Java EE (and Spring) development in it and so far I've done 90% of my tasks without having to touch the mouse. Typically my Eclipse setup is as follows:
Subclipse (or alternatively I use command line)
m2clipse (Maven Eclipse plugin)
Data Source Explorer (dealing with SQL)
The typical Eclipse activities I do (and would like to transfer that to Vim/Emacs) are (this is for multi-module/multi-projects/multi-folder source code):
Refactor (rename method throughout the whole "open project")
Jump to class implementation
Search for all usage of a particular class or method
Updating dependencies (3rd party JARs) via maven pom.xml
Jump to the 3rd party library implementation (maven can download the source.jar if local repository does not have it, eclipse will bring me to the actual Java code for let say, Hibernate entity manager implementation).
Write and run unit-test
All of the above activities would not require me to use the mouse. There are a few activities where I would need to use the mouse a little, such as a global file search.
Lately I've been wanting to try development using VMs. The idea here is to create a barebone VM (let's say to use Ubuntu Server) and start coding there or use Putty/SSH.
I have a MacBook Pro 13" which would benefit from using Vim/Emacs or any lightweight editor.
There are 2 major goals:
Mobility (as in, travelling and coding)
VM as development environment
Tools I'd like to use are as follow:
Linux
Ruby, Python, PHP (and occasionally maybe even Java but definitely not Microsoft .NET)
Any RDBMS
Any build/dependency system
Unit-testing framework
What would you recommend: VIM? Emacs? Others? What about other tools? Gnu Screen, ctags, etc.
Help me build my dream environment: lightweight, productive, easily replicable :)
Thanks! | 0 | python,ruby,linux,vim,emacs | 2011-11-03T20:24:00.000 | 1 | 8,001,384 | Either of those text editors will have a learning curve. That being said I have successfully used emacs to do the following tasks that are in line w/ what you've asked:
Write PL/SQL and execute it on an oracle DB all from the editor.
Write, Compile, Run java.
Edit pom files.
Keep a pretty good TODO list in org mode.
You can launch a shell in emacs, and that feature alone does MOST of what you've asked for (SVN, make/ant/mvn/etc).
If you're jumping into one of these editors and hoping for pretty Eclipse and Visual Studio features such as the green JUnit bar, I'm not sure that they exist. Eclipse's refactor tool works pretty well too, and I don't know what is possible in Emacs. Though with Emacs, I've found that someone has typically written some extension to do what I want; you just need to be able to find it and learn how to use it. I'm an Emacs neophyte at best, but in scaled-down projects I've found it to be pretty efficient and I don't have to take my hands off the keyboard very much.
Disclaimer: Java EE/Spring Eclipse developer by day who messes around with Lua and the LÖVE framework using Emacs at night. | 0 | 1,355 | false | 1 | 1 | How to become productive using Vim/Emacs | 8,001,983
3 | 4 | 0 | 2 | 2 | 0 | 0.099668 | 0 | If you are in the middle of a TDD iteration, how do you know which tests fail because the existing code is genuinely incorrect and which fail because either the test itself or the features haven't been implemented yet? Please don't say, "you just don't care, because you have to fix both." I'm ready to move past that mindset.
My general practice for writing tests is as follows:
First, I architect the general structure of the test suite, in whole or in part. That is - I go through and write only the names of tests, reminding me of the features that I intend to implement. I typically (at least in python) simply start with each testing having only one line: self.fail(). This way, I can ride a stream of consciousness through listing every feature I think I will want to test - say, 11 tests at a time.
Second, I pick one test and actually write the test logic.
Third, I run the test runner and see 11 failures - 10 that simply self.fail() and 1 that is a genuine AssertionError.
Fourth, I write the code that causes my test to pass.
Fifth, I run the test runner and see 1 pass and 10 failures.
Sixth, I go to step 2.
Ideally, instead of seeing tests in terms of passes, failures, and exceptions, I'd like to have a fourth possibility: NotImplemented.
What's the best practice here? | 0 | python,tdd | 2011-11-05T01:28:00.000 | 0 | 8,017,514 | I use a piece of paper to create a test list (scratchpad to keep track of tests so that I don't miss out on them). I hope you're not writing all the failing tests at one go (because that can cause some amount of thrashing as new knowledge comes in with each Red-Green-Refactor cycle).
To mark a test as TO-DO or Not implemented, you could also mark the test with the equivalent of a [Ignore("PENDING")] or [Ignore("TODO")]. NUnit for example would show such tests as yellow instead of failed. So Red implies test failure, Yellow implies TODO. | 0 | 1,102 | false | 0 | 1 | TDD practice: Distinguishing between genuine failures and unimplemented features | 8,017,802
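In Python's unittest (2.7+, the asker's environment), the closest equivalent is the skip decorator, which adds a fourth reported status alongside pass/fail/error. A minimal sketch, with made-up test names:

```python
import io
import unittest

class TodoListTests(unittest.TestCase):
    @unittest.skip("TODO: feature not implemented yet")
    def test_pending_feature(self):
        self.fail()  # never executed; reported as skipped, not failed

    def test_implemented_feature(self):
        self.assertEqual(1 + 1, 2)  # a genuine, implemented test

def run_suite():
    """Run the suite quietly and return the unittest result object."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TodoListTests)
    return unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```

The runner then reports something like "Ran 2 tests ... OK (skipped=1)", so the TODO placeholders never drown out genuine AssertionErrors.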
3 | 4 | 0 | 1 | 2 | 0 | 0.049958 | 0 | If you are in the middle of a TDD iteration, how do you know which tests fail because the existing code is genuinely incorrect and which fail because either the test itself or the features haven't been implemented yet? Please don't say, "you just don't care, because you have to fix both." I'm ready to move past that mindset.
My general practice for writing tests is as follows:
First, I architect the general structure of the test suite, in whole or in part. That is - I go through and write only the names of tests, reminding me of the features that I intend to implement. I typically (at least in Python) simply start with each test having only one line: self.fail(). This way, I can ride a stream of consciousness through listing every feature I think I will want to test - say, 11 tests at a time.
Second, I pick one test and actually write the test logic.
Third, I run the test runner and see 11 failures - 10 that simply self.fail() and 1 that is a genuine AssertionError.
Fourth, I write the code that causes my test to pass.
Fifth, I run the test runner and see 1 pass and 10 failures.
Sixth, I go to step 2.
Ideally, instead of seeing tests in terms of passes, failures, and exceptions, I'd like to have a fourth possibility: NotImplemented.
What's the best practice here? | 0 | python,tdd | 2011-11-05T01:28:00.000 | 0 | 8,017,514 | Most projects would have a hierarchy (e.g. project->package->module->class) and if you can selectively run tests for any item on any of the levels or if your report covers these parts in detail you can see the statuses quite clearly. Most of the time, when an entire package or class fails, it's because it hasn't been implemented.
Also, in many test frameworks you can disable individual test cases by removing the annotation/decorator from the method/function that performs the test, or by renaming it. This has the disadvantage of not showing you the implementation progress, though if you decide on a fixed and specific prefix you can probably grep that info out of your test source tree quite easily.
Having said that, I would welcome a test framework that does make this distinction and has NOT_IMPLEMENTED in addition to the more standard test case status codes like PASS, WARNING and FAILED. I guess some might have it. | 0 | 1,102 | false | 0 | 1 | TDD practice: Distinguishing between genuine failures and unimplemented features | 8,017,731 |
3 | 4 | 0 | 0 | 2 | 0 | 0 | 0 | If you are in the middle of a TDD iteration, how do you know which tests fail because the existing code is genuinely incorrect and which fail because either the test itself or the features haven't been implemented yet? Please don't say, "you just don't care, because you have to fix both." I'm ready to move past that mindset.
My general practice for writing tests is as follows:
First, I architect the general structure of the test suite, in whole or in part. That is - I go through and write only the names of tests, reminding me of the features that I intend to implement. I typically (at least in Python) simply start with each test having only one line: self.fail(). This way, I can ride a stream of consciousness through listing every feature I think I will want to test - say, 11 tests at a time.
Second, I pick one test and actually write the test logic.
Third, I run the test runner and see 11 failures - 10 that simply self.fail() and 1 that is a genuine AssertionError.
Fourth, I write the code that causes my test to pass.
Fifth, I run the test runner and see 1 pass and 10 failures.
Sixth, I go to step 2.
Ideally, instead of seeing tests in terms of passes, failures, and exceptions, I'd like to have a fourth possibility: NotImplemented.
What's the best practice here? | 0 | python,tdd | 2011-11-05T01:28:00.000 | 0 | 8,017,514 | I also now realize that the unittest.expectedFailure decorator accomplishes functionality congruent with my needs. I had always thought that this decorator was more for tests that require certain environmental conditions that might not exist in the production environment where the test is being run, but it actually makes sense in this scenario too. | 0 | 1,102 | false | 0 | 1 | TDD practice: Distinguishing between genuine failures and unimplemented features | 8,021,286 |
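A sketch of that decorator in use: the failing body is tallied under expectedFailures instead of failures, so the run still counts as successful.

```python
import io
import unittest

class PendingFeatureTests(unittest.TestCase):
    @unittest.expectedFailure
    def test_not_implemented_yet(self):
        self.fail("TODO")  # recorded as an expected failure, not a real one

    def test_real_feature(self):
        self.assertTrue(True)

def run_pending_suite():
    """Run the suite quietly and return the unittest result object."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(PendingFeatureTests)
    return unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```

If the decorated test ever starts passing, unittest flags it as an "unexpected success", which is a handy reminder to remove the decorator.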
2 | 3 | 0 | 3 | 1 | 0 | 0.197375 | 0 | I am looking for an algorithm that is implemented in C, C++, Python or Java that calculates the set of winning coalitions for n agents where each agent has a different amount of votes. I would appreciate any hints. Thanks! | 0 | java,c++,python,c,algorithm | 2011-11-05T08:59:00.000 | 0 | 8,019,172 | In other words, you have an array X[1..n], and want to have all the subsets of it for which sum(subset) >= 1/2 * sum(X), right?
That probably means the whole set qualifies.
After that, you can drop any element k having X[k] < 1/2 * sum(X), and every such coalition will be fine as an answer, too.
After that, you can proceed dropping elements one by one, stopping when you've reached half of the sum.
This is obviously not the most effective solution: you don't want to drop k1=1,k2=2 if you've already tried k1=2,k2=1—but I believe you can handle this. | 1 | 373 | false | 0 | 1 | Coalition Search Algorithm | 8,019,217 |
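The condition above, sum(subset) >= 1/2 * sum(X), can be checked by plain brute force for a small number of agents. A sketch (exponential in n, for illustration only):

```python
from itertools import combinations

def winning_coalitions(votes, quota=None):
    """Enumerate every index subset whose votes reach the quota
    (defaulting to half the total, as in the condition above)."""
    total = sum(votes)
    if quota is None:
        quota = total / 2.0
    result = []
    for size in range(1, len(votes) + 1):
        for combo in combinations(range(len(votes)), size):
            if sum(votes[i] for i in combo) >= quota:
                result.append(combo)
    return result
```

For [4, 3, 2, 1] (total 10, quota 5) this yields nine winning coalitions, which also serves as a check for any cleverer enumeration.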
2 | 3 | 0 | 0 | 1 | 0 | 0 | 0 | I am looking for an algorithm that is implemented in C, C++, Python or Java that calculates the set of winning coalitions for n agents where each agent has a different amount of votes. I would appreciate any hints. Thanks! | 0 | java,c++,python,c,algorithm | 2011-11-05T08:59:00.000 | 0 | 8,019,172 | Arrange the number of votes for each of the agents into an array, and compute the partial sums from the right, so that you can find out SUM_i = k to n Votes[i] just by looking up the partial sum.
Then do a backtrack search over all possible subsets of {1, 2, ...n}. At any point in the backtrack you have accepted some subset of agents 0..i - 1, and you know from the partial sum the maximum possible number of votes available from other agents. So you can look to see if the current subset could be extended with agents number >= i to form a winning coalition, and discard it if not.
This gives you a backtrack search where you consider a subset only if it is already a winning coalition, or you will extend it to become a winning coalition. So I think the cost of the backtrack search is the sum of the sizes of the winning coalitions you discover, which seems close to optimal. I would be tempted to rearrange the agents before running this so that you deal with the agents with most votes first, but at the moment I don't see an argument that says you gain much from that.
Actually - taking a tip from Alf's answer - life is a lot easier if you start from the full set of agents, and then use backtrack search to decide which agents to discard. Then you don't need an array of partial sums, and you only generate subsets you want anyway. And yes, there is no need to order agents in advance. | 1 | 373 | false | 0 | 1 | Coalition Search Algorithm | 8,019,235 |
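A runnable sketch of that backtracking scheme, using suffix sums for the pruning; it assumes a quota of half the total votes and emits each winning coalition exactly once:

```python
def winning_coalitions_bt(votes, quota=None):
    """Backtracking enumeration with partial-sum pruning: abandon a
    branch as soon as even taking every remaining agent could not
    reach the quota (half the total votes by default)."""
    total = sum(votes)
    if quota is None:
        quota = total / 2.0
    n = len(votes)
    # suffix[i] = votes still available from agents i..n-1
    suffix = [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + votes[i]
    found = []

    def extend(i, chosen, current):
        if current + suffix[i] < quota:
            return  # prune: no extension of this prefix can win
        if i == n:
            found.append(tuple(chosen))  # complete, winning coalition
            return
        chosen.append(i)                  # branch 1: include agent i
        extend(i + 1, chosen, current + votes[i])
        chosen.pop()                      # branch 2: exclude agent i
        extend(i + 1, chosen, current)

    extend(0, [], 0)
    return found
```

Every prefix the search keeps alive can still be extended to a winning coalition, which is the pruning property described above.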
2 | 2 | 0 | 4 | 8 | 0 | 1.2 | 0 | I have a server running nginx + UWSGI + python. UWSGI is running as a daemon with the flag set: --daemonize /var/log/uwsgi.log which logs all application errors.
I've noticed that if I use a Python print statement it will write to the log, but only on an error. The standard Python logging library doesn't seem to affect the log in any situation.
How do I point the python logging libraries to use the UWSGI log? | 0 | python,logging,nginx,uwsgi | 2011-11-05T18:58:00.000 | 1 | 8,022,495 | use logging.StreamHandler as logging handler | 0 | 11,471 | true | 0 | 1 | How to write to log in python with nginx + uwsgi | 8,022,616 |
2 | 2 | 0 | 5 | 8 | 0 | 0.462117 | 0 | I have a server running nginx + UWSGI + python. UWSGI is running as a daemon with the flag set: --daemonize /var/log/uwsgi.log which logs all application errors.
I've noticed that if I use a Python print statement it will write to the log, but only on an error. The standard Python logging library doesn't seem to affect the log in any situation.
How do I point the python logging libraries to use the UWSGI log? | 0 | python,logging,nginx,uwsgi | 2011-11-05T18:58:00.000 | 1 | 8,022,495 | uWSGI is a wsgi server, and as such passes a stream in the environ dict passed to the application callable it hosts, using the key wsgi.errors. If you are writing a bare wsgi app, then writing to that stream should do the job. If you are using a framework that abstracts the wsgi interface out (and by the sound of it, you are, print would ordinarily write to sys.stdout, which gets closed on a daemonized process and would never make it to any log file), you will probably need to look into how that framework handles error logging. | 0 | 11,471 | false | 0 | 1 | How to write to log in python with nginx + uwsgi | 8,022,729 |
2 | 18 | 0 | -4 | 240 | 1 | -1 | 0 | Is there a good way to check a form input using regex to make sure it is a proper style email address? Been searching since last night and everybody that has answered peoples questions regarding this topic also seems to have problems with it if it is a subdomained email address. | 0 | python,regex,email-validation,email-address | 2011-11-05T19:05:00.000 | 0 | 8,022,530 | The only really accurate way of distinguishing real, valid email addresses from invalid ones is to send mail to it. What counts as an email is surprisingly convoluted ("John Doe" <[email protected]>" actually is a valid email address), and you most likely want the email address to actually send mail to it later. After it passes some basic sanity checks (such as in Thomas's answer, has an @ and at least one . after the @), you should probably just send an email verification letter to the address, and wait for the user to follow a link embedded in the message to confirm that the email was valid. | 0 | 384,064 | false | 0 | 1 | How to check for valid email address? | 8,022,687 |
2 | 18 | 0 | 0 | 240 | 1 | 0 | 0 | Is there a good way to check a form input using regex to make sure it is a proper style email address? Been searching since last night and everybody that has answered peoples questions regarding this topic also seems to have problems with it if it is a subdomained email address. | 0 | python,regex,email-validation,email-address | 2011-11-05T19:05:00.000 | 0 | 8,022,530 | Use this filter mask on email input:
emailMask: /[\w.\-@'"!#$%&'*+/=?^_`{|}~]/i | 0 | 384,064 | false | 0 | 1 | How to check for valid email address? | 54,658,606
1 | 2 | 0 | 0 | 3 | 0 | 0 | 0 | I am trying out Mechanize to make some routine simpler. I have managed to bypass that error by using br.set_handle_robots(False). There are talks about how ethical it's to use it. What I wonder about is where this error is generated, on my side, or on server side? I mean does Mechanize throw the exception when it sees some robots.txt rule or does server decline the request when it detects that I use an automation tool? | 0 | python,mechanize | 2011-11-07T09:37:00.000 | 0 | 8,034,767 | The server blocks your activity with such response.
Is it your site? If not, follow the rules:
Obey robots.txt file
Put a delay between request, even if robots.txt doesn't require it.
Provide some contact information (e-mail or page URL) in the User-Agent header.
Otherwise be ready for the site owner to block you based on User-Agent, IP, or other information he thinks distinguishes you from legitimate users. | 0 | 997 | false | 1 | 1 | On what side is 'HTTP Error 403: request disallowed by robots.txt' generated? | 8,035,293
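You can reproduce the check mechanize applies locally with the stdlib parser (urllib.robotparser in Python 3; the module was simply called robotparser in the Python 2 era this question dates from). A sketch:

```python
import urllib.robotparser  # "robotparser" in Python 2

def allowed(robots_lines, user_agent, url):
    """Apply the same robots.txt test mechanize performs before it
    raises the 403, so you can see which rule is tripping you up."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_lines)
    return rp.can_fetch(user_agent, url)
```

The User-Agent string in the example below follows the etiquette above by embedding contact information.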
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 | I've used easy_install to get one or two modules, then I used pip to install the Twitter module.
However, the newer version of Python I downloaded can't see these modules; only the built-in OS X version can.
Also, I am now unable to download NLTK, which I need for some examples I'm working through in a really good book called "Mining the Social Web".
Any thoughts? | 0 | python,macos,python-3.x | 2011-11-07T10:50:00.000 | 0 | 8,035,412 | Install the packages with the binary from your version of python.
So, for example, if your version is in /usr/local/bin, then installing would be one of:
/usr/local/bin/python setup.py ...
/usr/local/bin/easy_install ...
/usr/local/bin/pip install ... | 0 | 840 | false | 0 | 1 | Install Python modules on new Python version | 8,035,989 |
1 | 2 | 0 | 0 | 2 | 0 | 1.2 | 0 | I have some emails in txt format that have been forwarded multiple times.
I want to extract the content/the main body of the mail. This should be at the last position in the hierarchy... right? (Someone point this out if I'm wrong.)
The email module doesn't give me a way to extract the content. If I make a message object, the object doesn't have a field for the content of the body.
Any idea on how to do it? Any module that exists for this, or any particular way you can think of, except the most naive one, of course, of starting from the back of the text file and looking till you find the header.
If there is an easy or straightforward way/module in any other language (I doubt it), please let me know that as well!
Any help is much appreciated! | 0 | python,parsing,email | 2011-11-07T19:59:00.000 | 0 | 8,041,852 | The email module doesn't give me a way to extract the content. if I make a message object, the object doesn't have a field for the content of the body.
Of course it does. Have a look at the Python documentation and examples. In particular, look at the walk and get_payload methods. | 0 | 1,419 | true | 0 | 1 | Forwarded Email parsing in Python/Any other language? | 8,041,910
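A sketch using those two methods: walk over the MIME tree and get_payload to pull out the text/plain bodies (the helper name is made up):

```python
import email

def plain_text_parts(raw_message):
    """Parse a raw message and collect its text/plain bodies; for a
    forwarded chain these include the quoted history, which you can
    then split on forwarding markers."""
    msg = email.message_from_string(raw_message)
    parts = []
    for part in msg.walk():
        if part.get_content_type() == "text/plain":
            payload = part.get_payload(decode=True)
            parts.append(payload.decode(
                part.get_content_charset() or "ascii", "replace"))
    return parts
```

walk() also visits nested message/rfc822 attachments, which is how forwarded-as-attachment mails surface.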
1 | 5 | 0 | 3 | 12 | 0 | 0.119427 | 0 | I first learned web programming with php a while back. It has some features that I find very helpful, but the overall language is not something I enjoy, just as a matter of personal preference. I am wondering what alternatives I could use to provide similar functionality using a different underlying programming language (Python? Ruby?).
What I am looking for:
general purpose programming capability
in-line server-side code embedded in HTML (i.e. I want to be able to make my documents pure HTML if desired, rather than demanding special syntax even where I don't want dynamic content)
access to request parameters
ability to send headers, set cookies, etc
Preferably:
does not require a separate server process
easy to connect with Apache
Does anyone have any suggestions?
One thing I tried to do was embedded Ruby (erb) through CGI. This looked like a good fit on paper. Unfortunately, I was not able to get it to work, because I was following a few different guides and the result of combining them did not work out. At any rate, it seems this would not allow me to set arbitrary headers (and more importantly, use sessions and cookies).
Note: I'm not looking for a full web framework at the moment. Just relatively small amounts of dynamic content among otherwise HTML pages.
Thanks! | 0 | php,python,ruby,web-applications | 2011-11-08T03:32:00.000 | 0 | 8,045,630 | I'd say given your requirement
Just relatively small amounts of dynamic content among otherwise HTML pages.
then, PHP is going to be hard to beat for getting going quickly and a minimum of learning overhead. It avoids all the CGI issues that you would otherwise have to deal with, and is in fact its own templating language. That's why so many get started with it. Once you get past the point of your goal of mixing a little programming logic into HTML pages, and developing more flexible, maintainable and testable applications, then frameworks such as Rails, Django and others will be worth your time to learn. | 0 | 7,993 | false | 1 | 1 | Alternatives to php for in-line web programming? | 8,045,724 |
1 | 2 | 0 | 2 | 2 | 1 | 0.197375 | 0 | In the Python IDLE editor, when we save a script, it prompts with the Save dialog. If we specify the filename as "Test", the file will be saved without an extension, as "Test" and not "Test.py".
Is it possible to save the script with .py extension automatically (as Test.py)? | 0 | python,save,python-idle | 2011-11-08T10:03:00.000 | 0 | 8,048,702 | Unfortunately, IDLE doesn't and can't add the .py extension automatically; you will just have to get into the habit of adding it yourself, or use another IDE like Eclipse or Komodo that will do it for you. | 0 | 3,171 | false | 0 | 1 | Save Python scripts with .py extension automatically | 8,058,543 |
3 | 3 | 0 | 2 | 4 | 0 | 0.132549 | 0 | I plan to build a photo-sharing site like Flickr/Picasa for photographers, with features most suited for them. As you know, if that venture proves successful, many GB to TB of data transfers take place every day.
This question is not just about scalability of my application as it grows, but also performance. I would like to make an informed decision. I think I'd go with MySQL database, JavaScript/jQuery for client-side scripting, but what server-side language with it, is the question - - PHP, Python, Ruby or something else?
And there are definitely some things to keep in mind when developing an application that needs to scale over a period of time (i.e., scalable coding). If there are any you would like to suggest, what are they?
NOTE: I am specifying "Photo-sharing site" in order to give you an idea of my mission. Otherwise, this question wouldn't look as subjective. Kindly take it that way. | 0 | php,python,ruby,scalability | 2011-11-08T13:25:00.000 | 0 | 8,051,087 | PHP can do it well. Python also can do it using web frameworks like Django or turbogears.
That being said, the language is not an issue as long as it has the web capabilities your post seems to require.
3 | 3 | 0 | 0 | 4 | 0 | 0 | 0 | I plan to build a photo-sharing site like Flickr/Picasa for photographers, with features most suited for them. As you know, if that venture proves successful, many GB to TB of data transfers take place every day.
This question is not just about scalability of my application as it grows, but also performance. I would like to make an informed decision. I think I'd go with MySQL database, JavaScript/jQuery for client-side scripting, but what server-side language with it, is the question - - PHP, Python, Ruby or something else?
And there are definitely some things to keep in mind when developing an application that needs to scale over a period of time (i.e., scalable coding). If there are any you would like to suggest, what are they?
NOTE: I am specifying "Photo-sharing site" in order to give you an idea of my mission. Otherwise, this question wouldn't look as subjective. Kindly take it that way. | 0 | php,python,ruby,scalability | 2011-11-08T13:25:00.000 | 0 | 8,051,087 | I've done Web applications in PHP, ColdFusion, Java, and Ruby, with various frameworks. I find Rails to be the most powerful Web framework I've ever used. Nothing can really equal it, because the power comes from the Ruby language, and no other language (except maybe Smalltalk) can really equal that. That said, as long as you use proper development practice, you should be able to get it done in almost any language.
However, you do not want to use MySQL as a database. PostgreSQL is far more powerful and scalable, and doesn't have MySQL's silly limitations and gotchas. | 0 | 1,102 | false | 1 | 1 | Language best for a Photo-sharing site: PHP, Python, Ruby or something else? | 8,052,193 |
3 | 3 | 0 | 7 | 4 | 0 | 1.2 | 0 | I plan to build a photo-sharing site like Flickr/Picasa for photographers, with features most suited for them. As you know, if that venture proves successful, many GB to TB of data transfers take place every day.
This question is not just about scalability of my application as it grows, but also performance. I would like to make an informed decision. I think I'd go with MySQL database, JavaScript/jQuery for client-side scripting, but what server-side language with it, is the question - - PHP, Python, Ruby or something else?
And there are definitely some things to keep in mind when developing an application that needs to scale over a period of time (i.e., scalable coding). If there are any you would like to suggest, what are they?
NOTE: I am specifying "Photo-sharing site" in order to give you an idea of my mission. Otherwise, this question wouldn't look as subjective. Kindly take it that way. | 0 | php,python,ruby,scalability | 2011-11-08T13:25:00.000 | 0 | 8,051,087 | Any. The language doesn't matter. Ruby-fanatics (especially the RubyOnRails sort) will try and tell you that their language will do everything in only 10 lines and it'll make you dinner and pick the kids up from school. Others will tell you that their language is the most secure, fastest, quickest to develop in, etc. Ignore them.
I love Python and I'd love to recommend it - but seriously, it won't make a difference. Just pick the language you know the best and get writing. So if that's Java, start writing Java. If that's C++, hell, start writing C++.
I don't believe the people who say that [insert language here] is fastest to develop in. It's all about what you find comfortable. Some langauges provide extra functionality but you can always write a library that provides that if you need it - it shouldn't take too long and, chances are, someone has already done it.
Remember: Facebook is written in PHP (though they compile a lot of that PHP to C++ now for speed), MySpace was written in C#/ColdFusion (I believe), Twitter uses Ruby On Rails (though they plan to abandon it apparently), Google uses Java/Go (I think) and LinkedIn uses ASP.net or something I think. My point is - tonnes of services, tonnes of languages and they're all doing ok. Right now, any language will do.
My favourite little phrase is "just build it". Whilst it's a good idea to have a nice architecture and think about performance and scalability - if those things will make you abandon the project half way through, what's the point in bothering? Besides, chances are you'll need to recode a large part of it anyway later on, assuming the project grows. Do you really think Facebook is using the same code it was at the start?
So, in summary, pick whichever language you want. It'll be fine. | 0 | 1,102 | true | 1 | 1 | Language best for a Photo-sharing site: PHP, Python, Ruby or something else? | 8,051,120 |
2 | 2 | 1 | 18 | 25 | 0 | 1.2 | 0 | I have a few functions written in C for a game project. These functions get called quite a lot (about 2000-4000 times per second). The functions are written in C for raw speed.
Now, the easiest way for me to include these functions into Python is to use ctypes. The alternative is to write a C extension to Python around these functions (which takes quite a bit of extra effort). So I wondered, not including the initial loading of the DLL, how big is the overhead of ctypes?
I'm using Python 2.7 (the standard CPython release), and I do not want to use an external library like Cython.
I know this question has been asked before, but I haven't seen much information about the performance comparison between the two options. | 0 | python,c,ctypes,overhead | 2011-11-09T15:21:00.000 | 0 | 8,067,171 | I've compared the performance of a C extension vs. a ctypes wrapper. In my particular test, the difference was about 250x. There were multiple calls into the C library so the ctypes wrapper was also executing Python code. The running time for the C library was very short which made the extra overhead for Python code even more significant. So the ratio will likely be different for you but was significant in my case. | 0 | 7,501 | true | 0 | 1 | ctypes vs C extension | 8,069,179 |
2 | 2 | 1 | 9 | 25 | 0 | 1 | 0 | I have a few functions written in C for a game project. These functions get called quite a lot (about 2000-4000 times per second). The functions are written in C for raw speed.
Now, the easiest way for me to include these functions into Python is to use ctypes. The alternative is to write a C extension to Python around these functions (which takes quite a bit of extra effort). So I wondered, not including the initial loading of the DLL, how big is the overhead of ctypes?
I'm using Python 2.7 (the standard CPython release), and I do not want to use an external library like Cython.
I know this question has been asked before, but I haven't seen much information about the performance comparison between the two options. | 0 | python,c,ctypes,overhead | 2011-11-09T15:21:00.000 | 0 | 8,067,171 | The directly C coded interface has the potential to be much much faster. The bottleneck is the interface from Python to C and marshalling arguments and results may for example involve copying strings or converting Python lists to/from C arrays. If you have a loop that makes several hundred of these calls and some of the data doesn't have to be marshalled separately for each call then all you have to do is recode the loop in C and you may be able to massively reduce the bottleneck. ctypes doesn't give you that option: all you can do is call the existing functions directly.
Of course that all depends on exactly what sort of functions you are calling and what sort of data you are passing around. It may be that you can't reduce the overheads in which case I would still expect ctypes to be slower but perhaps not significantly.
Your best bet would be to put together a sample of your code written each way and benchmark it. Otherwise there are just too many variables for a definitive answer.
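To get a feel for the per-call cost yourself, you can time a trivial libc function through ctypes against the Python builtin. A rough sketch, assuming a Linux-like system where libc is resolvable:

```python
import ctypes
import ctypes.util
import timeit

# Load the C library; on Linux find_library("c") resolves to libc.so.6.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

def time_call_overhead(n=100000):
    """Compare n calls through ctypes with n calls to the builtin;
    the gap is almost entirely argument-marshalling overhead."""
    via_ctypes = timeit.timeit(lambda: libc.abs(-42), number=n)
    via_builtin = timeit.timeit(lambda: abs(-42), number=n)
    return via_ctypes, via_builtin
```

The absolute numbers depend on your machine, but the experiment shows why a hand-written extension that batches work inside C can beat many fine-grained ctypes calls.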
1 | 6 | 0 | 1 | 8 | 1 | 0.033321 | 0 | Is there an equivalent of cons in Python? (any version above 2.5)
If so, is it built in? Or do I need easy_install to get a module? | 0 | python,lisp,cons | 2011-11-08T03:32:00.000 | 0 | 8,073,882 | No. cons is an implementation detail of Lisp-like languages; it doesn't exist in any meaningful sense in Python. | 0 | 8,394 | false | 0 | 1 | LISP cons in python | 8,073,914
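That said, if all you need is the pair-building behavior for an exercise, a two-element tuple is the closest stand-in; the helper names below are chosen for illustration:

```python
# Lisp-style pairs emulated with plain tuples (illustrative only).
def cons(a, b):
    return (a, b)

def car(pair):
    return pair[0]

def cdr(pair):
    return pair[1]

def to_pylist(lst):
    """Unfold a nil-terminated cons chain into a Python list."""
    out = []
    while lst is not None:
        out.append(car(lst))
        lst = cdr(lst)
    return out
```

In idiomatic Python you would simply use a list instead of building chains of pairs.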
1 | 1 | 0 | 3 | 1 | 0 | 1.2 | 1 | When I send credentials using the login method of the Python SMTP library, do they go over the wire encrypted or as plaintext? | 0 | python,security,smtp,credentials | 2011-11-10T02:12:00.000 | 0 | 8,074,227 | They will only be encrypted if you use SMTP with TLS or SSL. | 0 | 232 | true | 0 | 1 | sending an email using python SMTP library credentials security | 8,074,236
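A sketch of keeping them encrypted with the stdlib: upgrade the connection with STARTTLS before calling login (host, port, and credentials here are placeholders, and this is not run against a real server):

```python
import smtplib

def send_securely(host, port, user, password, from_addr, to_addrs, message):
    """Upgrade the connection with STARTTLS before LOGIN so the
    credentials travel over the wire encrypted."""
    server = smtplib.SMTP(host, port)
    try:
        server.ehlo()
        server.starttls()  # everything after this, login included, is encrypted
        server.ehlo()
        server.login(user, password)
        server.sendmail(from_addr, to_addrs, message)
    finally:
        server.quit()
```

Alternatively, smtplib.SMTP_SSL opens the connection encrypted from the start (conventionally port 465) instead of upgrading it.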
2 | 4 | 0 | 1 | 0 | 0 | 0.049958 | 0 | I hope this isn't knocked for being too general, but... I recently had occasion to learn web2py for a final year university project. In this subject teams of four had 8 weeks to design a web app. Ultimately i found that web2py was quite versatile, with it being very easy to get a site up and running fast, a lot of options (janrain etc) - but the end "style" result relied almost entirely on us.
Amongst the other teams, who used other frameworks (each team a different one on the whole), a few of the sites came out with a very slick polished look, without them having to spend much photoshop/css design time and effort. I got the impression that some frameworks are more "friendly" when it came to out of the box design elements (buttons, navigation options, widgets, base css etc) while others aren't.
I have a Python (/C/Java) background, and intend to learn PHP at some point. What frameworks exist out there that provide a base for site design beyond the bare bones? And to emphasise: I have browsed the Python page listing frameworks; I am more interested in the design aspect - even if just to see if my assumption was correct.
You will just have to accept that browser based software does not work that way. You must separately learn CSS. Hopefully, you'll learn to like this method of specifying the appearance of the application but whether you do or not there really isn't any alternative to this approach in the browser. | 0 | 249 | false | 1 | 1 | Web Frameworks with site style inbuilt | 8,082,137 |
2 | 4 | 0 | 1 | 0 | 0 | 0.049958 | 0 | I hope this isn't knocked for being too general, but... I recently had occasion to learn web2py for a final year university project. In this subject teams of four had 8 weeks to design a web app. Ultimately i found that web2py was quite versatile, with it being very easy to get a site up and running fast, a lot of options (janrain etc) - but the end "style" result relied almost entirely on us.
Amongst the other teams, who used other frameworks (each team a different one on the whole), a few of the sites came out with a very slick polished look, without them having to spend much photoshop/css design time and effort. I got the impression that some frameworks are more "friendly" when it came to out of the box design elements (buttons, navigation options, widgets, base css etc) while others aren't.
I have a Python (/C/Java) background, and intend to learn PHP at some point. What frameworks exist out there that provide a base for site design beyond the bare bones? And to emphasise: I have browsed the Python page listing frameworks; I am more interested in the design aspect - even if just to see if my assumption was correct. | 0 | python,css,frameworks | 2011-11-10T10:12:00.000 | 0 | 8,077,886 | So far, what I've seen of the Yii Framework (PHP) is that it can generate a nicely styled initial web application backbone, ready for you to work in, adding your functionality, DBs, user roles, etc., and of course with all the freedom to define your own look and feel by defining HTML views, CSS, JS, etc.
I'm about to start learning and using a PHP Framework for my next project. I have never yet used a Framework but I have several years using PHP/MySQL.
For some weeks I have researched PHP frameworks; there are CakePHP, CodeIgniter, Zend, Yii, Kohana, etc., and I'm leaning toward Yii. Even though CodeIgniter seems to have more followers, I'm stubborn about checking out Yii because of the high praise it's getting, especially for its build quality and performance.
I wouldn't know how good the other PHP frameworks are in the "default visual style" area. | 0 | 249 | false | 1 | 1 | Web Frameworks with site style inbuilt | 8,085,839
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I have ArcGIS 9.3, Eclipse Indigo with PyDev plugin installed. I am unable to configure PyDev so the autocompletion of arcgis functions will work.
I have added the Python interpreter and the path to the ArcGIS bin folder. I am able to run scripts, and importing arcgisscripting works, but autocompletion shows only functions that I have already used in the code, not all possible functions.
Reading other posts i found that in arcgis 10 there is arcpy folder that should be added to pythonpath, i cant find similar folder in arcgis 9.3 version. | 0 | python,eclipse,autocomplete,pydev,arcgis | 2011-11-10T10:27:00.000 | 0 | 8,078,073 | There isn't a similar folder for ArcGIS 9.3. ESRI did a major refactor of the Python API when going from 9.3 to 10 and there are many differences, of which this is one. I found Eclipse very useful for geocoding but I don't recall autocomplete working with ArcGIS 9.3, but I do recall there was an ESRI folder you needed to list in the Eclipse paths though - probably wherever arcgisscript lives. I also remember having to tweak the PYTHONPATH environment variable. Sorry for being vague but my memory is a bit sketchy because I have long since moved to v10. | 0 | 620 | false | 0 | 1 | ArcGIS Eclipse PyDev - code autocomplete not working | 8,596,504 |
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 0 | I am trying to set up a Python environment on AIX 6.1 TL7. The python-2.7.1-1.aix6.1.ppc.rpm installation was successful; however, when I try to use BaseHTTPServer I am getting the following error:
ImportError: No module named _md5
Please advise
Thank You,
m. | 0 | python,aix,import | 2011-11-10T14:04:00.000 | 0 | 8,080,885 | I worked it around by extracting md5.so from hashlib.a. fortunately dynamic library was in the archive. | 0 | 1,049 | true | 0 | 1 | aix 6.1: python: ImportError: No module named _md5 | 8,165,999 |
1 | 3 | 0 | 1 | 0 | 0 | 0.066568 | 1 | I wrote a Python telnet client to communicate with a server through telnet. However, many people tell me that it's not secure. How can I convert it to SSH? Do I need to totally rewrite my program? | 0 | python,security,ssh,telnet | 2011-11-11T01:44:00.000 | 0 | 8,088,742 | While Telnet is insecure, it's essentially just a serial console over a network, which makes it easy to code for. SSH is much, much more complex. There's encryption, authentication, negotiation, etc. to do. And it's very easy to get wrong in spectacular fashion.
There's nothing wrong with Telnet per se, but if you can change things over the network - and it's not a private network - you're opening yourself up for trouble.
Assuming this is running on a computer, why not restrict the server to localhost? Then ssh into the computer and telnet to localhost? All the security with minimal hassle. | 0 | 1,247 | false | 0 | 1 | python: convert telnet application to ssh | 8,088,775 |
1 | 4 | 0 | 0 | 3 | 0 | 0 | 1 | I have a page with a lot of ads being loaded piece by piece.
I need to position an element relative to overall page height, which is changing during load, because of ads being added.
Question: Is there a jquery event or similar to detect, when all elements are loaded? I'm currently "waiting" with setTimeout, but this is far from nice.
An idle event would be nice, which fires once after page load if no new HTTP requests are made for xyz secs. | 0 | jquery,events,python-idle | 2011-11-11T11:26:00.000 | 0 | 8,093,297 | Ideally the answer would be $(function(){ }) or window.onload = function(){} that fires after all the DOM content is loaded. But I guess the ads on your page start loading asynchronously after the DOM load.
So, assuming you know the number of 'ads' on your page (you said you are loading them piece by piece), my advice would be to increment a counter on each successful 'ad' load. When that counter reaches the total number of ads, you fire an 'all_adv_loaded' function. | 0 | 6,659 | false | 1 | 1 | jquery - can I detect once all content is loaded? | 8,093,470
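A minimal framework-free sketch of that counter idea (names like makeAdTracker are illustrative, not a real API); in a real page, each ad's load handler would call the returned function:

```javascript
// Sketch of the counter approach: each ad calls adLoaded() when it has
// finished loading; once the count reaches the known total, the
// "all ads loaded" callback fires exactly once.
function makeAdTracker(totalAds, onAllLoaded) {
  let loaded = 0;
  return function adLoaded() {
    loaded += 1;
    if (loaded === totalAds) onAllLoaded();
  };
}

// Simulated usage: three ads finish loading.
let fired = 0;
const adLoaded = makeAdTracker(3, () => { fired += 1; });
adLoaded();
adLoaded();
adLoaded();
console.log(fired); // 1
```

With jQuery, the hypothetical wiring would look something like binding adLoaded as the load handler of each ad element.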
1 | 3 | 0 | 0 | 2 | 1 | 0 | 0 | What tools or techniques can help avoid bugs, especially silly mistakes such as typos, coding in Python and Django?
I know unit-testing every line of code is the "proper" way, but are there any shortcuts?
I know of pylint, but unfortunately it doesn't check Django ORM named parameters, where a typo can go unnoticed. Is there any tool that can handle this kind of bugs?
A colleague thought of an idea to gather smart statistics on tokens (for example about named parameters to functions...), and when a once-in-a-code-base token is encountered it is flagged as a possible typo.
Do you know of any tool that does something similar? | 0 | python,django,debugging,static-analysis | 2011-11-13T10:31:00.000 | 0 | 8,110,952 | Thank you for your answers, I'll check these tools.
I wanted to share with you other ideas (none python/django specific):
Assert conditions in code - but remove from production code.
Run periodic checks on the data (e.g. sending an email to the dev when an unexpected state is found) - in case a bug slips by, it may be detected faster, before more data is corrupted (but alas after some of it already is).
Make a single bottom-line test (perhaps simulating user input) that covers most of the program. It may catch exceptions and asserts, and may be easier to maintain than many tests. | 0 | 189 | false | 1 | 1 | tools or techniques to help avoid mistakes in python/django | 8,152,166
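A minimal sketch of the assert-conditions point above (plain Python, nothing Django-specific): assert statements act as development-time checks and are stripped when the interpreter runs with optimizations (python -O), so they cost nothing in production.

```python
# Development-time check that disappears under `python -O`:
def apply_discount(price, discount):
    assert 0 <= discount <= 1, "discount must be a fraction between 0 and 1"
    return price * (1 - discount)

print(apply_discount(100, 0.25))  # 75.0
```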
2 | 3 | 0 | 2 | 5 | 0 | 1.2 | 0 | I've spent last 2 days trying to launch examples from Boost.Python with the "ImportError: DLL load failed: The specified module could not be found" error, while trying to load compiled (using bjam) pyd modules. I was using Windows 7 x64, Python 2.7 x64 with Boost 1.47. I've followed up different answers on StackOverflow and other sites incl. fresh installs (Python 32 and 64 bit, Boost precompiled), manual Boost's libraries building, DLL checks with dependency walker and so on, with no luck. I registered to share the solution, which worked here and which I hope may help someone, struggling with the same error ;) | 0 | python,windows-7,import,boost-python | 2011-11-13T12:56:00.000 | 0 | 8,111,664 | The problem was with the KB2264107 Windows update (http://support.microsoft.com/kb/2264107), "messing" with DLL search routine (security fix). Setting the registry value [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager] : CWDIllegalInDllSearch to 0, allowed to properly load DLL files and properly import .pyd modules. This may also happen on other Windows versions. | 0 | 5,048 | true | 0 | 1 | Boost.Python examples, Windows 7 x64, "ImportError: DLL load failed: The specified module could not be found." | 8,115,457 |
2 | 3 | 0 | 6 | 5 | 0 | 1 | 0 | I've spent last 2 days trying to launch examples from Boost.Python with the "ImportError: DLL load failed: The specified module could not be found" error, while trying to load compiled (using bjam) pyd modules. I was using Windows 7 x64, Python 2.7 x64 with Boost 1.47. I've followed up different answers on StackOverflow and other sites incl. fresh installs (Python 32 and 64 bit, Boost precompiled), manual Boost's libraries building, DLL checks with dependency walker and so on, with no luck. I registered to share the solution, which worked here and which I hope may help someone, struggling with the same error ;) | 0 | python,windows-7,import,boost-python | 2011-11-13T12:56:00.000 | 0 | 8,111,664 | Two solution, no need to use regedit
add BOOST_PYTHON_STATIC_LIB marco when build your dll. It will let
boost.python static link to your dll file rather than dynamic load
in runtime.
add boost.python dll to PATH or copy it to same dir where your dll locate | 0 | 5,048 | false | 0 | 1 | Boost.Python examples, Windows 7 x64, "ImportError: DLL load failed: The specified module could not be found." | 35,737,001 |
1 | 2 | 0 | 1 | 5 | 0 | 1.2 | 0 | I have quite a simple question, but can't find any suitable automated solution for now.
I have developed an algorithm that performs a lot of stuff (image processing in fact) in Python.
What I want to do now is to optimize it. And for that, I would love to create a graph of my algorithm.
Kind of a UML chart or sequence chart, in fact, in which functions would be displayed with inputs and outputs.
My algorithm does not involve complex stuff, and is mainly based on a = f(b) operations (no databases, hardware stuff, servers, . . . )
Would you have any hint?
Thanks in advance!
It generates UML in dot format - or png, etc.
It creates a UML diagram, so you can easily see the basic structure of your code
I'm not sure if it satisfies all your needs, but it might be helpful | 1 | 435 | true | 0 | 1 | Get the complete structure of a program? | 8,121,141
1 | 2 | 0 | 4 | 1 | 0 | 0.379949 | 0 | Some of the Apache modules are related to programming languages, like mod_php and mod_python. The description is basically "enables usage of php within apache" or "enables usage of python within apache". I'm trying to understand an overview of how these types of "language" modules work. | 0 | php,python,apache,mod-python,mod-php | 2011-11-14T22:26:00.000 | 0 | 8,129,088 | This is relatively simple; When the webserver starts, it will register modules within its core. Language interpreter modules, like mod_php, will register a hook within the page request handler.
This means when a user requests a page, the webserver will pass the request to the module, which checks if the requested file is a type that is registered to be executed by the parser behind the module. In PHP's case you are most likely adding "AddType application/x-httpd-php .php" or similar to the httpd.conf file, which mod_php will take into account when parsing such requests.
PHP is now in control of the request, which will read the file, parse, compile and execute it and then return it to the request buffer which the webserver will serve as content.
The same goes for other modules; although their handling of a request is different, they all do the same thing. | 0 | 645 | false | 0 | 1 | How do mod_php, mod_python, mod_Language work | 8,129,667
1 | 1 | 0 | 2 | 1 | 0 | 1.2 | 0 | I am building an extension of ORMLite to target Android.
What I want to do
I want to reproduce one of the behavior that Doctrine and Symfony are achieving in PHP with models.
In a word:
From a yml file, generate a bunch of BaseModel classes with accessors and things that won't change.
Let the real model inherit from this BaseModel so that the user's changes persist even if the models are regenerated from the yml.
My question
I was wondering whether it is good practice to try to achieve such an objective on Android, or whether it will be risky in terms of performance (the heavy use of inheritance).
If you think that it is clumsy, how can I allow the user to change the .yml file and regenerate the models without starting from scratch rebuilding the customized aspects of his model?
I know this can be done by some "trick" but I really would like not to reinvent the wheel.
EDIT
Sorry, I forgot to add: I am using python to do this.
Thanks | 0 | java,android,python,performance,inheritance | 2011-11-16T14:29:00.000 | 0 | 8,153,264 | It's the correct, and probably only, Java way. In Java all calls are virtual anyway unless you use final all over the place, but that means you couldn't even use interfaces. So most calls will probably be virtual dispatch whatever you do. Inheritance does not incur any other significant penalty.
Besides, Android devices are generally so powerful that trying to squeeze out tiny bits of performance at the cost of readability and maintainability of the program is almost certainly not needed. In fact, most Android devices are almost as powerful as web servers that do the same things in much slower PHP and still manage thousands of users while the Android device serves one. | 0 | 992 | true | 0 | 1 | Cost of Inheritance Java (Android) | 8,153,347
2 | 3 | 0 | 3 | 1 | 1 | 0.197375 | 0 | What is the significance of doctest in Sphinx? Can someone help me understand its use with a simple example? | 0 | python,python-sphinx,restructuredtext | 2011-11-17T04:12:00.000 | 0 | 8,162,020 | Sphinx's doctest is for testing the documentation itself. In other words, it allows for the automatic verification of the documentation's sample code. While it might also verify whether the Python code works as expected, Sphinx is unnecessary for that purpose alone (you could more easily use the standard library's doctest module).
So, a real-world scenario (one I find myself in on a regular basis) goes something like this: a new feature is nearing completion, so I write some documentation to introduce the new feature. The new docs contain one or more code samples. Before publishing the documentation, I run make doctest in my Sphinx documentation directory to verify that the code samples I've written for the audience will actually work. | 0 | 308 | false | 0 | 1 | What is the real world use or significance of sphinx doctest? | 18,234,027 |
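For comparison, the standard-library doctest module mentioned above — the thing Sphinx's doctest builder extends — runs the >>> examples embedded in docstrings:

```python
# The plain-Python baseline: doctest runs the >>> examples in docstrings
# and reports how many of them failed.
import doctest

def square(x):
    """Return x squared.

    >>> square(3)
    9
    """
    return x * x

results = doctest.testmod()
print(results.failed)  # 0
```

Sphinx's version of this runs the same kind of examples, but embedded in the .rst documentation pages rather than in docstrings.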
2 | 3 | 0 | 1 | 1 | 1 | 1.2 | 0 | What is the significance of doctest in Sphinx? Can someone help me understand its use with a simple example? | 0 | python,python-sphinx,restructuredtext | 2011-11-17T04:12:00.000 | 0 | 8,162,020 | I haven't used it myself, but it is my understanding that it extends the functionality of doctest. For example, it adds testsetup and testcleanup directives in which you can put your set-up and tear-down logic, making it possible for Sphinx to exclude that from the documentation.
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | How it possible to speed up debugging in PyDev in Eclipse for Google App Engine programs?
How to speed up code execution?
How to speed up application reloading?
Please share your experience or suggestions. | 0 | python,debugging,google-app-engine,optimization,pydev | 2011-11-17T15:18:00.000 | 1 | 8,169,561 | How often do you need to reload the application? The dev server will pick up all your code and configuration changes without needing a reload.
1 | 4 | 0 | 1 | 1 | 0 | 0.049958 | 0 | We are currently using nginx as a web server along with PHP-FPM as the PHP application service. We have a small application which needs to be built but must use Python 3. Is there a similar option to use for Python? | 0 | python,nginx,php | 2011-11-18T16:10:00.000 | 0 | 8,185,374 | You can try uWSGI. It is easy to configure and offers high performance. | 0 | 3,316 | false | 1 | 1 | using python like php with nginx | 10,713,125
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I have a share (on Machine-A) mounted via sshfs on Machine-B. From Machine-C, I have this share mounted also via sshfs (double sshfs) like so:
On Machine-C: /mnt/Machine-B/target_share
On Machine-B: /mnt/Machine-A/target_share
On Machine-A: /media/target_share
Now I have a Python program that runs fine in all places tested (including Machine-C on its local file system) except from Machine-C on the drive that lives on Machine-A, but is mounted on Machine-B.
The reason I am running the Python program from Machine-C is that it has the resources necessary to run it. I have run it on Machine-A and Machine-B and it has maxed the memory out on each, thereby failing each time. I have tried to mount the target_share on Machine-B with this type of command as well:
sudo mount -t cifs -o username=<username>,password=<password> //Machine-A/target_share /mnt/target_share
But this doesn't seem to work each way I have tried it, i.e., with different credentials, with and without credentials, etc.
To make matters worse, one caveat is that I can only SSH into Machine-B from Machine-C. I cannot directly access Machine-A from Machine-C, which, if I could, would probably make all this work just fine.
The Python program runs on Machine-C but the logic in the middle that I need to work doesn't run and gives no errors. It basically starts, and then ends a few seconds later.
I am relatively new to Python. Also, not sure if this post would be better on another board. If so, let me know or move as necessary.
I can post the Python code as well if I need to.
My apologies for the complicated post. I didn't know how else to explain it.
Thanks in advance. | 0 | python,filesystems,sshfs,mount-point | 2011-11-18T21:54:00.000 | 0 | 8,189,641 | I found that there may be a bug in sshfs, such that if a user on a Linux system has the same user ID as another, i.e., 1002, but different usernames, this causes problems.
The way I worked around this was to actually avoid sshfs for this case all together and mount the drives directly to a local system. I wanted to avoid this because I couldn't do this from a remote location, but it gets the job done. | 0 | 509 | true | 0 | 1 | Running Python Program via sshfs-mounted Share | 8,217,136 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I am planning on writing a small program that interacts with a debian based repository - namely doing a partial mirror**. I am planning to write it in python.
What are some tips for working with the repository including already constructed 'wheels' (to save the invention of yet another one)?
Some issues I have identified
As it is going to be a partial mirror, I will need to regenerate the package lists (Release,Contents*, Packages.{bz2,gz}). (Maybe debian-installer can do it for me??)
How to get changes to package list (I already know that packages do not change, but that the lists only link to the latest file)?
** Already looked into apt-mirror and debmirror. Debmirror is the closest to what I want, however lacking in some features. If apt can deal with multiple releases and architectures then I will consider apt. | 0 | python,deb | 2011-11-19T01:49:00.000 | 1 | 8,191,239 | debian-installer doesn't generate repository metadata. For that, you want a tool like reprepro or mini-dinstall. They'll also handle the second point you raised. | 0 | 200 | false | 0 | 1 | Tips for interacting with debian based repositories | 8,816,534 |
1 | 2 | 0 | 0 | 4 | 1 | 0 | 0 | How do you manage generated source code files in you repositories and deployment routines with Git (PHP, Python, etc)?
For example, I have a repository named "interfaces" with Thrift definitions in it. They can be converted to Python, PHP, JS, etc skeletons/stubs. Other projects in different languages, each in its own repository, want to use those stubs. How to deliver the stubs to the projects?
I see only two ways:
Generate the stub files and store them in the "interfaces" repository, and this repository should be attached to the projects' ones (as readonly submodule or any other way). But this way introduces a lot of headaches when checking out the updates to the interfaces and the stubs due to overcomplicated "git submodules" concepts.
Attach pure "interfaces" repository to each project, and generate stub files as temporary git-ignorable(!) files (with "make stubs" or alike). This way, each project can have their own generation settings with their own patches applied (if needed at all). But you need to introduce some compilation commands to PHP/Python development and production environments (not just "git pull").
What are the pros and cons of these approaches? | 0 | php,python,git,deployment,thrift | 2011-11-20T11:26:00.000 | 0 | 8,201,189 | I think the #2 is way to go, if nothing else because you do not want git to auto merge generated files.
In your PHP or Python app initialization code, you could check the timestamps on the IDL files and generated stubs, and issue a warning and/or abort, or run the Thrift compiler if available. | 0 | 526 | false | 0 | 1 | How to distribute Thrift-generated code in development and production environments with Git? | 8,208,766
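A hedged sketch of that timestamp check in Python (file names are illustrative): compare the IDL's modification time with the generated stub's at start-up, and warn or regenerate when the stub is stale.

```python
import os
import tempfile

def stubs_stale(idl_path, stub_path):
    """True if the stub is missing or older than the IDL it was built from."""
    if not os.path.exists(stub_path):
        return True
    return os.path.getmtime(idl_path) > os.path.getmtime(stub_path)

# Demonstration with throwaway files:
d = tempfile.mkdtemp()
idl = os.path.join(d, "service.thrift")
stub = os.path.join(d, "service_stub.py")
for p in (idl, stub):
    open(p, "w").close()
os.utime(idl, (0, 200))    # IDL edited after...
os.utime(stub, (0, 100))   # ...the stub was generated
print(stubs_stale(idl, stub))  # True
```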
1 | 5 | 0 | 8 | 10 | 1 | 1.2 | 0 | Assuming I have a class X, how do I check which is the base class/classes, and their base class/classes etc?
I'm using Eclipse with PyDev, and for Java for example you could type CTRL + T on a class' name and see the hierarchy, like:
java.lang.Object
java.lang.Number
java.lang.Integer
Is it possible for Python?
If not possible in Eclipse PyDev, where can I find this information? | 0 | python,class,inheritance,pydev,hierarchy | 2011-11-20T16:35:00.000 | 0 | 8,202,949 | Hit f4 with class name highlighted to open hierarchy view. | 0 | 4,686 | true | 0 | 1 | How do I inspect a Python's class hierarchy? | 8,202,983 |
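Outside the IDE, the same hierarchy information is available programmatically through Python's method resolution order:

```python
# inspect.getmro returns the class and all its bases, in resolution order.
import inspect

class Number:
    pass

class Integer(Number):
    pass

print([c.__name__ for c in inspect.getmro(Integer)])
# ['Integer', 'Number', 'object']
```

(Equivalently, Integer.__mro__ or Integer.mro() on new-style classes.)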
2 | 2 | 0 | 1 | 5 | 0 | 0.099668 | 0 | I am currently working on a specialization project on simulating guitar effects with Evolutionary Algorithms, and want to use Python and CSound to do this.
The idea is to generate effect parameters in my algorithm in Python, send them to CSound and apply the filter to the audio file, then send the new audio file back to Python to perform frequency analysis for comparison with the target audio file (this will be done in a loop till the audio file is similar enough to the target audio file, so sending/receiving between CSound and Python will be done a lot).
Shortly phrased, how do I get Python to send data to a CSound(.csd file), how do I read the data in the .csd file, and how do I send a .wav file from CSound to Python? It is also preferred that this can work dynamically on its own till the criterions for the audio file is met.
Thanks in advance | 0 | python,audio,evolutionary-algorithm,csound | 2011-11-21T15:41:00.000 | 0 | 8,214,339 | sending parameter values from python to csound could be done using the osc protocol
sending audio from csound to python could be done by routing jack channels between the two applications | 0 | 986 | false | 0 | 1 | CSound and Python communication | 8,214,378 |
2 | 2 | 0 | 1 | 5 | 0 | 0.099668 | 0 | I am currently working on a specialization project on simulating guitar effects with Evolutionary Algorithms, and want to use Python and CSound to do this.
The idea is to generate effect parameters in my algorithm in Python, send them to CSound and apply the filter to the audio file, then send the new audio file back to Python to perform frequency analysis for comparison with the target audio file (this will be done in a loop till the audio file is similar enough to the target audio file, so sending/receiving between CSound and Python will be done a lot).
Shortly phrased, how do I get Python to send data to a CSound(.csd file), how do I read the data in the .csd file, and how do I send a .wav file from CSound to Python? It is also preferred that this can work dynamically on its own till the criterions for the audio file is met.
Thanks in advance | 0 | python,audio,evolutionary-algorithm,csound | 2011-11-21T15:41:00.000 | 0 | 8,214,339 | You can use Csound's python API, so you can run Csound within python and pass values using the software bus. See csound.h. You might also want to use the csPerfThread wrapper class which can schedule messages to and from Csound when it is running. All functionality is available from python. | 0 | 986 | false | 0 | 1 | CSound and Python communication | 8,218,664 |
2 | 3 | 0 | 0 | 1 | 0 | 1.2 | 0 | I wanted to install WSGI on a RedHat linux box in order to make a Python server interface, but the only way I could find to do that was to use modwsgi, which is an Apache module. The whole reason I'm using WSGI is that I don't want to use Apache, so this kinda defeats the purpose.
Does anyone know of actual WSGI packages for RedHat linux or is this the only way?
----Edit----
I just found out that WSGI is built into Python 2.5 and higher, so I don't need to install anything. I don't know how to mark this question as solved without answering it myself. Any tips will be appreciated. | 0 | python,packages,redhat | 2011-11-22T02:08:00.000 | 1 | 8,221,114 | I found out that WSGI is included in Python 2.5 and above, so you don't need to do any installs. Just say things like from wsgiref.simple_server import make_server. | 0 | 591 | true | 0 | 1 | Using WSGI on Redhat Linux | 8,221,263
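A minimal sketch of serving WSGI with nothing but the standard library's wsgiref.simple_server — no Apache or mod_wsgi involved:

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello, WSGI\n"]

# Calling the app directly shows the body a server would send:
body = app({"REQUEST_METHOD": "GET"}, lambda status, headers: None)
print(body)  # [b'Hello, WSGI\n']

# To actually serve it (this call blocks):
# make_server("localhost", 8000, app).serve_forever()
```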
2 | 3 | 0 | 0 | 1 | 0 | 0 | 0 | I wanted to install WSGI on a RedHat linux box in order to make a Python server interface, but the only way I could find to do that was to use modwsgi, which is an Apache module. The whole reason I'm using WSGI is that I don't want to use Apache, so this kinda defeats the purpose.
Does anyone know of actual WSGI packages for RedHat linux or is this the only way?
----Edit----
I just found out that WSGI is built into Python 2.5 and higher, so I don't need to install anything. I don't know how to mark this question as solved without answering it myself. Any tips will be appreciated. | 0 | python,packages,redhat | 2011-11-22T02:08:00.000 | 1 | 8,221,114 | WSGI is a protocol. In order to use it you need a WSGI container such as mod_wsgi, Paste Deploy, CherryPy, or wsgiref. | 0 | 591 | false | 0 | 1 | Using WSGI on Redhat Linux | 8,221,832 |
2 | 3 | 0 | 0 | 5 | 0 | 0 | 0 | I'm trying to monitor log files that some processes are writing on Linux (to create a joint log file where log entries are grouped together by when they happen). Currently I'm thinking of opening the files being logged, polling with inotify (or a wrapper), and then checking if I can read any more of the file.
Is there any better way to do this? Perhaps some library which abstracts the reading/changes in the files watched? | 0 | python,linux,logging | 2011-11-22T13:07:00.000 | 1 | 8,227,308 | If you do it yourself, you might do something like this: If you detect file modification, get the size of the file. If it's larger than last time you can seek to the previous "last" position (i.e. the previous size) and read from there. | 0 | 3,207 | false | 0 | 1 | Is there a better way to monitor log files?(linux/python) | 8,227,859 |
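A sketch of that seek-to-previous-size idea in Python (inotify wiring omitted; paths are illustrative):

```python
# Remember the size from the last poll; when the file grows, seek to the
# old size and read only the newly appended data.
import os
import tempfile

def read_new(path, last_size):
    size = os.path.getsize(path)
    if size <= last_size:      # unchanged (or truncated/rotated)
        return "", size
    with open(path) as f:
        f.seek(last_size)
        return f.read(), size

# Demonstration:
log = os.path.join(tempfile.mkdtemp(), "app.log")
with open(log, "w") as f:
    f.write("line 1\n")
new, pos = read_new(log, 0)        # picks up 'line 1\n'
with open(log, "a") as f:
    f.write("line 2\n")
new, pos = read_new(log, pos)
print(repr(new))  # 'line 2\n'
```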
2 | 3 | 0 | 3 | 5 | 0 | 0.197375 | 0 | I'm trying to monitor log files that some processes are writing on Linux (to create a joint log file where log entries are grouped together by when they happen). Currently I'm thinking of opening the files being logged, polling with inotify (or a wrapper), and then checking if I can read any more of the file.
Is there any better way to do this? Perhaps some library which abstracts the reading/changes in the files watched? | 0 | python,linux,logging | 2011-11-22T13:07:00.000 | 1 | 8,227,308 | Why won't a "tail -f" be sufficient? You could use popen and pipes to handle this from Python. | 0 | 3,207 | false | 0 | 1 | Is there a better way to monitor log files?(linux/python) | 8,235,200 |
2 | 2 | 0 | 3 | 2 | 1 | 1.2 | 0 | When I'm running C code to call Python functions, there's an error on Py_Initialize(). The error is ImportError: No module named site. I've tried to put Py_SetProgramName(argv[0]) but it doesn't work. The cmd call is cInterfacePython Test.py multiply 3 2 (the exe is cInterfacePython) | 0 | python,c,python-c-api,python-c-extension,python-embedding | 2011-11-22T19:38:00.000 | 0 | 8,232,708 | I had to muck about a bit with the PATH env-var as well as PYTHONPATH to make things work better when embedding.
Py_SetProgramName is not important, it's mostly for internal reference etc...
So, I suggest you find where python is installed locally (this is available in the registry on Windows machines) and use setenv to set PATH and PYTHONPATH to something appropriate. That would be the python.exe directory for PATH (as in your comment above), as well setting PYTHONPATH to the dir with your own python code and related libraries that you're running from the embedding exe.
Then run Py_Initialize and see if the right thing happens. If you need to modify PYTHONPATH afterward initialization, modify sys.path using PySys_SetPath(). | 0 | 2,355 | true | 0 | 1 | embedding python error on initialization | 8,233,228 |
2 | 2 | 0 | 1 | 2 | 1 | 0.099668 | 0 | When I'm running C code to call Python functions, there's an error on Py_Initialize(). The error is ImportError: No module named site. I've tried to put Py_SetProgramName(argv[0]) but it doesn't work. The cmd call is cInterfacePython Test.py multiply 3 2 (the exe is cInterfacePython) | 0 | python,c,python-c-api,python-c-extension,python-embedding | 2011-11-22T19:38:00.000 | 0 | 8,232,708 | I was having the same problem (Windows, both with Visual Studio and MinGW/g++), and I solved it by adding to PYTHONPATH the path to site.py.
For some reason, launching python.exe was possible even without it, and sys.path did contain that path (even when PYTHONPATH did not), and I could "import site", but Py_Initialize was not able to do the same thing that python.exe did. | 0 | 2,355 | false | 0 | 1 | embedding python error on initialization | 14,369,084 |
1 | 2 | 0 | 0 | 6 | 1 | 0 | 0 | I'm using PyUnit to write unit tests for my code. The setUp method is called every time before each test is run.
Is there a way I can define a method that will be run just once, at the beginning, before any tests are run?
Please Help
Thank You | 0 | python,unit-testing,tdd,python-unittest | 2011-11-23T10:15:00.000 | 0 | 8,240,444 | How about using the constructor of your test class? | 0 | 449 | false | 0 | 1 | Running a method just once at the beginning before any tests are run in PyUnit | 8,240,560 |
1 | 1 | 0 | 1 | 0 | 1 | 0.197375 | 0 | I have some unit tests that are timing sensitive: an action is timed and an error is triggered if it takes too long. When run individually, these tests pass, but when running nosetest recursively on my modules, they often fail. I run concurrent tests, which likely is one reason why the timing is off. Is there any way to indicate that I want this test to be run with no interruptions? | 0 | python,nose,nosetests | 2011-11-23T12:29:00.000 | 0 | 8,242,209 | I think your problem is dependent from how you implemented the timing. The solution I would personally adopt would be to set an environment variable that controls the behaviour of the tests. Candidates could be:
if WITH_TIMING == False [turn off timing altogether]
TIME_STRETCH_FACTOR = ... [apply a time-stretching multiplier in case concurrent tests are run, so that for example a time limit of 5 would become 7.5 if TIME_STRETCH_FACTOR were 1.5]
If this is not an option, a possible ugly workaround would be to mock the time.time() function, making it return a constant value [this would only work if you use time.time() in your tests directly of course]...
HTH | 0 | 163 | false | 0 | 1 | Timing issues in Python nose test | 8,242,339 |
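A sketch of the TIME_STRETCH_FACTOR idea (the variable name comes from the answer; the helper function is illustrative): route every time limit through one function, so concurrent runs can be given slack via the environment.

```python
import os

def deadline(seconds, env=os.environ):
    """Scale a test time limit by TIME_STRETCH_FACTOR (default 1.0)."""
    return seconds * float(env.get("TIME_STRETCH_FACTOR", "1.0"))

print(deadline(5, {}))                               # 5.0
print(deadline(5, {"TIME_STRETCH_FACTOR": "1.5"}))   # 7.5
```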
1 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | Using Python, I am trying to write to a USB sensor using ioctl. I have loads of examples of reading from devices either directly or via pyusb, or simple file writes, but anything more complicated disappears off the radar.
I need to use a control_transfer to write a Feature Report message
The command is ioctl(devicehandle, Operation, Args)
The issue I have is determining the correct Operation. The Args, I believe, should be a buffer containing the Feature Report for the device, plus a Mutable flag set to true
Any help or advice would be gratefully received.
I should add: the reason for using Python is that the code must be device independent.
IOCTLs are declared by the device driver or operating system, so I very much doubt that there are IOCTL operations that have the same operation id (IOCTL number) and same parameters on different operating systems.
For Linux, you can check the header files for specific device drivers or possibly some usb core header file for valid IOCTLs. | 0 | 5,132 | false | 0 | 1 | Writing to USB device with Python using ioctl | 8,245,298 |
1 | 2 | 0 | 0 | 1 | 1 | 0 | 0 | I was wondering if it's possible to edit an existing PDF file with PDFMiner. It seems to be a powerful tool, but the documentation is poor/nonexistent.
I found some examples, but they don't match my goal. I want to make a search engine which changes the color of my keywords in the PDF file.
There's also the simple option of just using a PDF viewer with the built-in ability to find and highlight multiple search terms. I think Adobe Acrobat can do it, but I'm not sure about others. | 0 | 2,097 | false | 0 | 1 | Edit pdf file with PDFMiner | 8,249,589 |
1 | 2 | 1 | 1 | 3 | 1 | 1.2 | 0 | I'm working on a Monte Carlo pricer and I need to improve the efficiency of the engine.
Monte Carlo paths are created by a third-party library (in C++)
Pricing is done in IronPython (script created by the end user)
Everything else is driven by a C# application
The pricing process is as follows:
The C# application requests the paths and collects them
The C# application pushes the paths to the script, which prices them and returns the values
The C# application displays the result to the end user
The number and size of the paths collected are known in advance.
I have 2 solutions with some advantages and drawbacks:
Request path generation, ask the script to price each path as it arrives, and finally aggregate the results once all paths are processed
Request path generation, collect all of the paths, ask the script to process all of them at once, and return the final price
The first solution works fine in all scenarios, but as the number of paths requested increases, the performance decreases (I think it's due to the multiple calls to IronPython)
The second solution is faster but can hit an "out of memory" exception (I think it's actually not enough virtual address space) if the number of paths requested is too large
I chose the middle ground and process a bunch of paths, then aggregate the prices.
What I want now is to increase the performance further by knowing in advance how many paths I can process without hitting the "out of memory" exception
I did the math, so I know in advance the size (in memory) of the paths for a given request. However, I'm quite sure it's not a memory problem but rather a virtual memory addressing issue
So all this text is summarized by the following 2 questions:
Is it possible to know in advance how much virtual address space my
process will need to store n instances of a class (size in memory and structure are known)?
Is it possible to know how much virtual address space is still available for my process?
By the way, I'm working on a 32-bit computer
Thanks in advance for the help | 0 | c#,memory-management,ironpython | 2011-11-24T13:16:00.000 | 0 | 8,257,686 | Finding out how much memory an object takes in .NET is a pretty difficult task. I've hit the same problem several times. There are some imperfect methods, but none are very precise.
My suggestion is to get some estimate of how much a path will take, and then pass a bunch of them leaving a good margin of safety. Even if you're processing them just 10 at a time, you've reduced the overhead 10 times already.
You can even make the margin configurable and then tweak it until you strike a good balance. An even more elegant solution would be to run the whole thing in another process and if it hits an OutOfMemoryException, restart the calculation with less items (and adjust the margin accordingly). However, if you have so much data that it runs out of memory, then it might be a bit slow to pass it across two processes (which will also duplicate the data).
Could it be that the memory overflow is because of some imperfections in the path processor? Memory leaks maybe? Those are possible both in C++ and .NET. | 0 | 871 | true | 0 | 1 | Virtual memory address management in c# | 8,257,843 |
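A minimal sketch of the batching idea (plain Python here for illustration; `price_batch` stands in for the IronPython pricing script, and `batch_size` is the configurable safety margin you would tune empirically):

```python
def price_in_batches(paths, price_batch, batch_size=1000):
    """Aggregate prices over fixed-size chunks to bound peak memory use."""
    total, count = 0.0, 0
    for start in range(0, len(paths), batch_size):
        chunk = paths[start:start + batch_size]
        total += sum(price_batch(chunk))   # only one chunk is live at a time
        count += len(chunk)
    return total / count if count else 0.0

# toy usage: "pricing" here is just the mean of each path
fake_paths = [[float(i)] * 4 for i in range(10)]
print(price_in_batches(fake_paths, lambda c: [sum(p) / len(p) for p in c], 3))
# -> 4.5
```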
1 | 3 | 0 | 4 | 0 | 1 | 0.26052 | 0 | I have some autogenerated python files that are extremely large (long mathematical equations). Vim slows to a crawl when I open them for editing because I have pyflakes-vim installed. I'd like to be able to disable pyflakes-vim only when I open these long files. Is there a simple way to do this, either before opening the file or even after? I do not want to turn off pyflakes-vim for all python files, just a case-by-case basis. | 0 | python,vim,pyflakes | 2011-11-25T23:28:00.000 | 0 | 8,275,095 | PyFlakes won't run if b:did_pyflakes_plugin is defined when the plugin is loaded, but once it's loaded I don't think there's an easy way to disable it.
What I would do is give the auto-generated files a specific file name pattern (say *_auto.py) and then add to my .vimrc: autocmd BufReadPre *_auto.py :let b:did_pyflakes_plugin=1. | 0 | 2,784 | false | 0 | 1 | How do I disable pyflakes-vim for a particular file? | 8,275,265 |
1 | 3 | 0 | 3 | 3 | 1 | 0.197375 | 0 | The t_error() function is used to handle lexing errors that occur when illegal characters are detected. My question is: How can I use this function to get more specific information on errors? Like error type, in which rule or section the error appears, etc. | 0 | python,error-handling,lexer,ply | 2011-11-27T07:11:00.000 | 0 | 8,284,169 | In general, there is only very limited information available to the t_error() function. As input, it receives a token object where the value has been set to the remaining input text. Analysis of that text is entirely up to you. You can use the t.lexer.skip(n) function to have the lexer skip ahead by a certain number of characters and that's about it.
There is no notion of an "error type" other than the fact that there is an input character that does not match the regular expression of any known token. Since the lexer is decoupled from the parser, there is no direct way to get any information about the state of the parsing engine or to find out what grammar rule is being parsed. Even if you could get the state (which would simply be the underlying state number of the LALR state machine), interpretation of it would likely be very difficult since the parser could be in the intermediate stages of matching dozens of possible grammar rules looking for reduce actions.
My advice is as follows: If you need additional information in the t_error() function, you should set up some kind of object that is shared between the lexer and parser components of your code. You should explicitly make different parts of your compiler update that object as needed (e.g., it could be updated in specific grammar rules).
Just as an aside, there are usually very few courses of action for a bad token. Essentially, you're getting input text that doesn't match any known part of the language alphabet (e.g., no known symbol). As such, there's not even any kind of token value you can give to the parser. Usually, the only course of action is to report the bad input, throw it out, and continue.
As a follow-up to Raymond's answer, I would also advise against modifying any attribute of the lexer object in t_error(). | 0 | 3,180 | false | 0 | 1 | lexer error-handling PLY Python | 8,291,315
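As a sketch of the shared-object approach described above (the class and attribute names are invented for illustration; only `t.value`, `t.lineno`, and `t.lexer.skip()` are actual PLY token attributes):

```python
class CompileContext:
    """Shared between lexer and parser code so t_error() can report more."""
    def __init__(self):
        self.errors = []
        self.current_rule = None   # parser rules would update this explicitly

ctx = CompileContext()

def t_error(t):
    """PLY-style error hook: record what little we know, then skip ahead."""
    ctx.errors.append({
        "char": t.value[0],                # the offending character
        "lineno": t.lineno,                # where it occurred
        "while_parsing": ctx.current_rule, # whatever the parser last recorded
    })
    t.lexer.skip(1)

# quick self-check with stand-ins for PLY's lexer and LexToken
class _FakeLexer:
    def __init__(self): self.skipped = 0
    def skip(self, n): self.skipped += n

class _FakeTok:
    def __init__(self):
        self.value, self.lineno, self.lexer = "$rest", 3, _FakeLexer()

tok = _FakeTok()
t_error(tok)
print(ctx.errors[0]["char"], tok.lexer.skipped)   # -> $ 1
```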