Dataset columns (name: dtype, observed min to max or string-length range): Available Count: int64 (1 to 31) | AnswerCount: int64 (1 to 35) | GUI and Desktop Applications: int64 (0 to 1) | Users Score: int64 (-17 to 588) | Q_Score: int64 (0 to 6.79k) | Python Basics and Environment: int64 (0 to 1) | Score: float64 (-1 to 1.2) | Networking and APIs: int64 (0 to 1) | Question: string (lengths 15 to 7.24k) | Database and SQL: int64 (0 to 1) | Tags: string (lengths 6 to 76) | CreationDate: string (lengths 23 to 23) | System Administration and DevOps: int64 (0 to 1) | Q_Id: int64 (469 to 38.2M) | Answer: string (lengths 15 to 7k) | Data Science and Machine Learning: int64 (0 to 1) | ViewCount: int64 (13 to 1.88M) | is_accepted: bool (2 classes) | Web Development: int64 (0 to 1) | Other: int64 (1 to 1) | Title: string (lengths 15 to 142) | A_Id: int64 (518 to 72.2M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 1 | 0 | 1 | 0 | 1 | 1.2 | 0 |
We are mostly a .NET shop and want to cover everything with the FitNesse acceptance testing framework. Recently we had to write a couple of scripts for Unix and we used Python. Now the suggestion has been made that we should write FitNesse tests for these Python scripts and integrate them into our automated test process.
What would be the general strategy for doing this? Should I start a Python project in Visual Studio, add the Python scripts to it, and expect it to work? Should I use a normal C# project and look for some sort of compiler or interpreter in IronPython that can load these Python scripts and either run them as is, or generate a .NET assembly out of them, or something?
Does anyone with experience in IronPython have a good suggestion?
Also, what is the latest version of IronPython (and Visual Studio integration tools) that supports .NET 3.5 and Visual Studio 2008 without compiling anything?
I tried the latest, but it only supports .NET 4 and VS 2010. So I tried 2.6, but it doesn't seem to come with Visual Studio integration.
Thanks
| 0 |
visual-studio-2008,.net-3.5,ironpython,fitnesse,cpython
|
2011-04-18T08:30:00.000
| 0 | 5,700,318 |
You can do it either way. We use the IronPython runtime embedded in our code, so we use the hosting options to test any Python via C# unit test classes. Remember you can fire up an IronPython engine (.NET 3.5) or a DLR-based script host (.NET 4.0) and give it a string.
In .NET 3.5 there is no DLR, so IronPython 1.1 is the order of the day, whereas in 4.0 the DLR supports IronPython 2.6 out of the box, and there is a CodePlex update that is Python 2.7 level.
However, one of the key aspects of automated unit testing is to use a language that's close to the original language, so the other way is probably 'more' classical!
| 0 | 342 | true | 0 | 1 |
Using IronPython so I can test normal Python scripts in .net
| 5,700,447 |
2 | 3 | 0 | 3 | 0 | 1 | 0.197375 | 0 |
I am a newbie in Python.
I have a unicode string in Tamil.
When I use the sys.getdefaultencoding() I get the output as "Cp1252"
The problem is that when I use text = testString.decode("utf-8") I get the error "UnicodeEncodeError: 'charmap' codec can't encode characters in position 0-8: character maps to undefined"
| 0 |
python
|
2011-04-18T10:30:00.000
| 0 | 5,701,569 |
> When I use the sys.getdefaultencoding() I get the output as "Cp1252"
Two comments on that: (1) it's "cp1252", not "Cp1252". Don't type from memory. (2) Whoever caused sys.getdefaultencoding() to produce "cp1252" should be told politely that that's not a very good idea.
As for the rest, let me guess. You have a unicode object that contains some text in the Tamil language. You try, erroneously, to decode it. Decode means to convert from a str object to a unicode object. Unfortunately you don't have a str object, and even more unfortunately you get bounced by one of the very few awkish/perlish warts in Python 2: it tries to make a str object by encoding your unicode string using the system default encoding. If that's 'ascii' or 'cp1252', encoding will fail. That's why you get a Unicode*En*codeError instead of a Unicode*De*codeError.
Short answer: do text = testString.encode("utf-8"), if that's what you really want to do. Otherwise please explain what you want to do, and show us the result of print repr(testString).
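A minimal Python 2 sketch of the failure mode described above; the Tamil string here is a made-up placeholder (any non-ASCII text reproduces it):

```python
# -*- coding: utf-8 -*-
# Python 2: a unicode object holding non-ASCII (Tamil) text.
testString = u'\u0b95\u0bbe\u0bb2\u0bc8'

# Wrong: decode() on a unicode object first *encodes* it with the
# system default codec, which fails on non-ASCII characters.
try:
    testString.decode('utf-8')
except UnicodeEncodeError as e:
    print('decode on a unicode object raised: %s' % e)

# Right: encode the unicode object to a UTF-8 byte string (str).
text = testString.encode('utf-8')
print(repr(text))
```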
| 0 | 879 | false | 0 | 1 |
Conversion of Unicode
| 5,702,742 |
2 | 3 | 0 | 0 | 0 | 1 | 0 | 0 |
I am a newbie in Python.
I have a unicode string in Tamil.
When I use the sys.getdefaultencoding() I get the output as "Cp1252"
The problem is that when I use text = testString.decode("utf-8") I get the error "UnicodeEncodeError: 'charmap' codec can't encode characters in position 0-8: character maps to undefined"
| 0 |
python
|
2011-04-18T10:30:00.000
| 0 | 5,701,569 |
You need to know which character encoding testString is using. If it is not UTF-8, an error will occur when using decode('utf8').
| 0 | 879 | false | 0 | 1 |
Conversion of Unicode
| 9,630,980 |
1 | 3 | 0 | 2 | 1 | 0 | 0.132549 | 0 |
How can I change the password of the Ubuntu root user from a Python script? Thanks.
| 0 |
python,linux,change-password
|
2011-04-18T17:31:00.000
| 1 | 5,706,597 |
You can modify /etc/passwd (/etc/shadow) with a Python script, which will need root permissions: sudo python modify.py /etc/passwd (where modify.py is your script that changes the password).
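Rather than editing /etc/passwd or /etc/shadow by hand, a less error-prone route is piping the new credentials to chpasswd, which ships with Ubuntu. A minimal sketch (the script must itself run as root; the username and password are placeholders):

```python
import subprocess

def set_password(user, new_password):
    # chpasswd reads "user:password" lines on stdin and updates /etc/shadow.
    # Python 2 shown; on Python 3, encode the string to bytes first.
    p = subprocess.Popen(['chpasswd'], stdin=subprocess.PIPE)
    p.communicate('%s:%s\n' % (user, new_password))
    return p.returncode

set_password('root', 's3cret')  # run the whole script with sudo
```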
| 0 | 4,855 | false | 0 | 1 |
Changing password, python, linux
| 5,706,671 |
1 | 2 | 0 | 4 | 8 | 1 | 1.2 | 0 |
I don't know a whole lot about the technical details for constructing and sending email (I figure that's what libraries are for). Seems like both of these classes can be used to construct a basic text email, so which one should I use?
What are the differences between these? When is appropriate to use one vs. the other?
| 0 |
python,email,mime
|
2011-04-18T22:38:00.000
| 0 | 5,709,688 |
One difference I found was that MIMEText has the Content-Type header set to something like 'text/plain', whereas Message does not set this header. For me, that's a good enough reason to default to MIMEText, but I'd be interested to know if there are other differences.
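A quick way to see that difference for yourself with the standard library; the values in the comments are what the library sets by default:

```python
from email.message import Message
from email.mime.text import MIMEText

plain = Message()
plain.set_payload('hello')
print(plain['Content-Type'])   # None: Message sets no Content-Type header

mime = MIMEText('hello')
print(mime['Content-Type'])    # text/plain; charset="us-ascii"
print(mime['MIME-Version'])    # 1.0
```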
| 0 | 783 | true | 0 | 1 |
When should I use email.message.Message vs. email.mime.text.MIMEText when constructing an email in Python?
| 5,710,604 |
1 | 3 | 0 | 1 | 4 | 0 | 0.066568 | 0 |
I'm trying to change my remote server's timezone via Fabric like so:
run("export TZ=\":Pacific/Auckland\"")
run("date")
This doesn't seem to work. run("date") gives me:
Tue Apr 19 00:19:58 CDT 2011 which is not the timezone I just set.
If I just log into the server and run the same bash commands, everything's just as expected:
[lazo@lazoweb]$ date
Tue Apr 19 00:20:00 CDT 2011
[lazo@lazoweb]$ export TZ=":Pacific/Auckland"
[lazo@lazoweb]$ date
Tue Apr 19 17:20:20 NZST 2011
Can anyone shed some light on this? What am I missing?
| 0 |
python,bash,fabric
|
2011-04-19T05:39:00.000
| 1 | 5,712,062 |
This only works for the current shell. Close the shell, start a new one, type date, and you will see that TZ has reset to the default timezone. Each Fabric run() executes in its own shell session, so the export from the first run() is gone by the time the second one runs. Even with Fabric, if you captured the output within the same session you'd see that the time zone does get set correctly, but as the command ends, so does its shell, and hence the TZ variable is no longer available.
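Since each run() gets a fresh shell, the fix is to set TZ and read the date within the same run() call; a minimal Fabric 1.x-style sketch:

```python
from fabric.api import run

def show_nz_time():
    # Same shell, so the variable is still set when date executes.
    run('export TZ=":Pacific/Auckland"; date')
    # Or scope the variable to the single command:
    run('TZ=":Pacific/Auckland" date')
```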
| 0 | 733 | false | 0 | 1 |
How do I set remote server TimeZone via Fabric?
| 5,712,086 |
1 | 2 | 0 | 0 | 4 | 0 | 0 | 0 |
I have a project which is mostly written in C, but it also has a Python API which uses Python extension modules written in C.
What is the best way to write installation/deployment scripts for a Linux/UNIX environment? Usually, I use the make utility to compile and install projects written in C. Most of the time, I just have the make utility compile all the source code into executables, and then copy the executables to /usr/local/bin.
However, my Python API requires the compilation/installation of shared library (.so) files for use with Python. This basically involves compiling the necessary C files, and then copying the shared libraries to some directory that is part of the Python sys.path, such as /usr/local/lib/pythonX.X/dist-packages/.
But how can the appropriate directory for Python extension modules be detected by the Make utility? Is there an environment variable or something that lists the directories in Python's sys.path?
| 0 |
python,c,linux,unix,makefile
|
2011-04-19T16:19:00.000
| 1 | 5,719,506 |
I would separate the project out into two parts. Your C part can use make as usual. Your python module can use the python setup tools, which are capable of building extensions.
(You can also write install targets, so you don't have to copy things manually)
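A minimal sketch of the Python side of that split using distutils (the stdlib tool behind most setup.py files of this era); the module and file names are placeholders. This also answers the sys.path question: distutils computes the correct installation directory itself, so the Makefile never has to guess the dist-packages path.

```python
# setup.py for the Python API part of the project
from distutils.core import setup, Extension

setup(
    name='myproject-api',
    version='1.0',
    ext_modules=[
        # Compiles myapi.c into myapi.so and installs it on sys.path.
        Extension('myapi', sources=['myapi.c']),
    ],
)
```

Running python setup.py install (perhaps invoked from an install target in the top-level Makefile) then puts the .so where the interpreter will find it.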
| 0 | 146 | false | 0 | 1 |
Install script for C project with Python API
| 5,727,575 |
1 | 2 | 0 | 2 | 1 | 1 | 0.197375 | 0 |
Yes, I've searched. So after spending about 4-5 hours struggling just to get Python files running, I recently stumbled over the solution to get it running through the environment variables, like this: cmd -> python -> Python starts, yay yay.
Since it didn't work to do it through the command line and similar, I had to do it manually through the Windows interface. Now that it's working, however, I cannot run .py files without typing out the full path, like this: python C:\X\X\X\test.py, which is obviously also starting to get annoying.
So now I'm trying to find out which variable I have to change (yet again) to be able to type just 'python test.py' and have it run. Sorry if I come off vague, but it's always a major pain for me to set up a new programming language, and it kills my mood.
Thanks for help, it'll be really appreciated.
| 0 |
python,windows,development-environment
|
2011-04-19T19:58:00.000
| 1 | 5,721,948 |
To make python executable on your command line, you need to add it to your PATH environment variable, which it sounds like you have done on the command line. It is quite simple to add directories to the PATH in Windows if you know where to look. Essentially, you need to get to the Environment Variables dialog box, which is slightly different for each version of Windows.
For Windows XP: Start -> Control Panel -> System -> Advanced -> Environment Variables
For Windows Vista, 7: Click the Start Orb, right-click Computer and select Properties -> Advanced -> Environment Variables
Then, in the lower of the two boxes, find Path and click Edit. Change it so that C:\Python27 (or whichever version of Python you have) is at one end of the list, separated from the other entries by a semicolon (e.g. C:\Python27;C:\Program Files ...)
Once you've done this, python will work at the command line whenever you open a command window.
Regarding your second issue, however, there isn't much you can do. You must either specify the complete path to your script or already be in the same directory as the script. That is, if the script is in C:\X\X\X you will either need to invoke it as C:\X\X\X\test.py or first cd C:\X\X\X.
| 0 | 694 | false | 0 | 1 |
Setting up a Python development environment on Windows
| 5,722,080 |
2 | 2 | 0 | 0 | 3 | 0 | 0 | 0 |
I have a Python program that sets up a WordPress site on my server. It downloads the zip and unzips it into a directory, sets up the database and user, and configures the config file. Now I would like to call the wp_install function in wp-admin/include/upgrade.php and pass it the parameters it needs: $weblog_title, $user_name, $admin_email ...
My question is: how can I call this function from Python? Can I do a urllib.urlopen, and if so how do I call the wp_install function with the right parameters?
| 0 |
python,wordpress
|
2011-04-20T14:56:00.000
| 1 | 5,732,384 |
Urllib is an option, but because your script is running on the local machine anyway, I would probably use os.system. That way you can execute the PHP script as you would from a shell. You have to look into the PHP file to see how to pass the parameters.
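A minimal sketch of that idea, using subprocess (the modern replacement for os.system); the wrapper script name and argument order are hypothetical and would have to match how your PHP side reads its parameters:

```python
import subprocess

# Hypothetical wrapper: a small PHP file that requires upgrade.php
# and calls wp_install() with the CLI arguments it receives.
subprocess.call([
    'php', 'run_wp_install.php',
    'My Blog',             # $weblog_title
    'admin',               # $user_name
    'admin@example.com',   # $admin_email
])
```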
| 0 | 1,206 | false | 1 | 1 |
Automate WordPress Install from python
| 5,732,608 |
2 | 2 | 0 | 1 | 3 | 0 | 0.099668 | 0 |
I have a Python program that sets up a WordPress site on my server. It downloads the zip and unzips it into a directory, sets up the database and user, and configures the config file. Now I would like to call the wp_install function in wp-admin/include/upgrade.php and pass it the parameters it needs: $weblog_title, $user_name, $admin_email ...
My question is: how can I call this function from Python? Can I do a urllib.urlopen, and if so how do I call the wp_install function with the right parameters?
| 0 |
python,wordpress
|
2011-04-20T14:56:00.000
| 1 | 5,732,384 |
It looks like wp_install() gets called inside of /wp-admin/install.php during step 1, and after form data has been validated. If you submit ?step=1& ... (all of the other required form fields) it should result in calling wp_install. So yes, you should be able to use urllib(2) for this.
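A hedged sketch of that approach with urllib/urllib2; the step value and field names are assumptions that you should verify against the install form of your WordPress version:

```python
import urllib
import urllib2

params = urllib.urlencode({
    'step': 1,                         # the step that triggers wp_install()
    'weblog_title': 'My Blog',
    'user_name': 'admin',
    'admin_email': 'admin@example.com',
    'blog_public': 1,
})
# POST the form data, as a browser submitting the install form would.
resp = urllib2.urlopen('http://example.com/wp-admin/install.php', params)
print(resp.read())
```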
| 0 | 1,206 | false | 1 | 1 |
Automate WordPress Install from python
| 5,732,698 |
4 | 8 | 0 | 2 | 1 | 0 | 0.049958 | 0 |
Please advise me on a good Python IDE. I was using NetBeans, but it does not have suitable code completion (when I press "." it gives me the methods of all Python classes). It would be nice if NetBeans worked as it does for, e.g., PHP.
Thank you.
| 0 |
python,ide
|
2011-04-21T09:59:00.000
| 0 | 5,742,472 |
Eclipse with PyDev.
Nothing better out there.
| 0 | 1,928 | false | 0 | 1 |
Python IDE with auto completion
| 5,742,542 |
4 | 8 | 0 | 1 | 1 | 0 | 1.2 | 0 |
Please advise me on a good Python IDE. I was using NetBeans, but it does not have suitable code completion (when I press "." it gives me the methods of all Python classes). It would be nice if NetBeans worked as it does for, e.g., PHP.
Thank you.
| 0 |
python,ide
|
2011-04-21T09:59:00.000
| 0 | 5,742,472 |
Well, many IDEs now come with pretty good code completion. Eclipse with PyDev is nice, or you can get Aptana Studio 3 to perform similarly.
There's also JetBrains' PyCharm, if you don't mind paying for a licence (they give a trial version too, if you want to test it before buying). There are a lot of such IDEs; I guess you have to try them out to see which suits your code completion tastes better.
| 0 | 1,928 | true | 0 | 1 |
Python IDE with auto completion
| 5,742,663 |
4 | 8 | 0 | 1 | 1 | 0 | 0.024995 | 0 |
Please advise me on a good Python IDE. I was using NetBeans, but it does not have suitable code completion (when I press "." it gives me the methods of all Python classes). It would be nice if NetBeans worked as it does for, e.g., PHP.
Thank you.
| 0 |
python,ide
|
2011-04-21T09:59:00.000
| 0 | 5,742,472 |
PyCharm for pay or Komodo Edit for free.
| 0 | 1,928 | false | 0 | 1 |
Python IDE with auto completion
| 5,743,360 |
4 | 8 | 0 | 1 | 1 | 0 | 0.024995 | 0 |
Please advise me on a good Python IDE. I was using NetBeans, but it does not have suitable code completion (when I press "." it gives me the methods of all Python classes). It would be nice if NetBeans worked as it does for, e.g., PHP.
Thank you.
| 0 |
python,ide
|
2011-04-21T09:59:00.000
| 0 | 5,742,472 |
Try Geany and Ctrl+Enter. Foo bar <= wrote this because SO said the answer was too short ;)
| 0 | 1,928 | false | 0 | 1 |
Python IDE with auto completion
| 5,742,508 |
2 | 2 | 0 | 2 | 0 | 0 | 1.2 | 0 |
Let's say Tight Ars & Co. is a company with incredibly tight security policies, and let's assume I work for this company. Assume they have one task that requires a Python script to write to Excel files, and I find this incredibly wonderful library called xlwt. Now my script is able to write to Excel files, everything is wonderful and the sun is shining. I release the code, and suddenly I'm asked: what is this thingamajig setup.py, and why should we run it? Wait, we'll not even run it, we want the environment to be clean of third-party code, etc. Since I'm unaware of any wizardry or voodoo, is there any way I can package the dependent libraries and import them in my script?
| 0 |
python,module,package,archive,xlwt
|
2011-04-21T20:14:00.000
| 0 | 5,749,326 |
All setup.py typically does with any pure-Python package is copy files into a standard place and compile the .py files to .pyc. I can't imagine why your employer would regard that as (nasty) third-party software, but the source of the package is OK, your IDE is OK, Python itself is OK, etc ...
Options:
(1) Copy the xlwt directory from a source distribution to somewhere that's listed in sys.path
(2) Make a ZIP file xlwt.zip containing the contents of the xlwt directory and copy it to ditto (a sketch of this option follows the list).
(3) As (2) but compile the .py files to .pyc first.
If somebody points out that the above involves error-prone manual steps, you can:
(a) write a script to do that
or
(b) copy setup.py, change its name, pretend that you wrote it yourself, use it, ...
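Option (2) in practice: Python imports straight out of a ZIP file once it is on sys.path (via the built-in zipimport machinery). A minimal sketch, assuming xlwt.zip contains the xlwt/ package directory at its top level:

```python
import sys

# zipimport kicks in automatically for ZIP files listed on sys.path.
sys.path.insert(0, '/path/to/xlwt.zip')

import xlwt  # loaded from inside the archive

wb = xlwt.Workbook()
wb.add_sheet('demo')
wb.save('demo.xls')
```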
| 0 | 458 | true | 0 | 1 |
Python import module (xlwt) from archive
| 5,750,297 |
2 | 2 | 0 | 2 | 0 | 0 | 0.197375 | 0 |
Let's say Tight Ars & Co. is a company with incredibly tight security policies, and let's assume I work for this company. Assume they have one task that requires a Python script to write to Excel files, and I find this incredibly wonderful library called xlwt. Now my script is able to write to Excel files, everything is wonderful and the sun is shining. I release the code, and suddenly I'm asked: what is this thingamajig setup.py, and why should we run it? Wait, we'll not even run it, we want the environment to be clean of third-party code, etc. Since I'm unaware of any wizardry or voodoo, is there any way I can package the dependent libraries and import them in my script?
| 0 |
python,module,package,archive,xlwt
|
2011-04-21T20:14:00.000
| 0 | 5,749,326 |
Unless I am misunderstanding the question you should be able to obtain the source archive and simply copy the "xlwt" directory to the same directory as your script and it should be importable from the local directory.
| 0 | 458 | false | 0 | 1 |
Python import module (xlwt) from archive
| 5,749,670 |
3 | 5 | 0 | 1 | 12 | 0 | 0.039979 | 0 |
I have a GNU Radio application which utilizes both Python and C++ code. I want to be able to signal the C++ code of an event. If they were in the same scope I would normally use a simple boolean, but the code is separate to the point where some form of shared memory is required. The code in question is performance-critical so an efficient method is required.
I was initially thinking about a shared memory segment that is accessible by both Python and C++. Therefore I could set a flag in the python code and check it from C++. Since I just need a simple flag to pause the C++ code, would a semaphore suffice?
To be clear, I need to set a flag from Python and the C++ code will simply check this flag, and if it is set enter a busy loop.
So would trying to implement a shared memory segment between Python/C++ be a reasonable approach? How about a semaphore? On Linux, which is easier to implement?
Thanks!
| 0 |
c++,python,linux,ipc
|
2011-04-22T15:11:00.000
| 0 | 5,756,813 |
DBus looks promising. It supports signals, so you should be able to stop an application on demand. However, I'm not sure whether its performance will be enough for you.
| 0 | 16,031 | false | 0 | 1 |
Simple but fast IPC method for a Python and C++ application?
| 5,756,950 |
3 | 5 | 0 | 5 | 12 | 0 | 0.197375 | 0 |
I have a GNU Radio application which utilizes both Python and C++ code. I want to be able to signal the C++ code of an event. If they were in the same scope I would normally use a simple boolean, but the code is separate to the point where some form of shared memory is required. The code in question is performance-critical so an efficient method is required.
I was initially thinking about a shared memory segment that is accessible by both Python and C++. Therefore I could set a flag in the python code and check it from C++. Since I just need a simple flag to pause the C++ code, would a semaphore suffice?
To be clear, I need to set a flag from Python and the C++ code will simply check this flag, and if it is set enter a busy loop.
So would trying to implement a shared memory segment between Python/C++ be a reasonable approach? How about a semaphore? On Linux, which is easier to implement?
Thanks!
| 0 |
c++,python,linux,ipc
|
2011-04-22T15:11:00.000
| 0 | 5,756,813 |
Why not open a unix socket? Or use DBus
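A minimal sketch of the Python side of the Unix-socket idea; the socket path is a placeholder, and the C++ side would bind an AF_UNIX datagram socket at the same path and poll it:

```python
import socket

def send_flag(flag):
    # One tiny datagram per state change; no framing or connection needed.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    s.sendto(b'1' if flag else b'0', '/tmp/radio-flag.sock')
    s.close()

send_flag(True)  # tell the C++ side to pause
```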
| 0 | 16,031 | false | 0 | 1 |
Simple but fast IPC method for a Python and C++ application?
| 5,756,983 |
3 | 5 | 0 | 1 | 12 | 0 | 0.039979 | 0 |
I have a GNU Radio application which utilizes both Python and C++ code. I want to be able to signal the C++ code of an event. If they were in the same scope I would normally use a simple boolean, but the code is separate to the point where some form of shared memory is required. The code in question is performance-critical so an efficient method is required.
I was initially thinking about a shared memory segment that is accessible by both Python and C++. Therefore I could set a flag in the python code and check it from C++. Since I just need a simple flag to pause the C++ code, would a semaphore suffice?
To be clear, I need to set a flag from Python and the C++ code will simply check this flag, and if it is set enter a busy loop.
So would trying to implement a shared memory segment between Python/C++ be a reasonable approach? How about a semaphore? On Linux, which is easier to implement?
Thanks!
| 0 |
c++,python,linux,ipc
|
2011-04-22T15:11:00.000
| 0 | 5,756,813 |
You can try using custom signals. I don't know about Python code being able to send custom signals, but your C/C++ can certainly handle custom signals such as SIGIO.
If you have stringent response-time requirements, you might need to look beyond your application code and into some type of OS with support for real-time signals (rt-linux, muOS, etc.)
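For the record, Python can send signals via os.kill, so the Python-to-C++ direction works; a minimal sketch, assuming the C++ process's PID is known (here read from a hypothetical pidfile), with the handler installed on the C++ side via sigaction():

```python
import os
import signal

# SIGUSR1/SIGUSR2 are the conventional user-defined signals.
pid = int(open('/var/run/radio.pid').read())
os.kill(pid, signal.SIGUSR1)  # the C++ handler sets the pause flag
```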
| 0 | 16,031 | false | 0 | 1 |
Simple but fast IPC method for a Python and C++ application?
| 5,757,091 |
1 | 1 | 0 | 8 | 10 | 0 | 1 | 0 |
In the default installation of CEDET 1.0, completion can only track global-scope symbols in the current file. This is not much different from the built-in completion functions (dabbrev-expand or hippie-expand).
It can complete symbols from neither imported modules nor class properties.
Not to mention it cannot handle 'self'.
Is it possible to tweak semantic to do these things?
P.S.
The ECB code browser successfully sees all imports/base classes and the like.
It is symbol completion that works incorrectly, or is not properly set up.
| 0 |
python,emacs,code-completion,cedet
|
2011-04-23T20:46:00.000
| 1 | 5,766,832 |
CEDET support for each language is slightly different. In the case of Python, the 1.0 release of CEDET hadn't been configured to convert a Python import into a file name. In addition, 'self' is similar to 'this' in C++, which needs to be added by completion logic since it isn't declared. These two features were added to the bzr repository in January of this year. I am not a Python programmer, but I recall reports that this fixed a range of the most basic features of smart completion, so that symbols from imported libraries work. There was also new code in bzr for Python system paths.
Thus, I recommend downloading CEDET from bzr to get these features to see if it now does what you would expect for smart completion.
| 0 | 2,830 | false | 0 | 1 |
using emacs CEDET completion for python
| 5,770,424 |
1 | 1 | 0 | 4 | 1 | 1 | 0.664037 | 0 |
I'm starting up a distributed computing project, somewhat like the various @home projects out there (though not doing simple scientific computing, but instead occasionally engaging the remote user in tasks involving presentation of audio and visual stimuli) and I need to get a sense of the relative system performance across machines that run my app so i can exclude data from machines that are very sub par (because these might not have presented the stimuli faithfully). The app is written in python, and I see that the pystone module provides a benchmark of sorts, but I also see that pystone has been disparaged as a benchmark in some cases. To my relatively novice understanding of benchmarking, pystone may not be good for general benchmarking because it collapses performance to a single score, but for my purposes where all I want is a single score to compare across machines, I think it should suffice. Are there any downsides I'm missing to using pystone for obtaining relative overall system performance?
| 0 |
python,benchmarking
|
2011-04-25T02:27:00.000
| 0 | 5,774,685 |
The big problem with Pystone as a benchmark of anything (whether it be Python interpreter versions or the underlying hardware) is that it simply doesn't exercise enough different aspects of the computing environment.
Integer arithmetic, floating point arithmetic, vector operations, dedicated media hardware, memory throughput, I/O throughput, cache sizes, threading architecture, pipelining architecture... the list of hardware features that can vary across machines goes on and on, and is the biggest reason why the first question in reply to "Which is faster, A or B?" will usually be "Well, what do you plan to use them for?". The answer to the speed question is likely to be different depending on whether you're building a home media centre or a web server or a database server, etc.
Modern computer systems are complex beasts, and the layering of interpreter virtual machines with their own complex object and execution models on top doesn't make things any easier. A naive benchmark like Pystone will let you get a general idea of the basic computing grunt of the CPU, but won't tell you anything about the other potentially limiting factors of the machine.
| 0 | 447 | false | 0 | 1 |
What are the arguments against using pystone to estimate overall relative system performance across multiple systems?
| 5,775,608 |
1 | 1 | 0 | 4 | 1 | 0 | 1.2 | 0 |
I have simple example:
import netsnmp
var = netsnmp.Varbind('ifHCInOctets','0')
res = netsnmp.snmpgetnext(var,Version = 2,DestHost='localhost',Community='public',Timeout=1000000)
print res[0]
time python2 test.py shows me:
real 0m4.086s
user 0m0.073s
sys 0m0.007s
Why does it take 4 seconds when Timeout = 1000000 (i.e. 1 second)? The snmpd server is not running on localhost.
| 0 |
python,net-snmp
|
2011-04-26T15:17:00.000
| 0 | 5,792,497 |
When you pass Timeout=? you are setting the maximum time that snmp's internal select loop should wait before registering a timeout. Setting this to 1000000 means "wait 1 million microseconds", which is 1 second.
However there is also a Retries=? argument that specifies the number of times the snmp client will re-attempt the request after a timeout, so for Timeout=1000000, Retries=0 select will attempt only 1 request and timeout in 1 second. If Retries=1 it will try twice and timeout in 2 seconds.
So depending on the combination of Timeout and Retries you will see different amounts of delay.
The default number of Retries is 3, so 1 try + 3 retries of 1 second each = 4 seconds.
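You can confirm this by pinning the retry count yourself; a sketch of the original call with retransmissions disabled, which should fail in roughly 1 second instead of 4 (Retries is a session option of the net-snmp Python bindings, but verify the keyword against your version):

```python
import netsnmp

var = netsnmp.Varbind('ifHCInOctets', '0')
# 1-second timeout, no retries: ~1s total instead of ~4s.
res = netsnmp.snmpgetnext(var, Version=2, DestHost='localhost',
                          Community='public', Timeout=1000000, Retries=0)
print(res[0])
```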
| 0 | 3,013 | true | 0 | 1 |
problem with timeout in netsnmp lib
| 5,794,510 |
1 | 4 | 0 | 0 | 2 | 1 | 0 | 0 |
I'm relatively new to programming, and I would like to write a simple scripting language as an exercise, and to learn a bit. I have experience with Python, C, and Ruby, and would like to learn to write a scripting language in Python. What should be my first step? How should I start?
| 0 |
python,scripting,scripting-language
|
2011-04-27T00:56:00.000
| 0 | 5,798,173 |
Draw out a finite state automaton of how your language is going to work, write a syntax analyzer, draw some diagrams. Hack on!
| 0 | 2,946 | false | 0 | 1 |
write a scripting language in python
| 5,798,276 |
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 |
I'm wondering what the best way is to send a fully scaled (1:1) DXF drawing to a CAD plotter using Python. Has anyone here ever done this?
For those who want to know why:
I've written a program for my employer that automates the drawing of detailed
schematics, apparently so our engineering dept can spend more time doing nothing. The issue now is that they would like to completely eliminate AutoCAD, since it's only used to plot the finished drawing.
Mind you these drawings are used for non-trivial things like checking the dimensions of critical components used in commercial jetliners.
| 0 |
python,printing,cad
|
2011-04-27T17:41:00.000
| 0 | 5,808,236 |
In case anyone else runs into this problem (pretty unlikely) I thought I'd post briefly what I did in the end:
1.) Wrote a short script to capture the DXF as a BMP (basically just a screen grab that appends the scale to the drawing)
2.) Wrote a print dialog with PyQt4 that's a clone of AutoCAD's plot window, except that it has to pull the scaling info from the BMP.
My Python skills are awful, so there are likely better solutions, but this worked.
| 0 | 686 | true | 0 | 1 |
Cad plotters and Python
| 5,861,459 |
1 | 1 | 0 | 1 | 4 | 0 | 1.2 | 0 |
I have two systems running the same set of Django unittests. Some of the tests use the @unittest.expectedFailure decorator.
On one system, these are running fine and reporting at the end of the test run OK (expected failures=10, unexpected successes=2).
On the other system, the same tests error, but raise _ExpectedFailure and _UnexpectedSuccess without tracebacks.
Has anyone seen this behavior before? Is it a configuration issue? Both systems are running Python 2.7, Django 1.3, and have unittest and unittest2 installed.
| 0 |
python,unit-testing
|
2011-04-27T19:16:00.000
| 0 | 5,809,333 |
I had the same problem, and I got it to work by deleting /usr/local/lib/python2.7 and then reinstalling everything from scratch.
The reason for this, I believe, is that Python may not have cleared its compiled object and cache files (*.pyc, *.pyo) from its working directory. That is, not YOUR project's directory, but where Python actually runs from.
Not sure if that's it, but it worked for me!!
| 0 | 665 | true | 1 | 1 |
Python raising _ExpectedFailure for unittests with @unittest.expectedFailure
| 5,810,407 |
1 | 1 | 0 | 0 | 3 | 1 | 0 | 0 |
My program does a lot of file processing, and as the files are large I prefer to write them as GZIP. One challenge is that I often need to read files as they are being written. This is not a problem without GZIP compression, but when compression is on, the reading complains about failed CRC, which I presume might have something to do with compression info not being flushed properly when writing. Is there any way to use GZIP with Python such that, when I write and flush to a file (but not necessarily close the file), that it can be read as well?
| 0 |
python,file,io,gzip
|
2011-04-29T08:48:00.000
| 0 | 5,829,964 |
I think flushing data to a compressed file just writes the data into the file, but the CRC and length trailer is written only on close(), so you need to close the file first, and only then can you open it and read all the data you need. If you need to write large amounts of data, you could try using a database like PostgreSQL or MySQL, where you can specify a table with compression (archive, compressed); you would be able to insert data into the table and read it back, and the database software would do all the rest for you (compression on inserts, decompression on selects).
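If you really do need to read the stream before the writer closes it, one workaround (outside the gzip module's documented use) is zlib.decompressobj, which inflates a gzip stream incrementally and does not need the final CRC trailer until the stream actually ends; a minimal sketch:

```python
import zlib

# wbits = MAX_WBITS | 16 tells zlib to expect a gzip header.
d = zlib.decompressobj(zlib.MAX_WBITS | 16)

with open('data.gz', 'rb') as f:
    partial = f.read()  # whatever the writer has flushed so far

# Decompresses the available bytes; no CRC check until end-of-stream.
print(d.decompress(partial))
```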
| 0 | 586 | false | 0 | 1 |
Reading gzip file that is currently being written to
| 5,832,082 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 |
I need to exchange data between a Python daemon (cluster nodes send data to this daemon) and a PHP script (Apache), which is accessed by web browsers. What do you recommend as a technology to establish a connection between them? Both the Python daemon and Apache/PHP are on the same machine.
Thank you.
| 0 |
php,python
|
2011-04-29T12:58:00.000
| 1 | 5,832,346 |
If you want things to be synchronous, use a named socket (an amazing feature on Unix systems).
If you want things to be asynchronous, use pickle (there is a PHP version of it too).
| 0 | 568 | false | 0 | 1 |
Python data passing
| 5,832,407 |
1 | 5 | 0 | 2 | 9 | 0 | 0.07983 | 0 |
Is it possible to write Python scripts in HTML code similarly to how you write PHP between <?php ... ?> tags?
I'd like my Python application to run in the browser.
thank you for help
| 0 |
python,html
|
2011-04-30T14:47:00.000
| 0 | 5,842,487 |
You are mixing up client-side and server-side execution of code.
Browsers support only JavaScript.
Any application server or Python-based web framework supports a template language where you can mix HTML and Python in some way or other.
| 0 | 49,400 | false | 1 | 1 |
Python scripts in HTML
| 5,842,534 |
2 | 5 | 0 | 0 | 2 | 0 | 0 | 1 |
Is there a good high level library that can be used for IP address manipulation? I need to do things like:
Given a string find out if it is a valid IPv4/IPv6 address.
Have functionality like ntop and pton
etc
I can use the low-level inet_ntop() etc., but is there a better library that handles these well and fast (C/C++/Python)?
| 0 |
c++,python,c,freebsd,ipv6
|
2011-05-02T12:50:00.000
| 0 | 5,857,320 |
I have the mind-boggling IPv4/IPv6 validating regexps around, which are quite long and non-trivial to produce. I can share them if you want.
| 0 | 1,307 | false | 0 | 1 |
Efficient IP address c/c++ library on unix
| 5,857,748 |
2 | 5 | 0 | 1 | 2 | 0 | 0.039979 | 1 |
Is there a good high level library that can be used for IP address manipulation? I need to do things like:
Given a string find out if it is a valid IPv4/IPv6 address.
Have functionality like ntop and pton
etc
I can use the low-level inet_ntop() etc., but is there a better library that handles these well and fast (C/C++/Python)?
| 0 |
c++,python,c,freebsd,ipv6
|
2011-05-02T12:50:00.000
| 0 | 5,857,320 |
If you are writing a sockets app it's highly unlikely that address manipulation is going to be your most important consideration. Don't waste time on this when you have network I/O to worry about.
| 0 | 1,307 | false | 0 | 1 |
Efficient IP address c/c++ library on unix
| 5,857,539 |
5 | 8 | 0 | 3 | 31 | 0 | 0.07486 | 0 |
Hey I've been using Linux for a while and thought it was time to finally dive into shell scripting.
The problem is I've failed to find any significant advantage of using Bash over something like Perl or Python. Are there any performance or power differences between the two? I'd figure Python/Perl would be better suited as far as power and efficiency go.
| 0 |
python,linux,perl,bash,scripting
|
2011-05-02T15:12:00.000
| 1 | 5,858,877 |
If you want to execute programs installed on the machine, nothing beats Bash. You can always make a system call from Perl or Python, but I find it to be a hassle to read return values, etc.
And since you know it will work pretty much anywhere throughout all of time...
| 0 | 26,872 | false | 0 | 1 |
Is there an advantage to using Bash over Perl or Python?
| 5,858,924 |
5 | 8 | 0 | 2 | 31 | 0 | 0.049958 | 0 |
Hey I've been using Linux for a while and thought it was time to finally dive into shell scripting.
The problem is I've failed to find any significant advantage of using Bash over something like Perl or Python. Are there any performance or power differences between the two? I'd figure Python/Perl would be better suited as far as power and efficiency go.
| 0 |
python,linux,perl,bash,scripting
|
2011-05-02T15:12:00.000
| 1 | 5,858,877 |
The advantage of shell scripting is that it's globally present on *ix boxes, and has a relatively stable core set of features you can rely on to run everywhere. With Perl and Python you have to worry about whether they're available and if so what version, as there have been significant syntactical incompatibilities throughout their lifespans. (Especially if you include Python 3 and Perl 6.)
The disadvantage of shell scripting is everything else. Shell scripting languages are typically lacking in expressiveness, functionality and performance. And hacking command lines together from strings in a language without strong string processing features and libraries, to ensure the escaping is correct, invites security problems. Unless there's a compelling compatibility reason you need to go with shell, I would personally plump for a scripting language every time.
| 0 | 26,872 | false | 0 | 1 |
Is there an advantage to using Bash over Perl or Python?
| 5,858,956 |
5 | 8 | 0 | 4 | 31 | 0 | 0.099668 | 0 |
Hey I've been using Linux for a while and thought it was time to finally dive into shell scripting.
The problem is I've failed to find any significant advantage of using Bash over something like Perl or Python. Are there any performance or power differences between the two? I'd figure Python/Perl would be better suited as far as power and efficiency go.
| 0 |
python,linux,perl,bash,scripting
|
2011-05-02T15:12:00.000
| 1 | 5,858,877 |
For big projects, use a language like Perl.
There are a few things you can only do in bash (for example, altering the calling environment when a script is sourced rather than run). Also, shell scripting is commonplace. It is worthwhile to learn the basics and learn your way around the available docs.
Plus there are times when knowing a shell well can save your bacon (on a fork-bombed system where you can't start any new processes, or if /usr/bin and/or /usr/local/bin fail to mount).
| 0 | 26,872 | false | 0 | 1 |
Is there an advantage to using Bash over Perl or Python?
| 5,860,436 |
5 | 8 | 0 | 10 | 31 | 0 | 1 | 0 |
Hey I've been using Linux for a while and thought it was time to finally dive into shell scripting.
The problem is I've failed to find any significant advantage of using Bash over something like Perl or Python. Are there any performance or power differences between the two? I'd figure Python/Perl would be better suited as far as power and efficiency go.
| 0 |
python,linux,perl,bash,scripting
|
2011-05-02T15:12:00.000
| 1 | 5,858,877 |
bash isn't a language so much as a command interpreter that's been hacked to death to allow for things that make it look like a scripting language. It's great for the simplest 1-5 line one-off tasks, but things that are dead simple in Perl or Python like array manipulation are horribly ugly in bash. I also find that bash tends not to pass two critical rules of thumb:
The 6-month rule, which says you should be able to easily discern the purpose and basic mechanics of a script you wrote but haven't looked at in 6 months.
The 'WTF per minute' rule. Everyone has their limit, and mine is pretty small. Once I get to 3 WTFs/min, I'm looking elsewhere.
As for 'shelling out' in scripting languages like Perl and Python, I find that I almost never need to do this, fwiw (disclaimer: I code almost 100% in Python). The Python os and shutil modules have most of what I need most of the time, and there are built-in modules for handling tarfiles, gzip files, zip files, etc. There's a glob module, an fnmatch module... there's a lot of stuff there. If you come across something you need to parallelize, then indent your code a level, put it in a 'run()' method, put that in a class that extends either threading.Thread or multiprocessing.Process, instantiate as many of those as you want, calling 'start()' on each one. Less than 5 minutes to get parallel execution generally.
Best of luck. Hope this helps.
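A minimal sketch of the parallelization pattern described in the previous paragraph:

```python
import multiprocessing

class Worker(multiprocessing.Process):
    def __init__(self, path):
        multiprocessing.Process.__init__(self)
        self.path = path

    def run(self):
        # The body of your former loop, indented one level into run().
        print('processing ' + self.path)

if __name__ == '__main__':
    workers = [Worker(p) for p in ['a.txt', 'b.txt', 'c.txt']]
    for w in workers:
        w.start()          # parallel execution starts here
    for w in workers:
        w.join()
```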
| 0 | 26,872 | false | 0 | 1 |
Is there an advantage to using Bash over Perl or Python?
| 5,860,163 |
5 | 8 | 0 | 4 | 31 | 0 | 0.099668 | 0 |
Hey I've been using Linux for a while and thought it was time to finally dive into shell scripting.
The problem is I've failed to find any significant advantage of using Bash over something like Perl or Python. Are there any performance or power differences between the two? I'd figure Python/Perl would be better suited as far as power and efficiency go.
| 0 |
python,linux,perl,bash,scripting
|
2011-05-02T15:12:00.000
| 1 | 5,858,877 |
The most important advantage of POSIX shell scripts over Python or Perl scripts is that a POSIX shell is available on virtually every Unix machine. (There are also a few tasks shell scripts happen to be slightly more convenient for, but that's not a major issue.) If the portability is not an issue for you, I don't see much need to learn shell scripting.
| 0 | 26,872 | false | 0 | 1 |
Is there an advantage to using Bash over Perl or Python?
| 5,858,911 |
1 | 1 | 0 | 0 | 1 | 0 | 1.2 | 0 |
In NetBeans the toggle-comment shortcut is Ctrl+/, which works well for PHP and Ruby, but for Python it simply does nothing. Can someone help?
| 0 |
python,netbeans,ide,keyboard-shortcuts
|
2011-05-03T23:31:00.000
| 0 | 5,876,909 |
Which version of NetBeans are you using?
I just tried it in NetBeans 6.9 and it works perfectly in Python source
(I presume you have the Python plugin installed)
Version: 0.105 Source: NetBeans Beta
Plugin Description
Python support: editing, refactoring, hints, etc.
| 0 | 1,001 | true | 0 | 1 |
Toggle comment shortcut in netbeans python?
| 5,876,927 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 |
I have a webserver running IIS (Machine A) that is running PHP for me. When a user points their browser to a web page that is hosted on the webserver with a PHP script on it, they need to populate a few forms, and then hit a button that will then run the PHP script, which will fire off a python script I've already built. I am using the exec() command in PHP to call my Python script which is stored locally on the webserver (still Machine A). The idea here is that any user on any machine (with python installed on it) can run the script when they navigate to the webpage.
Unfortunately, one of the forms that is needed for the python script to work is a path to an external drive plugged into the user's machine (Machine B).
My question then is: Is there a way that PHP can execute a python script (stored on Machine A) that is then run locally (on Machine B), so that when the user has entered the location of the drive (Win: F:\, Linux: /dev/sda2, etc.), the python script will know to look at the user's local machine (Machine B) rather than the server the script is stored on (Machine A)?
EDIT: Hopefully I have clarified the question above.
| 0 |
php,python,iis
|
2011-05-04T01:50:00.000
| 1 | 5,877,621 |
Is there a way that PHP can execute a python script (stored on Machine A) that is then ran locally (on Machine B)
Never. The browsers forbid this kind of security hole.
| 0 | 937 | false | 0 | 1 |
Execute a python script stored on a server/network location on a user's local machine using PHP
| 5,877,854 |
2 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 0 |
I have a webserver running IIS (Machine A) that is running PHP for me. When a user points their browser to a web page that is hosted on the webserver with a PHP script on it, they need to populate a few forms, and then hit a button that will then run the PHP script, which will fire off a python script I've already built. I am using the exec() command in PHP to call my Python script which is stored locally on the webserver (still Machine A). The idea here is that any user on any machine (with python installed on it) can run the script when they navigate to the webpage.
Unfortunately, one of the forms that is needed for the python script to work is a path to an external drive plugged into the user's machine (Machine B).
My question then is: Is there a way that PHP can execute a python script (stored on Machine A) that is then run locally (on Machine B), so that when the user has entered the location of the drive (Win: F:\, Linux: /dev/sda2, etc.), the python script will know to look at the user's local machine (Machine B) rather than the server the script is stored on (Machine A)?
EDIT: Hopefully I have clarified the question above.
| 0 |
php,python,iis
|
2011-05-04T01:50:00.000
| 1 | 5,877,621 |
The question is not exactly clear, but from my understanding, you're trying to execute code on the local user's machine, and you can't do that via Python.
Your best bet is to write JavaScript that will do the job for you (a few browsers only, as you'd be working with HTML5 local storage), or you can have your user upload the files.
| 0 | 937 | false | 0 | 1 |
Execute a python script stored on a server/network location on a user's local machine using PHP
| 5,877,648 |
1 | 3 | 0 | 5 | 12 | 0 | 0.321513 | 0 |
How can I run a python script in Terminal on Mac without using the "python" keyword, without having to edit my existing python files?
Right now I have to do this:
python script.py
What I like to do is this:
script.py
| 0 |
python,macos,terminal
|
2011-05-04T07:17:00.000
| 1 | 5,879,869 |
Try ./script.py instead of script.py, or ensure your current directory is in your PATH and script.py should work. Either way, the script needs a shebang as its first line (e.g. #!/usr/bin/env python) and must be executable (chmod +x script.py).
| 0 | 8,958 | false | 0 | 1 |
Run python script without the "python" keyword
| 5,879,906 |
1 | 2 | 0 | 1 | 0 | 1 | 0.099668 | 0 |
So, I created a directory in Ubuntu called Pymouse and I put all the related PyMouse files from GitHub in there, including setup.py. When I go to the terminal I cd into the directory, and then once I have done that I type python setup.py install or python setup.py build, and each time I enter either command I receive the following output: error: package directory 'pymouse' does not exist.
How do I install this module and set it on the path? I'm new to Ubuntu, by the way.
| 0 |
python,linux,ubuntu,build,installation
|
2011-05-04T22:54:00.000
| 0 | 5,890,802 |
Go back to the PyMouse Github page, click on "Downloads", pick one of the options from the window that pops up, extract the archive to your hard drive, and try again.
| 0 | 3,996 | false | 0 | 1 |
Trying to install the Pymouse module on Ubuntu and receiving an error message
| 5,890,850 |
2 | 2 | 0 | 0 | 1 | 0 | 0 | 1 |
I am looking to create a simple graph showing 2 numbers over time for my personal Twitter. They are:
Number of followers per day
Number of mentions per day
From my research so far, the search API does not provide a date, so I am not able to do a GROUP BY. The only way I can have access to dates is through the OAuth API, but that requires interaction from the end user, which I am trying to avoid.
Can someone point me in the right direction in order to achieve this? Thanks.
| 0 |
python,twitter
|
2011-05-09T03:12:00.000
| 0 | 5,932,111 |
The best way is to use a cron job to record the data daily.
However, you can query the mentions using the search API with an until tag, which should do the trick.
| 0 | 438 | false | 0 | 1 |
Twitter API: Getting Data for Analytics
| 7,361,262 |
2 | 2 | 0 | 0 | 1 | 0 | 0 | 1 |
I am looking to create a simple graph showing 2 numbers over time for my personal Twitter. They are:
Number of followers per day
Number of mentions per day
From my research so far, the search API does not provide a date, so I am not able to do a GROUP BY. The only way I can have access to dates is through the OAuth API, but that requires interaction from the end user, which I am trying to avoid.
Can someone point me in the right direction in order to achieve this? Thanks.
| 0 |
python,twitter
|
2011-05-09T03:12:00.000
| 0 | 5,932,111 |
We can use the search API to fetch mentions, but there is a limit to it:
at a given point in time you can only fetch 200 mentions.
Does anyone know how to get the total mentions count?
| 0 | 438 | false | 0 | 1 |
Twitter API: Getting Data for Analytics
| 8,756,753 |
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 |
I've written several Perl scripts during my internship, and I would like to simplify their use. The scripts take a MAC address as an argument and return which switch it is connected to, the speed, etc.
Instead of giving a MAC address, I would like to give the host name of a computer. So, how can I resolve a hostname to a MAC address?
Thanks, bye.
Edit -> The solution could be a bash command, a Perl module, or something powerful like that...
| 0 |
python,perl,ip,mac-address,hostname
|
2011-05-09T12:20:00.000
| 1 | 5,936,781 |
The ethers file on a UNIX system maps Ethernet addresses to IP numbers (or hostnames). If your /etc/ethers is properly maintained, you can look the mapping up there.
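A minimal sketch of looking a hostname up in that file, assuming the common two-column 'MAC hostname' format of /etc/ethers:

```python
def mac_for_host(hostname, ethers='/etc/ethers'):
    for line in open(ethers):
        parts = line.split('#')[0].split()  # drop comments, tokenize
        if len(parts) >= 2 and parts[1] == hostname:
            return parts[0]                 # the MAC address column
    return None

print(mac_for_host('somehost'))
```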
| 0 | 2,670 | false | 0 | 1 |
Resolve mac address by host name
| 5,936,797 |
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 |
I have a python script using Pyinotify that does some stuff on IN_MOVED_TO. What's the easiest way to trigger the script on specific files, using another python script, without actually moving the files out and back in?
| 0 |
python,pyinotify
|
2011-05-09T15:29:00.000
| 0 | 5,939,078 |
You can avoid moving the file out of the directory by simply renaming it (rename and move are the same operation on Linux), for example: mv file file.sav && mv file.sav file
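The same trick from Python, which makes pyinotify see IN_MOVED_FROM/IN_MOVED_TO on the watched directory without the file ever leaving it:

```python
import os

def retrigger(path):
    # rename == move on Linux, so the watcher gets IN_MOVED_TO for path.
    os.rename(path, path + '.sav')
    os.rename(path + '.sav', path)

retrigger('/watched/dir/somefile')
```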
| 0 | 782 | false | 0 | 1 |
Trigger inotify events
| 5,940,630 |
2 | 2 | 0 | 3 | 0 | 0 | 0.291313 | 1 |
urllib.urlencode can encode a URL's parameters. There seems to be no equivalent function in mechanize.
So I have to use both urllib and mechanize, even though I only need urlencode.
Is there a function in mechanize that does the same job as urllib.urlencode?
| 0 |
python,mechanize,urllib
|
2011-05-09T17:45:00.000
| 0 | 5,940,520 |
Why would mechanize have it? It's already in urllib, which comes with Python.
| 0 | 612 | false | 0 | 1 |
which function of mechanize is equal with urllib.urlencode
| 5,940,566 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 1 |
urllib.urlencode can encode a URL's parameters. There seems to be no equivalent function in mechanize.
So I have to use both urllib and mechanize, even though I only need urlencode.
Is there a function in mechanize that does the same job as urllib.urlencode?
| 0 |
python,mechanize,urllib
|
2011-05-09T17:45:00.000
| 0 | 5,940,520 |
mechanize actually uses urllib and urllib2 for most tasks that involve URLs.
Since this functionality already exists in urllib/urllib2 (as mentioned by Ignacio Vazquez-Abrams), there's no need for it to be implemented elsewhere. When coding, you import all the libraries whose functionality you need to use.
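In practice the two are simply used side by side; a minimal sketch (URL and parameters are placeholders):

```python
import urllib
import mechanize

br = mechanize.Browser()
params = urllib.urlencode({'q': 'python', 'page': 1})
resp = br.open('http://example.com/search?' + params)
print(resp.read()[:200])
```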
| 0 | 612 | false | 0 | 1 |
which function of mechanize is equal with urllib.urlencode
| 5,992,039 |
1 | 3 | 0 | 6 | 11 | 1 | 1 | 0 |
I have found many posts where solutions for reading PDFs have been proposed. I want to read a PDF file word by word and do some processing on it. People suggest pdfminer, which converts an entire PDF file into a text file. But what I want is to read the PDF word by word. Can anyone suggest a library that does this?
| 0 |
python,pdf
|
2011-05-10T05:52:00.000
| 0 | 5,945,764 |
I'm using pdfminer and it is an excellent lib, especially if you're comfortable programming in Python. It reads PDF and extracts every character, and it provides its bounding box as a tuple (x0, y0, x1, y1). Pdfminer will extract rectangles, lines and some images, and will try to detect words. It has an unpleasant O(N^3) routine that analyses bounding boxes to coalesce them, so it can get very slow on some files. Try to convert your typical file - maybe it'll be fast for you, or maybe it'll take 1 hour; it depends on the file.
You can easily dump a pdf out as text, that's the first thing you should try for your application. You can also dump XML (see below), but you can't modify PDF. XML is the most complete representation of the PDF you can get out of it.
You have to read through the examples to use it in your python code, it doesn't have much documentation.
The example that comes with PdfMiner that transforms PDF into xml shows best how to use the lib in your code. It also shows you what's extracted in human-readable (as far as xml goes) form.
You can call it with parameters that tell it to "analyze" the pdf. If you do, it'll coalesce letters into blocks of text (words and sentences; sentences will have spaces so it's easy to tokenize into words in python).
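For the word-by-word requirement, the simplest route is a text dump followed by a split. A minimal sketch, assuming the maintained pdfminer.six fork (which added the high_level helper; the original pdfminer of this era exposed the same functionality through its pdf2txt.py tool instead):

```python
from pdfminer.high_level import extract_text

text = extract_text('document.pdf')  # full text, with layout analysis
for word in text.split():            # crude word-by-word tokenization
    print(word)
```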
| 0 | 16,310 | false | 0 | 1 |
Python to read PDF files
| 6,460,926 |
2 | 2 | 0 | 0 | 0 | 0 | 1.2 | 0 |
I have a Python web app built on top of BaseHTTPServer, which runs on a specific port. It runs system commands and shows their output. I want to limit access to this app. What are possible ways to do it? Requirements:
it must not be limited to LAN
simple to implement/deploy
| 0 |
python,security,web-applications
|
2011-05-10T09:16:00.000
| 1 | 5,947,849 |
Easiest and most secure: Put Apache or Nginx in front of it with an HTTPS proxy.
Update: Or VPN access as suggested by Jakob. Good idea.
| 0 | 517 | true | 1 | 1 |
Secure python web app
| 5,947,923 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 |
I have a Python web app built on top of BaseHTTPServer, which runs on a specific port. It runs system commands and shows their output. I want to limit access to this app. What are possible ways to do it? Requirements:
it must not be limited to LAN
simple to implement/deploy
| 0 |
python,security,web-applications
|
2011-05-10T09:16:00.000
| 1 | 5,947,849 |
Common methods: VPN access. Firewalls, logging, denyhosts style defences, complicated root passwords, no su, run as its own user.
(if it was my personal server)
Logic bombs
| 0 | 517 | false | 1 | 1 |
Secure python web app
| 5,947,916 |
3 | 5 | 0 | 0 | 4 | 1 | 0 | 0 |
I have a python script which is constantly polling data. The script is constantly running and should never stop.
The script polls data from a track of keywords which are passed to it when the script is first run.
What would be the best way to update this track without stopping the script from another python script?
The only solution I can think of is to store the track in a txt file and check for any updates to the file on a set timer. Seems kind of messy.
| 0 |
python
|
2011-05-10T11:22:00.000
| 0 | 5,949,242 |
You can have the two scripts communicate using sockets.
| 0 | 862 | false | 0 | 1 |
Python - update configuration while running script
| 5,949,332 |
3 | 5 | 0 | 5 | 4 | 1 | 1.2 | 0 |
I have a python script which is constantly polling data. The script is constantly running and should never stop.
The script polls data from a track of keywords which are passed to it when the script is first run.
What would be the best way to update this track without stopping the script from another python script?
The only solution I can think of is to store the track in a txt file and check for any updates to the file on a set timer. Seems kind of messy.
| 0 |
python
|
2011-05-10T11:22:00.000
| 0 | 5,949,242 |
It's better to encapsulate this settings file in a database. A simple SQLite DB file is enough - SQLite support is built-in with Python so no extra effort is required.
The advantage of a DB is that you won't run into race conditions of partially-written files, etc. The "configuration-adding" script adds keywords in a transaction, and the script reading from the DB will only see them once the transaction is wholly done. Just remember not to hold the DB open all the time in the periodic script: every so often, open it, read the keywords, and close it.
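A minimal sketch of the reader side with the built-in sqlite3 module (the table name is a placeholder); note the connection is opened and closed on every poll, per the advice above:

```python
import sqlite3

def load_keywords(db_path='track.db'):
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute('SELECT word FROM keywords').fetchall()
        return [r[0] for r in rows]
    finally:
        conn.close()  # never hold the DB open between polls
```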
| 0 | 862 | true | 0 | 1 |
Python - update configuration while running script
| 5,949,341 |
3 | 5 | 0 | 3 | 4 | 1 | 0.119427 | 0 |
I have a python script which is constantly polling data. The script is constantly running and should never stop.
The script polls data from a track of keywords which are passed to it when the script is first run.
What would be the best way to update this track without stopping the script from another python script?
The only solution I can think of is to store the track in a txt file and check for any updates to the file on a set timer. Seems kind of messy.
| 0 |
python
|
2011-05-10T11:22:00.000
| 0 | 5,949,242 |
Polling a configuration file is not messy, but a very common solution to this problem. You should go with it.
| 0 | 862 | false | 0 | 1 |
Python - update configuration while running script
| 5,949,274 |
1 | 2 | 0 | 3 | 6 | 0 | 0.291313 | 0 |
I want to write some install scripts in Python; they should detect the OS in order to choose either the apt command or the yum command.
It seems sys.platform can tell me 'win32' and the like, but how do I find out whether the script is running on Debian or CentOS in Python?
| 0 |
python,debian,centos,yum,apt
|
2011-05-10T14:49:00.000
| 1 | 5,951,930 |
If you just need to know whether to use yum or apt, one approach is simply to pick one of those commands and try it. If it works, it works; if not, catch the exception and try the other command.
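A minimal sketch of that probe-and-fall-back approach (the package name is a placeholder):

```python
import subprocess

def install(package):
    # Try yum first; if its binary is missing, fall back to apt-get.
    try:
        return subprocess.call(['yum', '-y', 'install', package])
    except OSError:  # yum executable not found on this system
        return subprocess.call(['apt-get', '-y', 'install', package])

install('htop')
```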
| 0 | 1,337 | false | 0 | 1 |
How to know the system is Debian or CentOS in Python?
| 5,952,044 |
3 | 4 | 0 | 0 | 2 | 1 | 0 | 0 |
How can I run my program using test files on my desktop without typing in the specific pathname? I just want to be able to type the file name and continue on with my program, since I want to be able to send it to a friend without him needing to change the path; it should just read the exact same file that he has on his desktop.
| 0 |
python
|
2011-05-10T17:02:00.000
| 0 | 5,953,657 |
You can tell your friend to set *.py files to be executed by the interpreter. Change it from Explorer: Tools > Folder Options > File Types.
| 0 | 312 | false | 0 | 1 |
Python path help
| 5,953,799 |
3 | 4 | 0 | 1 | 2 | 1 | 1.2 | 0 |
How can I run my program using test files on my desktop without typing in the specific pathname? I just want to be able to type the file name and continue on with my program, since I want to be able to send it to a friend without him needing to change the path; it should just read the exact same file that he has on his desktop.
| 0 |
python
|
2011-05-10T17:02:00.000
| 0 | 5,953,657 |
import os
f = open(os.path.join(os.environ['USERPROFILE'], 'Desktop', my_filename))
| 0 | 312 | true | 0 | 1 |
Python path help
| 5,953,805 |
3 | 4 | 0 | 0 | 2 | 1 | 0 | 0 |
How can I run my program using test files on my desktop without typing in the specific pathname? I just want to be able to type the file name and continue on with my program, since I want to be able to send it to a friend without him needing to change the path; it should just read the exact same file that he has on his desktop.
| 0 |
python
|
2011-05-10T17:02:00.000
| 0 | 5,953,657 |
If you place your Python script in the same directory as the files your script is going to open, then you don't need to specify any paths. Be sure to allow the Python installer to "Register Extensions", so Python is called when you double-click on a Python script.
| 0 | 312 | false | 0 | 1 |
Python path help
| 5,953,763 |
1 | 4 | 0 | 2 | 10 | 0 | 0.099668 | 0 |
I'm trying to build a web interface for some python scripts. The thing is I have to use PHP (and not CGI) and some of the scripts I execute take quite some time to finish: 5-10 minutes. Is it possible for PHP to communicate with the scripts and display some sort of progress status? This should allow the user to use the webpage as the task runs and display some status in the meantime or just a message when it's done.
Currently using exec() and on completion I process the output. The server is running on a Windows machine, so pcntl_fork will not work.
LATER EDIT:
Using another php script to feed the main page information using ajax doesn't seem to work because the server kills it (it reaches max execution time, and I don't really want to increase this unless necessary)
I was thinking about socket based communication but I don't see how is this useful in my case (some hints, maybe?
Thank you
| 0 |
php,python,ajax,ipc
|
2011-05-11T14:12:00.000
| 0 | 5,965,655 |
One option is to have the Python script write its status to a file and have PHP read from it, using a meta refresh so the page shows the latest progress.
You could use AJAX as well to make it more dynamic.
Also, you probably shouldn't use exec(); passing unsanitized input to it opens up a world of vulnerabilities.
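On the Python side, a minimal sketch of writing such a status file safely (the status.txt name is illustrative; PHP, or an AJAX endpoint, would read it on each poll):

import os

def write_status(percent, path='status.txt'):
    # Write to a temp file first so a reader never sees a half-written file.
    tmp = path + '.tmp'
    with open(tmp, 'w') as f:
        f.write(str(percent))
    if os.path.exists(path):
        os.remove(path)      # needed on Windows, where rename won't overwrite
    os.rename(tmp, path)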
| 0 | 16,013 | false | 0 | 1 |
Communication between PHP and Python
| 5,965,679 |
2 | 6 | 0 | 1 | 3 | 0 | 0.033321 | 0 |
Is it possible to check whether a file is done copying, or whether it's complete, using Python?
Or even on the command line.
I manipulate files programmatically in a specific folder on Mac OS X, but I need to check that a file is complete before running the code which does the manipulation.
| 0 |
python,macos,file-io
|
2011-05-11T16:23:00.000
| 1 | 5,967,521 |
It seems like you have control of the (Python?) program doing the copying. Which commands are you using to copy? I would think writing your code so that it blocks until the copy operation is complete would be sufficient.
Is this program multi-threaded or multi-process? If so, you could add file paths to a queue when they are complete, and have the other thread act only on items in the queue.
| 0 | 6,644 | false | 0 | 1 |
check if a file is 'complete' (with python)
| 5,967,726 |
2 | 6 | 0 | 2 | 3 | 0 | 1.2 | 0 |
Is it possible to check whether a file is done copying, or whether it's complete, using Python?
Or even on the command line.
I manipulate files programmatically in a specific folder on Mac OS X, but I need to check that a file is complete before running the code which does the manipulation.
| 0 |
python,macos,file-io
|
2011-05-11T16:23:00.000
| 1 | 5,967,521 |
If you know where the files are being copied from, you can check whether the size of the copy has reached the size of the original.
Alternatively, if a file's size doesn't change for a couple of seconds, it is probably done being copied, which may be good enough. (This may not work well over slow network connections, however.)
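A sketch of that size-stability check; the thresholds are arbitrary and should be tuned to your copy speed:

import os
import time

def wait_until_complete(path, quiet_seconds=2, poll_interval=0.5):
    # Treat the copy as done once the size stops changing for `quiet_seconds`.
    last_size = -1
    stable_since = time.time()
    while True:
        size = os.path.getsize(path)
        if size != last_size:
            last_size = size
            stable_since = time.time()
        elif time.time() - stable_since >= quiet_seconds:
            return
        time.sleep(poll_interval)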
| 0 | 6,644 | true | 0 | 1 |
check if a file is 'complete' (with python)
| 5,967,724 |
1 | 3 | 0 | 1 | 4 | 0 | 0.066568 | 0 |
What's the best way to monitor a python daemon to determine the cause of it quitting unexpectedly? Is strace my best option or is there something Python specific that does the job?
| 0 |
python,strace
|
2011-05-11T19:08:00.000
| 1 | 5,969,337 |
I would generally start by adding logging to it. At a minimum, have whatever is launching it capture stdout/stderr so that any stack traces are saved. Examine your except blocks to make sure you're not capturing exceptions silently.
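A minimal sketch of that kind of logging; daemon.log and main_loop are placeholders for your own path and entry point:

import logging

logging.basicConfig(
    filename='daemon.log',
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(message)s')

try:
    main_loop()                        # your daemon's entry point
except Exception:
    logging.exception('daemon died')   # records the full stack trace
    raise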
| 0 | 2,169 | false | 0 | 1 |
Troubleshoot python daemon that quits unexpectedly?
| 5,969,373 |
2 | 2 | 0 | 1 | 2 | 0 | 0.099668 | 1 |
I've been trying (unsuccessfully, I might add) to scrape a website created with the Microsoft stack (ASP.NET, C#, IIS) using Python and urllib/urllib2. I'm also using cookielib to manage cookies. After spending a long time profiling the website in Chrome and examining the headers, I've been unable to come up with a working solution to log in. Currently, in an attempt to get it to work at the most basic level, I've hard-coded the encoded URL string with all of the appropriate form data (even View State, etc..). I'm also passing valid headers.
The response that I'm currently receiving reads:
29|pageRedirect||/?aspxerrorpath=/default.aspx|
I'm not sure how to interpret the above. Also, I've looked pretty extensively at the client-side code used in processing the login fields.
Here's how it works: You enter your username/pass and hit a 'Login' button. Pressing the Enter key also simulates this button press. The input fields aren't in a form. Instead, there's a few onClick events on said Login button (most of which are just for aesthetics), but one in question handles validation. It does some rudimentary checks before sending it off to the server-side. Based on the web resources, it definitely appears to be using .NET AJAX.
When logging into this website normally, you request the domain as a POST with form-data of your username and password, among other things. Then, there is some sort of URL rewrite or redirect that takes you to a content page of url.com/twitter. When attempting to access url.com/twitter directly, it redirects you to the main page.
I should note that I've decided to leave the URL in question out. I'm not doing anything malicious, just automating a very monotonous check once every reasonable increment of time (I'm familiar with compassionate screen scraping). However, it would be trivial to associate my StackOverflow account with that account in the event that it didn't make the domain owners happy.
My question is: I've been able to successfully log in and automate services in the past, none of which were .NET-based. Is there anything different that I should be doing, or maybe something I'm leaving out?
| 0 |
asp.net,python,asp.net-ajax,screen-scraping,urllib2
|
2011-05-12T04:17:00.000
| 0 | 5,973,245 |
When scraping a web application, I use either:
1) Wireshark, or...
2) a logging proxy server (one that logs headers as well as payload).
I then compare what the real application does (in this case, how your browser interacts with the site) with the scraper's logs. Working through the differences will bring you to a working solution.
| 0 | 1,706 | false | 1 | 1 |
Scraping ASP.NET with Python and urllib2
| 5,974,002 |
2 | 2 | 0 | 2 | 2 | 0 | 1.2 | 1 |
I've been trying (unsuccessfully, I might add) to scrape a website created with the Microsoft stack (ASP.NET, C#, IIS) using Python and urllib/urllib2. I'm also using cookielib to manage cookies. After spending a long time profiling the website in Chrome and examining the headers, I've been unable to come up with a working solution to log in. Currently, in an attempt to get it to work at the most basic level, I've hard-coded the encoded URL string with all of the appropriate form data (even View State, etc..). I'm also passing valid headers.
The response that I'm currently receiving reads:
29|pageRedirect||/?aspxerrorpath=/default.aspx|
I'm not sure how to interpret the above. Also, I've looked pretty extensively at the client-side code used in processing the login fields.
Here's how it works: You enter your username/pass and hit a 'Login' button. Pressing the Enter key also simulates this button press. The input fields aren't in a form. Instead, there's a few onClick events on said Login button (most of which are just for aesthetics), but one in question handles validation. It does some rudimentary checks before sending it off to the server-side. Based on the web resources, it definitely appears to be using .NET AJAX.
When logging into this website normally, you request the domain as a POST with form-data of your username and password, among other things. Then, there is some sort of URL rewrite or redirect that takes you to a content page of url.com/twitter. When attempting to access url.com/twitter directly, it redirects you to the main page.
I should note that I've decided to leave the URL in question out. I'm not doing anything malicious, just automating a very monotonous check once every reasonable increment of time (I'm familiar with compassionate screen scraping). However, it would be trivial to associate my StackOverflow account with that account in the event that it didn't make the domain owners happy.
My question is: I've been able to successfully log in and automate services in the past, none of which were .NET-based. Is there anything different that I should be doing, or maybe something I'm leaving out?
| 0 |
asp.net,python,asp.net-ajax,screen-scraping,urllib2
|
2011-05-12T04:17:00.000
| 0 | 5,973,245 |
For anyone else who might be in a similar predicament in the future:
I'd just like to note that I've had a lot of success with a Greasemonkey user script in Chrome to do all of my scraping and automation. I found it to be a lot easier than Python + urllib2 (at least for this particular case). The user scripts are written in 100% Javascript.
| 0 | 1,706 | true | 1 | 1 |
Scraping ASP.NET with Python and urllib2
| 6,035,498 |
1 | 4 | 0 | 2 | 2 | 0 | 0.099668 | 1 |
I want to write a program that sends an e-mail to one or more specified recipients when a certain event occurs. For this I need the user to write the parameters for the mail server into a config. Possible values are for example: serveradress, ports, ssl(true/false) and a list of desired recipients.
What's the most user-friendly / best-practice way to do this?
I could of course use a python file with the correct parameters and the user has to fill it out, but I wouldn't consider this user friendly. I also read about the 'config' module in python, but it seems to me that it's made for creating config files on its own, and not to have users fill the files out themselves.
| 0 |
python,configuration
|
2011-05-12T15:05:00.000
| 0 | 5,980,101 |
It doesn't matter how technically proficient your users are; you can count on them to screw up editing a text file. (They'll save it in the wrong place. They'll use MS Word to edit a text file. They'll make typos.) I suggest making a GUI that validates the input and creates the configuration file in the correct format and location. A simple GUI created in Tkinter would probably fit your needs.
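A minimal sketch of such a Tkinter form (Python 2 module names; the mail.cfg file name, the [mail] section, and the fields are all illustrative):

import Tkinter as tk          # 'tkinter' on Python 3
import ConfigParser           # 'configparser' on Python 3

def save():
    server = server_var.get().strip()
    if not server:                             # rudimentary validation
        status.config(text='Server is required', fg='red')
        return
    cfg = ConfigParser.RawConfigParser()
    cfg.add_section('mail')
    cfg.set('mail', 'server', server)
    cfg.set('mail', 'recipients', recip_var.get())
    with open('mail.cfg', 'w') as f:
        cfg.write(f)
    root.destroy()

root = tk.Tk()
server_var = tk.StringVar()
recip_var = tk.StringVar()
tk.Label(root, text='Server').pack()
tk.Entry(root, textvariable=server_var).pack()
tk.Label(root, text='Recipients (comma-separated)').pack()
tk.Entry(root, textvariable=recip_var).pack()
status = tk.Label(root, text='')
status.pack()
tk.Button(root, text='Save', command=save).pack()
root.mainloop()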
| 0 | 1,174 | false | 0 | 1 |
Userfriendly way of handling config files in python?
| 5,980,640 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
I'm developing a firefox addon which is depended on Python (which means that the user must install PyXpcomExt on his firefox). On the other hand I used PyCrypto lib (based on python) for encryption purposes.
So when firefox is loaded I have registered path to this library. However when the extension is run I get the following error:
File "/home/.../.mozilla/firefox/qvpgc3wq.default/extensions/..../pylib/mycryptoclass.py", line 4, in
from Crypto.Cipher import AES
ImportError: /home/.../.mozilla/firefox/qvpgc3wq.default/extensions/.../platform/Linux_x86-gcc3/pylib/Crypto/Cipher/AES.so: undefined symbol: PyExc_ValueError
I also tried:
import Crypto
from Crypto import Cipher
No error is thrown!
Any Ideas?
Thanks
| 0 |
python,firefox-addon,xpcom,pycrypto
|
2011-05-12T16:17:00.000
| 0 | 5,981,117 |
AES.so has not been linked against the Python dynamic library. It's finding other symbols it needs in the process's symbol table, but it can't find that one and doesn't know where it is.
| 0 | 434 | false | 0 | 1 |
PyExc_ValueError and Firefox extension
| 5,989,716 |
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 |
I'm using PyDev on Eclipse. I have created a new project and added __init__.py and a module in a package in the src folder. The problem is, I can't see class and function outlines when I try to expand the module by clicking the arrow to its left. Nothing expands. I expect to see a list of classes with a capital "C" next to each class name, and an "F" next to functions. But nothing is displayed.
Another problem is, when I Ctrl+click on a function or method, it just plays a ring sound and does not go to the definition. Under the "Preferences -> Pydev -> Interpreter - Python" menu, I added the "src" folder to "Libraries", but it again does not go to the definition.
Could you please help me with these two problems?
Thanks,
Best regards,
| 0 |
python,eclipse,package,definition,pydev
|
2011-05-13T06:32:00.000
| 0 | 5,988,099 |
Add your code directory under Project -> Properties -> PyDev - PYTHONPATH -> Source Folders; after that you should see the "C" class markers in the Package Explorer.
| 0 | 1,261 | false | 0 | 1 |
Pydev problems with "Go to definition" and "package Explorer"
| 6,316,500 |
2 | 4 | 0 | 3 | 34 | 1 | 0.148885 | 0 |
When you do something like "test" in a where a is a list does python do a sequential search on the list or does it create a hash table representation to optimize the lookup? In the application I need this for I'll be doing a lot of lookups on the list so would it be best to do something like b = set(a) and then "test" in b? Also note that the list of values I'll have won't have duplicate data and I don't actually care about the order it's in; I just need to be able to check for the existence of a value.
| 0 |
python,list,search,find,set
|
2011-05-13T14:45:00.000
| 0 | 5,993,621 |
I think it would be better to go with the set implementation. Sets have O(1) average lookup time, while lists take O(n) lookup time. And even if list lookups were just as fast, you would lose nothing by switching to sets.
Further, sets don't allow duplicate values. This will make your program slightly more memory efficient as well.
| 0 | 59,073 | false | 0 | 1 |
Fastest way to search a list in python
| 5,993,682 |
2 | 4 | 0 | 12 | 34 | 1 | 1 | 0 |
When you do something like "test" in a where a is a list does python do a sequential search on the list or does it create a hash table representation to optimize the lookup? In the application I need this for I'll be doing a lot of lookups on the list so would it be best to do something like b = set(a) and then "test" in b? Also note that the list of values I'll have won't have duplicate data and I don't actually care about the order it's in; I just need to be able to check for the existence of a value.
| 0 |
python,list,search,find,set
|
2011-05-13T14:45:00.000
| 0 | 5,993,621 |
"test" in a with a list a will do a linear search. Setting up a hash table on the fly would be much more expensive than a linear search. "test" in b on the other hand will do an amoirtised O(1) hash look-up.
In the case you describe, there doesn't seem to be a reason to use a list over a set.
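A quick way to see the difference yourself, assuming you just want a rough timing comparison:

import timeit

setup = "a = list(range(100000)); b = set(a)"
print(timeit.timeit('99999 in a', setup=setup, number=100))  # linear scan
print(timeit.timeit('99999 in b', setup=setup, number=100))  # hash lookup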
| 0 | 59,073 | false | 0 | 1 |
Fastest way to search a list in python
| 5,993,671 |
4 | 4 | 0 | 2 | 4 | 0 | 0.099668 | 0 |
I am preparing a Test or Quiz in Django. The quiz needs to be completed in certain time frame. Say 30 minutes for 40 questions.I can always initiate a clock at start of the test, and then calculate time by the time the Quiz is completed. However it's likely that during the attempt, there may be issues such as internet connection drops, or system crashes/power outages etc.
I need a strategy to figure out when such an accident happened, and stop the clock, then let the user take the test again from where it stopped, and start the clock again.
What is the right strategy? Any help including sample code/examples/ideas are most welcome
| 0 |
python,django,session,timer
|
2011-05-13T17:36:00.000
| 0 | 5,995,674 |
Either you do the clock on the client side, in which case they can always cheat somehow, or you do it on the server side, and then you aren't taking into account these interruptions.
To reduce cheating somewhat and still allow for interruptions, you could do a 'keep alive'.
Here the client side code announces to the server that it is still there every so often, say every 5 seconds. The server side notes when it stops getting these messages, and pauses/stops the clock. However it still has the start and end time, so you know how long it really took in wall time, and also how long it took while the client was supposedly there.
With these two pieces of information you could very easily track down odd behaviour and blacklist people. Blacklisted people might not be aware that they are blacklisted, but their quiz scores don't show up for other users of your quiz system.
| 0 | 466 | false | 1 | 1 |
Timed Quiz: How to consider internet interruptions?
| 5,995,769 |
4 | 4 | 0 | 2 | 4 | 0 | 0.099668 | 0 |
I am preparing a Test or Quiz in Django. The quiz needs to be completed in certain time frame. Say 30 minutes for 40 questions.I can always initiate a clock at start of the test, and then calculate time by the time the Quiz is completed. However it's likely that during the attempt, there may be issues such as internet connection drops, or system crashes/power outages etc.
I need a strategy to figure out when such an accident happened, and stop the clock, then let the user take the test again from where it stopped, and start the clock again.
What is the right strategy? Any help including sample code/examples/ideas are most welcome
| 0 |
python,django,session,timer
|
2011-05-13T17:36:00.000
| 0 | 5,995,674 |
The simplest way would be to add a timestamp when the person starts the quiz and then compare that to when they submit. Of course, this doesn't take into account connection drops, crashes, etc... like you mentioned.
To account for these issues I'd probably use something like node.js. Each client checks in when it connects to the quiz. Then at regular intervals (every 1s, 10s, 1m, etc...) the client checks in again. If the client misses these check-ins, you can assume the connection has dropped. You could keep track of when they connect again and resume the timer from where they left off.
This is my initial thought on how to keep track of connection drops and crashes. The same could be done with a front-end ajax call to a Django view.
| 0 | 466 | false | 1 | 1 |
Timed Quiz: How to consider internet interruptions?
| 5,995,763 |
4 | 4 | 0 | 2 | 4 | 0 | 1.2 | 0 |
I am preparing a Test or Quiz in Django. The quiz needs to be completed in certain time frame. Say 30 minutes for 40 questions.I can always initiate a clock at start of the test, and then calculate time by the time the Quiz is completed. However it's likely that during the attempt, there may be issues such as internet connection drops, or system crashes/power outages etc.
I need a strategy to figure out when such an accident happened, and stop the clock, then let the user take the test again from where it stopped, and start the clock again.
What is the right strategy? Any help including sample code/examples/ideas are most welcome
| 0 |
python,django,session,timer
|
2011-05-13T17:36:00.000
| 0 | 5,995,674 |
Your strategy should depend on importance of the test and ability to retake whole test.
Is test/quiz for fun or competence/knowledge checking?
Are you dealing with logged-in users?
Are tests generated randomly from a large pool of available questions?
these are the questions you need to answer yourself first.
Remember that:
malicious user CAN simulate connection outage / power failure,
the only clock you can trust is the one on the server side,
everything on browser side can be manipulated (think firebug/console js injection)
My approach would be:
Inform users that TIME is an important factor and that connection issues may not be taken into account when the grade is given,
Serve only one question, wait for answer, serve another one,
Whole test time should be calculated as SUM of each answer time:
save each "question send" / "answer received" timestamps and calculate answer time from it,
time between questions wouldn't count,
you'd get extra insight into which questions were harder / took longer to answer.
Add some kind of heartbeat to your question page (like ajax request every X seconds), when heartbeat stops you can (depending on options you have):
invalidate the question and notify the user via a dialog that he has connection issues and has to refresh to get a new question instead, if you have a larger pool of questions to use,
pause time on server side (and for example dim question page so user cannot answer until his connection is restored) IMO only for games/fun quiz/tests
save information on the server side about each interruption, which would later make it easier to decide whether to allow a retake of the whole test, e.g. he was fine until the 20th question and then his connection dropped on 3-4 easy questions in a row...
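A framework-agnostic sketch of the server side of such a heartbeat; the 5-second interval and the in-memory dict are illustrative (a real app would persist this per question, as described above):

import time

HEARTBEAT_GAP = 5       # the client is expected to ping every 5 seconds
sessions = {}           # session id -> (last ping time, connected seconds)

def on_heartbeat(session_id):
    now = time.time()
    last, total = sessions.get(session_id, (now, 0.0))
    gap = now - last
    if gap <= HEARTBEAT_GAP * 2:   # normal ping: count the elapsed time
        total += gap
    # else: the client was gone; don't count the gap, but log it for review
    sessions[session_id] = (now, total)

def connected_time(session_id):
    return sessions.get(session_id, (0, 0.0))[1]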
| 0 | 466 | true | 1 | 1 |
Timed Quiz: How to consider internet interruptions?
| 6,015,282 |
4 | 4 | 0 | 0 | 4 | 0 | 0 | 0 |
I am preparing a Test or Quiz in Django. The quiz needs to be completed in certain time frame. Say 30 minutes for 40 questions.I can always initiate a clock at start of the test, and then calculate time by the time the Quiz is completed. However it's likely that during the attempt, there may be issues such as internet connection drops, or system crashes/power outages etc.
I need a strategy to figure out when such an accident happened, and stop the clock, then let the user take the test again from where it stopped, and start the clock again.
What is the right strategy? Any help including sample code/examples/ideas are most welcome
| 0 |
python,django,session,timer
|
2011-05-13T17:36:00.000
| 0 | 5,995,674 |
The problem with pausing the clock when the connection to the user drops is that the user could just disconnect their computer from the internet each time they received a new question, and then reconnect once they had worked out the answer.
One thing you could do, is give the user a certain amount of time for each question.
The clock is started when the user successfully receives the question to their browser, and if the user submits an answer before the time limit, it is accepted, otherwise it is void.
That would mean if a user lost connection it would only affect the question they are currently on. But it would also mean that the user would have no flexibility in how much time they want to allot to each question, you decide for them.
I was thinking you could do something like removing the question from the screen unless the connection to the server was still alive, but the user could always just screen-shot the question before disconnecting.
| 0 | 466 | false | 1 | 1 |
Timed Quiz: How to consider internet interruptions?
| 5,996,776 |
1 | 1 | 0 | 3 | 3 | 1 | 0.53705 | 0 |
I have 1000s of custom (compiled to '.so') modules that I'd like to use in python at the same time. Each such module is of size (100 [KB]) on average.
Does anyone know what the overhead (on the OS -- assuming Python is not handling this) of every .so import is? Meaning, is the overhead equal to the size of the .so file on disk, or is it fixed, regardless of the size of the .so file?
I haven't yet gotten there, but would be curious to know what is the impact on the OS when one wants to import, say 10,000-50,000 custom modules at once.
| 0 |
python
|
2011-05-16T00:00:00.000
| 0 | 6,012,105 |
There would be a large time overhead of importing that many shared libraries - the dynamic linker would spend a significant amount of time during the loading phase. The dynamic linker is really optimized for tens to hundreds of shared objects, not thousands to tens of thousands.
If at all possible, combine your shared code objects.
However, the size once loaded is likely somewhat smaller than the on-disk size, depending on what other information is in the file (DWARF debug symbols, extra ELF sections not required at runtime, etc.).
| 0 | 210 | false | 0 | 1 |
Python -- Overhead of `.so` Imports?
| 6,012,123 |
1 | 2 | 0 | 1 | 6 | 0 | 0.099668 | 0 |
I have searched for a while, and there is a function called get_image_dimensions(); however, as far as I understand, it works for images which are downloaded or, say, local. So, is there any function or solution like getimagesize in PHP, where we can get the dimensions of an image via URL instead of a path to a local file?
| 0 |
python,image,url,dimension
|
2011-05-16T06:46:00.000
| 0 | 6,013,996 |
PHP can open a URL as it does a file. This can be a boon (as in your case) or a bane (as with remote file inclusion vulnerabilities).
Python opts to be explicit in that a file is a file, and a remote resource (a URL, for example) is a remote one.
If you need a utility function to get the image size of a remote resource, you probably need to write a wrapper around the local one. Usually you only need to read about 4096 bytes to determine the image size.
A little more work, yes, but there's no magic like in PHP.
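A sketch of such a wrapper using urllib2 and PIL's incremental parser, which usually needs only the first few kilobytes (on old PIL installs the import is a bare 'import ImageFile'):

import urllib2
from PIL import ImageFile

def remote_image_size(url, chunk=4096):
    f = urllib2.urlopen(url)
    parser = ImageFile.Parser()
    try:
        while True:
            data = f.read(chunk)
            if not data:
                break
            parser.feed(data)
            if parser.image:                 # header parsed: size is known
                return parser.image.size     # (width, height)
    finally:
        f.close()
    return None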
| 0 | 2,428 | false | 0 | 1 |
Is there a function for Python which like getimagesize in PHP?
| 6,014,083 |
1 | 3 | 0 | 4 | 7 | 0 | 0.26052 | 0 |
I have a Python script that is using the SIGSTOP and SIGCONT signals with os.kill to pause or resume a process. Is there a way to determine whether the related PID is in the paused or resumed state?
| 0 |
python,linux,process,controls,pid
|
2011-05-16T18:35:00.000
| 1 | 6,021,771 |
Call ps and check the STAT value.
D Uninterruptible sleep (usually IO)
R Running or runnable (on run queue)
S Interruptible sleep (waiting for an event to complete)
T Stopped, either by a job control signal or because it is being traced.
W paging (not valid since the 2.6.xx kernel)
X dead (should never be seen)
Z Defunct ("zombie") process, terminated but not reaped by its parent.
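On Linux you can also read the state straight from procfs instead of shelling out to ps; a minimal sketch (the PID is an example):

def pid_state(pid):
    # Read the State line from /proc/<pid>/status (Linux-specific).
    with open('/proc/%d/status' % pid) as f:
        for line in f:
            if line.startswith('State:'):
                return line.split()[1]   # e.g. 'T' = stopped, 'S' = sleeping

is_paused = pid_state(1234) == 'T'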
| 0 | 2,298 | false | 0 | 1 |
Is there a way to determine if a Linux PID is paused or not?
| 6,021,798 |
1 | 1 | 0 | 3 | 0 | 0 | 1.2 | 0 |
Is there any IDE that allows running a script in testing mode, replacing some values, like a folder, at runtime?
I have a program that will have to run on a network I have no access to from where I develop. Since it will use some specific folders to pick up files, I was wondering if I could use an IDE that, given some parameters, would translate paths like \corporate\disk-c\myfolder into c:\myfolder.
Thanks!
M
| 0 |
python,ide
|
2011-05-16T21:00:00.000
| 1 | 6,023,377 |
In the absence of some other file-based config, you could just keep the variable definitions in a file that you import in the main script (e.g. config.py), then have two different versions of that file for 'on' and 'off' network (or 'development' and 'production', whatever) with the appropriate settings. No IDE needed.
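A sketch of that convention; the names are illustrative:

# config.py -- development copy; ship a different copy on the target network
DATA_DIR = r'c:\myfolder'
# the production config.py would instead contain:
# DATA_DIR = r'\\corporate\disk-c\myfolder'

# main script:
import config
path = config.DATA_DIR   # the rest of the code never hard-codes the folder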
| 0 | 110 | true | 0 | 1 |
IDE for Python: test a script
| 6,024,347 |
2 | 5 | 0 | 0 | 71 | 0 | 0 | 0 |
I understand nearly nothing about the functioning of EC2. I created an Amazon Web Services (AWS) account. Then I launched an EC2 instance.
And now I would like to execute a Python script in this instance, and I don't know how to proceed. Is it necessary to load the code somewhere in the instance? Or into Amazon's S3 and link it to the instance?
Is there a guide that explains the possible usages of an instance? I feel like a man in front of a flying saucer's dashboard without a user's guide.
| 0 |
python,amazon-ec2
|
2011-05-17T11:29:00.000
| 0 | 6,030,115 |
Simply push your code to GitHub, clone the repository on the EC2 instance, and run the code there.
| 0 | 66,912 | false | 1 | 1 |
How to run a code in an Amazone's EC2 instance?
| 71,252,207 |
2 | 5 | 0 | 4 | 71 | 0 | 0.158649 | 0 |
I understand nearly nothing about the functioning of EC2. I created an Amazon Web Services (AWS) account. Then I launched an EC2 instance.
And now I would like to execute a Python script in this instance, and I don't know how to proceed. Is it necessary to load the code somewhere in the instance? Or into Amazon's S3 and link it to the instance?
Is there a guide that explains the possible usages of an instance? I feel like a man in front of a flying saucer's dashboard without a user's guide.
| 0 |
python,amazon-ec2
|
2011-05-17T11:29:00.000
| 0 | 6,030,115 |
Launch your instance through Amazon's Management Console -> Instance Actions -> Connect
(More details in the getting started guide)
Launch the Java-based SSH client
Plugins -> SFTP File Transfer
Upload your files
run your files in the background (with '&' at the end or use nohup)
Be sure to select an AMI with python included, you can check by typing 'python' in the shell.
If your app requires any unorthodox packages, you'll have to install them yourself.
| 0 | 66,912 | false | 1 | 1 |
How to run a code in an Amazone's EC2 instance?
| 12,026,840 |
3 | 4 | 0 | 2 | 10 | 1 | 0.099668 | 0 |
I'm soon to start on a new project where I am going to do lots of text processing tasks like searching, categorization/classifying, clustering, and so on.
There's going to be a huge amount of documents that need to be processed; probably millions of documents. After the initial processing, it also has to be able to be updated daily with multiple new documents.
Can I use Python to do this, or is Python too slow? Is it best to use Java?
If possible, I would prefer Python since that's what I have been using lately. Plus, I would finish the coding part much faster. But it all depends on Python's speed. I have used Python for some small scale text processing tasks with only a couple of thousand documents, but I am not sure how well it scales up.
| 0 |
java,python,nlp,information-retrieval,text-mining
|
2011-05-17T11:46:00.000
| 0 | 6,030,291 |
It's not the language you have to evaluate, but the frameworks and app servers available for it: clustering, data storage/retrieval, etc.
You can use Jython to get all the Java enterprise technologies for a high-load system, and still do the text parsing in Python.
| 0 | 10,439 | false | 1 | 1 |
Python or Java for text processing (text mining, information retrieval, natural language processing)
| 6,030,342 |
3 | 4 | 0 | 3 | 10 | 1 | 0.148885 | 0 |
I'm soon to start on a new project where I am going to do lots of text processing tasks like searching, categorization/classifying, clustering, and so on.
There's going to be a huge amount of documents that need to be processed; probably millions of documents. After the initial processing, it also has to be able to be updated daily with multiple new documents.
Can I use Python to do this, or is Python too slow? Is it best to use Java?
If possible, I would prefer Python since that's what I have been using lately. Plus, I would finish the coding part much faster. But it all depends on Python's speed. I have used Python for some small scale text processing tasks with only a couple of thousand documents, but I am not sure how well it scales up.
| 0 |
java,python,nlp,information-retrieval,text-mining
|
2011-05-17T11:46:00.000
| 0 | 6,030,291 |
Just write it; the biggest flaw programmers have is premature optimization. Work on the project, write it out, and get it working. Then go back, fix the bugs, and make sure it's optimized. There will always be a number of people harping on about the speed of X vs Y and claiming Y is better than X, but at the end of the day it's just a language. It's not what a language is, but how it does it.
| 0 | 10,439 | false | 1 | 1 |
Python or Java for text processing (text mining, information retrieval, natural language processing)
| 6,030,330 |
3 | 4 | 0 | 9 | 10 | 1 | 1 | 0 |
I'm soon to start on a new project where I am going to do lots of text processing tasks like searching, categorization/classifying, clustering, and so on.
There's going to be a huge amount of documents that need to be processed; probably millions of documents. After the initial processing, it also has to be able to be updated daily with multiple new documents.
Can I use Python to do this, or is Python too slow? Is it best to use Java?
If possible, I would prefer Python since that's what I have been using lately. Plus, I would finish the coding part much faster. But it all depends on Python's speed. I have used Python for some small scale text processing tasks with only a couple of thousand documents, but I am not sure how well it scales up.
| 0 |
java,python,nlp,information-retrieval,text-mining
|
2011-05-17T11:46:00.000
| 0 | 6,030,291 |
It's very difficult to answer questions like this without trying. So why don't you
Figure out what would be a difficult operation
Implement that (and I mean the simplest, quickest hack that you can make work)
Run it with a lot of data, and see how long it takes
Figure out if it's too slow
I've done this in the past and it's really the best way to see whether something performs well enough.
| 0 | 10,439 | false | 1 | 1 |
Python or Java for text processing (text mining, information retrieval, natural language processing)
| 6,030,370 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 |
Is there a way, besides checking for known signatures in the site content, to find out what kind of software the website is running, e.g. vBulletin, WP, etc.? Preferably in Python.
| 0 |
python
|
2011-05-17T21:29:00.000
| 0 | 6,037,379 |
Some sites set the 'generator' meta tag in the HTML head.
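A quick-and-dirty way to check for it (the regex is deliberately loose and assumes the name attribute comes before content):

import re
import urllib2

def generator_of(url):
    html = urllib2.urlopen(url).read(65536)   # the meta tag lives in <head>
    m = re.search(r'<meta[^>]+name=["\']generator["\'][^>]+content=["\']([^"\']+)',
                  html, re.I)
    return m.group(1) if m else None

print(generator_of('http://example.com/'))    # e.g. 'WordPress 3.1'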
| 0 | 649 | false | 1 | 1 |
Detecting blog or forum software using python?
| 6,037,452 |
1 | 2 | 0 | 2 | 0 | 0 | 0.197375 | 0 |
Where should I see the logging output in Eclipse while debugging? And when running?
| 0 |
python,eclipse,logging
|
2011-05-18T12:10:00.000
| 0 | 6,044,443 |
It depends on how you configure your logging system. If you only use print statements, the output is shown in Eclipse's Console view.
If you use logging and configured a console (stream) handler, it is also displayed in the Eclipse Console view.
If you configured only a file handler in the logging configuration, you'll have to tail the log files ;)
| 0 | 1,139 | false | 1 | 1 |
Python logging module on Eclipse
| 6,045,239 |
1 | 1 | 0 | 3 | 1 | 0 | 1.2 | 0 |
I'm creating data dumps from my site for others to download and analyze. Each dump will be a giant XML file.
I'm trying to figure out the best compression algorithm that:
Compresses efficiently (CPU-wise)
Makes the smallest possible file
Is fairly common
I know the basics of compression, but haven't a clue as to which algo fits the bill. I'll be using MySQL and Python to generate the dump, so I'll need something with a good python library.
| 0 |
python,algorithm,compression,data-dump
|
2011-05-20T05:33:00.000
| 0 | 6,067,836 |
GZIP with the standard compression level should be fine for most cases. Higher compression levels mean more CPU time. BZ2 packs better but is also slower. There is always a trade-off between CPU consumption/running time and compression efficiency; with the default compression levels, any of these should be fine.
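For reference, a minimal sketch of gzipping the dump with Python's standard library (file names are illustrative):

import gzip
import shutil

src = open('dump.xml', 'rb')
dst = gzip.open('dump.xml.gz', 'wb', compresslevel=6)  # 9 = smallest, slowest
shutil.copyfileobj(src, dst)
dst.close()
src.close()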
| 0 | 1,128 | true | 0 | 1 |
What's the best compression algorithm for data dumps
| 6,067,866 |
2 | 3 | 0 | 1 | 0 | 0 | 0.066568 | 0 |
I have a website where people post comments, pictures, and other content. I want to add a feature that users can like/unlike these items.
I use a database to store all the content.
There are a few approaches I am looking at:
Method 1:
Add a 'like_count' column to the table, and increment it whenever someone likes an item
Add a 'user_likes' table to keep a track that everything the user has liked.
Pros: Simple to implement, minimal queries required.
Cons: The item needs to be refreshed with each change in like count. I have a whole list of items cached, which will break.
Method 2:
Create a new table 'like_summary' and store the total likes of each item in that table
Add a 'user_likes' table to keep a track that everything the user has liked.
Cache the like_summary data in memcache, and only flush it if the value changes
Pros: Less load on the main items table, it can be cached without worrying.
Cons: Too many hits on memcache (a page shows 20 items, which needs to be loaded from memcache), might be slow
Any suggestions?
| 1 |
python,architecture
|
2011-05-20T05:46:00.000
| 0 | 6,067,919 |
You will actually only need the user_likes table; like_count can be calculated from it. You only need to store the aggregate if you need the performance gain, but since you're using memcached, it may be a good idea not to store the aggregated value in the database at all and keep it only in memcached.
| 0 | 104 | false | 1 | 1 |
What would be a good strategy to implement functionality similar to facebook 'likes'?
| 6,067,968 |
2 | 3 | 0 | 1 | 0 | 0 | 0.066568 | 0 |
I have a website where people post comments, pictures, and other content. I want to add a feature that users can like/unlike these items.
I use a database to store all the content.
There are a few approaches I am looking at:
Method 1:
Add a 'like_count' column to the table, and increment it whenever someone likes an item
Add a 'user_likes' table to keep a track that everything the user has liked.
Pros: Simple to implement, minimal queries required.
Cons: The item needs to be refreshed with each change in like count. I have a whole list of items cached, which will break.
Method 2:
Create a new table 'like_summary' and store the total likes of each item in that table
Add a 'user_likes' table to keep a track that everything the user has liked.
Cache the like_summary data in memcache, and only flush it if the value changes
Pros: Less load on the main items table, it can be cached without worrying.
Cons: Too many hits on memcache (a page shows 20 items, which needs to be loaded from memcache), might be slow
Any suggestions?
| 1 |
python,architecture
|
2011-05-20T05:46:00.000
| 0 | 6,067,919 |
One relation table that does a many-to-many mapping between user and item should do the trick.
| 0 | 104 | false | 1 | 1 |
What would be a good strategy to implement functionality similar to facebook 'likes'?
| 6,067,953 |
1 | 3 | 0 | 2 | 2 | 1 | 0.132549 | 0 |
I am trying to use shutil.make_archive, but I get a "module not found" error.
Then I tried using Python 2.7 and it worked.
What is the lowest Python version that contains that module and function?
| 0 |
python,linux,shutil
|
2011-05-20T17:08:00.000
| 0 | 6,075,361 |
Python 2.7 is the earliest release to include make_archive in shutil. The shutil module in general has existed since at least Python 2.0.
| 0 | 816 | false | 0 | 1 |
What is the lowest version of Python that has the shutil module?
| 6,075,406 |
3 | 4 | 0 | 0 | 0 | 1 | 0 | 0 |
How do I clear the Python shell?
I am writing a module in Python and I want to save it in a file. What is the best way to do it?
| 0 |
python-3.x,python-idle
|
2011-05-20T22:27:00.000
| 0 | 6,078,181 |
Just copy and paste the code into a new file and save it. To run it, go to the Run menu and select "Run Module", or simply press F5.
| 0 | 7,935 | false | 0 | 1 |
Clearing Python shell
| 23,234,123 |
3 | 4 | 0 | 2 | 0 | 1 | 0.099668 | 0 |
How do I clear the Python shell?
I am writing a module in Python and I want to save it in a file. What is the best way to do it?
| 0 |
python-3.x,python-idle
|
2011-05-20T22:27:00.000
| 0 | 6,078,181 |
The Python shell does not get cleared or saved. Perhaps you are using IDLE; it's a confusing piece of software. I'd recommend getting a real IDE, or at least a proper text editor.
| 0 | 7,935 | false | 0 | 1 |
Clearing Python shell
| 6,080,090 |
3 | 4 | 0 | 2 | 0 | 1 | 1.2 | 0 |
How do I clear the Python shell?
I am writing a module in Python and I want to save it in a file. What is the best way to do it?
| 0 |
python-3.x,python-idle
|
2011-05-20T22:27:00.000
| 0 | 6,078,181 |
File -> New Window. Put your module in this new window, then save it. To run, just press F5.
| 0 | 7,935 | true | 0 | 1 |
Clearing Python shell
| 6,078,245 |
4 | 4 | 0 | 0 | 0 | 0 | 0 | 1 |
It seems like typical crawlers that just download a small number of pages or do very little processing to decide what pages to download are IO limited.
I am curious as to what order of magnitude estimates of sizes relevant data structures, number of stored pages, indexing requirements etc that might actually make CPU the bottleneck?
For example an application might want to calculate some probabilities based on the links found on a page in order to decide what page to crawl next. This function takes O(noOfLinks) and is evaluated N times (at each step)...where N is the number of pages I want to download in one round of crawling.I have to sort and keep track of these probabilities and i have to keep track of a list of O(N) that will eventually be dumped into disk and the index of a search engine. Is it not possible (assuming one machine) that N grows large enough and that storing the pages and manipulating the links gets expensive enough to compete with the IO response?
| 0 |
java,c++,python,performance,web-crawler
|
2011-05-21T01:26:00.000
| 0 | 6,079,020 |
If you're using Tomcat, search for the "Crawler Session Manager Valve".
| 0 | 210 | false | 0 | 1 |
In what scenarios might a web crawler be CPU limited as opposed to IO limited?
| 6,081,622 |
4 | 4 | 0 | 2 | 0 | 0 | 1.2 | 1 |
It seems like typical crawlers that just download a small number of pages or do very little processing to decide what pages to download are IO limited.
I am curious as to what order of magnitude estimates of sizes relevant data structures, number of stored pages, indexing requirements etc that might actually make CPU the bottleneck?
For example an application might want to calculate some probabilities based on the links found on a page in order to decide what page to crawl next. This function takes O(noOfLinks) and is evaluated N times (at each step)...where N is the number of pages I want to download in one round of crawling.I have to sort and keep track of these probabilities and i have to keep track of a list of O(N) that will eventually be dumped into disk and the index of a search engine. Is it not possible (assuming one machine) that N grows large enough and that storing the pages and manipulating the links gets expensive enough to compete with the IO response?
| 0 |
java,c++,python,performance,web-crawler
|
2011-05-21T01:26:00.000
| 0 | 6,079,020 |
Only when you are doing extensive processing on each page, e.g. if you are running some sort of AI to try to guess the semantics of the page.
Even if your crawler is running on a really fast connection, there is still overhead in creating connections, and you may also be limited by the bandwidth of the target machines.
| 0 | 210 | true | 0 | 1 |
In what scenarios might a web crawler be CPU limited as opposed to IO limited?
| 6,079,060 |
4 | 4 | 0 | 1 | 0 | 0 | 0.049958 | 1 |
It seems like typical crawlers that just download a small number of pages or do very little processing to decide what pages to download are IO limited.
I am curious as to what order of magnitude estimates of sizes relevant data structures, number of stored pages, indexing requirements etc that might actually make CPU the bottleneck?
For example an application might want to calculate some probabilities based on the links found on a page in order to decide what page to crawl next. This function takes O(noOfLinks) and is evaluated N times (at each step)...where N is the number of pages I want to download in one round of crawling.I have to sort and keep track of these probabilities and i have to keep track of a list of O(N) that will eventually be dumped into disk and the index of a search engine. Is it not possible (assuming one machine) that N grows large enough and that storing the pages and manipulating the links gets expensive enough to compete with the IO response?
| 0 |
java,c++,python,performance,web-crawler
|
2011-05-21T01:26:00.000
| 0 | 6,079,020 |
If the page contains pictures and you are trying to do face recognition on them (i.e. to form a map of pages that have pictures of each person), that may be CPU bound because of the processing involved.
| 0 | 210 | false | 0 | 1 |
In what scenarios might a web crawler be CPU limited as opposed to IO limited?
| 6,080,282 |
4 | 4 | 0 | 0 | 0 | 0 | 0 | 1 |
It seems like typical crawlers that just download a small number of pages or do very little processing to decide what pages to download are IO limited.
I am curious as to what order of magnitude estimates of sizes relevant data structures, number of stored pages, indexing requirements etc that might actually make CPU the bottleneck?
For example an application might want to calculate some probabilities based on the links found on a page in order to decide what page to crawl next. This function takes O(noOfLinks) and is evaluated N times (at each step)...where N is the number of pages I want to download in one round of crawling.I have to sort and keep track of these probabilities and i have to keep track of a list of O(N) that will eventually be dumped into disk and the index of a search engine. Is it not possible (assuming one machine) that N grows large enough and that storing the pages and manipulating the links gets expensive enough to compete with the IO response?
| 0 |
java,c++,python,performance,web-crawler
|
2011-05-21T01:26:00.000
| 0 | 6,079,020 |
Not really. It takes I/O to download these additional links, and you're right back to I/O-limited again.
| 0 | 210 | false | 0 | 1 |
In what scenarios might a web crawler be CPU limited as opposed to IO limited?
| 6,079,035 |
1 | 5 | 0 | 2 | 13 | 1 | 0.07983 | 0 |
I am doing backups in a Python script, but I need to get the size of the created tar.gz file in MB.
How can I get the size of that file in MB?
| 0 |
python,linux,file-io
|
2011-05-21T08:10:00.000
| 1 | 6,080,477 |
Use the os.stat() function to get a stat structure. The st_size attribute of that is the size of the file in bytes.
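For example, assuming a hypothetical backup.tar.gz:

import os

size_bytes = os.stat('backup.tar.gz').st_size   # or os.path.getsize(...)
size_mb = size_bytes / (1024.0 * 1024.0)
print('%.2f MB' % size_mb)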
| 0 | 28,504 | false | 0 | 1 |
How to get the size of tar.gz in (MB) file in python
| 6,080,484 |
1 | 2 | 0 | 1 | 0 | 0 | 1.2 | 0 |
From what I know, when seeding or leeching a torrent, your IP is on the tracker and remains there for a few hours or days. How do I manually tell the tracker, using libtorrent, that I am no longer going to be connected and that it should forget my IP, as I am neither seeding nor leeching? Any code bits or advice would be appreciated; currently I am using the Python bindings provided by Rasterbar, but I am okay with C++ code too.
| 0 |
c++,python,bittorrent,tracker
|
2011-05-21T12:43:00.000
| 0 | 6,081,815 |
libtorrent automatically does this when stopping a torrent, or stopping the session. If it seems to fail, you might want to increase the tracker timeout when shutting down. This will add to the shutdown delay, but will give some more overloaded trackers some more time. See session_settings::stop_tracker_timeout. By default this is 5 seconds, but sometimes trackers take much longer than that to respond, up to 30 seconds.
Trackers typically time out peers in about an hour, and you need to re-announce every 30 minutes to stay alive.
If you're trying to just send the stopped event to trackers, using a separate bittorrent client (in this case, assuming whatever client you're using fails to send stopped events to the trackers), it might be a bit less reliable.
You're supposed to include the info-hash (i.e. the unique identifier for the torrent), your key which the client generates on startup, peer-id (which is also generated by the client) and transfer statistics, in the tracker request.
You can get away with omitting the statistics, but if you don't know the info-hash or the client key, and in some cases the peer-id, the tracker won't be able to figure out that your request actually refers to your client's tracker request, and it won't remove your IP.
In practice, you might be able to get it to work by knowing just the info-hash and the tracker URL. You can get both by loading the .torrent file and reading the info-hash and tracker URLs out of it.
| 0 | 659 | true | 0 | 1 |
reporting end of seed or leeching to tracker Libtorrent
| 6,093,419 |
1 | 2 | 0 | 1 | 2 | 0 | 0.099668 | 1 |
I was wondering is there any tutorial out there that can teach you how to push multiple files from desktop to a PHP based web server with use of Python application?
Edited
I am going to be writing this, so I am wondering in general what would be the best method to push files from my desktop to a web server. Some responses mention FTP, so I will look into that (no SFTP support, sadly), so just plain old FTP. My other option is to push the data and have PHP read the data that is being sent to it, pretty much like the ActionScript + Flash file uploader I made, which pushes the files to the server where they are then fetched by PHP, and it goes on from that point.
| 0 |
php,python,file-upload
|
2011-05-22T00:27:00.000
| 0 | 6,085,280 |
I think you're referring to an application made in PHP running on some website, in which case that's just normal HTTP.
So just look at the name the file field has in the HTML form generated by that PHP script, and then do a normal POST (with urllib2 or whatever you use).
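A minimal Python 2 sketch of such a multipart POST; the field argument must match the name of the PHP form's file input (i.e. what $_FILES is keyed on):

import os
import urllib2
import uuid

def post_file(url, field, path):
    boundary = uuid.uuid4().hex
    with open(path, 'rb') as f:
        data = f.read()
    body = ('--%s\r\n'
            'Content-Disposition: form-data; name="%s"; filename="%s"\r\n'
            'Content-Type: application/octet-stream\r\n\r\n'
            % (boundary, field, os.path.basename(path))) + data + (
            '\r\n--%s--\r\n' % boundary)
    req = urllib2.Request(url, body,
        {'Content-Type': 'multipart/form-data; boundary=%s' % boundary})
    return urllib2.urlopen(req).read()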
| 0 | 2,176 | false | 0 | 1 |
how to upload files to PHP server with use of Python?
| 6,085,309 |
1 | 4 | 0 | 4 | 3 | 0 | 0.197375 | 0 |
Working with Rasterbar libtorrent, I don't want the downloaded data to sit on my hard drive; rather, I want it in a pipe or variable or something soft so I can redirect it somewhere else: MySQL, or even the trash if it is not what I want. Is there any way of doing this, preferably with the Python bindings, if not in C++, using libtorrent?
EDIT: I'd like to point out this is a libtorrent question, not a Linux or Python file-handling question. I need to tell libtorrent, instead of saving the data traditionally to a normal file, to hand it to my Python pipe or variable, etc.
| 0 |
c++,python,bittorrent
|
2011-05-22T18:11:00.000
| 1 | 6,089,806 |
If you're on Linux, you could torrent into a tmpfs mount; this will avoid writing to disk. That said, this obviously means you're storing large files in RAM; make sure you have enough memory to deal with this.
Note also that most Linux distributions have a tmpfs mount at /dev/shm, so you could simply point libtorrent to a file there.
| 0 | 1,739 | false | 0 | 1 |
Keeping the downloaded torrent in memory rather than file libtorrent
| 6,090,293 |
2 | 4 | 0 | 4 | 2 | 0 | 0.197375 | 0 |
I want to customize robot framework test report, in order to fit my need.
Where can I find the related python source that handle this feature?
Or I need to create a 3rd party library to handle this?
| 0 |
python,testing,robotframework
|
2011-05-25T07:23:00.000
| 0 | 6,120,893 |
One method, kind of lame but workable, is to use the keyword, 'Set Test Message'. This lets you put text into the test message column of the report. Whenever the test passes, you will see the message. If it fails, you see the normal failure message.
It would be great to be able to dynamically insert a documentation line, though. I'd love to be able to have the keyword, "Set Documentation Message" so that in the keyword logic I could set it, instead of copying a '[Documentation] blah, blah, blah' onto every line that it applies to.
| 0 | 23,583 | false | 0 | 1 |
how to customize robot framework test reports
| 9,087,623 |
2 | 4 | 0 | 4 | 2 | 0 | 0.197375 | 0 |
I want to customize robot framework test report, in order to fit my need.
Where can I find the related python source that handle this feature?
Or I need to create a 3rd party library to handle this?
| 0 |
python,testing,robotframework
|
2011-05-25T07:23:00.000
| 0 | 6,120,893 |
One solution is to create your own report from scratch. The XML output is very easy to parse. You can turn off the generation of reports with command line options (eg: --log NONE and --report NONE). Then, create a script that generates any type of report that you want.
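A sketch of such a custom report, assuming the output.xml layout of Robot Framework from that era (a <test> element with a <status> child; adjust to your version):

from xml.etree import ElementTree

tree = ElementTree.parse('output.xml')
for test in tree.findall('.//test'):
    status = test.find('status')
    print('%-40s %s' % (test.get('name'), status.get('status')))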
| 0 | 23,583 | false | 0 | 1 |
how to customize robot framework test reports
| 7,626,579 |
1 | 2 | 0 | 0 | 9 | 0 | 0 | 1 |
I'm using imaplib for my project because I need to access gmails accounts.
Fact: With gmail's labels each message may be on an arbitrary number of folders/boxes/labels.
The problem is that I would like to get every single label from every single message.
The first solution that comes to my mind is to use the "All Mail" folder to get all messages and then, for each message, check whether that message is in each one of the available folders.
However, I find this solution heavy and I was wondering if there's a better way to do this.
Thanks!
| 0 |
python,gmail,imaplib
|
2011-05-25T10:41:00.000
| 0 | 6,123,164 |
In IMAP you don't have labels; Gmail 'emulates' them on top of IMAP. You can look at the raw source of a message fetched over IMAP and check whether it has some custom header carrying the label.
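Gmail also exposes labels through its own IMAP extension, X-GM-LABELS, which imaplib passes through verbatim; a sketch, assuming the server supports the extension and the "All Mail" folder has that English name:

import imaplib

conn = imaplib.IMAP4_SSL('imap.gmail.com')
conn.login('user@gmail.com', 'password')
conn.select('[Gmail]/All Mail', readonly=True)

typ, data = conn.search(None, 'ALL')
for num in data[0].split():
    typ, labels = conn.fetch(num, '(X-GM-LABELS)')
    print(labels[0])      # raw response containing the message's labels
conn.logout()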
| 0 | 5,550 | false | 0 | 1 |
Python/imaplib - How to get messages' labels?
| 6,128,926 |