Available Count
int64 1
31
| AnswerCount
int64 1
35
| GUI and Desktop Applications
int64 0
1
| Users Score
int64 -17
588
| Q_Score
int64 0
6.79k
| Python Basics and Environment
int64 0
1
| Score
float64 -1
1.2
| Networking and APIs
int64 0
1
| Question
stringlengths 15
7.24k
| Database and SQL
int64 0
1
| Tags
stringlengths 6
76
| CreationDate
stringlengths 23
23
| System Administration and DevOps
int64 0
1
| Q_Id
int64 469
38.2M
| Answer
stringlengths 15
7k
| Data Science and Machine Learning
int64 0
1
| ViewCount
int64 13
1.88M
| is_accepted
bool 2
classes | Web Development
int64 0
1
| Other
int64 1
1
| Title
stringlengths 15
142
| A_Id
int64 518
72.2M
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 1 | 0 | 17 | 14 | 1 | 1 | 0 | Is there any runtime-logic difference between these two methods? Or any behviour differences?
If not, then should I forget about __init__ and use only setUpClass thinking here about unittests classes like about namespaces instead of language OOP paradigm? | 0 | python,oop,unit-testing | 2013-08-05T10:55:00.000 | 0 | 18,056,464 | The two are quite different.
setUpClass is a class method, for one, so it'll only let you set class attributes.
They are also called at different times. The test runner creates a new instance for every test. If your test class contains 5 test methods, 5 instances are created and __init__ is called 5 times.
setUpClass is normally called only once. (If you shuffle up test ordering and test methods from different classes are intermingled, setUpClass can be called multiple times, use tearDownClass to clean up properly and that won't be a problem).
Also, a test runner usually creates all test instances at the start of the test run; this is normally cheap, as test instances don't hold (much) state so won't take up much memory.
As a rule of thumb, you should not use __init__ at all. Use setUpClass to create state shared between all the tests, and use setUp to create per-test state. setUp is called just before a test is run, so you can avoid building up a lot of memory-intensive state until it is needed for a test, and not before. | 0 | 3,648 | false | 0 | 1 | When should I use setUpClass and when __init__? | 18,056,870 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | Basically, I need to write a Python script that can download all of the attachment files in an e-mail, and then organize them based on their name. I am new to using Python to interact with other applications, so I was wondering if you had any pointers regarding this, mainly how to create a link to Lotus Notes (API)? | 0 | python,api,lotus-notes,email-attachments | 2013-08-05T20:12:00.000 | 0 | 18,066,856 | You can do this in LotusScript as an export of data. This could be an agent that walks down a view in Notes, selects a document, document attachments could be put into a directory. Then with those objects in the directory(ies) you can run any script you like like a shell script or whatever.
With LotusScript you can grab out meta data or other meaningful text for your directory name. Detach the objects you want from richtext then move to the next document. The view that you travel down will effect the type of documents that you are working with. | 0 | 1,295 | false | 0 | 1 | Using Python To Access E-mail (Lotus Notes) | 18,662,488 |
1 | 2 | 0 | 1 | 0 | 1 | 0.099668 | 0 | We are developing a numerical simulation program in FORTRAN90 (procedural, not OO and unfortunately some COMMON blocks are present but no GOTO's :-) ) and are thinking of using Python to help us in unit-testing (retroactively) and verification testing. We would like to set up a testing environment in Python to a) do unit-testing and b) do verification testing (i.e. run small test cases with well-known solutions). We would like to be able to group different tests together (by FORTRAN90 procedure for unit-testing or by problem topic for verification testing) and allow tests to be run either individually or by group.
The simulation program is text-input/output based, so we could come up with some input files to be run and compared to verified output files. For unit testing, however, I guess we will probably need to write wrappers for each FORTRAN90 subroutine.
Has anybody been in a likewise situation before? What tips can you give us?
thanks.
(btw rewriting the FORTRAN90 code in Python is not (yet) an option) | 0 | python,unit-testing,testing,fortran,fortran90 | 2013-08-06T06:19:00.000 | 0 | 18,072,977 | If you use the "os.system()" function, this can be used to call linux/unix commands from the python script directly. You can also use the "subprocess" module.
Use it like this:
os.system("ls -G")
This will call 'ls -G' from python just like if you were calling it yourself. You can easily compile and call fortran code using this command as well. Or, if you're familiar with bash scripting, you could use that as a wrapper for your unit testing as well. The scientific computing community seems to like perl for these types of tasks, but python should work just fine.
At least you're working with fortran90 and not fortran77. Those goto statements can make debugging a code excessively interesting. :P | 0 | 377 | false | 0 | 1 | Using Python for testing non-python code | 18,076,400 |
1 | 1 | 0 | 0 | 1 | 0 | 1.2 | 0 | I am trying to get / set screen pixels (draw picture, line, circle, box, etc.) without starting x session. i tried google it but no success.
I am new to python. please help | 0 | python,python-2.7,graphics,pixel,raspberry-pi | 2013-08-06T08:04:00.000 | 0 | 18,074,758 | I think the is no way to have graphics without x-session.
The best solution to this is to set boot to desktop and using pygame library create full screen window to draw graphics. | 0 | 770 | true | 0 | 1 | Python graphics on raspberry pi command line | 22,917,790 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I have a small problem running a python script as a specific user account in my CentOS 6 box.
My cron.d/cronfile looks like this:
5 17 * * * reports /usr/local/bin/report.py > /var/log/report.log 2>&1
The account reports exists and all the files that are to be accessed by that script are chowned and chgrped to reports. The python script is chmod a+r. The python script starts with a #!/usr/bin/env python.
But this is not the problem. The problem is that I see nothing in the logfile. The python script doesn't even start to run! Any ideas why this might be?
If I change the user to root instead of reports in the cronfile, it runs fine. However I cannot run it as root in production servers.
If you have any questions please ask :)
/e:
If I do sudo -u reports python report.py it works fine. | 0 | python,cron,cron-task | 2013-08-08T16:18:00.000 | 1 | 18,131,050 | Cron jobs run with the permissions of the user that the cron job was setup under.
I.E. Whatever is in the cron table of the reports user, will be run as the reports user.
If you're having to so sudo to get the script to run when logged in as reports, then the script likely won't run as a cron job either. Can you run this script when logged in as reports without sudo? If not, then the cron job can't either. Make sense?
Check your logs - are you getting permissions errors?
There are a myriad of reasons why your script would need certain privs, but an easy way to fix this is to set the cron job up under root instead of reports. The longer way is to see what exactly is requiring elevated permissions and fix that. Is it file permissions? A protected command? Maybe adding reports to certain groups would allow you to run it under reports instead of root.
*be ULTRA careful if/when you setup cron jobs as root | 0 | 1,721 | false | 0 | 1 | Running python cron script as non-root user | 18,131,872 |
2 | 4 | 0 | 0 | 1 | 0 | 0 | 0 | Currently developping a RPG, I'm asking how could I protect the saved data so that the player/user can't read or modify it easily. I mean yes a person that is experienced with computers and programming could modify it but I don't want the lambda user to be able to modify it, as easily as one could modify a plaintext xml file.
Is there a way I could do that with python? | 0 | python,pygame | 2013-08-09T07:32:00.000 | 0 | 18,142,023 | pickle may be too big. And work strange in some cases.
ZIP allows to read-write data from different parts of archive.
try zip with password or change first bytes of file to prevent normal unpack
PS. Make some different variants and check size/speed | 0 | 285 | false | 0 | 1 | Protecting a Save file from modification in a game? | 18,142,439 |
2 | 4 | 0 | 2 | 1 | 0 | 0.099668 | 0 | Currently developping a RPG, I'm asking how could I protect the saved data so that the player/user can't read or modify it easily. I mean yes a person that is experienced with computers and programming could modify it but I don't want the lambda user to be able to modify it, as easily as one could modify a plaintext xml file.
Is there a way I could do that with python? | 0 | python,pygame | 2013-08-09T07:32:00.000 | 0 | 18,142,023 | Just pickle or cpickle a configuration object with the compression set to max is a quick and easy option. | 0 | 285 | false | 0 | 1 | Protecting a Save file from modification in a game? | 18,142,235 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I have used standart ide for python - IDLE for a long time. It has convinient debug. I can write script, press F5 and it is possible to use all objects in terminal.
Now i want working with eclipse and pydev plugin. Is there any similar way to debug in eclipse? | 0 | python,eclipse,pydev | 2013-08-09T23:14:00.000 | 1 | 18,157,029 | Yes, there is.
Just start debugging - as far as I know, you have to set breakpoint, otherwise program just run to the end. And when stopped at breakpoint, in console window, click open console icon -> choose pydev console -> PyDev Debug Console.
Let me know if it works for you. | 0 | 459 | false | 0 | 1 | debug script with pydev in eclipse | 18,157,426 |
1 | 9 | 0 | 10 | 226 | 1 | 1 | 0 | I have a Python module that uses the argparse library. How do I write tests for that section of the code base? | 0 | python,unit-testing,argparse | 2013-08-10T08:25:00.000 | 0 | 18,160,078 | Populate your arg list by using sys.argv.append() and then call
parse(), check the results and repeat.
Call from a batch/bash file with your flags and a dump args flag.
Put all your argument parsing in a separate file and in the if __name__ == "__main__": call parse and dump/evaluate the results then test this from a batch/bash file. | 0 | 91,355 | false | 0 | 1 | How do you write tests for the argparse portion of a python module? | 18,160,281 |
1 | 6 | 0 | 30 | 36 | 0 | 1 | 1 | How can I get the number of milliseconds since epoch?
Note that I want the actual milliseconds, not seconds multiplied by 1000. I am comparing times for stuff that takes less than a second and need millisecond accuracy. (I have looked at lots of answers and they all seem to have a *1000)
I am comparing a time that I get in a POST request to the end time on the server. I just need the two times to be in the same format, whatever that is. I figured unix time would work since Javascript has a function to get that | 0 | python,benchmarking | 2013-08-11T05:37:00.000 | 0 | 18,169,099 | time.time() * 1000 will give you millisecond accuracy if possible. | 0 | 72,159 | false | 0 | 1 | Comparing times with sub-second accuracy | 18,169,127 |
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | Is there a way to force pylint to see sublime and sublime_plugin modules?
I have tried adding sublime folder to pythonpath but it hasn't worked out.
These two errors really annoy me:
PyLinter: F0401: Unable to import 'sublime'
PyLinter: F0401: Unable to import 'sublime_plugin'
Thanks. | 0 | python,sublimetext2,pylint | 2013-08-13T07:28:00.000 | 0 | 18,203,079 | It is possible, but utmost difficult.
You can try to see if these modules are available as .py files in Sublime Text 2 source code and drop then to PYTHONPATH Pylint reads.
If the modules in the question are native modules, Pylint can see only if they are distributed as shared libraries and you point PYTHONPATH/LD_LIBRARY_PATH to this modules. If the modules are embedded inside Sublime Text 2 binary you have little hope to make Pylint understand them (unless you provide hinting by hand). In this case the behavior is operating system specific. | 0 | 335 | false | 0 | 1 | Pylint sublime plugin development | 18,387,411 |
1 | 2 | 1 | 0 | 4 | 1 | 0 | 0 | How do I run C++ and Boost::Python code in parallel without problems?
Eg in my game I'd want to execute Python code in parallel with C++ code; if the embedded Python interpreter's code executes a blocking loop, like while(True): pass, the C++ code would still be running and processing frames to render with its own loop.
I tried with boost::thread and std::thread but unless I joined these threads with the main thread the program would crash...
Any suggestions or examples? | 0 | c++,python,multithreading,loops,boost-python | 2013-08-13T15:25:00.000 | 0 | 18,213,159 | You need to use the multiprocessing module in python so that you get a separate GIL for each python thread. | 0 | 1,171 | false | 0 | 1 | Embedded Boost::Python and C++ : run in parallel | 18,213,302 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | I'm betting there's a simple solution to this problem that I don't know, and from googling and stackoverflowing around it seems to have something to do with setting a path.
I have anaconda installed on my computer and it seems to use python 2.7.4. I also have python 2.7.3 installed, which seems to be the version being used when I open up IDLE. When I installed fuzzywuzzy using 'python setup.py install' it's installed in the anaconda folder and using python in powershell, the command 'from fuzzywuzzy import fuzz' works fine, but when doing the same thing in IDLE I get a missing module error.
Is there a way to reconcile the two versions of Python? Can I get them to share packages, or delete one of the versions without ruining everything?
I tried doing this:
'''
Setting the PYTHONPATH / PYTHONHOME variables
Right click the Computer icon in the start menu, go to properties. On the left tab, go to Advanced system settings. In the window that comes up, go to the Advanced tab, then at the bottom click Environment Variables. Click in the list of user variables and start typing Python, and repeat for System variables, just to make certain that you don't have mis-set variables for PYTHONPATH or PYTHONHOME. Next, add new variables (I did in System rather than User, although it may work for User too): PYTHONPATH, set to C:\Python27\Lib. PYTHONHOME, set to C:\Python27.
'''
then reinstalled fuzzywuzzy, and it installed in the C:Python27 folder and works in IDLE, but now Kivy doesn't work!
Do I need to reinstall that too? Or is there a Path sharing fix? | 0 | python,windows,module,fuzzywuzzy | 2013-08-13T17:26:00.000 | 0 | 18,215,466 | Try to wrap one of your conflict programs in CMD file. Like python-virtualenv. | 0 | 623 | false | 0 | 1 | Python missing module v 2.7.3 and Windows 7: Installed fuzzywuzzy, imports in powershell, not in IDLE | 18,218,012 |
4 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | This question is a bit far fetched (i don't even know if the way i'm going about doing this is correct).
I have a script that gathers some information on a computer. The intent is to have that script ftp/sftp/any-transfer etc some data to a remote server. This script is intended to be distributed among many people also.
Is it possible to hide the password/user of remote server in the script (or perhaps even the implementation details?). I was thinking of encoding it in some way. Any suggestions?
Also, in compiled languages like java or C, is it safe to just distribute around a compiled version of the code?
Thanks. | 0 | python | 2013-08-13T22:00:00.000 | 0 | 18,219,951 | I sometimes compress credentials with zlib and compile to pyo file.
It protect from "open in editor and press ctrl+f" and from not-programmers only.
Sometimes I used PGP cryptography.) | 0 | 103 | false | 0 | 1 | Disguising username & password on distributed python scripts | 18,220,247 |
4 | 4 | 0 | 2 | 0 | 0 | 0.099668 | 0 | This question is a bit far fetched (i don't even know if the way i'm going about doing this is correct).
I have a script that gathers some information on a computer. The intent is to have that script ftp/sftp/any-transfer etc some data to a remote server. This script is intended to be distributed among many people also.
Is it possible to hide the password/user of remote server in the script (or perhaps even the implementation details?). I was thinking of encoding it in some way. Any suggestions?
Also, in compiled languages like java or C, is it safe to just distribute around a compiled version of the code?
Thanks. | 0 | python | 2013-08-13T22:00:00.000 | 0 | 18,219,951 | The answer is no. You can't put the authentication details into the program and make it impossible for users to get those same authentication details. You can try to obfuscate them, but it is not possible to ensure that they cannot be read.
Compiling the code will not even obfuscate them very much.
One approach to the problem would be to implement a REST web interface and supply each distribution of the program with an API key of some sort. Then set up the program to connect to the interface over SSL using its key and put whatever information it needs there. Then you could track which version is connecting from where and limit each distribution of the program to updating a restricted set of resources on the server. Furthermore you could use server heuristics to guess if an api key has leaked and block an account if that occurs.
Another way would be if all of the hosts/users of the program are trusted, then you could set up user accounts on a server node and each script could authenticate with its own username and password or SSH key. Your server node would then have to restrict access based on what each user is allowed to update. Using SSH key based authentication allows you to avoid leaving the passwords around while still allowing authenticated access to your server. | 0 | 103 | false | 0 | 1 | Disguising username & password on distributed python scripts | 18,219,982 |
4 | 4 | 0 | 2 | 0 | 0 | 0.099668 | 0 | This question is a bit far fetched (i don't even know if the way i'm going about doing this is correct).
I have a script that gathers some information on a computer. The intent is to have that script ftp/sftp/any-transfer etc some data to a remote server. This script is intended to be distributed among many people also.
Is it possible to hide the password/user of remote server in the script (or perhaps even the implementation details?). I was thinking of encoding it in some way. Any suggestions?
Also, in compiled languages like java or C, is it safe to just distribute around a compiled version of the code?
Thanks. | 0 | python | 2013-08-13T22:00:00.000 | 0 | 18,219,951 | Just set the name to "username" and password to "password", and then when you give it to your friends, provision an account/credential that's only for them, and tell them to change the script and be done with it. That's the best/easiest way to do this. | 0 | 103 | false | 0 | 1 | Disguising username & password on distributed python scripts | 18,220,112 |
4 | 4 | 0 | 1 | 0 | 0 | 0.049958 | 0 | This question is a bit far fetched (i don't even know if the way i'm going about doing this is correct).
I have a script that gathers some information on a computer. The intent is to have that script ftp/sftp/any-transfer etc some data to a remote server. This script is intended to be distributed among many people also.
Is it possible to hide the password/user of remote server in the script (or perhaps even the implementation details?). I was thinking of encoding it in some way. Any suggestions?
Also, in compiled languages like java or C, is it safe to just distribute around a compiled version of the code?
Thanks. | 0 | python | 2013-08-13T22:00:00.000 | 0 | 18,219,951 | to add onto jmh's comments and answer another part of your question, it is possible to decompile the java from the .class byte code and get almost exactly what the .java file contains so that won't help you. C is more difficult to piece back together but again, its certainly possible. | 0 | 103 | false | 0 | 1 | Disguising username & password on distributed python scripts | 18,220,062 |
1 | 1 | 0 | 2 | 3 | 0 | 0.379949 | 0 | I am spawning some processes with Popen (Python 2.7, with Shell=True) and then sending SIGINT to them. It appears that the process group leader is actually the Python process, so sending SIGINT to the PID returned by Popen, which is the PID of bash, doesn't do anything.
So, is there a way to make Popen create a new process group? I can see that there is a flag called subprocess.CREATE_NEW_PROCESS_GROUP, but it is only for Windows.
I'm actually upgrading some legacy scripts which were running with Python2.6 and it seems for Python2.6 the default behavior is what I want (i.e. a new process group when I do Popen). | 0 | python,linux | 2013-08-15T15:10:00.000 | 1 | 18,255,730 | bash does not handle signals while waiting for your foreground child process to complete. This is why sending it SIGINT does not do anything. This behaviour has nothing to do with process groups.
There are a couple of options to let your child process receive your SIGINT:
When spawning a new process with Shell=True try prepending exec to the front of your command line, so that bash gets replaced with your child process.
When spawning a new process with Shell=True append the command line with & wait %-. This will cause bash to react to signals while waiting for your child process to complete. But it won't forward the signal to your child process.
Use Shell=False and specify full paths to your child executables. | 0 | 3,257 | false | 0 | 1 | Popen new process group on linux | 18,255,933 |
2 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | As a undergraduate in CS, I started with C, where pointer is an important data type. Thereafter I touched on Java, JavaScript, PHP, Python. None of them have pointer per se.
So why? Do they have some equivalents in their languages that perform like pointer, or is the functionality of pointer not important anymore?
I roughly have some idea but I'd love to hear from more experienced programmers regarding this. | 0 | php,javascript,python,c,pointers | 2013-08-15T16:13:00.000 | 0 | 18,256,915 | In Java:
Instead of having a pointer to a struct that you allocate with malloc, you have a reference to an instance of a class that you instantiate with "new". (In Java, you cannot allocate memory for objects on the heap directly as you can in C/C++)
Primitives have no pointers, BUT there are libraries built into the main library for wrapping int,double, etc. in objects (Integer, Double). | 0 | 188 | false | 1 | 1 | C pointer equivalents on other languages | 18,257,064 |
2 | 3 | 0 | 5 | 0 | 0 | 1.2 | 0 | As a undergraduate in CS, I started with C, where pointer is an important data type. Thereafter I touched on Java, JavaScript, PHP, Python. None of them have pointer per se.
So why? Do they have some equivalents in their languages that perform like pointer, or is the functionality of pointer not important anymore?
I roughly have some idea but I'd love to hear from more experienced programmers regarding this. | 0 | php,javascript,python,c,pointers | 2013-08-15T16:13:00.000 | 0 | 18,256,915 | So why?
In general, pointers are considered too dangerous, so modern languages try to avoid their direct use.
Do they have some equivalents in their languages that perform like pointer, or is the functionality of pointer not important anymore?
The functionality is VERY important. But to make them less dangerous, the pointer has been abstracted into less virulent types, such as references.
Basically, this boils down to stronger typing, and the lack of pointer arithmetic. | 0 | 188 | true | 1 | 1 | C pointer equivalents on other languages | 18,257,037 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | In raspian I can make virtual envs in my home directory but when I try to make a virtual env in a folder on my thumb drive it says the os prevents it ("operation not permitted"). Is this a known issue? | 0 | python,virtualenv,raspberry-pi | 2013-08-16T20:58:00.000 | 1 | 18,282,042 | Needed to format my drive with linux partition - not fat partition. | 0 | 121 | false | 0 | 1 | raspbian python virtualenv not working on thumb drive | 18,550,607 |
3 | 4 | 0 | 1 | 3 | 1 | 0.049958 | 0 | The title is a little hard to understand, but my question is simple.
I have a program that needs to take the sqrt() of something, but that's the only thing I need from math. It seems a bit wasteful to import the entire module to grab a single function.
I could say from math import sqrt, but then sqrt() would be added to my program's main namespace and I don't want that (especially since I plan to alter the program to be usable as a module; would importing like that cause problems in that situation?). Is there any way to import only that one function while still retaining the math.sqrt() syntax?
I'm using Python 2.7 in this specific case, but if there's a different answer for Python 3 I'd like to hear that too for future reference. | 0 | python,python-2.7,code-readability | 2013-08-18T14:28:00.000 | 0 | 18,300,122 | Use from math import sqrt. You can protect which functions you export from the module using an __all__ statement. __all__ should be a list of names you want to export from your module. | 0 | 139 | false | 0 | 1 | From-Import while retaining access by module | 18,300,147 |
3 | 4 | 0 | 6 | 3 | 1 | 1.2 | 0 | The title is a little hard to understand, but my question is simple.
I have a program that needs to take the sqrt() of something, but that's the only thing I need from math. It seems a bit wasteful to import the entire module to grab a single function.
I could say from math import sqrt, but then sqrt() would be added to my program's main namespace and I don't want that (especially since I plan to alter the program to be usable as a module; would importing like that cause problems in that situation?). Is there any way to import only that one function while still retaining the math.sqrt() syntax?
I'm using Python 2.7 in this specific case, but if there's a different answer for Python 3 I'd like to hear that too for future reference. | 0 | python,python-2.7,code-readability | 2013-08-18T14:28:00.000 | 0 | 18,300,122 | Either way you "import" the complete math module in a sense that it's compiled and stored in sys.modules. So you don't have any optimisation benefits if you do from math import sqrt compared to import math. They do exactly the same thing. They import the whole math module, store it sys.modules and then the only difference is that the first one brings the sqrt function into your namespace and the second one brings the math module into your namespace. But the names are just references so you wont benefit memory wise or CPU wise by just importing one thing from the module.
If you want the math.sqrt syntax then just use import math. If you want the sqrt() syntax then use from math import sqrt.
If your concern is protecting the user of your module from polluting his namespace if he does a star import: from your_module import * then define a __all__ variable in your module which is a list of strings representing objects that will be imported if the user of your module does a start import. | 0 | 139 | true | 0 | 1 | From-Import while retaining access by module | 18,300,189 |
3 | 4 | 0 | 0 | 3 | 1 | 0 | 0 | The title is a little hard to understand, but my question is simple.
I have a program that needs to take the sqrt() of something, but that's the only thing I need from math. It seems a bit wasteful to import the entire module to grab a single function.
I could say from math import sqrt, but then sqrt() would be added to my program's main namespace and I don't want that (especially since I plan to alter the program to be usable as a module; would importing like that cause problems in that situation?). Is there any way to import only that one function while still retaining the math.sqrt() syntax?
I'm using Python 2.7 in this specific case, but if there's a different answer for Python 3 I'd like to hear that too for future reference. | 0 | python,python-2.7,code-readability | 2013-08-18T14:28:00.000 | 0 | 18,300,122 | The short answer is no. Just do from math import sqrt. It won't cause any problems if you use the script as a module, and it doesn't make the code any less readable. | 0 | 139 | false | 0 | 1 | From-Import while retaining access by module | 18,300,146 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | I use Symantec Encryption Desktop v.10.3.0 and Microsoft Outlook v. 14.0.6129.5000 (32bit) in my pc.
I use SEC to encrypt a zip file containing a text document and then I attach the encrypted archive (filename.zip.pgp) and send it through Microsoft Exchange Server.
If I do this procedure manually the receiver gets a *.pgp attachment containing a zip, that contains a *.txt file.
If a use python's smtplib and email modules for sending the e-mail and gnupg module for the encryption I have the following problem:
If the receiver saves the .pgp archive in her disk and then uses SEC, the file opens fine.
But if the receiver double-clicks in the attachment inside Outlook the pgp file opens showing a *.txt file (and not a zip file) with the following filename: "filename zip.txt"
This is of course the zip file but with a different extension (txt).
Anyone knows why is this happening? | 0 | python,outlook,email-attachments,pgp,symantec | 2013-08-19T14:33:00.000 | 0 | 18,316,438 | As far as I recall, when Symantec Encryption Desktop creates a PGP file, it is also zipping. This is how I used the Symantec Command Line API tool, as I would select multiple files for encryption and they would end up in a single file (like a zip).
So, you would probably remove any Outlook quirks by just PGPing the txt file, without the zip step in the middle. | 0 | 591 | false | 0 | 1 | If I open pgp attachments in Outlook the file extension changes | 22,923,765 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I'm working on a Python tool for wide distribution (as .exe/.app) that will email reports to the user. Currently (in testing), I'm using smtplib to build the message and send it via GMail, which requires a login() call. However, I'm concerned as to the security of this - I know that Python binaries aren't terribly secure, and I'd rather not have the password stored as plaintext in the executable.
I'm not terribly familiar with email systems, so I don't know if there's something that could securely be used by the .exe. I suppose I could set up a mail server without authentication, but I'm concerned that it'll end up as a spam node. Is there a setup that will allow me to send mail from a distributed Python .exe/.app without opening it to potential attacks? | 0 | python,email,smtp | 2013-08-19T15:23:00.000 | 0 | 18,317,455 | One possible solution is to create a web backend mantained by you which accepts a POST call and sends the passed message only to authorized addresses.
This way you can also mantain the list of email addresses on your server.
Look at it like an online error alerter. | 0 | 241 | false | 0 | 1 | Securely Send Email from Python Executable | 18,317,530 |
1 | 1 | 0 | 2 | 3 | 0 | 0.379949 | 0 | I have two processes one C and one python. The C process spends its time passing data to a named pipe which the python process then reads. Should be pretty simple and it works fine when I'm passing data (currently a time stamp such as "Mon Aug 19 18:30:59 2013") once per second.
Problems occur when I take out the sleep(1); command in the C process. When there's no one second delay the communication quickly gets screwed up. The python process will read more than one message or report that it has read data even though its buffer is empty. At this point the C process usually bombs.
Before I go posting any sample code I'm wondering if I need to implement some sort of synchronisation on both sides. Like maybe telling the C process not to write to the fifo if it's not empty?
The C process opens the named pipe write only and the python process opens as read only.
Both processes are intended to be run as loops. The C process continually reads data as it comes in over a USB port and the python process takes each "message" and parses it before sending it to a SQL Db.
If I'm going to be looking at up to 50 messages per second, will named pipes be able to handle that level of transaction rate? The size of each transaction is relatively small (20 bytes or so) but the frequency makes me wonder if I should be looking at some other form of inter-process communication such as shared memory?
Any advice appreciated. I can post code if necessary but at the moment I'm just wondering if I should be syncing between the two processes somehow.
Thanks! | 0 | python,c,named-pipes,fifo,mkfifo | 2013-08-19T18:03:00.000 | 1 | 18,320,199 | A pipe is a stream.
The number of write() calls on the sender side does not necessarily need to correspond to the number of read()s on the receiver's side.
Try to implement some sort of synchronisation protocol.
If sending plain text you could do so for example by adding new-lines between each token and make the receiver read up until one of such is found.
Alternatively you could prefix each data sent, with a fixed length number representing the amount of the data to come. The receiver then can parse this format. | 0 | 1,407 | false | 0 | 1 | Named pipe race condition? | 18,320,287 |
1 | 5 | 1 | 1 | 2 | 0 | 0.039979 | 0 | I've a c++ code on my mac that uses non-standard lybraries (in my case, OpenCV libs) and need to compile this so it can be called from other computers (at least from other mac computers). Runned from python. So I've 3 fundamental questions:
How to compile my project so it can be used from python? I've read
that I should create a *.so file but how to do so?
Should it work like a lib, so python calls some specific functions,
chosen in python level?
Or should it contain a main function that is executed from
command line?
Any ideas on how to do so? PS: I'm using the eclipse IDE to compile my c++ project.
Cheers, | 0 | c++,python,c,opencv,compilation | 2013-08-20T18:33:00.000 | 1 | 18,342,535 | How to compile my project so it can be used from python? I've read
that I should create a *.so file but how to do so?
That depends on your compiler. By example with g++:
g++ -shared -o myLib.so myObject.o
Should it work like a lib, so python calls some specific functions,
chosen in python level?
Yes it is, in my opinion. It seems do be the "obvious" way, since it's great for the modularity and the evolution of the C++ code. | 0 | 3,206 | false | 0 | 1 | Calling c++ function, from Python script, on a Mac OSX | 18,342,743 |
1 | 4 | 0 | 3 | 1 | 1 | 1.2 | 0 | I'm developing a twitter app on google appengine - for that I want to use the Twython library. I tried installing it using pip - but it either installs it in the main python dir, or doesn't import all dependencies.
I can simply copy all the files of Twython to the appengine root dir, and also import manually all the dependency libraries, but that seems awfully wrong. How do I install a package in a specific folder including all it's dependencies?
Thanks | 0 | python,google-app-engine,twitter,package,twython | 2013-08-21T13:46:00.000 | 1 | 18,359,184 | If you put the module files in a directory, for example external_modules/, and then use sys.path.insert(0, 'external_modules') you can include the module as it would be an internal module.
You would have to call sys.path.insert before the first import of the module.
Example: If you placed a "module.pyd" in external_modules/ and want to include it with import module, then place the sys.path.insert before.
The sys.path.insert() is an app-wide call, so you have to call it only once. It would be the best to place it in the main file, before any other imports (except import sys of course). | 0 | 2,148 | true | 0 | 1 | How to install python package in a specific directory | 18,362,592 |
1 | 2 | 0 | 0 | 2 | 0 | 0 | 1 | I'm writing a Python script which will be run on many different servers. A vital part of the script relies on the paramiko module, but it's likely that the servers do not have the paramiko package installed already. Everything needs to automated, so all the user has to do is run the script and everything will be completed for them. They shouldn't need to manually install anything.
I've seen that people recommend using Active Python / PyPM, but again, that requires an installation.
Is there a way to download and install Paramiko (and any package) from a Python script? | 0 | python,paramiko | 2013-08-21T16:11:00.000 | 0 | 18,362,568 | Wrap your Python program in a shell script that checks that paramiko is installed and if it isn't installs it before running your program. | 0 | 5,173 | false | 0 | 1 | How to download/install paramiko on any sever from Python script? | 18,363,186 |
1 | 2 | 0 | 2 | 2 | 0 | 1.2 | 0 | I have a tiny Python script that needs to read/write to a file. It works when I run it from the command line (since I am root, it will) , but when the cron job runs it cannot access the file.
The file is in the same folder as the script and is (should) be created from the script.
I'm not sure if this is really a programming question... | 0 | python,linux,file-io,cron | 2013-08-22T08:31:00.000 | 1 | 18,375,308 | Please use absolute path in your script when using crontab to run it | 0 | 390 | true | 0 | 1 | Python cron job file access | 18,375,496 |
1 | 1 | 0 | 2 | 0 | 0 | 1.2 | 0 | I have recently moved from Python on Windows to Python on Ubuntu. In Windows I could just hit F5 in the IDLE editor to run the script. However, in Ubuntu I have to run the script by typing python /path/to/file.py to execute.
The thing is it seems the imports within the file are not working when I run from command line.
It gives me the error:
NameError: global name 'open_file' is not defined
This is the open_file method of Pytables. In the python file I have:
from tables import *
I have made the file executable and all.
Appreciate your help. | 0 | python,ubuntu | 2013-08-22T17:30:00.000 | 1 | 18,387,093 | The pytables on my ubuntu system is 2.3.1. I think that open_file is a version 3 thing. I'm not sure where you can pick up the latest package, but you could always install the latest with pip. | 0 | 546 | true | 0 | 1 | Running Python script from Ubuntu terminal NameError | 18,387,374 |
1 | 3 | 0 | 0 | 0 | 1 | 0 | 0 | Python newbiew here. Please bear with me.
Having written some good chunk of code one thing bothers me is that I cannot find a way to tell eclipse that this file/funciton is the starting point for my project (test).
I step into code during debugging and end up in some file deep in the code. Then if I want to run it again I go to the tab containing the start file and run it again. It would be nice to be able to specify a "main" function for a python project like we do in C for e.g.
Is something like that possible? If not can I at least tell eclipse to use that one file as the starting point for the project? | 0 | python,eclipse | 2013-08-23T07:47:00.000 | 0 | 18,397,534 | You can also name one of your files __main__.py. | 0 | 126 | false | 0 | 1 | Setting a default start function for your python project | 18,397,967 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I'm using the python packages xlrd and xlwt to read and write from excel spreadsheets using python. I can't figure out how to write the code to solve my problem though.
So my data consists of a column of state abbreviations and a column of numbers, 1 through 7. There are about 200-300 entries per state, and i want to figure out how many ones, twos, threes, and so on exist for each state. I'm struggling with what method I'd use to figure this out.
normally i would post the code i already have but i don't even know where to begin. | 1 | python,excel,xlrd,xlwt | 2013-08-24T00:09:00.000 | 0 | 18,413,606 | Prepare a dictionary to store the results.
Get the numbers of line with data you have using xlrd, then iterate over each of them.
For each state code, if it's not in the dict, you create it also as a dict.
Then you check if the entry you read on the second column exists within the state key on your results dict.
4.1 If it does not, you'll create it also as a dict, and add the number found on the second column as a key to this dict, with a value of one.
4.2 If it does, just increment the value for that key (+1).
Once it has finished looping, your result dict will have the count for each individual entry on each individual state. | 0 | 247 | false | 0 | 1 | Python Programming approach - data manipulation in excel | 18,413,675 |
1 | 1 | 0 | 4 | 0 | 0 | 0.664037 | 0 | Since Firebase can do user login as well as hold a lot of other stuff about users and their interactions with my app.
What are some of the advantages and disadvantages of using Firebase solely as a web framework, instead of using django, pyramids, bottle, etc etc?
http routing, etc etc.... I have that sorta stuff handle by another process...
So, if I'm looking basically to hold some user stuff and allow for user logins and user to user private/personal communications.
It seems firebase is an almost total solution, no?
I know this isn't a technical question, but I'm just looking for opinions from a realtime crowd....stackoverflow seems the best fit. | 0 | python,django,pyramid,firebase,bottle | 2013-08-24T16:48:00.000 | 0 | 18,420,854 | Some contras of using Firebase:
Your data is in an external server (deal-breaker for sensitive data)
It costs money
You have an additional dependency that you don't fully control (if they go out of service/business you might be in trouble)
You know the pros. If you think these are not relevant to you then go for it. | 0 | 2,929 | false | 1 | 1 | Python Frameworks vs Firebase | 18,420,953 |
1 | 1 | 0 | 5 | 1 | 0 | 0.761594 | 0 | i've written a few unittests for a Django project. I'd like to debug them. I've set a break point on the server side. what should I click to run the Django unittest with debugging enabled in PyDev Eclipse?
It seems I can run the manage.py test command from Pydev, but then there's no debugging. If I run the unittest with right-click debug unittest, then I get all sort Internal Server errors presumably because the test envrionment wasnt set up correctly. | 0 | python,django,unit-testing,debugging,pydev | 2013-08-25T00:28:00.000 | 0 | 18,424,495 | Setup a new debug configuration.
Run -> Debug Configurations...
Select 'PyDev Django'
Click 'New Launch Configuration (top left corner)
Name your new configuration
Set the project to your project
Set the module to your manage.py (browse to your manage.py)
Go to the 'Arguments' tab and enter 'test' under 'Program arguments'
Click 'Apply'
This will allow you to run 'manage.py test' and be able to stop on your breakpoints.
Unfortunately, you'll have to create different configurations if you only want to run a subset of tests. | 0 | 1,988 | false | 1 | 1 | How to debug Django unittests with PyDev? | 19,337,234 |
1 | 2 | 0 | 6 | 5 | 0 | 1.2 | 0 | I'm trying to determine if the operating system is Unix-based from a Python script. I can think of two ways to do this but both of them have disadvantages:
Check if platform.system() is in a tuple such as ("Linux", "Darwin"). The problem with this is that I don't want to provide a list of every Unix-like system every made, in particular there are many *BSD varieties.
Check if the function os.fchmod exists, as this function is only available on Unix. This doesn't seem like a clean or "Pythonic" way to do it. | 0 | python,unix | 2013-08-27T18:32:00.000 | 1 | 18,472,956 | The Pythonic way to do it is not to care what platform you are on.
If there are multiple different facilities to accomplish something depending on the platform, then abstract them behind a function or class, which should try a facility and move on to another if that facility is not available on the current platform. | 0 | 2,875 | true | 0 | 1 | How can I determine if the operating system a Python script is running on is Unix-like? | 18,473,032 |
3 | 5 | 0 | 3 | 13 | 1 | 0.119427 | 0 | Is there a emacs plugin which lists all the methods in the module in a side pane.
I am looking for a plugin which has keyboard shortcuts to show/hide all the methods in python module file currently opened. | 0 | python,emacs | 2013-08-28T04:00:00.000 | 0 | 18,479,208 | For the first question, use M-xspeed-bar, like Alex suggested.
For the second, enable hs-minor-mode, M-xhs-minor-mode, and use C-cC-@C-S-h to hide all methods, and C-cC-@C-S-s to show. | 0 | 3,818 | false | 0 | 1 | Emacs plugin to list all methods in a python module | 18,484,021 |
3 | 5 | 0 | 1 | 13 | 1 | 0.039979 | 0 | Is there a emacs plugin which lists all the methods in the module in a side pane.
I am looking for a plugin which has keyboard shortcuts to show/hide all the methods in python module file currently opened. | 0 | python,emacs | 2013-08-28T04:00:00.000 | 0 | 18,479,208 | For me, the easiest and most convenient method to quickly lookup methods is the command helm-occur (C-x c M-s o). You start typing the name of the method you want to jump to and suggestions start popping in as you type. Then you hit enter to select the one you want and your cursor jumps right there in the code. Helm-occur wasn't strictly written for this purpose, but works quite well that way. | 0 | 3,818 | false | 0 | 1 | Emacs plugin to list all methods in a python module | 33,194,710 |
3 | 5 | 0 | 0 | 13 | 1 | 0 | 0 | Is there a emacs plugin which lists all the methods in the module in a side pane.
I am looking for a plugin which has keyboard shortcuts to show/hide all the methods in python module file currently opened. | 0 | python,emacs | 2013-08-28T04:00:00.000 | 0 | 18,479,208 | Speedbar is good, and another nice alternative is helm-imenu. I've bind several keys to access it quicky from different contexts and use it most of the time | 0 | 3,818 | false | 0 | 1 | Emacs plugin to list all methods in a python module | 40,116,159 |
1 | 2 | 0 | 4 | 5 | 0 | 0.379949 | 0 | I am using Python to develop an application that does the following:
Monitors a particular directory and watches for file to be
transferred to it. Once the file has finished its transfer, run some
external program on the file.
The main issue I have developing this application is knowing when the file has finished transferring. From what I know the file will be transferred via SFTP to a particular directory. How will Python know when the file has finished transferring? I know that I can use the st_size attribute from the object returned by os.stat(fileName) method. Are there more tools that I need to use to accomplish these goals? | 0 | python,ftp | 2013-08-29T00:32:00.000 | 1 | 18,500,496 | The best way to solve this would be to have the sending process SFTP to a holding area, and then (presumably using SSH) execute a mv command to move the file from the holding area to the final destination area. Then, once the file appears in the destination area, your script knows that it is completely transferred. | 0 | 5,218 | false | 0 | 1 | Using Python to Know When a File Has Completely Been Received From an FTP Source | 18,500,555 |
1 | 1 | 0 | -1 | 1 | 0 | -0.197375 | 0 | I hear that the permissions level via crontab and terminal is totally different.
More specifically, my python script has a command to write a file into the /tmp/ directory. On a linux machine, everything works, both cron and regular shell.
However on OSX, the terminal runs fine but when this command is set on the crontab, an error appears saying that we don't have permissions to write to the /tmp directory.
How should I handle this?
Thanks. | 0 | python,linux,macos,permissions,crontab | 2013-08-29T06:18:00.000 | 1 | 18,503,561 | @Lucas Ou-Yang @Hyperboreus
as Hyperboreus said it depends on the user privilege who run it . i think that if you give the /tmp/ dir a 777 permission from the root account it'll be fixed :
chmod 777 -R /tmp/
okay try with : chmod 777 /tmp/ if the error still check if the directory /tmp/ exist ! | 0 | 510 | false | 0 | 1 | Running python script on crontab is causing permissions errors but running via terminal is fine | 18,503,894 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I am developing my first Flask application (with sqlite as a database). It takes a single name from user as a query, and displays information about this name as a response.
All working well, but I want to implement typeahead.js to make better user experience. Typeahead.js sends requests to server as user types, and suggests possible names in dropdown. Right now I'm searching the database with select * from table_name where name like 'QUERY%'. But this of course is not so fast as I would like it to be - it works, but with noticable input lag (less or around a second I suppose).
In order to speed up things I looked at some memory caching options (like Redis or memcached), but they are key-value store, therefore I think do not fit my needs. I think possible option would be to make list of names (["Jane", "John", "Jack"], around 200k names total), load it into ram and do searches there. But how do I load something in memory in Flask?
Anyway, my question is: What is the best way to make such search (by first few letters) faster (in Python/Flask)? | 0 | python,sqlite,search,flask,typeahead | 2013-08-30T20:28:00.000 | 0 | 18,540,987 | You are looking for "partial matches". I would load all possible names into an array, and sort them. Then I would separately create a (26x26) lookup array that shows the index of the first element in the list of names that corresponds to a combination of the first two letters; you might also have a dict (rather than an exhaustive list) of all possible three letter combinations, which would speed up your search (because it limits it to a much smaller slice of the array).
In other words - you would not really be searching at all (for the two and three letter combo's); you would be returning a slice of the array. Once you have a match of more than three, you probably can search the slice (not worth creating tables beyond three characters). | 0 | 1,008 | false | 1 | 1 | Fastest text search in Python | 18,541,075 |
4 | 8 | 0 | 0 | 10 | 1 | 0 | 0 | I am a beginner so if this question sounds stupid, please bear with me.
I am wondering that when we write code for username/password check in python, if it is not compiled to exe ie script state, won't people will easily open the file and remove the code potion that is doing the password check?
I am assuming that the whole program is entirely written in python, no C or C++.
Even if I use a program like py2exe it can be easily decompiled back to source code. So, does that mean it is useless to do a password check?
How do professional programmers cope with that? | 0 | python,passwords,password-protection,password-encryption | 2013-09-02T09:27:00.000 | 0 | 18,569,784 | To protect data stored on the client machine, you have to encrypt it. Period.
If you trust an authorized user, you can use a password-based encryption key (many other answers on Stack Exchange address this), and hope that he is smart enough to protect his computer from malware.
If you don't trust the authorized user (a.k.a. DRM) you are just plain out of luck -- find another project.;-) | 0 | 40,722 | false | 0 | 1 | Python Password Protection | 19,035,745 |
4 | 8 | 0 | 2 | 10 | 1 | 0.049958 | 0 | I am a beginner so if this question sounds stupid, please bear with me.
I am wondering that when we write code for username/password check in python, if it is not compiled to exe ie script state, won't people will easily open the file and remove the code potion that is doing the password check?
I am assuming that the whole program is entirely written in python, no C or C++.
Even if I use a program like py2exe it can be easily decompiled back to source code. So, does that mean it is useless to do a password check?
How do professional programmers cope with that? | 0 | python,passwords,password-protection,password-encryption | 2013-09-02T09:27:00.000 | 0 | 18,569,784 | On a server only server administrators should have the right to change the code. Hence, to change the code you have to have administrator access, and if you do, then you can access everything anyway. :-)
The same goes for a client program. If the only security is the password check, you don't need to get around the password check, you can just read the data files directly.
In both cases, to prevent people that has access to the files from reading those files a password check is not enough. You have to encrypt the data. | 0 | 40,722 | false | 0 | 1 | Python Password Protection | 18,571,072 |
4 | 8 | 0 | 4 | 10 | 1 | 0.099668 | 0 | I am a beginner so if this question sounds stupid, please bear with me.
I am wondering that when we write code for username/password check in python, if it is not compiled to exe ie script state, won't people will easily open the file and remove the code potion that is doing the password check?
I am assuming that the whole program is entirely written in python, no C or C++.
Even if I use a program like py2exe it can be easily decompiled back to source code. So, does that mean it is useless to do a password check?
How do professional programmers cope with that? | 0 | python,passwords,password-protection,password-encryption | 2013-09-02T09:27:00.000 | 0 | 18,569,784 | If you are doing the checking on a user's machine, they can edit the code how they like, pretty much no matter what you do. If you need security like this then the code should be run somewhere inaccessible, for instance a server. "Don't trust the client" is an important computer security principle.
I think what you want to do is make a server script that can only be accessed by a password being given to it by the client program. This server program will function very much like the example code given in other answers: when a new client is created they send a plaintext password to the server which puts it through a one-way encryption, and stores it. Then, when a client wants to use the code that is the main body of your program, they send a password. The server puts this through the one-way encryption, and sees if it matches any stored, hashed passwords. If it does, it executes the code in the main body of the program, and sends the result back to the user.
On a related topic, the other answers suggest using the md5 algorithm. However, this is not the most secure algorithm - while secure enough for many purposes, the hashlib module in the standard library gives other, more secure algorithms, and there is no reason not to use these instead. | 0 | 40,722 | false | 0 | 1 | Python Password Protection | 18,570,957 |
4 | 8 | 0 | 0 | 10 | 1 | 0 | 0 | I am a beginner so if this question sounds stupid, please bear with me.
I am wondering that when we write code for username/password check in python, if it is not compiled to exe ie script state, won't people will easily open the file and remove the code potion that is doing the password check?
I am assuming that the whole program is entirely written in python, no C or C++.
Even if I use a program like py2exe it can be easily decompiled back to source code. So, does that mean it is useless to do a password check?
How do professional programmers cope with that? | 0 | python,passwords,password-protection,password-encryption | 2013-09-02T09:27:00.000 | 0 | 18,569,784 | One way would be to store the password in a hash form of any algorithm and check if the hash of the password given is equal to the stored password hash.
The second way might be to take a password like "cat" and convert them to ascii and and add them up and store the sum. Then you can compare the given password's ascii sum to the one you stored.
OR you can combine them both! Maybe also hash the ascii sum and compare the given pass word's ascii sun's hash.
These are the three ways I know at least. And you can use chr or ord default function in python to convert to and back repeatedly to ascii. And you can use hashlib to hash. | 0 | 40,722 | false | 0 | 1 | Python Password Protection | 62,569,006 |
1 | 1 | 0 | 2 | 0 | 0 | 0.379949 | 0 | since about two years ago, I did find my interest in code (Hardware/Sytems/Web) and now, I've found a project which motivates me a lot (It takes all my free time indeed).
Starting this point and because my project could soon switch from a free time project to a daily job, I'm currently developing a mockup of this project based on PHP/MySQL and JQuery.
Even if I'm a true Python/MongoDB lover and a System Engineer, I did prefer those technologies to build up my mockup because of their simplicity to build a complete functional private stack at home.
I'm pretty advanced on my mockup and it seems to work as I want it.
Now I'm wondering if, about your point of view, would have been better to start to build my mockup using directly the targeted technologies (Python/MongoDB) rather than to use the easy PHP/MySQL couple to do it?
Obviously, because I plan to made this project my daily job, I had to have something visually functionnal to be able to raise a little bit of money, and about me, using an easier stack it's more easy, but I would like to have your feedback on this kind of question. | 0 | php,python,mysql,mongodb,startup | 2013-09-02T15:44:00.000 | 0 | 18,576,838 | The idea that PHP/MySQL is easier or simpler than say Python/MongoDB is just inconsistent.
If you compare for example, Django (the most popular python web framework) with symfony(PHP) you will find that they are almost identical in terms of features and architecture (symfony is actually slightly more complex but also has more very advanced features).
For mockups, if I were you, I would use solely HTML/jQuery/CSS.
Build your pages just like you would like to have them in your beta version, use jQuery to load sample data written in json.
That's all you need. You can even find WYSIWYG application to speed up the process.
Later on, you can build the back-end application using either python or php, it won't matter.
The integration process will be identical, create your models, create the controllers, and use the HTML you already have as templates.
Building your app in php/mysql then convert it to python/mangodb will make you rewrite almost all the code simply because python is so much different from php (easier I would say too, but that's just my opinion) and because mangodb is not a relational database meaning you will have also to rethink partially your architecture. | 0 | 70 | false | 1 | 1 | Startup building and mokup | 18,577,154 |
3 | 5 | 0 | 1 | 0 | 0 | 0.039979 | 0 | Recently I've read that we can code C/C++ and from python call those modules, I know that C/C++ is fast and strongly typed and those things but what advantages I got if I code some module and then call it from python? in what case/scenario/context it would be nice to implement this?
Thanks in advance. | 0 | c++,python,c | 2013-09-02T16:27:00.000 | 0 | 18,577,413 | Profile your application. If it really is spending time in a couple of places that you can recode in C consider doing so. Don't do this unless profiling tells you you really need to. | 0 | 437 | false | 0 | 1 | What are the advantages of extending Python with C or C++? | 18,577,516 |
3 | 5 | 0 | 1 | 0 | 0 | 0.039979 | 0 | Recently I've read that we can code C/C++ and from python call those modules, I know that C/C++ is fast and strongly typed and those things but what advantages I got if I code some module and then call it from python? in what case/scenario/context it would be nice to implement this?
Thanks in advance. | 0 | c++,python,c | 2013-09-02T16:27:00.000 | 0 | 18,577,413 | Another reason is there might be a C/C++ library with functionality not available in python. You might write a python extension in C/C++ so that you can access/use that C/C++ library. | 0 | 437 | false | 0 | 1 | What are the advantages of extending Python with C or C++? | 18,577,711 |
3 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | Recently I've read that we can code C/C++ and from python call those modules, I know that C/C++ is fast and strongly typed and those things but what advantages I got if I code some module and then call it from python? in what case/scenario/context it would be nice to implement this?
Thanks in advance. | 0 | c++,python,c | 2013-09-02T16:27:00.000 | 0 | 18,577,413 | The primary advantage I see is speed. That's the price paid for the generality and flexibility of a dynamic language like Python. The execution model of the language doesn't match the execution model of the processor, by a wide margin, so there must be a translation layer someplace at runtime.
There are significant sections of work in many applications that can be encapsulated as strongly-typed functions. FFTs, convolutions, matrix operations and other types of array processing are good examples where a tightly-coded compiled loop can outperform a pure Python solution more than enough to pay for the runtime data copying and "environment switching" overhead.
There is also the case for "C as a portable assembler" to gain access to hardware functions for accelerating these functions. The Python library may have a high-level interface that depends on driver code that's not available widely enough to be part of the Python execution model. Multimedia hardware for video and audio processing, and array processing instructions on the CPU or GPU are examples.
The costs for a custom hybrid solution are in development and maintenance. There is added complexity in design, coding, building, debugging and deployment of the application. You also need expertise on-staff for two different programming languages. | 0 | 437 | false | 0 | 1 | What are the advantages of extending Python with C or C++? | 18,577,861 |
1 | 2 | 0 | 2 | 3 | 1 | 1.2 | 0 | Are there some set of reasons that make it impossible for dynamic languages such as Python or Ruby to be compiled instead of interpreted without losing any of his dynamics characteristics?
Of course one the requirements to that hypothetical compiler is that those languages doesn't lose any of his characteristics like metaprogramming, extend objects, add code or modify type system in runtime.
Summarizing, it is possible to create a Ruby or Python compiler without losing any of his characteristics as dynamic programming languages? | 0 | python,ruby,compiler-construction,compilation,dynamic-languages | 2013-09-07T12:21:00.000 | 0 | 18,673,258 | Yes, it is definitely possible to create compilers for dynamic languages. There are a myriad of examples of compilers for dynamic languages in the wild:
CPython is an implementation of the Python programming language which has a Python compiler.
PyPy is an implementation of the Python programming language which has a Python compiler.
Jython is an implementation of the Python programming language which has a Python compiler.
IronPython is an implementation of the Python programming language which has a Python compiler.
Pynie is an implementation of the Python programming language which has a Python compiler.
YARV is an implementation of the Ruby programming language which has a Ruby compiler.
Rubinius is an implementation of the Ruby programming language which has a Ruby compiler.
MacRuby is an implementation of the Ruby programming language which has a Ruby compiler.
JRuby is an implementation of the Ruby programming language which has a Ruby compiler.
IronRuby is an implementation of the Ruby programming language which has a Ruby compiler.
MagLev is an implementation of the Ruby programming language which has a Ruby compiler.
Quercus is an implementation of the PHP programming language which has a PHP compiler.
P8 is an implementation of the PHP programming language which has a PHP compiler.
V8 is an implementation of the ECMAScript programming language which has an ECMAScript compiler.
In general, every language can be implemented by a compiler, and every language can be implemented by an interpreter. It is also possible to automatically derive a compiler from an interpreter and vice-versa.
Most modern language implementations use both interpretation and compilation, sometimes even several compilers. Take Rubinius, for example: first Ruby code is compiled to Rubinius bytecode. Rubinius bytecode is then interpreted by the Rubinius VM. Code which has been interpreted several times is then compiled to Rubinius Compiler IR, which is then compiled to LLVM IR, which is then compiled to "native code" (whatever that is). So, Rubinius has one interpreter and three compilers.
V8 is a different example. It actually has no interpreter, but two different compilers: one very fast, very memory-efficient compiler which produces unoptimized, somewhat slow code. Code which has been run multiple times is then thrown away, and compiled again with the second compiler, which produces aggressively optimized code but takes more time and uses more memory during compilation.
However, in the end, you cannot run code without an interpreter. A compiler cannot run code. A compiler translates a program from one language into a different language. That's it. You can translate all you want, in the end, something has to run the code, and that thing is an interpreter. It might be implemented in software or in silicon, but it still is an interpreter. | 0 | 509 | true | 0 | 1 | It is possible to create compilers for dynamic languages without losing his dynamic characteristics? | 18,674,023 |
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | So i bought a cheap mifare NFC chip reader from ebay for 5$. I wanted to play around with chips, and ultimately copy some NFC chips i have here. I have a NFC chip that I use to open my locker with on school, but the card we have is really big and inconvient, so I want to copy it on a smaller NFC chip and put it on my keyring.
So i hooked it up with my raspberry pi, and first off, there is NOTHING on internet about connecting this card with your raspberry pi. Ohwell, a challange, fun.
I found some basic code from a spanish website(im dutch so itwas kinda hard to understand :P) but it can only read the UID of a NFC. So i tried to understand it, and eventually i did, and I added the code to calculateCRC and read some blocks.
However, I have no clue how the NFC data structure actually works, all i did was find some arduino code samples that were in C, translated them to python, and I think it works.
So i setted it up that it reads block 0 to 8 and prints themn all. On all the NFCs i have I can only read block 0, rest is giving an error. And block 0 consists of one byte, that's a 0x04.
If anyone has any clue what is happening, please tell me. And are there any links were NFC data structure is actually explained. I found a bunch of Android stuff, but i dont have a smartphone, and i want to do it with this MFRC522 card. I read somewhere you need to auth a block or something? I saw some code for that too, but how does that work? How do I know the keys?
thanks | 0 | python,nfc,raspberry-pi,mifare | 2013-09-07T12:41:00.000 | 0 | 18,673,426 | I built an application on the Raspberry Pi to read and write RFid cards. The easiest way to do it is to use the pcscd library and java. There are good examples on the Oracle website and the pcscd library is broadly supported.
Be aware though. The usb cardreader I used did not seem to work initially. The cardreader uses to much power for the USB ports of the raspberry. When I used a powered usb hub, things worked smoothly.
Fred | 0 | 10,270 | false | 0 | 1 | NFC tag reading in python on raspberry pi | 20,600,986 |
1 | 1 | 0 | 41 | 27 | 1 | 1.2 | 0 | What is the difference (if any) between ".pyc" and ".py[cod]" notation in ignoring files. I am noticing I have both on my git ignore file. Thanks | 0 | python,git | 2013-09-10T22:03:00.000 | 0 | 18,729,510 | You are safe to remove the .pyc entry from your .gitignore, since .py[cod] will cover it. The square brackets are for matching any one of the characters, so it matches .pyc, .pyo and .pyd.
Since they are all generated from the .py source, they should not be stored in source control. | 0 | 8,304 | true | 0 | 1 | What is the difference between "py[cod]" and "pyc" in .gitignore notation? | 18,729,541 |
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I want to program beaglebone black(linux) from PC(linux/windows) using any Python IDE (scientific IDE's like anaconda/python(xy) is preferred).
How can I do that? How I can configure the systems?
Sincerly. | 0 | linux,windows,python-2.7,ide,remote-debugging | 2013-09-11T21:09:00.000 | 1 | 18,751,328 | Beaglebone can do whatever we want to do in a a linux PC.
But it is slower than PC so compile in pc and run in beaglebone via SSH is better way. | 0 | 237 | true | 0 | 1 | Python remote programming / debugging | 18,763,976 |
1 | 1 | 1 | 3 | 2 | 0 | 1.2 | 0 | Currently i need to transfer data between C++ and Python applications.
As long as Thrift doesn't work with unsigned int, what's the best way to transfer unsigned?
Is there only way like:
assign unsigned to signed
serialize -> send -> receive -> deserialize this signed
assign signed to unsigned
Should i do it manually all the time or there are already some 3rd party libraries?
How do i do it in the case of C++/Python applications? In C++/C++ applications i can just static_cast<signed/unsigned>(unsigned/signed) for conversion, but what about Python? | 0 | c++,python,serialization,thrift | 2013-09-12T07:51:00.000 | 0 | 18,758,535 | There are two options that make sense (and a bunch of others):
Use the next largest signed integer with Thrift. Of course, with UINT64 this is not possible, as there is no i128, but it works up to UINT32
Cast the unsigned bits into signed. Not very clean and requires documentation, but it works.
The "bunch of others" include
Convert it into a string and back. And watch your performance going down.
Use binary type. Ok, that's a bit far out, but still possible and can be done by just reinterpreting the bits as with 2. above
But again, I'd recommend 1. or 2. | 0 | 1,934 | true | 0 | 1 | apache thrift, serialize unsigned | 18,772,101 |
1 | 3 | 0 | 4 | 0 | 0 | 0.26052 | 0 | What does a secure code actually mean? Is it that you cannot make that code do something else, it wasn't supposed to do?
Many of my peers say to migrate to c++ or java as they are more secure because of oops, but when I ask why, they just say, "it's just.......it is".
An example would be so much appreciated. And I am fairly noob in C language, super-noob in c++. (just in case you wonder what complexity of answer would make me understand.) | 0 | java,c++,python,objective-c,oop | 2013-09-12T08:10:00.000 | 0 | 18,758,854 | Below are the parameter which makes any code Secure :
1. Code should do only what was intended.
Eg: "select * from tablename where id='" + txtUserInputId + "'"
In above query it is vulnerable to SQL injection.
2. Code must validate all the user inputs.
3. Authorization should be implemented properly other than Authentication.
4. User input data should be sanitized before processing.
5. Session should be managed properly. It also affect the security of code how sessions are managed in .Net or Java or any programming language.
6. Memory must be managed property. One process should not be able to access memory of other process.
7. Database constraints must be validated before any database operation.
8. Configurations must be protected from outside world. For eg: .Net framework does not allow users to see Web.config file. Web.config file may contain sensitive information like DB credentials.
Note: You can say that C#.Net is secure when it comes to query execution. Because it provides CommandParameter which automatically handles user input data for you. | 0 | 247 | false | 0 | 1 | How would you explain a layman person or a beginner in programming, the bold point of object oriented approach - the SECURITY? | 18,759,136 |
1 | 1 | 0 | 1 | 2 | 0 | 1.2 | 0 | I've got problem concerning me a long time. I either run tests from eclipse (Python unittest) using Pydev or Nose test runner. That way it's possible to debug tests and watch them in PyUnit view. But that way test database is not created, manage.py is not used.
Or I run them via manage.py test - test db is being created, but above features not available that way.
Is that possible to debug tests in eclipse which are being run on test db?
Regards,
okrutny | 0 | python,django,eclipse,unit-testing,pydev | 2013-09-12T15:58:00.000 | 0 | 18,769,092 | You can create a new PyDev django debug configuration in eclipse and set the program arguments to 'test'.
In this case, the debug configuration will execute the following command: `python manage.py test' and your breakpoints inside test cases will get hit. | 0 | 1,473 | true | 1 | 1 | How to run django tests in Eclipse to make debugging possible, but on test database | 19,786,166 |
1 | 3 | 0 | 0 | 1 | 1 | 0 | 0 | Sometimes I need to test my python code in shell, so I have to edit the code, save and quit and run the code. Then reopen the file to modify my code if anything goes wrong. Then save and quit .... I am wondering is there a handy feature in VI to easily test the code inside VI? | 0 | python,linux,shell,vi | 2013-09-13T17:43:00.000 | 0 | 18,792,228 | I might be interpreting your questions incorrectly but this is my suggestion. Maybe you can open more than one terminal. On one terminal, write/edit your code and save it. I'm assuming with ':w' and leave the terminal open. Then on the other terminal, compile your code. | 0 | 250 | false | 0 | 1 | Python Test Code inside VI | 18,792,440 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I am developing leap motion with the help of python.
I have downloaded eclipse and installed python plugin
Now I need to add library of leap motion , that is LeapPython.pyd
How to add this library on eclipse ?
Any help would be appreciated. thank you | 0 | python,eclipse,eclipse-plugin,leap-motion | 2013-09-14T15:43:00.000 | 0 | 18,803,444 | Hi everyone now i can compile successed .
i'm find how to solve this problem in here.
how to solve this problem is copy msvcp100.dll , msvcp100d.dll , msvcr100.dll msvcpr100d.dll
into your project folder.
the problem is we call leap.py and leap.py call LeapPython.pyd and LeapPython.pyd must to use .dll file then we must to include 4 .dll file into my project
thank you everyone for answer | 0 | 18,027 | false | 0 | 1 | How to add library on eclipse (python) ? | 18,819,594 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | A Raspberry Pi (raspian wheezy) has a cronjob, created as user pi with "sudo crontab -e" so it should has root grants.
ps aux | grep /home/.../myscript.py
...say it's owner is user "pi"!? (is this correct?)
The python script called from crontab is working fine if i call it from the terminal.
It's reading data from UART (serial port), and save it into a mysql database.
My python script got 'chmod 777' grants.
The crontab file:
@reboot sudo python /home/pi/pythonprogram/myscript.py & > /home/pi/pythonprogram/myscript.log
crontab log file:
Error mysql: 2002 Can't connect to local MYSQL server throught socket '/var/run/mysqld/mysqld.sock' (2)
May it be that my script is called first, before the server (mysql and apache) are running during the boot up process? Is there a way to prevent this?
What else could be the reason for those error? | 0 | python,mysql,crontab,raspberry-pi | 2013-09-14T19:18:00.000 | 1 | 18,805,490 | solved the problem in quite ugly way, but it's working now.
Just added:
time.sleep(5)
before trying to connect to mysql db.
I would be pleased if someone has a better solution. | 0 | 1,257 | true | 0 | 1 | Raspberry Pi crontab starts py script at bootup -> logging: error mysql 2002 (can't connect to local server) | 18,811,200 |
3 | 3 | 0 | 1 | 1 | 1 | 0.066568 | 0 | I understand that unit tests must be as isolated as possible, i.e. are self-contained and do not have to rely on outside resources like databases, network access or even the execution of previous unit tests.
However, suppose I want to test class Y. Class Y uses class X. However, I have already a number of unit tests that test class X.
I think that in the unit tests of class Y, I could just assume that class X works properly and use instantiations of it to test class Y (instantiated in the class Y unit tests, so no leftovers or other pollution).
Is this correct? Or do I need to mock class X when testing class Y or do something else entirely? If so or if I should mock class X, what are the reasons for that? | 0 | python,unit-testing,testing | 2013-09-15T23:40:00.000 | 0 | 18,818,608 | It comes down to how much of a purist you want to be. I would not go crazy and mock class X if it's just another class free of dependencies on external resources like a database etc..
The important thing is that you have full test coverage for your code. IMO it's not a problem if already tested code runs as "trusted code" in other tests. | 0 | 66 | false | 0 | 1 | Is assuming that another unit test has tested the input of the unit code breaking isolation? | 18,818,630 |
3 | 3 | 0 | 2 | 1 | 1 | 0.132549 | 0 | I understand that unit tests must be as isolated as possible, i.e. are self-contained and do not have to rely on outside resources like databases, network access or even the execution of previous unit tests.
However, suppose I want to test class Y. Class Y uses class X. However, I have already a number of unit tests that test class X.
I think that in the unit tests of class Y, I could just assume that class X works properly and use instantiations of it to test class Y (instantiated in the class Y unit tests, so no leftovers or other pollution).
Is this correct? Or do I need to mock class X when testing class Y or do something else entirely? If so or if I should mock class X, what are the reasons for that? | 0 | python,unit-testing,testing | 2013-09-15T23:40:00.000 | 0 | 18,818,608 | Your unit tests for class Y should only test class Y's code. You should assume that everything class Y relies on is already working (and tested). This is standard unit testing. You want to reduce external dependencies, and try to isolate your tests so that you're really only testing class Y's functionality in class Y's tests, but in the real world, everything is connected.
In my opinion it's much better to use class X and assume it works than it is to mock out class X to provide purer unit isolation. Either way, you should assume that class X is a black box and that it works. | 0 | 66 | false | 0 | 1 | Is assuming that another unit test has tested the input of the unit code breaking isolation? | 18,818,629 |
3 | 3 | 0 | 1 | 1 | 1 | 1.2 | 0 | I understand that unit tests must be as isolated as possible, i.e. are self-contained and do not have to rely on outside resources like databases, network access or even the execution of previous unit tests.
However, suppose I want to test class Y. Class Y uses class X. However, I have already a number of unit tests that test class X.
I think that in the unit tests of class Y, I could just assume that class X works properly and use instantiations of it to test class Y (instantiated in the class Y unit tests, so no leftovers or other pollution).
Is this correct? Or do I need to mock class X when testing class Y or do something else entirely? If so or if I should mock class X, what are the reasons for that? | 0 | python,unit-testing,testing | 2013-09-15T23:40:00.000 | 0 | 18,818,608 | I'll play devil's advocate here and recommend that unless this is an integration test of some kind, you don't use class X in your class Y tests, but use a Mock (or even a stub) instead.
My reasoning behind this is:
If your test of Y relies on some side-effect or state from X being invoked by Y, then by definition it is not a unit test.
Therefore all you want in your Unit Tests for class Y is something that looks and behaves like a class X whilst at the same time being fully defined by, and under the control of, the test method driving class Y.
Since assumptions are antitheses to unit testing, if you want to ensure that when X.SomeMethod is invoked during a test of Y that nothing explodes, the only way to be 100% certain (and therefore have 100% confidence in your test) is to provide via a Mock or Stub an implementation of X.SomeMethod that you can guarantee won't fail because it does nothing and therefore cannot possibly fail.
Since your class X is already written and doesn't contain methods that do nothing, you therefore cannot use it for a unit test of class Y.
Another point to consider is how you can simulate failure when using a "real" class X. . How do you provide X to Y such that X always causes an exception, in order to test how Y behaves when faced with a dodgy X dependency? The only sane solution is to use a Mock/Stub of X. (of course you might not be going to this level of detail with your unit tests so I mention it just as an example)
Consider what may happen 6 months down the line when a change in class X which you did not unit test properly (omission of test, genuine error in designing the test, etc) causes an exception to be thrown when X.SomeMethod is invoked during a test of class Y.
How can you know immediately that the problem is class X? Or indeed class Y? You can't, and therefore have lost the primary benefit of isolated unit tests.
Of course when you move on to Integration tests you will use class X to test how class Y behaves in a production context but that's a whole different question... | 0 | 66 | true | 0 | 1 | Is assuming that another unit test has tested the input of the unit code breaking isolation? | 18,818,856 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I am working on having APScheduler upload a data file periodically using pexepect.run('scp ...'). The scp command works fine from the command line without password authentication (keys have been shared). However, when running in a python script on Beaglebone Black (started from a remote machine using pexpect), scp fails because blackbear (which replaces openssh on the BBB) doesn't load the private key properly. When I add -i ~/.ssh/id_rsa, then I get an error from /usr/bin/dbclient: Exited: String too long; dbclient is part of blackbear and this appears to be bug. When trying to convert my private key using >dropbearconvert openssh dropbear id_rsa id_rsa.db, I get the error: Error: Ciphers other than DES-EDE3-CBC not supported. I tried to install openssh, but this didn't work due to a conflict with blackbear. Just before I give up on Angstrom and go to Ubuntu, are there any suggestions.? I have already added a lot to Angstrom so changing operating systems at this time is painful. Thanks. Bit_Pusher | 0 | python,ssh-keys,pexpect,apscheduler,beagleboneblack | 2013-09-17T18:26:00.000 | 1 | 18,857,206 | As a temporary workaround, I found I could schedule pulling from the server using APScheduler and pexpect.run along with scp. This is less than ideal, as I prefer to have the always running processes on the beaglebones, rather than the server, but it will suffice until I can schedule enough time to switch to Ubuntu. Still, if anyone has suggestions on how to get blackbear working, I would much like to hear them. Bit_Pusher | 0 | 641 | false | 0 | 1 | Automating scp upload without password | 18,880,805 |
2 | 4 | 0 | 0 | 17 | 1 | 0 | 0 | I found all the other modules in Python33/Lib, but I can't find these. I'm sure there are others "missing" too, but these are the only ones I've noticed. They work just fine when I import them, I just can't find them. I checked sys.path and they weren't anywhere in there. Are they built-in or something? | 0 | python,python-3.x,python-module | 2013-09-17T18:34:00.000 | 0 | 18,857,355 | The modules like math, time, gc are not written in python and as rightly said in above answers that they are somewhere (written or moduled) within python interpreter. If you import sys and then use sys.builtin_module_names (it gives tuple of module names built into this interpreter). math is one such module in this list. So, we can see that math comes from here and is not separately written in library or any other folder as a python code. | 0 | 17,367 | false | 0 | 1 | Where are math.py and sys.py? | 47,367,621 |
2 | 4 | 0 | 2 | 17 | 1 | 0.099668 | 0 | I found all the other modules in Python33/Lib, but I can't find these. I'm sure there are others "missing" too, but these are the only ones I've noticed. They work just fine when I import them, I just can't find them. I checked sys.path and they weren't anywhere in there. Are they built-in or something? | 0 | python,python-3.x,python-module | 2013-09-17T18:34:00.000 | 0 | 18,857,355 | These modules are not written in Python but in C.
You can find them (at least on linux) in a subfolder of the lib-folder called lib-dynload.
The math module is then in a file math.cpython-33m.so (on windows probably with .dll instead of .so). The cpython-33m part is my python version (3.3). | 0 | 17,367 | false | 0 | 1 | Where are math.py and sys.py? | 18,857,424 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have cut and pasted a python code sample from Google API to access You Tube video viewing data of my companies videos. The application will be scheduled and get usage data then write to a database on the server ( CENTOS ). I have tried both Simple API and installed application types. Is there a solid samle type that you know of or anyone else having issues with the API calls? My latest error is that the JSON file is not organized correctly ( which I got from the API page unaltered ). | 0 | python,youtube-api | 2013-09-17T18:48:00.000 | 1 | 18,857,599 | Most google api code snippet require the user to input their personal API KEY. Please be sure you have appropriately updated the code snippet to use your api key. | 0 | 63 | false | 0 | 1 | You Tube API Calls | 18,857,640 |
1 | 2 | 0 | 2 | 1 | 0 | 0.197375 | 0 | I have a minor issue using py.test for my unit tests.
I use py.test to run my tests and output a junitxml report of the tests.
This xml report is imported in jenkins and generates nice statistics.
When I use a test class which derives from unittest.TestCase,
I skip expected failures using:
@unittest.skip("Bug 1234 : This does not work")
This message also shows up in jenkins when selecting this test.
When I don't use a unittest.TestCase class, e.g. to use py.test parametrize functionality,
I skip expected failures using:
@pytest.mark.xfail(reason="Bug 1234 : This does not work", run=False)
But then this reason is not actually displayed in jenkins, instead it will say:
Skip Message
expected test failure
How can I fix this? | 0 | python,unit-testing,jenkins,pytest | 2013-09-18T08:25:00.000 | 0 | 18,867,280 | I solved it using this line as the first line of the test:
pytest.skip("Bug 1234: This does not work")
I'd rather have used one of the pytest decorators, but this'll do. | 0 | 1,231 | false | 1 | 1 | Py.test skip messages don't show in jenkins | 18,894,932 |
1 | 2 | 0 | 0 | 0 | 0 | 1.2 | 1 | I use selenium, python webdriver to run my test application. I also have some selenium html tests that I would like to add to my application. This html tests are changing quite ofen so I can not just convert those tests to python webdriver and add it to my app. I think I need somehow run those tests without changes from my python webdriver app. How can I do it? | 0 | python,selenium,webdriver | 2013-09-19T07:12:00.000 | 0 | 18,888,367 | Use pySelenese module for python - it parses html test and let you run it. | 0 | 141 | true | 1 | 1 | Execute selenium html tests from webdriver tests | 18,890,907 |
1 | 1 | 0 | 2 | 0 | 1 | 1.2 | 0 | What I want to achieve is as follows:
I have a python package, let's call it foo, comprising a directory foo containing an __init__.py and, under normal use a compiled extension library (either a .so or a .pyd file), which __init__.py imports into the top level namespace.
Now, the problem is that I wish the top level namespace to contain a version string which is available to setup.py during building and packaging, when the extension library is not necessarily available (not yet built), and so would cause an ImportError when trying to import foo.version.
Now, clearly, I could have an exception handler in __init__.py that just ignores failures in importing anything, but this is not ideal as there may be a real reason that the user cares about why the package can't be imported.
Is there some way I can have the version string in a single place in the package, have it importable, yet not break the exceptions from attempts to import the extension? | 0 | python,distutils | 2013-09-19T19:39:00.000 | 0 | 18,903,516 | As opposed to ignoring failures when importing print out a trace message or a warning so that the user will still get the negative feedback.
As for importing a specific subfile if you are using python 3.3+ (or python 2.7) you can use imp.load_source which accepts a pathname of a file you want to import. | 0 | 60 | true | 0 | 1 | Import from a package with known import failure | 18,904,242 |
1 | 1 | 0 | 3 | 2 | 1 | 1.2 | 0 | Is there any data that visualizes just how much performance can be gained by using the Python C API when writing functions directly in C to be used as python modules?
Besides the obvious fact that "C is faster"; is there any data that compares Python C API vs C? | 0 | python,c,python-c-api | 2013-09-22T22:13:00.000 | 0 | 18,949,298 | I'm not sure there is any easy way to get such "data". It really depends on what you are doing, and you have to take into account that transferring the data from the Python side to the C side and back again will be an extra load on the system, compared to simply perform the operations in Python directly.
If you are doing a lot of calculations, and those calculations are complicated but can't be done in an existing library (such as "numpy"), then it may be worth doing. And of course, calculation doesn't necessarily have to be "mathematics", it could be shuffling data in a large array, or making if (x > y) z++; type operations. But you really need to have a large amount of stuff to do before it makes sense to convert the data from "python" to "C" type of data, and back again.
It's a bit like asking "How much faster is this sporty car than that not-so-sporty car", and if you drive in a big city with lots of congestion, the difference may not be any at all - but if you take them to a race-track, where the sporty car gets to stretch its legs properly, the winner is quite obvious.
In the "car" theme, the "congestion" is lots of calls to a very small function in C, that doesn't do much work - the "convert from Python to C and back to Python data" will be the congestion/traffic lights. If you have large lumps of data, then you get a bit more "race-track" effect. | 0 | 402 | true | 0 | 1 | Python C API performance gains? | 18,949,369 |
1 | 1 | 0 | 6 | 13 | 0 | 1 | 0 | When writing tests I usually name the modules prefixed with test_ for example spam.py and test_spam.py. This makes finding the tests easy. When testing classes in a module I create a unittest.TestCase derivative with a similar class name, postfixed with Test. e.g. Spam becomes SpamTest (not TestSpam as this sounds like it is a test implementation of Spam). Then class functions are tested by test functions that are prefixed with test_ and postfixed with _testcondition or some other descriptive postfix. I find that this works brilliantly as the original object names are included.
The problem occurs when I want to test module level functions. Following my regular structure I would create a unittest.TestCase derivative with the same name as the function, postfixed with Test. The problem with this is that class names are camel cased and function names are lower cased with underscores to separate words. Ignoring the naming convention some_function becomes SomeFunctionTest. I cannot help but feeling that this is ugly.
What would be a better fit? What is common practice? Is there a 'standard' like pep8 for this? What do you use? | 0 | python,unit-testing,naming-conventions | 2013-09-24T08:03:00.000 | 0 | 18,976,073 | The way you are doing it is the cleanest approach - as long as there is a clear location where people would expect to find the tests for a module level function then I think you are good. The stylistic difference between the function name and the test class name - although an annoyance - isn't significant enough to worry about. | 0 | 5,209 | false | 0 | 1 | Python unit test naming convention for module functions | 27,770,151 |
1 | 1 | 0 | 2 | 0 | 0 | 1.2 | 0 | For users of our Django app that receive their emails through gmail they are finding that emails are getting grouped into conversations that shouldn't be.
I'm not sure what gmail expects in an email to consider it unique enough to not group as a conversation but when I send plain text emails with DIFFERENT subjects using send_mail or even try a multipart/alternative with EmailMultiAlternatives with an html body gmail still assumes they are part of the same conversation.
Obviously this creates confusion when our application sends emails with different subjects and bodies to the same user and they are all grouped and gmail only shows the subject of the first message in the conversation.
I have 100% confirmed by looking at the raw original email messages to make sure the emails are different subjects and bodies.
I just want to know if I can change anything in how django creates the email message so it can play nice with gmail conversations.
I am using python 2.7.4, and can replicate the "issue" with Django 1.4 and 1.5. | 0 | python,django,email,python-2.7,gmail | 2013-09-24T15:49:00.000 | 0 | 18,986,354 | Make sure messages on different subjects have a different 'From' address | 0 | 261 | true | 1 | 1 | Django's send_mail messages get grouped into a gmail converation | 18,986,649 |
1 | 1 | 0 | 0 | 0 | 1 | 1.2 | 0 | I have a python script which gathers 10,000's of 'people' from an API and then goes on to request two other APIs to gather further data about them and then save the information to a local database, it takes around 0.9 seconds per person.
So at the moment it will take a very long time to complete. Would multi-threading help to speed this up? I tried a multi-threading test locally and it was slower, but this test was just a simple function without any API interaction or anything web/disk related.
thanks | 0 | python,multithreading | 2013-09-24T21:16:00.000 | 0 | 18,992,186 | How many cores do you have?
How parallelizable is the process?
Is the problem CPU bound?
If you have several cores and it's parallelizable across them, you're likely to get a speed boost. The overhead for multithreading isn't nearly 100% unless implemented awfully, so that's a plus.
On the other hand, if the slow part is CPU bound it might be a lot more fruitful to look into a C extension or Cython. Both of those at times can give a 100× speedup (sometimes more, often less, depending on how numeric the code is) for much less effort than a 2× speed-up with naïve usage of multiprocessing. Obviously the 100× speedup is only for the translated code.
But, seriously, profile. Chances are there are low hanging fruit that are much easier to access than any of this. Try a line profiler (say, the one called line_profiler [also called kernprof]) and the builtin cProfile. | 0 | 89 | true | 0 | 1 | Should i use multi-threading? (retrieving mass data from APIs) | 18,992,283 |
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 1 | I want to generate a signature in Node.js. Here is a python example:
signature = hmac.new(SECRET, msg=message, digestmod=hashlib.sha256).hexdigest().upper()
I have this:
signature = crypto.createHmac('sha256', SECRET).update(message).digest('hex').toUpperCase()
What am I doing wrong? | 0 | python,node.js,hmac,digest | 2013-09-24T23:15:00.000 | 0 | 18,993,675 | Checked the node manuals as well. It looks correct to me. What about the ; in the end of the chain? | 0 | 1,076 | true | 0 | 1 | Nodejs equivalent of Python HMAC signature? | 18,993,775 |
1 | 1 | 0 | 1 | 1 | 1 | 1.2 | 0 | I have a hashed password using the Hash::make() function of laravel when a user is created. I eventually need to take that hashed password, and pass it to a python script to perform a login and download of site resources. I know the hash is a one-way action, but I'd like to keep the password hashed to be security conscious if at all possible.
Any suggestions on how to accomplish this task while keeping security intact would be helpful!
Thanks,
Justin | 0 | php,python,hash,laravel,laravel-4 | 2013-09-26T18:44:00.000 | 0 | 19,036,197 | you cant the best you can do is encrypt it with a reversible encryption ... but then you need to store the key somewhere ... eventually you will have some plain text somewhere (or encoded at best) that will allow decryption ... you could store the hash and do a query against a db that maps hashes to pw's but you still have the password in plaintext somewhere ... you cannot login with just a hash anywhere ... (because the hash ends up getting hashed and then no longer matches the expected hash)
an option may be to use rainbow tables to find something that results in an identical hash and use that instead ... but if they are adding salts or anything you are once again out of luck | 0 | 331 | true | 1 | 1 | Laravel 4 passwords and python | 19,036,400 |
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 | I am developing a php based web app but I have a python script that I already had that I want to integrate into my system. Is it possible to embed/include the python script within the main content area of my web app on a specific page? | 0 | php,python | 2013-09-27T19:09:00.000 | 0 | 19,058,338 | There's this implementation of python called python server page that you can use to embed python into the web app directly just like php but it would have file extension .psp it is not actively developed, Google it up | 0 | 1,812 | false | 0 | 1 | embedding python script in php website | 19,058,834 |
2 | 2 | 0 | 7 | 2 | 1 | 1.2 | 0 | After a bit of googling around, I see this issue is pretty common but has no direct answers.
Trying to use Pycrypto on my Mac 10.8.5. Installed it through Pip, Easy_install, and manually with setup.py yet when I try to import it, it says it can't find the module.
Anyone else have an issue like this? | 0 | python,macos,pycrypto | 2013-09-28T03:39:00.000 | 0 | 19,062,968 | For those having this issue on Mac, for something reason Pip, easy_install, and even doing it manually installs Crypto with a lowercase 'c' in to site-packages. By browsing in to site-packages and renaming 'crypto' to 'Crypto', it solves the issues with other libaries. | 0 | 4,442 | true | 0 | 1 | Python - Crypto.Cipher/Pycrypto on Mac? | 19,102,883 |
2 | 2 | 0 | 1 | 2 | 1 | 0.099668 | 0 | After a bit of googling around, I see this issue is pretty common but has no direct answers.
Trying to use Pycrypto on my Mac 10.8.5. Installed it through Pip, Easy_install, and manually with setup.py yet when I try to import it, it says it can't find the module.
Anyone else have an issue like this? | 0 | python,macos,pycrypto | 2013-09-28T03:39:00.000 | 0 | 19,062,968 | I've had this problem before, and this is because you probably have different versions of Python. So, in fact, the package is installed, but for a separate version. What you need to do is see which executable file is linked to when python or pip is called. | 0 | 4,442 | false | 0 | 1 | Python - Crypto.Cipher/Pycrypto on Mac? | 19,063,101 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | Is there any way to find MAC address of a device (in chat system) using Python?
except uuid library | 0 | python,macos,chat | 2013-09-29T09:34:00.000 | 1 | 19,076,464 | In general, it's not possible to get a MAC address of another host (computer) on the internet without running your own program on that host, and asking.
It's possible to get the MAC addresses of the active hosts on the local network (up to the next router) from the ARP cache. It's possible to get your own MAC address(es). All this is OS-dependent. | 0 | 221 | false | 0 | 1 | Find MAC address of system, using python (in chat system) | 19,076,713 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | Windows Machine, Python 2.4:
When I run my script in Abaqus' "Run Script...", I get an ImportError saying that xlwt module does not exist. The same script runs perfectly well in my Eclipse IDE or Python IDE. I made sure that I gave the right path to the Python Library.
Any help in this regard would be appreciated. Thanks! | 0 | python,importerror,xlwt | 2013-09-30T04:23:00.000 | 0 | 19,086,425 | I think raw_input command is just not supported within CAE environment.
You can use getInput() or getInputs() instead. | 0 | 627 | true | 1 | 1 | Running xlwt module in Abaqus | 19,756,366 |
1 | 1 | 0 | 0 | 2 | 0 | 0 | 0 | I'm using gitlab-ci to automatically build a C++ project and run unit-tests written in python (it runs the daemon, and then communicates via the network/socket based interface).
The problem I'm finding is that when the tests are run by the GitLab-CI runner, they fail for various reasons (with one test, it stalls indefinitely on a particular network operation, on the other it doesn't receive a packet that should have been sent).
BUT: When I open up SSH and run the tests manually, they all work successfully (the tests also succeed on all of our developers' machines [linux/windows/OSX]).
At this point I've been trying to replicate enough of the build/test conditions that gitlab-ci is using but I don't really know any exact details, and none of my experiments have reproduced the problem.
I'd really appreciate help with either of the following:
Guidance on running the tests manually outside of gitlab-ci, but replicating its environment so I can get the same errors/failures and debug the daemon and/or tests, OR
Insight into why the test would fail when ran by GitLab-CI-Runner
Sidetrack 1:
For some reason, not all the (mostly debugging) output that would normally be sent to the shell shows up in the gitlab-ci output.
Sidetrack 2:
I also played around setting it up with jenkins, but one of the tests fails to even connect to the daemon, while the rest do it fine. | 0 | python,unit-testing,continuous-integration,gitlab | 2013-10-01T18:46:00.000 | 1 | 19,123,609 | -i usually replicate the problem by using a docker container only for the runner and running the tests inside it, dont know if you have it setup like this =(.
-Normally the test doesnt actually fail if you log in the container you will see he actually does everything but doesnt report back to the Gilab CI, dont freak out it does it job it simply does not say it.
PS: you can see if its actually running by checking the processes on the machine.
example:
im running a gitlab ci with java and docker:
gitlab ci starts doing its thing then hangs at a download,meanwhile i log in the container and check that he is actually working and manages to upload my compiled docker image. | 0 | 1,059 | false | 0 | 1 | Tests fail ran by gitlab-ci, but not ran in bash | 27,779,548 |
1 | 1 | 0 | 2 | 1 | 0 | 1.2 | 0 | I have a graph object created with the igraph R package.
If I understand the architecture of the igraph software package correctly the igraph R package is an interface to use igraph from R. Then since there is also a igraph Python interface I wonder if it is possible to access my igraph object created with R via Python directly. Or if the only way to access and igraph R object from Python is to export the igraph R object with write.graph() in R and then import it with the igraph R package. | 0 | python,r,igraph | 2013-10-02T02:22:00.000 | 0 | 19,129,052 | The two interfaces use different data models to store the graph attributes, so I think there is no safe and sane way to access an igraph object in R from Python or vice versa, apart from saving it and then loading it back. Using the GraphML format is probably your safest bet as it preserves all the attributes that are basic data types (numbers and strings). | 0 | 249 | true | 0 | 1 | Access igraph R objects from Python | 19,132,512 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | How do you run a Unit test on your class creation in aptana studio 3 on a python format class.
I am wondering if I am supposed to add something to my code or is there a function in aptana studio that does it for you. | 0 | python,class,unit-testing,testing,aptana | 2013-10-03T02:32:00.000 | 0 | 19,149,840 | Just righ-click the test file and select Run As -> Python unit-test for the first time, on subsecuent runs just press CTL + F11 | 0 | 461 | false | 0 | 1 | Unit test in aptana studio 3 | 21,667,402 |
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I just setup PyDev with Eclipse, but I'm a little confused. I thought that in the console I would be able to type certain commands such as print("Hello World") and directly observe the result without having to incorporate that in any sort of file.
The reason I would like this is because it would allow me to test functions real quick before using them in scripts, and I'm also following a tutorial which tells me to check if NumPy is installed by typing import NumPy in the command line.
Thanks! | 0 | python,eclipse,command-line,pydev | 2013-10-04T13:02:00.000 | 0 | 19,181,895 | Open up terminal and type python then it should load python shell then type
import numpy
I have used pydev and find its easier just to use terminal to run small commands | 0 | 121 | false | 0 | 1 | How can I test commands in Python? (Eclipse/PyDev) | 19,182,148 |
1 | 2 | 0 | 1 | 9 | 0 | 0.099668 | 0 | Strange question this, I know.
I have a code base in fortran 77 that for the most part parses large non-binary files, does some manipulation to these files and then does a lot of file writing. The code base does not do any matrix manipulation or number crunching. This legacy code is in fortran because a lot of other code bases do require serious number crunching. This was originally just written in fortran because there was knowledge of fortran.
My proposal is to re-write this entirely in python (most likely 3.3). Maintenance of the fortran code is just as difficult as you would expect, and the tests are as poor as you can imagine. Obviously python would help a lot here.
Is there any performance hits (or even gains) in terms of the file handling speed in python? Currently the majority of run time of this system is in reading/writing the files.
Thanks in advance | 0 | python,io,migration,fortran,legacy | 2013-10-04T13:59:00.000 | 0 | 19,183,172 | In general, unless your particular compiler and available toolset does especially counter-productive things, one programming language is able to do IO as fast as another. In many programming languages, a naive approach may be sub-optimal - like all performance-related aspects of programming, this is something that is solved by appropriate design, and appropriate use of the available tools (such as parallel processing, use of buffered, threaded IO, for example).
Python isn't especially bad at IO, offers buffered IO and threading capabilities, and is easy to extend with C (and therefore probably not that hard to interact with Fortran). Python is likely to be a completely reasonable technology to incrementally replace parts of your codebase - indeed, if you can first make IO fast in python, you can probably compile an extension which ultimately calls your Fortran code. | 0 | 515 | false | 0 | 1 | File handling speed of python 3.3 compared to fortran 77 | 19,229,356 |
1 | 13 | 0 | 3 | 95 | 0 | 0.046121 | 0 | How can I know if a certain port is open/closed on linux ubuntu, not a remote system, using python?
How can I list these open ports in python?
Netstat:
Is there a way to integrate netstat output with python? | 0 | python,port,netstat | 2013-10-05T09:24:00.000 | 1 | 19,196,105 | Netstat tool simply parses some /proc files like /proc/net/tcp and combines it with other files contents. Yep, it's highly platform specific, but for Linux-only solution you can stick with it. Linux kernel documentation describes these files in details so you can find there how to read them.
Please also notice your question is too ambiguous because "port" could also mean serial port (/dev/ttyS* and analogs), parallel port, etc.; I've reused understanding from another answer this is network port but I'd ask you to formulate your questions more accurately. | 0 | 151,151 | false | 0 | 1 | How to check if a network port is open? | 20,727,394 |
1 | 4 | 0 | 1 | 1 | 0 | 0.049958 | 1 | I am trying to access historical google page rankings or alexa rankings over time to add some weightings on a search engine I am making for fun. This would be a separate function that I would call in Python (ideally) and pass in the paramaters of the URL and how long I wanted to get the average over, measured in days and then I could just use that information to weight my results!
I think it could be fun to work on, but I also feel that this may be easy to do with some trick of the APIs some guru might be able to show me and save me a few sleepless weeks! Can anyone help?
Thanks a lot ! | 0 | python,google-api,google-search-api,pagerank,alexa | 2013-10-07T01:17:00.000 | 0 | 19,215,815 | Alexa (via AWS) charges to use their API to access Alexa rankings. The charge per query is micro so you can get hundreds of thousands of ranks relatively cheaply. I used to run a few search directories that indexed Alexa rankings over time, so I have experience with this. The point is, you're being evil by scraping vast amounts of data when you can pay for the legitimate service.
Regarding PageRank... Google do not provide a way to access this data. The sites that offer to show your PageRank use a trick to get the PageRank via the Google Toolbar. So again, this is not legitimate, and I wouldn't count on it for long-term data mining, especially not in bulk quantities.
Besides, PageRank counts for very little these days, since Google now relies on about 200 other factors to rank search results, as opposed to just measuring sites' link authority. | 0 | 3,663 | false | 1 | 1 | Possible to get alexa information or google page rankings over time? | 19,393,738 |
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I would like a way to list all of the python methods which were touched between two mercurial changesets. Is there a tool available which will easily do this?
Clarification based on comment:
I am not looking for something 100% comprehensive. If the tool could identify each line changed in the diff, then which method/class it falls within, that would be great. | 0 | python,mercurial | 2013-10-07T16:22:00.000 | 0 | 19,229,841 | Identifying the "current method" obviously depends on the file's language, so you're not going to find a ready solution in the mercurial commandset. But it's not too hard to scan a python file manually, and track the current class and method (as long as the code doesn't play games with the syntax). You did say you don't need it to be bullet-proof, right?
If one of the changesets being compared is an ancestor of the other, you should get pretty good mileage out of hg annotate (a.k.a. hg blame), which tells you when each line in your file was last touched. You can then scan files for recent changes, while at the same time keeping track of the current class and method or function.
If the changesets have a more complex relationship, you may have to do some work: Run a diff between the two versions, and parse the diff output for a list of file-line pairs that have changed; then scan the source files to figure out the class and method that contains each change. (Alternately, you could pre-process the source files to build an index, then review the diffs). | 0 | 59 | false | 0 | 1 | How can I identify the methods touched in HG changset? | 19,232,319 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have 2.3.3 version of pytest running on windows. I have a test folder which contains bunch of test files like test1.py, test2.py, test3.py etc. If i open command prompt and navigate to this folder to run a particular test
pytest test1.py
Instead of just running test1.py, it is running all the tests in the folder. Like test1.py, test2.py, test3.py etc.
So pytest is not taking arguments and parsing them. I am seeing this only on windows. Does anyone know what is happening here?
Thanks a bunch in advance. | 0 | python,command,pytest | 2013-10-08T19:30:00.000 | 1 | 19,256,629 | I can't check this but what I'd do first would be check PATH for the pytest executable. I'd except a Windows batch script, and continue investigation in the code, maybe that's where the args are lost or passed (quoted?) incorrectly. | 0 | 552 | false | 0 | 1 | pytest is not parsing command line arguments on windows | 19,256,758 |
1 | 1 | 0 | 2 | 1 | 1 | 1.2 | 0 | I have a c# class that implements an interface and I also have some more public methods on this class, what I want is to expose to python code only the methods belonging to this interface and not the whole object.
Is there a simple way to do this without to create a new class ? | 0 | c#,ironpython | 2013-10-08T22:41:00.000 | 0 | 19,259,809 | Use the [PythonHidden] attribute on methods you do not want to expose.
IronPython will always make calls based on the original object, not the interface type. Creating a wrapper class, which maintains a reference to your Interface implementor , forwarding the calls as required, is also a good approach. | 0 | 198 | true | 0 | 1 | How to expose only the interface implementation to IronPython | 19,260,358 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I am trying to embed various PDF documents into my ReportLab canvas. It seems that maybe you can hack in support for SVG (but I really need PDF).
If you want pure python, the proper way is to pay for the commercial ReportLab-PLUS addons, which includes PageCatcher, a mighty powerful artwork/PDF toolset.
Im not ready for the PLUS upgrade just yet, but I have one other potential solution: Adobe Acrobat. I use Acrobat quite often, but I have never attempted to automate it (using python+COM I suppose).
I dont want to just slam PDFs together, because it will ruin indexing and Table Of Contents generated by ReportLab. What I would need to do is set some type of placeholder in ReportLab that simply takes up space, yet, it would need to leave some type of identifier for Acrobat to look for and replace. I will plan to fill in entire pages in Acrobat.
Any idea how I can create this placeholder from the ReportLab side? It almost seems like I would want to embed metadata in the PDF that gives Acrobat exact instructions for the insertion. I also suppose adding actual entities could work, and then Acrobat will need to remove them or cover them up.
I am try to merge AutoCAD drawings, Vector illustrations, and assorted reStructuredText snippets (using rst2pdf). | 0 | python,acrobat,reportlab | 2013-10-09T20:49:00.000 | 0 | 19,282,409 | There is a python module, pyPDF, that can also be used to slice-and-dice PDF's.
This could be used if you had already exported you Assets using the native program (example printing an AutoCAD drawing as a PDF, from within AutoCAD itself). Acrobat is pretty good at magically guessing how this should be done when using those difficult Proprietary applications with specialized formats.
The disadvantage (from an automation point-of-view): is that now we probably need to script AutoCAD to output the PDF in an organized way, so that we can pass it on to pyPDF. (Or we do these kinds of things by hand, but that is not very scalable). | 0 | 561 | false | 1 | 1 | ReportLab import PDF, Acrobat | 19,321,562 |
1 | 2 | 0 | 1 | 0 | 1 | 1.2 | 0 | I have a PuTTY terminal running emacs 23. I just installed python-mode.el-6.1.2 and pinard-Pymacs-5989046. The IPython shell looks like this:
IPython 1.1.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
^[[0;32mIn [^[[1;32m2^[[0;32m]: ^[[0m
Whereas when I run ipython from bash, I get
IPython 1.1.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]:
Does this look like a charset issue in my PuTTY setup or should I try to find the issue within emacs/python-mode? | 0 | emacs,ipython,putty | 2013-10-10T23:19:00.000 | 0 | 19,307,827 | Solution: add this line to ~/.emacs.d/init.el:
(ansi-color-for-comint-mode-on) | 0 | 330 | true | 0 | 1 | Ugly ipython prompt on emacs 23 | 19,351,887 |
1 | 1 | 0 | 2 | 1 | 0 | 0.379949 | 0 | How can I find child process pid after the parent process died.
I have program that creates child process that continues running after it (the parent) terminates.
i.e.,
I run a program from python script (PID = 2).
The script calls program P (PID = 3, PPID = 2)
P calls fork(), and now I have another instance of P named P` (PID = 4 and PPID = 3).
After P terminates P` PID is 4 and PPID is 1.
Assuming that I have the PID of P (3), how can I find the PID of the child P`?
Thanks. | 0 | python,linux,process | 2013-10-14T13:46:00.000 | 1 | 19,361,740 | The information is lost when a process-in-the-middle terminates. So in your situation there is no way to find this out.
You can, of course, invent your own infrastructure to store this information at forking time. The middle process (PID 3 in your example) can of course save the information which child PIDs it created (e. g. in a file or by reporting back to the father process (PID 1 in your example) via pipes or similar). | 0 | 947 | false | 0 | 1 | How to find orphan process's pid | 19,361,844 |
1 | 1 | 0 | 6 | 4 | 0 | 1.2 | 0 | Is this possible to extract the header and/or footer from a PDF document?
As I tried a few options (including PDFMiner, the Ruby gem pdf-extract, study the PDF format specs), I'm starting to suspect that the header/footer information is not available whatsoever.
(I would like to do this from Python, if possible, but any other alternative is viable.) | 0 | python,pdf,document | 2013-10-15T09:15:00.000 | 0 | 19,377,427 | Page headers and footers are not (at least not necessarily) located in some content part separate from the rest of the page content. Thus, in general there is no way to reliably extract headers and footers from PDFs.
It is possible, though, to try and use heuristics which look at the whole PDF contents and try to guess what parts are headers and/or footers.
If the PDFs you want to analyze are fairly homogeneous, e.g. all produced by the same publisher and looking alike, this might be feasible. The more divers your source PDFs are, though, the more complex your heuristics likely will become and the less accurate the results will be. | 0 | 4,166 | true | 1 | 1 | Extract header/footer from PDF (programmatically) | 19,401,254 |
1 | 1 | 0 | 2 | 6 | 1 | 0.379949 | 0 | I'm developing a python package that contains a C++ extension. When I install the package using the setup.py script or using pip, the C++ source files are all compiled and linked to obtain a single .so library, which can then be imported in the Python source code. During development, I need to make multiple changes to the source code (testing, debugging, etc). I find that re-installing the package involves rebuilding all the C++ source files, even if only a small part of one file was changed. Obviously, this takes up quite a bit of time.
I'm aware of the development mode (python setup.py develop or pip install -e) that places a link to the source files, so that changes made are immediately seen when the module is re-imported. However, this applies only to the .py source files and not the C++ extension, which has to be re-compiled after every change.
Is there a way to have setup.py look at the .o files in the build directory (while in development mode) and use their timestamps to figure which ones need to be re-compiled? I'm thinking of the way GNU Make performs selective compilation based on timestamps. Thanks | 0 | c++,python,setup.py | 2013-10-15T23:54:00.000 | 0 | 19,393,098 | I would recommend to use Make (other other build systems like CMake) for development and setup.py only for the final installation / deployment. I have done similar Python + C++ projects and it works great that way. | 0 | 702 | false | 0 | 1 | Python C++ extension: compile only modified source files | 19,427,165 |
1 | 2 | 1 | 0 | 1 | 0 | 1.2 | 0 | I have an hybrid c# object, with some instance properties and methods, and I pass it to IronPython. What I want is to syncronize the dispatch to the c# members, both static and dynamics, from Py code.
I implemented the IDynamicMetaObjectProvider on the c# object, and I noticed that when Py invokes the static methods of my object, and with instance methods, defined at compile time vs dynamics, the method BindInvokeMember is never used, but it is always called the method BindGetMember.
I'm a little confused, probably this thing can't be done? | 0 | c#,ironpython,dynamic-language-runtime | 2013-10-16T07:24:00.000 | 0 | 19,397,436 | IronPython will always use BindGetMember and then Invoke the result because that's how Python works - get the attribute from the object, then call it. Your BindGetMember implementation should return another dynamic object that implements BindInvokeMember, which will have the arguments you need. | 0 | 216 | true | 0 | 1 | Intercepting method invocation to c# objects | 19,430,707 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | Here's a short description of what I have:
I have to raspberry pi's in a local net work. I one of them I have a .py script named watchdog.py that starts a stream and then uses a sshpass command to the other pi to display the video stream.It also has some signaling LEDs a some push buttons for control
the problem is:
If I open a terminal and run the watchdog.py script in the GUI everything runs as it should be. So I thought of running it as a service as boot and installed upstart and made it run as a service (successfully I think). The thing is. If I boot the pi and then press the button to start the streams,they wont play on the other Pi, the LEDs ligh up and all the buttons work. And even the CPU load behaves the same way, but i still don't video nor audio. I have thought of trying automatically open a terminal (LXterminal) widow and run the python scrip on that window. but I didn't want the streaming raspberry pi also booting into gui (tough I guess I would mind if that makes the whole thing work).This little thing i making the whole project useless. | 0 | python,linux,terminal,raspberry-pi | 2013-10-16T14:42:00.000 | 1 | 19,406,444 | What are you using to play the streams? Depending on how you boot up the second Raspberry it might not have started some daemons for audio/video playback?!
You should (if you're not already doing) write a log (import logging ;)) and write a logfile which you can track for errors. | 0 | 631 | false | 0 | 1 | Run terminal command as startup reacts different from manually (linux raspberry pi) | 19,407,980 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | Here's a short description of what I have:
I have to raspberry pi's in a local net work. I one of them I have a .py script named watchdog.py that starts a stream and then uses a sshpass command to the other pi to display the video stream.It also has some signaling LEDs a some push buttons for control
the problem is:
If I open a terminal and run the watchdog.py script in the GUI everything runs as it should be. So I thought of running it as a service as boot and installed upstart and made it run as a service (successfully I think). The thing is. If I boot the pi and then press the button to start the streams,they wont play on the other Pi, the LEDs ligh up and all the buttons work. And even the CPU load behaves the same way, but i still don't video nor audio. I have thought of trying automatically open a terminal (LXterminal) widow and run the python scrip on that window. but I didn't want the streaming raspberry pi also booting into gui (tough I guess I would mind if that makes the whole thing work).This little thing i making the whole project useless. | 0 | python,linux,terminal,raspberry-pi | 2013-10-16T14:42:00.000 | 1 | 19,406,444 | answer moved from OP's question itself:
I found a way that seems to work so far. instead of running the python script as a service I tried running it as cron job at reboot, and it worked. now it all works straight from reboot and I have Audio and video. | 0 | 631 | false | 0 | 1 | Run terminal command as startup reacts different from manually (linux raspberry pi) | 42,814,929 |
Subsets and Splits