Available Count | AnswerCount | GUI and Desktop Applications | Users Score | Q_Score | Python Basics and Environment | Score | Networking and APIs | Question | Database and SQL | Tags | CreationDate | System Administration and DevOps | Q_Id | Answer | Data Science and Machine Learning | ViewCount | is_accepted | Web Development | Other | Title | A_Id
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 1 | 0 | 2 | 1 | 1 | 1.2 | 0 | I wanted to use some non-standard Python packages in my project and was wondering how to add them. What is the benefit of using the AWS EB config files (.ebextensions and requirements.txt) rather than just downloading and including the package in my actual project under a lib directory, like you would with a Java application? | 0 | python,amazon-web-services,amazon-elastic-beanstalk | 2013-12-16T07:31:00.000 | 0 | 20,605,544 | By including it in requirements.txt, you can list only the packages you are calling. Pip then takes care of installing the dependencies and checking the versions.
This has the additional advantage that when you change or upgrade your project, you can specify a new version of the library you are using and all the dependent libraries will also be updated. | 0 | 289 | true | 0 | 1 | Adding external packages to elastic beanstalk python app | 20,607,574
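For illustration, a minimal sketch of such a requirements.txt; the package names and version pins are hypothetical examples, not taken from the question:

```text
# requirements.txt -- pip installs these (and their dependencies) on each deploy
requests==2.1.0
boto>=2.20,<3.0
```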
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | My application polls an API repeatedly and spawns processes to parse any new data resulting from these calls, conditionally making an API request based on those data. The speed of that turnaround time is critical.
A large bottleneck seems to be related to the setup of the actual spawned processes themselves -- a few module imports and normal instantiation code, which take up to 0.05 seconds on a middling Amazon setup†. It seems like it would be helpful to have a batch of processes with those imports/init code already done††, waiting to process results. What is the best approach to create/communicate with a pool (10-20?) of warm, reusable, and extremely lightweight processes in Python?
† - yes, I know throwing better hardware at the problem will help, and I'll do that too.
†† - yes, I know doing less will help, and I'm working on making the code as streamlined and minimal as possible | 0 | python | 2013-12-16T17:48:00.000 | 0 | 20,617,337 | Well, you're in for a learning curve here, but multiprocessing.Pool() will create a pool of any number of processes you specify. Use the initializer= argument to specify a function each process will run at the start. Then there are several methods you can use to submit work items to the processes in the pool - read the docs, play with it, and ask questions if you get stuck.
One caution: "extremely lightweight processes" is impossible. By definition, processes are "heavy". "How heavy" is up to your operating system, and has approximately nothing to do with the programming language you use. If you're looking for lightweight, you're looking for threads. | 0 | 442 | true | 0 | 1 | Pool of warm, reusable Python processes | 20,617,564 |
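A minimal sketch of the Pool-with-initializer approach from the answer; the heavy import and the parse function are hypothetical stand-ins for the asker's real setup code:

```python
import multiprocessing

def init_worker():
    # One-time, expensive setup runs once per worker process, not per task.
    global heavy_module
    import json as heavy_module  # stand-in for the real slow imports/instantiation

def parse_result(payload):
    # Each task reuses the worker's already-initialized state.
    return heavy_module.loads(payload)

if __name__ == "__main__":
    pool = multiprocessing.Pool(processes=10, initializer=init_worker)
    print(pool.map(parse_result, ['{"a": 1}', '{"b": 2}']))
    pool.close()
    pool.join()
```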
1 | 3 | 0 | 3 | 4 | 1 | 1.2 | 0 | This might be a more broad question, and more related to understanding Python's nature and probably good programming practices in general.
I have a file, called util.py. It has a lot of different small functions I've collected over the past few months that are useful when doing various machine learning tasks.
My thinking is this: I'd like to continue adding important functions to this script as I go. As such, I will want to import util often, now and in the future, in many unrelated projects.
But Python seems to feel like I should only be able to access the code in this file if it lives in my current directory, even if the functions in this file are useful for scripts in different directories. I sense some reason behind the way that works that I don't fully grasp; to me, it seems like I'll be forced to make unnecessary copies often.
If I should have to create a new copy of util.py every time I'm working from within a new directory, on a different project, it won't be long until I have many different version / iterations of this file, scattered all over my hard drive, in various states. I don't desire this degree of modularity in my programming -- for the sake of simplicity, repeatability, and clarity, I want only one file in only one location, accessible to many projects.
The question in a nutshell: What is the argument for Python to seemingly frown on importing from different directories? | 0 | python,data-structures,import | 2013-12-17T01:18:00.000 | 0 | 20,624,211 | If your util.py file contains functions you're using in a lot of different projects, then it's actually a library, and you should package it as such so you can install it in any Python environment with a single line (python setup.py install), and update it if required (Python's packaging ecosystem has several features to track and update library versions).
An added benefit is that right now, if you're doing what the other answers suggested, you have to remember to manually put util.py in your PYTHONPATH (the "dirty" way). If you try to run one of your programs and you haven't done that, you'll get a cryptic ImportError that doesn't explain much: is it a missing dependency? A typo in the program?
Now think about what happens if someone other than you tries to run the program(s) and gets those error messages.
If you have a library, on the other hand, trying to set up your program will either complain in clear, understandable language that the library is missing or out of date, or (if you've taken the appropriate steps) automatically download and install it so things are ready to roll.
On a related topic, having a file/module/namespace called "util" is a sign of bad design. What are these utilities for? It's the programming equivalent of a "miscellaneous" folder: eventually, everything will end up in it and you'll have no way to know what it contains other than opening it and reading it all. | 0 | 83 | true | 0 | 1 | What is the argument for Python to seemingly frown on importing from different directories? | 20,624,583 |
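A minimal packaging sketch for such a single-file module, with placeholder metadata; after python setup.py install (or develop) the module is importable from any directory:

```python
# setup.py -- placed next to util.py
from distutils.core import setup

setup(
    name="mltools",          # placeholder: pick a descriptive library name
    version="0.1.0",
    py_modules=["util"],     # the single-file module being packaged
)
```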
1 | 1 | 0 | 1 | 2 | 1 | 0.197375 | 0 | Does anyone know the least hacky way of determining if Python code is being run by a unit test?
Thanks! | 0 | python,unit-testing,runtime | 2013-12-17T03:34:00.000 | 0 | 20,625,440 | I agree with all the comments.
Don't do this.
Your function/class/component should NOT behave differently under testing. | 0 | 42 | false | 0 | 1 | How to tell at runtime if you're inside of a unit test in Python? | 20,625,573 |
1 | 1 | 0 | 1 | 4 | 0 | 0.197375 | 0 | I'm writing Python code and use a library that provides a Python interface through SWIG; the library itself is written in C++, and everything is run in Linux.
I would now like to profile my code and not only get information about which if my library calls are taking the most time, but also what the situation is inside the library. (I'm suspecting a performance problem there.)
The library is open-source and if necessary I could build it with profiling flags enabled.
What are my options? | 0 | c++,python,profiling,swig | 2013-12-18T11:01:00.000 | 0 | 20,656,322 | It's been a while since I've built anything on Linux, but from memory you can build your C++ lib with the profiling switches on, run the script via the profiler on python.exe, and the profile data will be captured for your lib only, not for the whole process. You can then view your profile data just as you would any other application. You might need the debug version of python, I can't remember. Sorry I can't be more specific, maybe post more info about your dev env. | 0 | 257 | false | 0 | 1 | Profiling SWIG Python code | 20,661,702 |
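On the Python side, the standard-library profiler can at least show which wrapped calls dominate before you rebuild the C++ library with profiling flags; a sketch, where my_script is a placeholder for the actual entry point:

```python
import cProfile
import pstats

# Time spent inside the C++ library is attributed to the SWIG wrapper
# function that invoked it, which is enough to locate the hot calls.
cProfile.run("import my_script; my_script.main()", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(20)
```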
1 | 7 | 0 | 1 | 13 | 0 | 0.028564 | 0 | Is it possible in python to kill a process that is listening on a specific port, say for example 8080?
I can do netstat -ltnp | grep 8080 and kill -9 <pid> OR execute a shell command from python but I wonder if there is already some module that contains API to kill process by port or name? | 0 | python,python-2.7 | 2013-12-19T20:40:00.000 | 1 | 20,691,258 | First of all, processes don't run on ports - processes can bind to specific ports. A specific port/IP combination can only be bound to by a single process at a given point in time.
As Toote says, psutil gives you the netstat functionality. You can also use os.kill to send the kill signal (or do it Toote's way). | 0 | 19,468 | false | 0 | 1 | Is it possible in python to kill process that is listening on specific port, for example 8080? | 20,691,487 |
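A sketch of the psutil route mentioned above (requires the third-party psutil package and enough privileges to inspect other users' processes):

```python
import psutil

def kill_process_on_port(port):
    for proc in psutil.process_iter():
        try:
            for conn in proc.connections(kind="inet"):
                if conn.laddr[1] == port:  # laddr is an (address, port) pair
                    proc.kill()            # SIGKILL on POSIX
                    return proc.pid
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue
    return None

print(kill_process_on_port(8080))
```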
1 | 3 | 0 | 7 | 5 | 1 | 1.2 | 0 | I am attempting to write a program that determines whether a specific license plate is one of 10,000 that I have stored. I want to write a fast-response algorithm first and foremost, with memory usage as a secondary objective. Would a balanced binary search tree or a hash table be more suitable for storing the 10,000 license plate numbers (which also contain letters)? | 0 | python,hashtable,binary-search-tree | 2013-12-20T00:48:00.000 | 0 | 20,694,492 | A hash table takes O(1) average time to look up any given entry (i.e. to check whether or not it is in the data structure), whereas a binary search tree takes O(log n) time. Therefore, a hash table will be a more efficient option in terms of response speed.
Binary search trees are more useful in scenarios where you need to display things in order, or find multiple similar entries. | 0 | 1,592 | true | 0 | 1 | Best Data Structure for storing license plates and searching if a given license plate exists | 20,694,525 |
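In Python, the built-in set is already a hash table, so the lookup side of this is tiny; a sketch with made-up plate strings:

```python
# Build the hash-based structure once from your 10,000 stored plates.
known_plates = {"ABC1234", "XYZ9876", "DEF5555"}  # made-up examples

def is_known(plate):
    return plate in known_plates  # O(1) average-case membership test

print(is_known("ABC1234"))  # True
print(is_known("GHI0000"))  # False
```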
1 | 1 | 0 | 1 | 5 | 1 | 0.197375 | 0 | I'm trying to work out a method for dynamically generating image files with PIL/Pillow that are a certain file size in order to better exercise certain code paths in my unit tests.
For example, I have some image validation code that limits the file size to 100kb. I'd like to generate an image dynamically that is 150kb to ensure that the validation works. It needs to be a valid image and within given dimensions (ie 400x600).
Any thoughts on how to add sufficient "complexity" to an image canvas for testing? | 0 | python,unit-testing,python-imaging-library,pillow | 2013-12-20T03:00:00.000 | 0 | 20,695,601 | Does it have to be exactly 150kb, or just somewhere comfortably over 100kb?
One approach would be to create a JPEG at 100% quality, and insert lots of (random) text into all the available EXIF and IPTC headers. Including a large thumbnail image will also push the size up.
(And like Bo102010 suggested, you could also use random RGB values to minimise the compression.) | 0 | 652 | false | 0 | 1 | Dynamically generate valid image of a certain filesize for testing | 23,596,144 |
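A sketch of the random-pixel idea with Pillow: noise defeats JPEG compression, so a 400x600 image saved at maximum quality should comfortably exceed 100 kB (exact sizes vary by Pillow version and platform):

```python
import os
from PIL import Image

# Random bytes make every pixel unpredictable, so JPEG can barely compress it.
width, height = 400, 600
img = Image.frombytes("RGB", (width, height), os.urandom(width * height * 3))
img.save("big_test.jpg", "JPEG", quality=100)

print(os.path.getsize("big_test.jpg"))  # typically well over 100 kB
```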
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I am using Python 2.7. My task is to create a web page, say a login screen, in the browser using Python. I tried CGI. My code is showing the HTML file (all the HTML code from to ) on the CMD screen, whereas I want it in the browser as a web page. Any help? | 0 | python,http,webserver,cgi | 2013-12-20T09:50:00.000 | 0 | 20,700,738 | To use it as CGI you must move your script into cgi-bin or a similar directory of the HTTP server. Then point your browser at http://127.0.0.1/cgi-bin/my_script.py and see the results. If you run into problems, check the HTTP server error log.
If you get strange errors, tell us what HTTP server and OS you use, e.g. "Apache 2.2 on WinXP". | 0 | 326 | true | 1 | 1 | CGI with Python2.7 | 20,701,061
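A minimal CGI sketch for Python 2.7 illustrating the point: the script prints a header, a blank line, and the HTML to stdout, and the HTTP server (not the CMD window) delivers it to the browser once the file lives in cgi-bin:

```python
#!/usr/bin/env python
# cgi-bin/login.py -- rendered by the browser, not shown on the CMD screen
print "Content-Type: text/html"  # header first
print                            # blank line separates headers from the body
print "<html><body>"
print "<form method='post'>"
print "Username: <input name='user'><br>"
print "Password: <input name='password' type='password'><br>"
print "<input type='submit' value='Log in'>"
print "</form></body></html>"
```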
1 | 1 | 1 | 4 | 2 | 0 | 1.2 | 0 | I am using Google Protobuf in my Python application. Experimenting with the protobufs I found that Protobuf Message Creation is much slower in CPP based python implementation as compared to Python based Python implementation.
Message creation with PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp was upto 2-3 times slower as compared to pure python based Protobuf Message Creation.
Is this expected? I found that SerializeToString and ParseFromString both are faster in the cpp verison. The difference in each case widens as the size of the Message increases.
I am using python's standard timeit module to time these tasks.
(Using Version 2.4.1 of google protobuf) | 0 | c++,python,protocol-buffers | 2013-12-20T10:52:00.000 | 0 | 20,701,971 | Yes, I believe this is expected. The pure-Python implementation stores all the fields in a dict. To construct a new message, it essentially just creates an empty dict, which is very fast. The C++ implementation actually initializes a C++ DynamicMessage object under the hood and then wraps it. DynamicMessage actually initializes all of the fields upfront, so even though it's implemented in C++, it's "slower" -- but this upfront initialization makes later operations faster.
I believe you can improve performance further by compiling C++ versions of your protobuf objects and loading them in as another extension. If I recall correctly, the C++-backed Python protobuf implementation will then automatically use the compiled versions rather than DynamicMessage. | 0 | 4,148 | true | 0 | 1 | Performance of C++ based python implementation of Google Protobuf | 20,713,817
The following commands (to get a small screen working) execute just fine if I type them in from the LXTerminal window while running Raspbian on a Raspberry Pi once my desktop is loaded:
sudo modprobe spi-bcm2708
sudo modprobe fbtft_device name=adafruitts rotate=90
export FRAMEBUFFER=/dev/fb1
startx
I'm new to the Pi and to Python, and after piecing together several forum posts, the best way I thought to do this would be to run a Python script from the /etc/xdg/lxsession/LXDE/autostart config file - I just don't know what the Python script should say to automatically open an LXTerminal window and type in the commands.
Any help would be much appreciated, thanks! | 0 | python,linux,raspberry-pi,raspbian | 2013-12-22T07:21:00.000 | 1 | 20,727,189 | Don't try to open a terminal window from python. Just use the os.system() command to run the three commands you show, if you insist on using python. Even easier would be a bash script into which you can write the three commands just as you have written them above.
Even better, and to get rid of the need to somewhere type the sudo password, add the three commands without sudo to /etc /rc.local just before the exit 0. | 0 | 1,022 | false | 0 | 1 | Raspbian Run 4 commands from terminal window after desktop loads Python script? | 20,727,288 |
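A sketch of the os.system() variant, wrapping the asker's own commands; note that an exported variable would not survive between separate os.system() calls, so FRAMEBUFFER is set inline on the startx invocation (the script is assumed to run as root, e.g. from rc.local, hence no sudo):

```python
import os

# Load the kernel modules for the display.
os.system("modprobe spi-bcm2708")
os.system("modprobe fbtft_device name=adafruitts rotate=90")

# Each os.system() call spawns its own shell, so a separate
# 'export FRAMEBUFFER=...' would be lost; set it for startx directly.
os.system("FRAMEBUFFER=/dev/fb1 startx")
```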
1 | 1 | 0 | 0 | 2 | 0 | 0 | 0 | How do I write an IMAP query to get all messages with INTERNALDATE higher than a given datetime? | 0 | python,imap | 2013-12-23T19:21:00.000 | 0 | 20,749,840 | SEARCH SINCE. Only works at day resolution though. | 0 | 620 | false | 0 | 1 | IMAP query to get messages with `INTERNALDATE` higher than given datetime | 20,751,370 |
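A sketch with the standard-library imaplib: SINCE narrows the search to whole days, and because of that day resolution, an exact INTERNALDATE cutoff has to be applied client-side afterwards (host and credentials below are placeholders):

```python
import imaplib

M = imaplib.IMAP4_SSL("imap.example.com")  # placeholder host
M.login("user@example.com", "password")    # placeholder credentials
M.select("INBOX", readonly=True)

# SINCE is day-granular: this matches everything from 23 Dec onward,
# so filter on the exact INTERNALDATE afterwards if you need finer cuts.
typ, data = M.search(None, '(SINCE "23-Dec-2013")')
print(data[0].split())  # sequence numbers of the matching messages
M.logout()
```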
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | Having failed to find an answer to this elsewhere, I am opening this question more widely.
I need to execute a bash shell command when a properly constructed email is received (I'm using GMail) using Python. I have previously used Python to send emails, but the only solution I have yet found is to use feedparser and Google Atom, which I don't like. I would suggest that a keyword could exist in either the subject or body of the email; security is not an issue (I don't think) as the consequence is benign.
The bash command to execute will actually call another scripts to send the latest jpg from my Python motion detection routine which runs independently. | 0 | python,bash,shell,email | 2013-12-24T13:06:00.000 | 1 | 20,761,523 | procmail does this kind of thing trivally - checking the format of an incoming email and running a shell script that is. There's no need to reinvent the wheel
I'm not entirely clear from your description how python fits in to what you want to do..
Hope this helps! | 0 | 817 | false | 0 | 1 | Execute shell command when receiving properly constructed email | 20,764,497 |
2 | 2 | 0 | 0 | 1 | 1 | 0 | 0 | I am writing some Python code and it needs to include a password for one of my online accounts. I want to obscure it in some way while keeping its functionality in the code. Is there any way to mask this kind of credential information while still using it in the source code? | 0 | python,security,hide | 2013-12-25T13:59:00.000 | 0 | 20,773,538 | You should tell us what kind of protection you want. Do you want to make everybody able to execute your script without knowing the password? Do you want to be the only one able to execute your script while protecting the password from people who can read the source? There may be different solutions.
However, every solution will require you to insert another password to get access to the stored password. So I think that the best solution would be not to save the password in the source at all. | 0 | 329 | false | 0 | 1 | How could I securely embed a required password into source code? | 20,775,488
2 | 2 | 0 | 1 | 1 | 1 | 0.099668 | 0 | I am writing some Python code and it needs to include a password for one of my online accounts. I want to obscure it in some way while keeping its functionality in the code. Is there any way to mask this kind of credential information while still using it in the source code? | 0 | python,security,hide | 2013-12-25T13:59:00.000 | 0 | 20,773,538 | I would recommend two levels to secure passwords: 1) encrypt, 2) protect the key used for encrypting in a key store.
Details - Encrypt the password using AES-256 or similar, based on the risk. The key used for encrypting should be in a key store, and you can hard-code the key store password.
You can also choose the number of levels based on risk; usually at least two are recommended. | 0 | 329 | false | 0 | 1 | How could I securely embed a required password into source code? | 20,775,416
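If the secret has to stay out of the source entirely, the third-party keyring package (the same "key store" idea as above) lets the script fetch it at run time; a sketch with placeholder service and account names:

```python
import getpass
import keyring

SERVICE, USER = "my_online_account", "alice"  # placeholders

password = keyring.get_password(SERVICE, USER)
if password is None:
    # First run: prompt once, then stash the secret in the OS keyring
    # so it never appears in the source code or the repository.
    password = getpass.getpass("Password for %s: " % SERVICE)
    keyring.set_password(SERVICE, USER, password)
```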
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 1 | I wrote an auth module for FreeRADIUS with Python.
I want to manage a NAS with it.
Is there a way other than generating a client.conf file and restarting? | 0 | python,radius,freeradius | 2013-12-26T04:58:00.000 | 0 | 20,779,360 | Not directly, no. You can use sqlite if you want an easily modifiable local data store for client definitions. | 0 | 961 | false | 0 | 1 | Can I define RADIUS clients with Python? | 20,792,780
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | Let's say I have a directory with subdirectories where projects are stored.
How do I lock a Python script inside that subdirectory, so that it cannot scan parent directories, read files, import, etc.? Is it possible with mod_wsgi?
And how do I disable any Python functions?
Thanks | python,apache,mod-wsgi | 2013-12-27T22:52:00.000 | 1 | 20,808,909 | There are two options. Use mod_wsgi daemon mode and run the daemon process as a distinct user. Then lock down all your file permissions appropriately to deny access from that user. The second is that mod_wsgi daemon mode also supports a chroot option. Using a chroot is obviously a lot more complicated to set up, however. | 0 | 50 | false | 0 | 1 | Apache: python directory restriction | 20,809,616
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I have a web2py application where I have written various modules which hold business logic and database-related stuff. In one of the files I am trying to access auth.settings.table_user_name, but it doesn't work and throws an error: global name 'auth' is not defined. If I write the same line in a controller, it works. But I want to access it in the module file. Please suggest how I can do that. | 0 | python,web2py | 2013-12-31T11:44:00.000 | 0 | 20,856,854 | I was getting a very similar error ("name 'auth' is not defined"). Had to add from django.contrib import auth at the top of views.py and it worked. | 0 | 2,882 | false | 1 | 1 | auth is not available in module | 54,032,848
1 | 1 | 0 | 3 | 0 | 0 | 0.53705 | 0 | I'm interested in writing a program in C++ to automate the instructions necessary to run a certain python script I've been experimenting with. I was hoping someone could tell me where to look to find information on sending instructions to the command line from a C++ application, as I don't know what to google to find info on that. | 0 | c++,python | 2014-01-02T13:53:00.000 | 1 | 20,884,520 | Hate to be the one to post it, but a nasty solution is the system function - dare I speak its name. Call it with code that you want executed in the command prompt and it will run. If you want to start task manager like this, call it like this:
system("C:\\Windows\\System32\\taskmgr.exe");
Fair warning that nobody really likes to see system in live code. | 0 | 724 | false | 0 | 1 | Writing a program to output instructions to Windows command line | 20,884,754 |
1 | 2 | 1 | 3 | 0 | 0 | 1.2 | 0 | I installed SimpleCV on my Raspberry Pi, along with the driver to use the camera board with it (the uv4l driver), and now I'd like to play with it.
When I type Camera(0).getImage().save("foo.jpg") in the SimpleCV shell, the camera preview appears on the screen and I am not able to type any other commands because the preview covers the shell.
What do I have to do to remove the camera preview?
Thanks a lot!
Filippo | 0 | python,linux,opencv,raspberry-pi,simplecv | 2014-01-02T21:07:00.000 | 0 | 20,891,900 | Try the nopreview option
pkill uv4l
uv4l --driver raspicam --auto-video_nr --encoding yuv420 --width 320 --height 240 --nopreview
export LD_PRELOAD=/usr/lib/uv4l/uv4lext/armv6l/libuv4lext.so
Hope that helps | 0 | 3,614 | true | 0 | 1 | How remove camera preview to raspberry pi | 20,910,789 |
3 | 3 | 0 | 4 | 9 | 0 | 0.26052 | 0 | I am using py.test (version 2.4, on Windows 7) with xdist to run a number of numerical regression and interface tests for a C++ library that provides a Python interface through a C module.
The number of tests has grown to ~2,000 over time, but we are running into some memory issues now. Whether using xdist or not, the memory usage of the python process running the tests seems to be ever increasing.
In single-process mode we have even seen a few issues of bad allocation errors, whereas with xdist total memory usage may bring down the OS (8 processes, each using >1GB towards the end).
Is this expected behaviour? Or did somebody else experience the same issue when using py.test for a large number of tests? Is there something I can do in tearDown(Class) to reduce the memory usage over time?
At the moment I cannot exclude the possibility of the problem lying somewhere inside the C/C++ code, but when running some long-running program using that code through the Python interface outside of py.test, I do see relatively constant memory usage over time. I also do not see any excessive memory usage when using nose instead of py.test (we are using py.test as we need junit-xml reporting to work with multiple processes) | 0 | python,pytest | 2014-01-03T09:26:00.000 | 0 | 20,900,380 | py.test's memory usage will grow with the number of tests. Each test is collected before they are executed and for each test run a test report is stored in memory, which will be much larger for failures, so that all the information can be reported at the end. So to some extend this is expected and normal.
However I have no hard numbers and have never closely investigated this. We did run out of memory on some CI hosts ourselves before but just gave them more memory to solve it instead of investigating. Currently our CI hosts have 2G of mem and run about 3500 tests in one test run, it would probably work on half of that but might involve more swapping. Pypy is also a project that manages to run a huge test suite with py.test so this should certainly be possible.
If you suspect the C code to leak memory I recommend building a (small) test script which just tests the extension module API (with or without py.test) and invoke that in an infinite loop while gathering memory stats after every loop. After a few loops the memory should never increase anymore. | 0 | 3,742 | false | 0 | 1 | Py.test: excessive memory usage with large number of tests | 20,934,950 |
3 | 3 | 0 | 1 | 9 | 0 | 0.066568 | 0 | I am using py.test (version 2.4, on Windows 7) with xdist to run a number of numerical regression and interface tests for a C++ library that provides a Python interface through a C module.
The number of tests has grown to ~2,000 over time, but we are running into some memory issues now. Whether using xdist or not, the memory usage of the python process running the tests seems to be ever increasing.
In single-process mode we have even seen a few issues of bad allocation errors, whereas with xdist total memory usage may bring down the OS (8 processes, each using >1GB towards the end).
Is this expected behaviour? Or did somebody else experience the same issue when using py.test for a large number of tests? Is there something I can do in tearDown(Class) to reduce the memory usage over time?
At the moment I cannot exclude the possibility of the problem lying somewhere inside the C/C++ code, but when running some long-running program using that code through the Python interface outside of py.test, I do see relatively constant memory usage over time. I also do not see any excessive memory usage when using nose instead of py.test (we are using py.test as we need junit-xml reporting to work with multiple processes) | 0 | python,pytest | 2014-01-03T09:26:00.000 | 0 | 20,900,380 | We also experience similar problems. In our case we run about ~4600 test cases.
We use extensively pytest fixtures and we managed to save the few MB by scoping the fixtures slightly differently (scoping several from "session" to "class" of "function"). However we dropped in test performances. | 0 | 3,742 | false | 0 | 1 | Py.test: excessive memory usage with large number of tests | 42,722,815 |
3 | 3 | 0 | 1 | 9 | 0 | 0.066568 | 0 | I am using py.test (version 2.4, on Windows 7) with xdist to run a number of numerical regression and interface tests for a C++ library that provides a Python interface through a C module.
The number of tests has grown to ~2,000 over time, but we are running into some memory issues now. Whether using xdist or not, the memory usage of the python process running the tests seems to be ever increasing.
In single-process mode we have even seen a few issues of bad allocation errors, whereas with xdist total memory usage may bring down the OS (8 processes, each using >1GB towards the end).
Is this expected behaviour? Or did somebody else experience the same issue when using py.test for a large number of tests? Is there something I can do in tearDown(Class) to reduce the memory usage over time?
At the moment I cannot exclude the possibility of the problem lying somewhere inside the C/C++ code, but when running some long-running program using that code through the Python interface outside of py.test, I do see relatively constant memory usage over time. I also do not see any excessive memory usage when using nose instead of py.test (we are using py.test as we need junit-xml reporting to work with multiple processes) | 0 | python,pytest | 2014-01-03T09:26:00.000 | 0 | 20,900,380 | Try using --tb=no which should prevent pytest from accumulating stacks on every failure.
I have found that it's better to have your test runner run smaller instances of pytest in multiple processes, rather than one big pytest run, because of its accumulation in memory of every error.
pytest should probably accumulate test results on-disk, rather than in ram. | 0 | 3,742 | false | 0 | 1 | Py.test: excessive memory usage with large number of tests | 70,989,275 |
1 | 3 | 0 | 1 | 2 | 1 | 0.066568 | 0 | I have a project written in Python which I would like to upload to my GitHub repo. In my source directory on my laptop, there are compiled Python scripts (.pyc) residing as well, which I would like to avoid uploading to GitHub. The documentation available on the internet shows uploading the entire source directory to the GitHub repo.
Is there a way to avoid uploading certain file types, specifically *.pyc, to the GitHub repo? | 0 | python,git,github | 2014-01-05T12:30:00.000 | 0 | 20,933,562 | When you upload your files to GitHub, only what is in your git repo gets uploaded. .pyc files should not have been added to your git repo in the first place; if you did add them, remove them before pushing your repository.
You can use a .gitignore file to keep .pyc files from showing up in your git status view. | 0 | 2,544 | false | 0 | 1 | How to upload only source files to github? | 20,933,608
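The concrete commands, assuming the .pyc files were committed at some point already:

```bash
# Stop tracking compiled files without deleting them from the working tree,
# then ignore them from now on.
git rm --cached '*.pyc'
echo '*.pyc' >> .gitignore
git add .gitignore
git commit -m "Stop tracking compiled Python files"
```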
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I want to use distutils to make a .msi for my Python library. Before installation, the user can choose the destination path of the installation. Depending on this path, I want to generate a .pth file that will contain the chosen path. For this to be possible I need to run a post-installation script that will place the .pth in the correct place.
My question is: Is there a way of getting that installation path that was selected by the user, during run-time? | 0 | python,python-2.6,distutils | 2014-01-05T15:13:00.000 | 1 | 20,935,204 | Can’t you use a relative path in the .pth file? Or avoid using a .pth file at all? (They’re used for module collections that pre-date packages in Python, or horrible import hacks.) | 0 | 91 | false | 0 | 1 | Get user's installation path from distutils | 20,959,516 |
1 | 2 | 0 | 2 | 0 | 0 | 0.197375 | 0 | I don't know much about writing operating systems, but I though this would be a good way to learn. There are tutorials for raspberry pi operating systems, but they're not linux-based or made with python. I'm just looking for a general tutorial here. | 0 | python,operating-system,raspberry-pi,bare-metal | 2014-01-06T00:46:00.000 | 1 | 20,941,211 | Operating systems generally use "low level" languages like c/c++/d in order to have proper access to system resources. The problems with writing one in python are first, you need something to run an interpreter below it (defeating the purpose of having the OS be written in python) and second, there aren't good ways to manage resources in python. Furthermore, you said you want it to be linux based, however, linux is written in c (for the reasons listed above and a few more) and therefore writing something in python will not be very productive. If you want to stick with python, maybe you could write a window manager for linux instead? It would be much easier than an OS and python would be a fine language for such a project. | 0 | 548 | false | 0 | 1 | Programming a linux-based Raspberry Pi operating system with python | 20,941,275 |
coverage.py will include __init__.py in its report and show it as 0 lines, but with 100% coverage.
I want to exclude all blank files from coverage report. I can't just add */__init__.py to omit as some of my __init__.py files have code. | 0 | python,code-coverage | 2014-01-06T14:18:00.000 | 0 | 20,951,914 | This feature doesn't exist in coverage.py. Does it help that you can sort the HTML report to move 100% files to the bottom, or files with 0 statements to the bottom?
UPDATE: As of coverage.py 4.0, the --skip-covered option is available to do exactly what is requested. | 0 | 5,229 | false | 0 | 1 | Ignoring empty files from coverage report | 24,137,339 |
1 | 1 | 0 | 0 | 0 | 1 | 1.2 | 0 | Is there a place that I can post code to have it looked over by others? Where they can help edit it and post suggestions on what they think would make it more efficient. You would think that I am asking about the site I am currently posting to (SO). However, I mean where people are just willing to look it over and help debug. Not where you have to have a specific question about a certain piece of your code.
Back in the day it would just be a group of buddies all working on one project in the living room of someone's house where they all brought their computers over to. My friends have lost interest in programming though. So I am looking for something that can hook me up with other people so we can critique each other. Is it out there? Or do I need to build it? | 0 | javascript,php,python,html | 2014-01-06T21:57:00.000 | 0 | 20,960,030 | Yes. Koding.com, which is currently in beta, offers you free space and basically a development server, and it's much like stack overflow. You can share code snippets and work with multiple people there. | 0 | 56 | true | 0 | 1 | Code Editing over the internet | 20,960,259 |
I was using PyDev on Eclipse. I understand that if I have an Eclipse folder with 5 files containing unit tests, I can run these tests once by right-clicking on the name of the folder in Eclipse and choosing "Run As" --> "Python unit-test". This works fine for me.
What would be the recommended way to run these tests a fixed number of times? For example, if I wanted to run the 5 tests in the folder 10 times each?
I would be very grateful if you could help me out.
Thanks! | 0 | python,eclipse,pydev,python-unittest | 2014-01-07T07:01:00.000 | 0 | 20,965,764 | I think that the problem is in the way you are constructing your tests. There are a two problems I see:
If tests are failing because of poor image recognition, then surely they indicate either a bug in Sikuli, or a badly designed test. Unit tests should be predictable and repeatable, so requiring that they run several times indicates that they are not well set up.
If you really do need to run the UI tests mutliple times, then this should be done in the code, not in the IDE, since you can't guarantee that they will always be run in that environment (e.g. what if you want to move to CI?). So you need something like this in your code:
def test_ui_component(self):
    for _ in range(10):  # repeat count: run the same checks 10 times
        pass  # test code here
You could probably abstract the pattern out using a decorator or class inheritance if you really want to. | 0 | 795 | true | 0 | 1 | Running unit-tests using PyDev | 20,967,486 |
1 | 3 | 0 | 7 | 2 | 1 | 1.2 | 0 | I am just starting learning Pyramid using Pycharm. I have been reading tutorials but unfortunately there don't seem to be many out there.
My problem is that whenever I make a change to the source I have to run python setup.py install before I can test my changes. This step seems unnecessary and I am confused why this is the case.
I am developing in Pycharm on Windows. I would like to be able to change the code, restart the server, and see my changes reflected on the site immediately (skipping the distutils step). | 0 | python,configuration,pyramid,pycharm | 2014-01-07T08:34:00.000 | 0 | 20,967,112 | You should remove all the installed bits in Python site-packages and run python setup.py develop to create a symlink (or .egg-link) to your project in site-packages, instead of the actual installed package. This should make your changes work as usual, without running install all the time. | 0 | 1,905 | true | 0 | 1 | Pyramid - I have to run python setup.py install before changes register | 20,967,674 |
1 | 2 | 0 | 3 | 11 | 0 | 0.291313 | 0 | I'm hooking a python script up to run with cron (on Ubuntu 12.04) -- easy enough. Except for authentication.
The cron script accesses a couple services, and has to provide credentials. Storing those credentials with keyring is easy as can be -- except that when the cron job actually runs, the credentials can't be retrieved. The script fails out every time.
As nearly as I can tell, this has something to do with the environment cron runs in. I tracked down a set of posts which suggest that the key is having the script export DBUS_SESSION_BUS_ADDRESS. All well and good -- I can get that address, export it, and source it from Python fairly easily -- but it simply generates a new error: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11. Setting DISPLAY=:0 has no effect.
So. Has anybody figured out how to unlock gnome-keyring from Python running on a cron job on Ubuntu 12.04? | 0 | python,ubuntu,cron | 2014-01-07T18:23:00.000 | 1 | 20,978,982 | I'm sorry to say I don't have the answer, but I think I know a bit of what's going on based on an issue I'm dealing with. I'm trying to get a web application and cron script to use some code that stashes an oauth token for Google's API into a keyring using python-keyring.
No matter what I do, something about the environment the web app and cron job runs in requires manual intervention to unlock the keyring. That's quite impossible when your code is running in a non-interactive session. The problem persists when trying some tricks suggested in my research, like giving the process owner a login password that matches the keyring password and setting the keyring password to an empty string.
I will almost guarantee that your error stems from Gnome-Keyring trying to fire up an interactive (graphical) prompt and bombing because you can't do that from cron. | 0 | 2,578 | false | 0 | 1 | Python, Keyring, and Cron | 22,439,701 |
1 | 1 | 0 | 4 | 1 | 0 | 1.2 | 0 | Having a layer of caching for static web pages is a pretty straight forward concept. On the other hand, most dynamically generated web pages in PHP, Python, Ruby, etc. use templates that are static and there's just a small portion of dynamic content. If I have a page that's hit very frequently and that's 99% static, can I still benefit from caching when that 1% of dynamic content is specific to each user that views the page? I feel as though there are two different versions of the same problem.
Content that is static for a user's entire session, such as a static top bar that's shown on each and every page (e.g. top bar on a site like Facebook that may contain a user's picture and name). Can this user specific information be cached locally in Javascript to prevent needing to request this same information for each and every page load?
Pages that are 99% static and that contain 1% of dynamic content that is mostly unique for a given viewer and differs from page to page (e.g. a page that only differs by indicating whether the user 'likes' some of the content on the page via a thumbs up icon. So most of the content is static except for the few 'thumbs up' icons for certain items on the page).
I appreciate any insight into this. | 0 | javascript,php,python,ruby-on-rails,caching | 2014-01-07T20:46:00.000 | 0 | 20,981,545 | You can load the page as a static page and then load the small amount of dynamic content using AJAX. Then you can cache the page for as long as you'd like without problems. If the amount of dynamic content or some other aspect keeps you from doing that, you still have several options to improve performance.
If your site is hit very frequently (like several times a second), you can cache the entire dynamically generated page for short intervals, such as a minute or thirty seconds. This will give you a tremendous performance improvement and will likely not be noticeable to the user, if reasonable intervals are used.
For further improvements, consider caching database queries and other portions of the application, even if you do so for short intervals. | 0 | 839 | true | 1 | 1 | Caching dynamic web pages (page may be 99% static but contain some dynamic content) | 20,981,669 |
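A framework-agnostic sketch of that short-interval caching in Python; render_page is a stand-in for the expensive build of the 99%-static page:

```python
import time

def render_page(url):
    return "<html>rendered %s</html>" % url  # stand-in for the real page build

_cache = {}  # url -> (expiry_timestamp, html)

def cached_page(url, ttl=30):
    """Serve a cached copy while it is younger than ttl seconds."""
    now = time.time()
    entry = _cache.get(url)
    if entry is not None and entry[0] > now:
        return entry[1]
    html = render_page(url)
    _cache[url] = (now + ttl, html)
    return html
```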
1 | 1 | 0 | 3 | 1 | 0 | 0.53705 | 0 | I have a collection of EC2 instances whit a process installed in them that makes use of the same SQS Queue, name it my_queue. This queue is extremely active, writing more than 250 messages a minute and deleting those 250 messages consecutively. The problem I've encounter at this point is that it is starting to be slow, thus, my system is not working properly, some processeses hang because SQS closes the connection and also writing to remotes machines.
The big advantage I have for using SQS is that 1) it's very easy to use, no need to install or configure local files, 2) it's a reliable tool, since I only need a key and key_secret in order to start pushing and pulling messages.
My questions are:
What alternatives exist to SQS? I know of Redis and RabbitMQ, but both need local deployment and configuration, and that might lead to unreliable functionality if, for example, the box that is running it suddenly crashes and other boxes are not able to write messages to the queue.
If I choose something like Redis, deployed on my box, is it worth it over SQS, or should I just stay with SQS and look for another solution?
Thanks | 0 | python,redis,amazon,message-queue | 2014-01-07T22:13:00.000 | 0 | 20,983,047 | You may have solved this by now since the question is old; I had the same issues a while back and the costs ($$) of polling SQS in my application were sizable. I migrated it to REDIS successfully without much effort. I eventually migrated the REDIS host to ElasticCache's REDIS and have been very pleased with the solution. I am able to snapshot it and scale it as needed. Hope that helps. | 0 | 739 | false | 0 | 1 | Alternative to Amazon SQS for Python | 29,395,022 |
1 | 1 | 0 | 5 | 0 | 1 | 1.2 | 0 | I have Python code that works on a 32bit intel machine running Ubuntu, and I need to run this code on Raspberry Pi. Would I need some sort of cross compiling? I have 32bit .so files included in python. | 0 | python,raspberry-pi,cross-compiling,ctype | 2014-01-08T11:25:00.000 | 1 | 20,994,285 | Python is an interpreted bytecode language, so the actual python code does not need to be cross compiled in any way;
Your shared libraries, files ending in .so are not python, however. You will need to obtain versions of those compiled for the correct architecture. It might well be that those are ordinary C extensions for python, which can be built via setuptools or other means, which works equally well on ARM as it does on i386. | 0 | 947 | true | 0 | 1 | Would I need to cross compile Python to ARM? | 20,994,609 |
1 | 1 | 1 | 2 | 1 | 1 | 1.2 | 0 | I downloaded the sources for Android NDK from the git repository, I noticed that the sources for perl and python are bundled with the other dependencies: what are this 2 interpreters for ?
Does this means that I can build python for Android with the NDK ? Or that if I have a python application I can port it to Android with the NDK ? | 0 | android,python,git,perl,android-ndk | 2014-01-08T21:12:00.000 | 0 | 21,006,620 | Python and perl are used internally by NDK tools to make the cross-compile environment more friendly. You only need them on the host. NDK can be built for Windows, Mac, or Linux. So the git repository contains all opensource that is required to compile NDK for any of these platforms. | 0 | 155 | true | 0 | 1 | Why do I get python and perl with the NDK sources? | 21,015,004 |
At the moment I'm using Visual Studio 2012 Professional with Python Tools to program applications for my Raspberry Pi. For the moment this is a brilliant combination, because the application can also run on a Windows computer, where I can debug it during development. Once I'm at a point where the application can run on my Pi, I move the files to the Pi and run it there.
But today I received a GPIO cable, and this opens new possibilities to use buttons and control light switches - fun stuff. Now the problem: on my Windows machine I can use the GPIO library but not see the results of the application - what happens if I push this button, what happens in the code. I really want to debug this, also when using it in a bigger application. Moving the files to the Pi and testing them there every time is not an option.
Is there an application that can simulate the GPIO interface of the Pi on my Windows machine so I can test/debug the application while developing? | 0 | python,visual-studio,raspberry-pi,gpio | 2014-01-11T16:45:00.000 | 0 | 21,064,985 | The nusbio device can give you 8 GPIOs for your Windows machine, directly available for .NET languages. | 0 | 3,513 | false | 0 | 1 | Program Raspberry PI GPIO on Windows | 34,752,703
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I'm working on a project with Raspberry Pi. I have two identical keyboard emulator devices as inputs. In my program, I need to know which one gave the input. Is there a way to do this in Python? Thank you! | 0 | python,input,usb,raspberry-pi | 2014-01-11T21:15:00.000 | 0 | 21,067,976 | Do you have control over these devices? Could you change the USB protocol to something more reasonable, like a USB CDC ACM virtual serial port?
Do they have to by identical? If not, I would do something simple like have one of the devices only send capital letters and have the other device only send lower-case, but I guess that doesn't extend so well if you need to send a number.
With two keyboard emulators, you have to worry about what happens if the messages overlap. For example, if device 1 tries to type "banana" and device 2 tried to type apple, there is nothing to prevent your python program from reading something like "applbaneana". | 0 | 323 | false | 0 | 1 | Determine which USB device gives the input in Raspberry Pi | 21,079,003 |
1 | 2 | 0 | 2 | 1 | 0 | 0.197375 | 0 | I fiddled around with calling a Python script from a Java program for a little while and was finally able to get it working. However, when I called it I noticed that there is a certain call in the Python script that creates an object, which takes a couple of seconds (longer than I'd like). So in essence, every time the script runs it has to re-import a few libraries and create a new object. I'm not sure if this is even possible, but is there any way to keep the Python script in a state where it wouldn't have to completely re-run from the start every single time?
Any help would be greatly appreciated. I do not have much experience with the integration of programs with different languages.
Thank you very much!!! Any suggestions are welcome. | 0 | java,python | 2014-01-13T00:31:00.000 | 0 | 21,082,196 | I'm not sure if this is even possible, but is there any way to keep the python script in a state where it wouldn't have to completely re-run from the start every single time?
The correct and most obvious way to do this is to re-implement the Python script (if you can) and turn it into some kind of remote service with some kind of interface:
Examples:
Web Service over JSON
Web Service over RPC, JSON-RPC, XML-RPC
You would then access the service(s) remotely over a network connection from your Java program, serializing the parameters passed to the Python program and the results back to Java via something both can speak easily, e.g. JSON. | 0 | 69 | false | 1 | 1 | Integration between a Python script and Java program | 21,082,222
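A minimal sketch of the JSON-over-HTTP option using only the Python 2 standard library (the era of this thread); the expensive imports and object creation happen once when this process starts, and the Java side just POSTs JSON and reads the JSON reply. The handler logic is a placeholder:

```python
import json
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["content-length"])
        params = json.loads(self.rfile.read(length))
        result = {"sum": params["a"] + params["b"]}  # placeholder logic
        body = json.dumps(result)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# The slow setup is paid once, here; every request after that is fast.
HTTPServer(("localhost", 8000), Handler).serve_forever()
```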
2 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 0 | Somebody said to me 'python does not do automation for android app, as the python stack does not exist in android OS'.
Is it true?
Is Appium based on Android instrumentation framework?
Are there any drawbacks of using Python for writing my test cases? Should I use some other language? | 0 | python,android-testing,appium | 2014-01-13T08:37:00.000 | 0 | 21,086,872 | Appium for Android is based on the UIAutomator framework. Selendroid is based on instrumentation.
There are no drawbacks to using python, Appium works with all languages with Selenium/WebDriver bindings which includes python, node.js, objective-c, java, c#, ruby, and more. | 0 | 751 | false | 1 | 1 | Android automation using APPIUM framework | 21,126,944 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | Somebody said to me 'python does not do automation for android app, as the python stack does not exist in android OS'.
Is it true?
Is Appium based on Android instrumentation framework?
Are there any drawbacks of using Python for writing my test cases? Should I use some other language? | 0 | python,android-testing,appium | 2014-01-13T08:37:00.000 | 0 | 21,086,872 | I believe appium dose not have any drawback if python is used. I suggest to use JAVA as a lot of examples and Q/A can be found on web easily. | 0 | 751 | false | 1 | 1 | Android automation using APPIUM framework | 36,628,768 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I want to build a webcam-based 3D scanner; since I'm going to use a lot of webcams, I am doing tests beforehand.
I have ordered 3 identical cameras that I will drive in Python to take snapshots at the same time.
Obviously the bus is going to be saturated when there are 50 of them.
What I want to know is whether the cameras are able to hold the picture until it is transferred to the computer.
To simulate this behavior I'd like to slow down the USB bus and take a snapshot with 3 cameras.
I'm under Windows 7 Pro; is this possible?
Thanks.
PS: couldn't I saturate the USB bus by plugging in some USB external hard drives and doing some file transfers? | 0 | python,usb | 2014-01-13T22:26:00.000 | 0 | 21,102,635 | What I want to know is if the cameras are able to hold the picture until they are transferred to the computer.
That depends on the camera model, but since you mention in your post you are using "webcams", then the answer is almost certainly no. You could slow down the requests you make to the camera to take a picture though.
This sequence of events is possible:
wait
request camera takes picture
camera returns picture as normal
wait
This sequence of events is not possible (with webcams at least)
wait
request camera takes picture
wait
camera returns picture at a significantly later time that you want
to have control over
wait
If you need the functionality displayed in the last sequence I provide (a controllable time between capture and readout of the picture) you will need to upgrade to a better camera, such as a machine vision camera. These cameras usually cost considerably more than webcams and are unlikely to interface over USB (though you might find some that do).
You might be able to find some other solution to your problem (for instance, what happens if you request 50 photos from 50 cameras and saturate the USB bus? Do the webcams you have buffer the data well enough so that it achieves your ultimate goal, or does this affect the quality of the picture?) | 0 | 89 | false | 0 | 1 | Can you slow down your USB bus? | 21,103,307
1 | 2 | 0 | 3 | 1 | 0 | 1.2 | 0 | We have a database that contains personally-identifying information (PII) that needs to be encrypted.
From the Python side, I can use PyCrypto to encrypt data using AES-256 and a variable salt; this results in a Base64 encoded string.
From the PostgreSQL side, I can use the PgCrypto functions to encrypt data in the same way, but this results in a bytea value.
For the life of me, I can't find a way to convert between these two, or to make a comparison between the two so that I can do a query on the encrypted data. Any suggestions/ideas?
Note: yes, I realize that I could do all the encryption/decryption on the database side, but my goal is to ensure that any data transmitted between the application and the database still does not contain any of the PII, as it could, in theory, be vulnerable to interception, or visible via logging. | 1 | python,postgresql,encryption | 2014-01-14T20:01:00.000 | 0 | 21,122,847 | Imagine you have a Social Security Number field in your table. Users must be able to query for a particular SSN when needed. The SSN, obviously, needs to be encrypted. I can encrypt it from the Python side and save it to the database, but then in order for it to be searchable, I would have to use the same salt for every record so that I can incorporate the encrypted value as part of my WHERE clause, and that just leaves us vulnerable. I can encrypt/decrypt on the database side, but in that case, I'm sending the SSN in plain-text whenever I'm querying, which is also bad.
The usual solution to this kind of issue is to store a partial value, hashed unsalted or with a fixed salt, alongside the randomly salted full value. You index the hashed partial value and search on that. You'll get false-positive matches, but still significantly benefit from DB-side indexed searching. You can fetch all the matches and, application-side, discard the false positives.
Querying encrypted data is all about compromises between security and performance. There's no magic answer that'll let you send a hashed value to the server and have it compare it to a bunch of randomly salted and hashed values for a match. In fact, that's exactly why we salt our hashes - to prevent that from working, because that's also pretty much what an attacker does when trying to brute-force.
So. Compromise. Either live with sending the SSNs as plaintext (over SSL) for comparison to salted & hashed stored values, knowing that it still greatly reduces exposure because the whole lot can't be dumped at once. Or index a partial value and search on that.
Do be aware that another problem with sending values unhashed is that they can appear in the server error logs. Even if you don't have log_statement = all, they may still appear if there's an error, like query cancellation or a deadlock break. Sending the values as query parameters reduces the number of places they can appear in the logs, but is far from foolproof. So if you send values in the clear you've got to treat your logs as security critical. Fun! | 0 | 2,355 | true | 0 | 1 | Encryption using Python and PostgreSQL | 21,128,178 |
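A sketch of the indexed-partial-value pattern: the full SSN gets a random per-row salt, while the last four digits get a fixed application-wide key so the result is deterministic and therefore indexable (the key below is a placeholder; keep the real one out of source control):

```python
import hashlib
import hmac
import os

PEPPER = b"application-wide-secret"  # placeholder; load from a protected store

def store_fields(ssn):
    """Return (salt, full_hash, partial_hash) to store in the row."""
    salt = os.urandom(16)
    full_hash = hashlib.sha256(salt + ssn.encode()).hexdigest()
    # Fixed-key HMAC of the last 4 digits: deterministic, hence indexable,
    # but yields false positives that must be filtered application-side.
    partial = hmac.new(PEPPER, ssn[-4:].encode(), hashlib.sha256).hexdigest()
    return salt, full_hash, partial

def matches(candidate_ssn, salt, full_hash):
    """Verify one candidate row fetched via the indexed partial hash."""
    return hashlib.sha256(salt + candidate_ssn.encode()).hexdigest() == full_hash
```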
1 | 1 | 0 | 0 | 4 | 0 | 1.2 | 0 | I'm making an application and I would like to load and execute llvm bitcode using the ExecutionEngine. I have managed to do this with really simple C code compiled via clang so far.
My thought is, if I use llvm for this project then it could be more language agnostic than say, specifically picking lua/python/javascript. But I'm confused about how this might work for managed or scripting languages since they are often times tied to a platform with resources such as a GC. So I'm not sure how it would actually work through the ExecutionEngine.
So as an example scenario, suppose a user wanted to write some python code that runs in my application. I then want them to deliver to me bitcode representing that python code, which I will then run in my C++ application using llvm's ExecutionEngine.
Is this possible? Can python be simply compiled into bitcode and then run later using the ExecutionEngine? If not, what do I need to know to understand why not? | 0 | python,c++,llvm,llvm-ir | 2014-01-16T17:15:00.000 | 1 | 21,168,440 | After some reading and some conversations I believe the answer is that the ExecutionEngine essentially executes code as if it was native C code. Which means if you wanted to execute lua/python/javascript code ontop of llvm you would need to actually send the bitcode for that runtime. Then the runtime could parse and execute the script as usual.
As far as I know none of these runtimes have the ability to compile their script directly into llvm bitcode (yet). | 0 | 214 | true | 0 | 1 | Can llvm execute code from managed languages? | 21,189,967 |
1 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | A quick question that may seem out of the ordinary. (in reverse)
Instead of calling native code from an interpreted language; is there a way to compile Java or Python code to a .dll/.so and call the code from C/C++?
I'm willing to accept even answers such as manually spawning the interpreter or JVM and force it to read the .class/.py files. (is this a good solution?)
Thank you. | 0 | java,python,c++,c,dll | 2014-01-16T19:58:00.000 | 0 | 21,171,607 | You can also look into Lua, while not as widely used as a lot of other scripting languages, it was meant to be embedded easily into executables. It's relatively small and fast. Just another option. If you want to call other languages from your c/c++ look into SWIG. | 0 | 253 | false | 0 | 1 | Dynamic Link Library for Java/Python to access in C/C++? | 21,177,883 |
Preface: I am fully aware that this could be illegal if not on a test machine. I am doing this as an exercise in learning Python for security and penetration testing. This will ONLY be done on a Linux machine that I own and have full control over.
I am learning Python as my first scripting language, hopefully for use down the line in a security position. Upon asking for ideas for scripts to help teach myself, someone suggested that I create one for user enumeration. The idea is simple: cat out the user names from /etc/passwd from an account that does NOT have sudo privileges and try to 'su' into those accounts using the one password that I have. A reverse brute force of sorts: instead of a single user with a list of passwords, I'm using a single password with a list of users.
My issue is that no matter how I have approached this, the script hangs or stops at the "Password: " prompt. I have tried multiple methods, from using os.system and echoing the password in, passing it as a variable, and using the pexpect module. Nothing seems to be working.
When I Google it, all of the recommendations point to using sudo, which in this scenario, isnt a valid option as the user I have access to, doesnt have sudo privileges.
I am beyond desperate on this, just to finish the challenge. I have asked on reddit, in IRC and all of my programming wizard friends, and beyond echo "password" |sudo -S su, which cant work because the user is not in the sudoers file, I am coming up short. When I try the same thing with just echo "password"| su I get su: must be run from a terminal. This is at a # and $ prompt.
Is this even possible? | 0 | python,linux,security | 2014-01-17T06:29:00.000 | 1 | 21,179,274 | If you just want to do this for learning, you can easily build a fake environment with your own faked passwd-file. You can use some of the built-in python encrypt method to generate passwords. this has the advantage of proper test cases, you know what you are looking for and where you should succeed or fail. | 0 | 230 | false | 0 | 1 | Learning python for security, having trouble with su | 21,179,425 |
1 | 2 | 0 | 3 | 1 | 0 | 0.291313 | 0 | i have use funcargs in my tests:
def test_name(fooarg1, fooarg2):
all of them have pytest_funcarg__ factories, which returns request.cached_setup, so all of them have setup/teardown sections.
sometimes i have a problem with fooarg2 teardown, so i raise exception in here. in this case ignore all the others teardowns(fooarg1.teardown, teardown_module, etc) and just goes to pytest_sessionfinished section.
is there any option in pytest not to collect exceptions and execute all remaining teardowns functions? | 0 | python,pytest | 2014-01-17T09:09:00.000 | 0 | 21,181,830 | Are you using pytest-2.5.1? pytest-2.5 and in particular issue287 is supposed to have brought support for running all finalizers and re-raising the first failed exception if any. | 0 | 297 | false | 0 | 1 | pytest. execute all teardown modules | 21,206,053 |
1 | 4 | 0 | 0 | 0 | 0 | 0 | 1 | I have a python program that takes pictures and I am wondering how I would write a program that sends those pictures to a particular URL.
If it matters, I am running this on a Raspberry Pi.
(Please excuse my simplicity, I am very new to all this) | 0 | python,linux,curl | 2014-01-18T05:46:00.000 | 0 | 21,200,565 | The requests library is most supported and advanced way to do this. | 0 | 2,487 | false | 0 | 1 | Curl Equivalent in Python | 21,940,288 |
2 | 2 | 0 | 0 | 3 | 1 | 0 | 0 | Sorry if this is a really dumb question but I've been searching for ages and just can't figure it out.
So I have a question about unit testing, not necessarily about Python, but since I'm working with Python at the moment I chose to base my question on it.
I get the idea of unit testing, but the only thing I can find on the internet are the very simple unit tests. Like testing if the method sum(a, b) returns the sum of a + b.
But how do you apply unit testing when dealing with a more complex program? As an example, I have written a crawler. I don't know what it will return, else I wouldn't need the crawler. So how can I test that the crawler works properly without knowing what the method will return?
Thanks in advance! | 0 | python,unit-testing,web-crawler | 2014-01-18T11:47:00.000 | 0 | 21,203,648 | Unit testing verifies that your code does what you expect in a given environment. You should make sure all other variables are as you expect them to be and test your single method. To do that for methods which use third party APIs, you should probably mock them using a mocking library. By mocking you provide data you expect and verify that your method works as expected. You can also try to separate your code so that the part which makes API request and the part that parses/uses it are separate and unit test that second part with a certain API example response you provide. | 0 | 569 | false | 0 | 1 | Python - unit testing | 21,203,787 |
2 | 2 | 0 | 4 | 3 | 1 | 1.2 | 0 | Sorry if this is a really dumb question but I've been searching for ages and just can't figure it out.
So I have a question about unit testing, not necessarily about Python, but since I'm working with Python at the moment I chose to base my question on it.
I get the idea of unit testing, but the only thing I can find on the internet are the very simple unit tests. Like testing if the method sum(a, b) returns the sum of a + b.
But how do you apply unit testing when dealing with a more complex program? As an example, I have written a crawler. I don't know what it will return, else I wouldn't need the crawler. So how can I test that the crawler works properly without knowing what the method will return?
Thanks in advance! | 0 | python,unit-testing,web-crawler | 2014-01-18T11:47:00.000 | 0 | 21,203,648 | The whole crawler would be probably tested functionally (we'll get there). As for unit testing, you have probably written your crawler with several components, like page parser, url recogniser, fetcher, redirect handler, etc. These are your UNITS. You should unit tests each of them, or at least those with at least slightly complicated logic, where you can expect some output for some input. Remember, that sometimes you'll test behaviour, not input/output, and this is where mocks and stubs may come handy.
As for functional testing - you'll need to create some test scenarios, like webpage with links to other webpages that you'll create, and set them up on some server. Then you'll need to perform crawling on webpages YOU created, and check whether your crawler is behaving as expected (you should know what to expect, because you;ll be creating those pages).
Also, sometimes it is good to perform integration tests between unit and functional testing. If you have some components working together (for example fetcher using redirect handler) it is good to check whether those two work together as expected (for example, you may create resource on your own server, that when fetched will return redirect HTTP code, and check whether it is handled as expected).
So, in the end:
create unit tests for components creating your app, to see if you haven't made simple mistake
create integration tests for co-working components, to see if you glued everything together just fine
create functional tests, to be sure that your app will work as expected (because some errors may come from project, not from implementation) | 0 | 569 | true | 0 | 1 | Python - unit testing | 21,203,798 |
1 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | I want to read some data from a port in Python in a while true.
Then I want to grab the data from Python in Erlang on a function call.
So technically in this while true some global variables is gonna be set and on the request from erlang those variables will be return.
I am using erlport for this communication but what I found was that I can make calls and casts to the python code but not run a function in python (in this case the main) and let it run. when I tried to run it with the call function erlang doesn't work and obviously is waiting for a response.
How can I do this?
any other alternative approaches is also good if you think this is not the correct way to do it. | 0 | python,while-loop,erlang,request | 2014-01-18T14:37:00.000 | 1 | 21,205,508 | Ports communicate with Erlang VM by standard input/output. Does your python program use stdin/stdout for other purposes? If yes - it may be a reason of the problem. | 0 | 1,127 | false | 0 | 1 | Run python program from Erlang | 21,226,686 |
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | I'm wondering if it is possible for an app to run in Python and call Java methods (and vice versa) through libffi? | 0 | java,python,c,libffi | 2014-01-19T12:23:00.000 | 0 | 21,216,706 | In general, things get complicated when you're talking about two managed runtimes (CPython and the JVM, for instance). libffi only really deals with a subset of the issues here. I would look more at remote method invocations as a way to integrate code written in different managed runtime environments. | 0 | 232 | false | 1 | 1 | Can libffi be used for Python and Java to communicate? | 21,247,326 |
2 | 2 | 0 | 2 | 0 | 0 | 0.197375 | 1 | I got much anonymous questions that attack my friendship.
Is there a way to get the IP-Adresss of these Questions with a Python script?
I have little more than normal Python knowledge, so you mustn't show me complete Code, just 1-5 lines or just explain something.
I hope you'll help me! | 0 | python,ip | 2014-01-20T14:40:00.000 | 0 | 21,236,742 | If the IPs are not logged by ask.fm, there is not much you can do about it. And if it's logged, you probably don't need any script to extract it, as it should be presented somewhere along with the questions or separately in some list. | 0 | 1,205 | false | 0 | 1 | Python Ask.fm IP of Anonymous Questions | 21,237,260 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | I got much anonymous questions that attack my friendship.
Is there a way to get the IP-Adresss of these Questions with a Python script?
I have little more than normal Python knowledge, so you mustn't show me complete Code, just 1-5 lines or just explain something.
I hope you'll help me! | 0 | python,ip | 2014-01-20T14:40:00.000 | 0 | 21,236,742 | In addition to @Michael's answer, even if you might be able to get the IP you won't be able to do much. Most of people also use dynamic IP addresses.
You may want to contact ask.fm to get more informations, it's very hard they will give you them though. | 0 | 1,205 | false | 0 | 1 | Python Ask.fm IP of Anonymous Questions | 21,237,392 |
1 | 1 | 0 | 2 | 0 | 0 | 0.379949 | 0 | I have .so files that work well on my Ubuntu 32bit, would I need different version of them to work on my Raspberry Pi? I am loading them using python. If it wont work, What should I go through? | 0 | python,shared-libraries,raspberry-pi,ctype | 2014-01-20T19:24:00.000 | 1 | 21,242,436 | You'll need to recompile them from source. x86 and ARM are completely different microprocessor architectures, and programs/libraries compiled for one will not work on the other. | 0 | 346 | false | 0 | 1 | Shared Library files .so x86 would work on ARM? | 21,242,449 |
1 | 1 | 0 | 1 | 1 | 0 | 0.197375 | 1 | I am working on some applets and whenever I'm trying to open the applets on IE using my python script, It stops for a manual input to enable the activex.
I tried doing it from the IE settings. but, I require a command line to do it by which I can integrate it in my python script only. | 0 | java,python,internet-explorer,activex | 2014-01-21T05:49:00.000 | 0 | 21,250,136 | I found one solution to this.
We can make the below modification to the registry and achieve running of applets automatically without pop-ups
C:\Windows\system32>reg add "HKCU\Software\Microsoft\Internet Explorer\Main\Feat
ureControl\FEATURE_LOCALMACHINE_LOCKDOWN" /v iexplore.exe /t REG_DWORD /d 0 /f | 0 | 923 | false | 1 | 1 | How can I enable activex controls on IE for auto loading of applets | 21,260,619 |
1 | 1 | 0 | 0 | 2 | 0 | 1.2 | 0 | I haven't found a suitable solution for this problem yet.
We haven't started the development, but we chose python for various reasons. So I don't wanna switch to PHP just because of an api-sdk.
Here are my thoughts for a possible solution:
Rewrite the api-sdk in Python. It's not extremely complex. I guess it will take 3-5 days. However we have to update the sdk by ourself. And the api, for what the sdk is made for, changes a lot.
Write a wrapper around the sdk. That enables us to call each single sdk-function by executing a php file in python like execfile(filename).
Or I use a wrapper to make the sdk-functions accessible via url.
The sdk returns result objects (like productResult).
The problem with solution 2 and 3 is that I can't to use these result objects in python. Solution 2 & 3 have to return a JSON. So I would lose some functionality of the api.
I happy to discuss your thoughts on this. | 0 | php,python | 2014-01-21T12:01:00.000 | 0 | 21,257,554 | Porting the "sdk" to Python is probably your best bet - using python-requests and a custom json deserializer should make it easy. You may also want to have a look at RestORM... | 0 | 105 | true | 0 | 1 | How to use an api-sdk (written in PHP) in a Python app | 21,291,038 |
1 | 1 | 0 | 2 | 2 | 0 | 1.2 | 0 | I am working on a project which is based on python windows version. Now the customer wants the project to be extended to linux platform also.
My project uses the package xlwt, xlrd for writing the results to the excel sheet.
So here, Will these packages are compatible with the linux platform also?
Can I use this package in Linux? Or Is there any equivalent package for Linux to write the result to a spreadsheet?
Since my code is very huge,Is there any tool to convert the whole code from windows platform to linux platform? | 0 | python,linux,excel | 2014-01-21T13:00:00.000 | 1 | 21,258,884 | Yes, xlrd/xlwt work fine on Linux. Most python code and libraries run the same on any platform. | 0 | 379 | true | 0 | 1 | Will XLWT work in linux platform? | 21,259,963 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I wrote a Python script which I need it to run every 5 mins. My server is running CentOS 6.4 Final. Here's what I did in detail.
After logging into the server with an account has root access, I did cd /var/spool/cron/, I can see a couple of files has different usernames on it. Edit my file (the one has my username on it) with nano myusername and I added this line at the end of the file.
*/5 * * * * /usr/bin/python /home/myusername/Dev/cron/python_sql_image.py
I waited a bit and the cronjob works now. But new question: this Python code will generate a png file after being executed. When I manually run it, the png file will be created under the same folder with the py script, but when cronjob runs it, the png file was created on /home/myusername. Is there anyway I can change the location? | 0 | python,cron | 2014-01-21T18:35:00.000 | 1 | 21,266,405 | Each line that contains a job must end in a newline character. | 0 | 3,668 | false | 0 | 1 | cronjob on CentOS running a python script | 21,266,639 |
1 | 1 | 1 | 3 | 3 | 1 | 1.2 | 0 | While doing a breaf research on IronPython I got confused about it's execution model and how it integrates with C#.
Can you please point, which of these assumptions are wrong:
IronPython is Not compiled Ahead of time ( into a clr exe|dll
with IL code)
IronPython is distributed as script
When executed, IronPython files are compiled at runtime into IL and then executed in a CLR AppDomain.
Thanks | 0 | python,clr,ironpython | 2014-01-22T05:45:00.000 | 0 | 21,275,131 | You can use pyc.py to create an exe/dll, but it's not well documented. Otherwise, you're basically right. | 0 | 185 | true | 0 | 1 | IronPython and Ahead-of-Time Compilation | 21,284,660 |
1 | 1 | 0 | 4 | 1 | 0 | 1.2 | 0 | I have a robot framework testcase file with the name 'mytestsuite.txt'. It has few test cases..I can run this suite using,
pybot mytestsuite.txt
But when I tried to execute it using --suite option,
pybot --suite mytestsuite.txt
getting the error ,
[ ERROR ] Expected at least 1 argument, got 0.
Is anything wrong in this ,or anyone can suggest how to execute the testsuite file.
Thanks in advance. | 0 | python,selenium,automation,automated-tests,robotframework | 2014-01-22T09:56:00.000 | 0 | 21,279,553 | pybot --suite mytestsuite /path/to/mytestuite-dir So drop the .txt and put path to the directory where the suite is at the end of the command. | 0 | 2,773 | true | 1 | 1 | Running a testsuite in Robotframework | 21,280,237 |
2 | 4 | 0 | 4 | 19 | 1 | 0.197375 | 0 | I need to write python scripts to automate time configuration of Virtual Machines running on a ESX/ESXi host.
I don't know which api to use...
I am able to find to python bindings for VMWare apis viz. PySphere and PyVmomi.
Could anyone please explain what is the difference between them, which one should be used?
Thanks! | 0 | python,vmware | 2014-01-24T07:02:00.000 | 0 | 21,326,448 | Also pyVmomi directly corresponds to the vsphere Managed Object browser.
So get to the MOB on the vcenter, figure out what properties you need, the methods as well and the 1 to 1 name convention from pyvmomi helps you achieve what you want.
(in short, you learn about vsphere api and are good to go with pyvmomi, no mapping in the head needed) | 0 | 25,180 | false | 0 | 1 | What is the Difference between PySphere and PyVmomi? | 22,400,004 |
2 | 4 | 0 | -1 | 19 | 1 | -0.049958 | 0 | I need to write python scripts to automate time configuration of Virtual Machines running on a ESX/ESXi host.
I don't know which api to use...
I am able to find to python bindings for VMWare apis viz. PySphere and PyVmomi.
Could anyone please explain what is the difference between them, which one should be used?
Thanks! | 0 | python,vmware | 2014-01-24T07:02:00.000 | 0 | 21,326,448 | Just as Josh suggested its a clean interface to VMWare API it also support a few versions of python which is nice as it will allow you to migrate from lets say python2.7 to python3.3. | 0 | 25,180 | false | 0 | 1 | What is the Difference between PySphere and PyVmomi? | 22,074,852 |
1 | 2 | 0 | -1 | 1 | 0 | -0.099668 | 0 | I'm writing python app which currently is being hosted on Heroku. It is in early development stage, so I'm using free account with one web dyno. Still, I want my heavier tasks to be done asynchronously so I'm using iron worker add-on. I have it all set up and it does the simplest jobs like sending emails or anything that doesn't require any data being sent back to the application. The question is: How do I send the worker output back to my application from the iron worker? Or even better, how do I notify my app that the worker is done with the job?
I looked at other iron solutions like cache and message queue, but the only thing I can find is that I can explicitly ask for the worker state. Obviously I don't want my web service to poll the worker because it kind of defeats the original purpose of moving the tasks to background. What am I missing here? | 0 | python,heroku,notifications,worker | 2014-01-24T16:59:00.000 | 0 | 21,338,216 | Easiest way - push message to your api from worker - it's log or anything you need to have in your app | 0 | 209 | false | 1 | 1 | Ironworker job done notification | 21,348,604 |
1 | 1 | 0 | 0 | 5 | 0 | 0 | 0 | We're using Heroku for historical reasons and I have this awesome ZeroRPC based server that I'd love to put up on the Heroku service. I'm a bit naive around exactly the constraints imposed for these 'cloud' based platforms but most do not allow the opening of an arbitrary socket. So I will either have to do some port-forwarding trick or place a web front-end (like Flask) to receive the requests and forward them onto the ZeroRPC backend. The reason I haven't just done Flask/ZeroRPC is that it feels awkward (my front-end experience is basically zero), but I'm assuming I would set up RESTful routes and then just forward stuff to ZeroRPC...head scratch....
Perhaps asking the question in a more opening-ended way; I'm looking for suggestions on how best to deploy a ZeroRPC based service on Heroku (btw I know dotCloud/Docker uses zeroRPC internally, but I'm also not sure if I can deploy my own ZeroRPC server on it). | 0 | python,heroku,flask | 2014-01-26T04:03:00.000 | 0 | 21,359,542 | According to Heroku spec you are supposed to listen to single PORT which is given to your app in env. variable.
In case your application needs only single port (for the ZeroRPC), you might be luck.
But you shall expect your ZeroRPC being served on port 80.
Possible problems:
not sure, if Heroku allows other than HTTP protocols. It shall try to connect to your application after it gets started to test, it is up and running. It is possible, the test will attempt to do some HTTP request which is likely to fail with ZeroRPC service.
what about authentication of users? You would have to build some security into ZeroRPC itself or accept providing the service publicly to anonymous clients.
Proposed steps:
try providing the ZeroRPC services on the port, Heroku provides you.
rather than setting up HTTP proxy in front of ZeroRPC, check PyPi for "RPC". There is bunch of libraries serving already over HTTP. | 0 | 464 | false | 1 | 1 | Best way to use ZeroRPC on Heroku server | 35,026,440 |
1 | 1 | 0 | 0 | 1 | 1 | 1.2 | 0 | I am making a tool in python to push and obviously I would want to push the last commit so if I am checking the diff of the last I want to push but if the diff is not of current and last HEAD, then git push should not work.
How to check if the git diff is between current head and last head i.e. git diff HEAD^ HEAD and not any other ?
why I need functionality?
because Diff I am seeing is the diff I am going to send in email. however would that make sense I see a different diff and push the last commit .
which is why I am trying to figure out if diff being displayed is of current and last commit only then i should push else not. | 0 | python,git,git-diff | 2014-01-26T05:00:00.000 | 0 | 21,359,906 | I sense a simple git status or git diff --cached would be enough to make sure that the last commit is indeed the last one, meaning there is no work in progress, some of it already added to the index, which could constitute a new commit.
If git status doesn't mention any file added to the index, then you can push your last commit. | 0 | 56 | true | 0 | 1 | how to confirm that current git diff is one with current and last and not with current commit and any other | 21,361,300 |
1 | 1 | 0 | 1 | 1 | 1 | 1.2 | 0 | I am familiar with the concept of Abstract Base Classes (ABC's), as providing sets of properties of the builtin objects, but I don't have really any experience working with them. I can see that there's a Mapping ABC, and a MutableMapping that inherits from it, but I don't see a .fromkeys() method (the only thing missing off the top of my head.)
Would I be able to craft a dict with purely ABC's? What would that look like? Would that amount to nearly the same thing as subclassing dict? Would there be any benefit to doing that? What would be the use-case? | 0 | python,dictionary,abc | 2014-01-26T07:33:00.000 | 0 | 21,360,937 | Would I be able to craft a dict with purely ABC's?
No. Subclassing an ABC requires you to implement its interface; for example, Mapping requires you to implement __getitem__, __iter__, and __len__. The mixin methods provide default implementations for certain things in terms of the parts you need to implement, but you still need to provide the core. Mapping won't automatically provide a hash table or BST implementation for you. | 0 | 82 | true | 0 | 1 | Is it possible to craft a Python dict with all (or most) of the properties of a dict with Abstract Base Classes? | 21,360,962 |
1 | 2 | 0 | 1 | 1 | 1 | 0.099668 | 0 | I have a bunch of python objects with fields containing arrays of various dimensions and data types (ints or floats) and I want to write this to a file which I can read into C# objects elsewhere. I'm probably just going to write an XML file, but I thought there might be a quicker/easier way saving and reading it. Also, the resulting XML file will be rather large, which I would like to avoid if it is not too much hassle.
Is there a tried and tested file format that is compatible (and simple to use) with both languages? | 0 | c#,python,xml,serialization,data-storage | 2014-01-27T23:59:00.000 | 0 | 21,394,321 | What you are trying to do is called serialization. JSON is an excellent option for doing this with support in both languages. | 0 | 124 | false | 0 | 1 | Most painless way to write structured data in python and read in C# | 21,394,349 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I'm new to Openerp.I have modified the base module and when i goto installed modules and search for BASE module and click upgrade button it is nearly taking 5mins.Can any one please say me how can i reduce the time that is taking for up-gradation of existing module.
Note: I have Messaging,Sales,Invoicing,Human-resource,Tools and Reporting modules installed,is it due to i have more modules installed??
Thanks in advance. | 0 | python-2.7,openerp,base | 2014-01-28T05:30:00.000 | 0 | 21,397,605 | why you going in installed modules and search for base module and update it?
you have to only update that module in which you have done changes in xml file not event py file.
if you have changes in xml file of those module you have to update only those module.
if you going to update base module it will update all module which installed in your databae,
because every module depend on base, we can call base is the kernal of our all modules, all module depend on this module, if you update base it will going to update all modules
if you have done some changes in sale then you have to search for sale and update the only
sale module not go to update base module
regards, | 0 | 155 | false | 1 | 1 | Why is it taking more time, When i upgrade a module in Openerp | 21,398,386 |
2 | 2 | 0 | 1 | 0 | 0 | 1.2 | 0 | I'm new to Openerp.I have modified the base module and when i goto installed modules and search for BASE module and click upgrade button it is nearly taking 5mins.Can any one please say me how can i reduce the time that is taking for up-gradation of existing module.
Note: I have Messaging,Sales,Invoicing,Human-resource,Tools and Reporting modules installed,is it due to i have more modules installed??
Thanks in advance. | 0 | python-2.7,openerp,base | 2014-01-28T05:30:00.000 | 0 | 21,397,605 | As you have said that You are new to OpenERP, Let me tell you something which would be very helpful to you. i.e Never Do changes in Standard modules not in base. If you want to add or remove any functionality of any module, you can do this by creating a customzed module. in which inherit the object you want, and do the changes as per
your requirement.
Now regarding the time spent when upgrading base module, This is because when you update base module it will automatically update all the other modules which are already installed (in your case - Sales,Invoicing,Human-resource,Tools and Reporting) as base is the main module on which all the other modules are dependedent.
So, Better is to do your changes in customized module and upgrade that perticular module only, not the base.
Hope this will help you. | 0 | 155 | true | 1 | 1 | Why is it taking more time, When i upgrade a module in Openerp | 21,398,452 |
1 | 2 | 0 | 1 | 0 | 1 | 1.2 | 0 | Personally I think it's better to distribute .py files as these will then be compiled by the end-user's own python, which may be more patched.
What are the pros and cons of distributing .pyc files versus .py files for a commercial, closed-source python module?
In other words, are there any compelling reasons to distribute .pyc files?
Edit: In particular, if the .py/.pyc is accompanied by a DLL/SO module which is compiled against a certain version of Python. | 0 | python | 2014-01-28T05:42:00.000 | 0 | 21,397,757 | If your proprietary bits are inside a binary DLL or SO, then there's no real value in making an interface layer a .pyc (as opposed to a .py). You can drop that all together or distribute it as an uncompiled python file. I don't know of any reasons to distribute compiled python files. In many cases, build environments treat them as stale byproducts and clean them out so your program might disappear. | 0 | 1,333 | true | 0 | 1 | Distributing .pyc files versus .py files for a commercial, closed-source python module | 21,398,143 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I would like to integrate a Python application and PHP application for data access. I have a Python app and it stores data in its application, now i want to access the data from python database to php application database. For PHP-Python integration which methods are used?
Thanks | 1 | php,python,web-services,integration | 2014-01-28T07:43:00.000 | 0 | 21,399,625 | The easiest way to accomplish this is to build a private API for your PHP app to access your Python app. For example, if using Django, make a page that takes several parameters and returns JSON-encoded information. Load that into your PHP page, use json_decode, and you're all set. | 0 | 561 | false | 0 | 1 | Integration of PHP-Python applications | 21,410,252 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I am trying to automate the report creation in Geomagics, using the create_report() function.
However, we have several sets of results, which need to be reviewed by a human operator (within the Geomagics interface) before the various reports can be created if the results are considered acceptable. Since create_report() works on the current ResultObject, I'd like to be able to set this to all my results in a loop.
(Alternatively, there might be a way to write a report for a specific object, not just the current result?) | 0 | python | 2014-01-28T13:55:00.000 | 0 | 21,407,492 | Can you break the problem down further ?
How should the operator see the results, as a spreadsheet or some other way ?
For example, can you script outside of GeoMagic to fetch result sets and display those to the operator, then write back approved results to another dataset
then at the end, create the report within GeoMagic from the "approved" dataset. | 0 | 133 | false | 0 | 1 | How can I specify the current ResultObject in Geomagics with a python script | 21,739,346 |
1 | 1 | 0 | 2 | 3 | 0 | 1.2 | 1 | I'm just starting exploring IAM Roles. So far I launched an instance, created an IAM Role. Everything seems to work as expected. Currently I'm using boto (Python sdk).
What I don't understand :
Does the boto takes care of credential rotation? (For example, imagine I have an instance that should be up for a long time, and it constantly have to upload keys to s3 bucket. In case if credentials are expired, do I need to 'catch' an exception and reconnect? or boto will silently do this for me?)
Is it possible to manually trigger IAM to change credentials on the Role? (I want to do this, because I want to test above example. Or if there is there an alternative to this testcase? ) | 0 | python,amazon-web-services,amazon-s3,boto,amazon-iam | 2014-01-28T14:27:00.000 | 0 | 21,408,290 | The boto library does handle credential rotation. Or, rather, AWS rotates the credentials and boto automatically picks up the new credentials. Currently, boto does this by checking the expiration timestamp of the temporary credentials. If the expiration is within 5 minutes of the current time, it will query the metadata service on the instance for the IAM role credentials. The service is responsible for rotating the credentials.
I'm not aware of a way to force the service to rotate the credentials but you could probably force boto to look for updated credentials by manually adjusting the expiration timestamp of the current credentials. | 0 | 265 | true | 0 | 1 | How to manually change IAM Roles credentials? | 21,409,299 |
1 | 2 | 0 | 2 | 0 | 0 | 0.197375 | 0 | I want to perform image processing on a low-end (Atom processor) embedded computer or microcontroller that is running Linux.
I'm trying to decide whether I should write my image processing code in Octave or Python. I feel comfortable in both languages, but is there any reason why I should use one over the other? Are there huge performance differences? I feel as though Octave may more closely resemble, syntax-wise, the domain of image processing than Python.
Thanks for your input.
Edit: The motivation for this question comes from the fact that I design in Octave and get a working algorithm and then port the algorithm to C++. I am trying to avoid this double work and go from design to deployment easily. | 0 | python,image-processing,embedded,octave | 2014-01-28T16:34:00.000 | 0 | 21,411,408 | I am bit surprised that you don't stick to C/C++ - many convenient image processing libraries exists. Even though, I have like 20 years of experience with C, 8 years of experience with Matlab and only 1 years of experience with Python, I would choose Python together with OpenCV, which is an extremely optimized library for computer vision supporting Intel Performance Primitives. Once you have a working Python solution, it is easy to translate this to C or C++ to get the additional performance or reduce the power consumption. I would start with Python and Numpy using matplotlib for displaying / prototyping, optimize using OpenCV from within Python and finally use C++ and test it against the Python reference implementation. | 0 | 812 | false | 0 | 1 | Octave/MATLAB vs. Python On An Embedded Computer | 21,412,797 |
2 | 2 | 0 | 1 | 0 | 0 | 1.2 | 0 | I am using OpenCV on my Raspberry Pi to track circular objects. Then, I want to send the coordinate and radius values in an array of floats across the LAN to the Java program
I can send strings all fine with the code I have, but I'm having trouble trying to send numerical datatypes. What is the correct process for this? | 0 | java,python,sockets | 2014-01-29T16:24:00.000 | 0 | 21,436,787 | A meta-answer would be to use JSON, since JSON generators and parsers can be found for every major programming language. | 0 | 129 | true | 1 | 1 | How to send an array from Python (client) to Java (Server)? | 21,436,939 |
2 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 0 | I am using OpenCV on my Raspberry Pi to track circular objects. Then, I want to send the coordinate and radius values in an array of floats across the LAN to the Java program
I can send strings all fine with the code I have, but I'm having trouble trying to send numerical datatypes. What is the correct process for this? | 0 | java,python,sockets | 2014-01-29T16:24:00.000 | 0 | 21,436,787 | Have you looked at BSON? It's like JSON, but optimised for a little more speed. | 0 | 129 | false | 1 | 1 | How to send an array from Python (client) to Java (Server)? | 21,437,109 |
1 | 1 | 0 | 5 | 5 | 1 | 1.2 | 0 | Functions declared at module level never have a closure and access non-local variables via LOAD_GLOBAL.
Functions declared not at module level may have a closure and access non-local, variables via LOAD_DEREF if those variables are not global.
So basically we have three ways of storing and loading variables GLOBAL (global), FAST (local) and DEREF (non-local, enclosed, covered).
Why the GLOBAL? Wouldn't FAST and DEREF suffice, if you let all functions have their closures? Is there some important difference between a non-local variable and global variable I fail to spot? Is this maybe due to performance issues, as perhaps global variables (like all functions and classes (including their methods) defined at module level plus the builtins) are generally more common than non-local variables? | 0 | python,global-variables,closures | 2014-01-29T18:59:00.000 | 0 | 21,440,163 | Local and closed-over names are enumerated during compilation. At runtime, they're stored in C arrays and accessed using integers/indices. LOAD_FAST and LOAD_DEREF take a C integer and perform a C array lookup.
Global names cannot be enumerated at compile time, they can be added and removed during run time by any code in the whole process. This is similar to object attributes - because globals essentially are a module object's attributes. Therefore, they are stored in a dictionary and the implementation accesses them quite differently from local and closed-over names. LOAD_GLOBAL takes a string (constant) and performs a dictionary lookup. | 0 | 190 | true | 0 | 1 | Implementation of Global variables vs Dereferenced variables | 21,440,260 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | I am planning on doing a bit of home automation. I decided on going with the RPi, because it is cheap, and can connect to the internet wirelessly via a USB dongle. I was planning on controlling the system through a PHP webpage hosted on my webserver. I was wondering if I could make it so that when I click a button on the PHP site, it somehow sends a signal to the raspberry pi and makes it activate a GPIO pin. I realize that it would be easier to host the webpage on the actual Pi itself, but I plan to have multiple Pis and would like to be able to control all of them with one webpage.
Thanks In advance | 0 | php,python,raspberry-pi,home-automation | 2014-01-29T20:57:00.000 | 0 | 21,442,470 | Use a websocket (e.g., on Node.js) to open a channel of communication between the Raspberry Pi and the Web page. Run a socket server on the Web server and run clients on your Rasberry Pis. Then create a simple messaging protocol for commands that the Web server will send over the websocket and that the Raspberry Pis will listen for over the socket. They can even communicate when the task is done that it's been done successfully. | 0 | 525 | false | 0 | 1 | Python, PHP: Controlling RPi GPIO from website on a separate server | 31,495,240 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | I am planning on doing a bit of home automation. I decided on going with the RPi, because it is cheap, and can connect to the internet wirelessly via a USB dongle. I was planning on controlling the system through a PHP webpage hosted on my webserver. I was wondering if I could make it so that when I click a button on the PHP site, it somehow sends a signal to the raspberry pi and makes it activate a GPIO pin. I realize that it would be easier to host the webpage on the actual Pi itself, but I plan to have multiple Pis and would like to be able to control all of them with one webpage.
Thanks In advance | 0 | php,python,raspberry-pi,home-automation | 2014-01-29T20:57:00.000 | 0 | 21,442,470 | I don't think it would be as easy as 'sending a signal' to your Pi. What you could do, however, is set up a MySQL database on the server with your control signals input to the database and have the Pi poll it every so often to check the values.
For actually controlling, you would simply use UPDATE statements to set the values. There may be some lag involved, but this depends on your polling rate and network speed. | 0 | 525 | false | 0 | 1 | Python, PHP: Controlling RPi GPIO from website on a separate server | 21,864,500 |
2 | 4 | 0 | 75 | 53 | 0 | 1.2 | 0 | I can't find a way to list the tests which I can call with py.test -k PATTERN
How can I see the list of the available tests? | 0 | python,unit-testing,pytest | 2014-01-30T11:25:00.000 | 0 | 21,455,134 | You can also use --collect-only, this will show a tree-like structure of the collected nodes. Usually one can simply -k on the names of the Function nodes. | 0 | 18,525 | true | 0 | 1 | List available tests with py.test | 21,462,398 |
2 | 4 | 0 | 4 | 53 | 0 | 0.197375 | 0 | I can't find a way to list the tests which I can call with py.test -k PATTERN
How can I see the list of the available tests? | 0 | python,unit-testing,pytest | 2014-01-30T11:25:00.000 | 0 | 21,455,134 | -v verbose tells you which test cases are run, i.e. which did match your PATTERN. | 0 | 18,525 | false | 0 | 1 | List available tests with py.test | 21,455,508 |
1 | 1 | 0 | 3 | 0 | 1 | 1.2 | 0 | I have made a git command in python, want user to save email password in git config but don't want any user to understand if he opens .gitconfig file !! | 0 | python,git,git-config | 2014-02-02T04:56:00.000 | 0 | 21,507,215 | git has support for integrating with local keyring/password management utilities; search google for "git (name of your keyring program)". (These are called "credential helpers".)
Alternatively, if your remote is over SSH, you can use public key authentication, along with ssh-agent to remember the password to your private key.
If it's something else entirely that you're storing the password for (the "email password"?), you could consider a similar tactic: integrate with the local keyring manager. I'm not sure if git credential helpers can do this directly for you or not, but you might be able to implement the same side of the protocol as git, and thus use credential helpers that already exist. | 0 | 292 | true | 0 | 1 | how to save password to git config in encrypted format that is is not readable if .gitconfig is opened | 21,507,326 |
1 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 1 | I need to access my python programs through IP address for making it to do something in server. Creating Apache Server for only one python script is not good solution.
In server it works like: python script.py --arg
Now I need something like http://xxx.xxx.xxx.xxx:xxxx/script.py --arg or something else. Main idea is to send argument to program remotely without ssh.
PS. Main problem with framework and python simple HTTP server was block in firewall. | 0 | python,webserver | 2014-02-02T09:43:00.000 | 0 | 21,509,104 | With Flask you can do that in about ten lines of code. | 0 | 205 | false | 0 | 1 | Best way to create simple web server for python files | 21,509,226 |
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I am making a website for a sports team. there is a python 3.3 cgi script that allows a user to input match data. I would like to send an email to everyone on the mailing list to tell them that there has been a new match submitted, but I obviously don't want the person inputting the data to be waiting for ages while all the emails are sent. How can I start a python script in the background that will do this. I also need it to queue any match submits after while this is still processing. thanks for any help in advance. | 0 | python,python-3.x,cgi,fork | 2014-02-02T18:11:00.000 | 0 | 21,514,258 | Never run application from cgi scripts, It's very insecure. Use any DB and cron for this. Store data from your cgi script to DB and run from cron mail script that will sending all emails. | 0 | 217 | true | 0 | 1 | How to fork a python script in the background from cgi script | 21,541,263 |
1 | 1 | 0 | 2 | 3 | 0 | 1.2 | 0 | I just now realize what is causing the trouble:
Whenever the interpreter is busy, my Emacs buffer containing the python script buffer gets stuck, as I suspect that Emacs is trying to get the information of a function, and display it as a pop up. My usual solution is to spam C-g, but that gets old quickly.
It has been bothering me for months, did anyone find a solution (such as a separate thread for the python info)? Even simply ceasing Jedi work while the interpreter is busy really would save a lot of frustration.
I am using Jedi, auto-complete, Python 2.7 and Python 3.3 (the problems occur in both), on Ubuntu. | 0 | python,python-2.7,python-3.x,emacs,autocomplete | 2014-02-02T23:35:00.000 | 1 | 21,517,747 | Maybe disable auto-complete at all? BTW from my feeling relying on company, not jedi, the distraction from auto-complete in most cases is far over gain. Emacs comes with a lot of great tools making edits faster: abbrev, dabbrev etc. which seem much more efficient. Well, if jedi delivers really intelligent completions, it might be part of the game. | 0 | 673 | true | 0 | 1 | Emacs freezing when asking Jedi/Auto Complete information while Interpreter is busy | 21,522,470 |
1 | 2 | 0 | -1 | 1 | 0 | -0.099668 | 1 | I am using buildbot version 0.8.5 and need to send an HTTP post request from it as a step. After searching for it on internet, I found that the latest version 0.8.8 has a step called HTTPStep for doing so. Is there any similar step in the older version?
I know it can be done using batch file or python program using urllib2. but is there any other way to do it? | 0 | python,httprequest,buildbot | 2014-02-03T05:39:00.000 | 0 | 21,520,459 | Just my thoughts..As far as I know it is better to use a python script from a build step. Simple and easy to control. The logic being:
the entire buildbot is inside one http connection/session and sending another http request somewhere might have issues with the connection/session.
from the buildbot httpstep description, you need to install additional python packages which might be not be so convenient to do on multiple slaves/masters. | 0 | 268 | false | 0 | 1 | Sending http post request in buildbot | 22,557,265 |
1 | 3 | 0 | 1 | 1 | 0 | 0.066568 | 0 | Issues using SoundCloud API with python to get user info
I've downloaded the soundcloud library and followed the tutorials, and saw on the soundcloud dev page that user syntax is, for example /users/{id}/favorites.
I just don't know how to use python to query user information. Specifically, i would like to print a list of tracks that a given user liked, (or favorited, but liked would be better).
any help would be greatly appreciated. thanks! | 0 | python,api,soundcloud | 2014-02-03T18:36:00.000 | 0 | 21,535,003 | You can also do the following :
import soundcloud
token= 'user_access_token'
client = soundcloud.Client(access_token=token)
user_info = client.get('/me')
user_favorites = client.get('/me/favorites')
user_tracks = client.get('/me/tracks')
and so on... | 0 | 503 | false | 0 | 1 | soundcloud api python user information | 30,069,248 |
2 | 3 | 0 | 2 | 6 | 0 | 0.132549 | 0 | Our group is evaluating Robot Test Framework for our QA group, not just for BDD, but also to possibly cover a lot of our regular functionality testing needs. It certainly is a compelling project.
To what extent, if any, is Robot Framework based on xunit (unittest) architecture? I see that unittest asserts can be used, but I don't see that the RF testcases themselves are based on unittest.TestCase.
Ideally, our organization would like to be able to be able to write Robot Framework tests, as well as Python unittest testcases, run the testcases together from one runner and get integrated results, reuse RF's Selenium2 Library's "keywords" as functions used by our regular unittest testcases in order to share a common SE code-base.
Is this a solved problem? Does anybody do this kind of thing? | 0 | python,robotframework,python-unittest | 2014-02-03T18:37:00.000 | 0 | 21,535,028 | Robot is not at all based on xunit technologies. Personally I think it makes a great unit testing framework for python code, since you can create keywords that can directly import your modules. I use this technique for some projects I work on.
With robot, you can tag your unit tests or put them all in a separate hierarchy so that you can run them separate from acceptance tests if you like, or combine them and get statistics broken out separately. | 0 | 7,792 | false | 1 | 1 | Running unittest Test Cases and Robot Framework Test Cases Together | 26,558,782 |
2 | 3 | 0 | 10 | 6 | 0 | 1 | 0 | Our group is evaluating Robot Test Framework for our QA group, not just for BDD, but also to possibly cover a lot of our regular functionality testing needs. It certainly is a compelling project.
To what extent, if any, is Robot Framework based on xunit (unittest) architecture? I see that unittest asserts can be used, but I don't see that the RF testcases themselves are based on unittest.TestCase.
Ideally, our organization would like to be able to be able to write Robot Framework tests, as well as Python unittest testcases, run the testcases together from one runner and get integrated results, reuse RF's Selenium2 Library's "keywords" as functions used by our regular unittest testcases in order to share a common SE code-base.
Is this a solved problem? Does anybody do this kind of thing? | 0 | python,robotframework,python-unittest | 2014-02-03T18:37:00.000 | 0 | 21,535,028 | RobotFramework is not the right tool for unit testing.
Unit-tests should be written in the same language of the units (modules, classes, etc.)
The ability to describe scenarios in natural language (which is one of the strongest features of systems like RF) is worthless in unit tests. At this level of testing scenarios are for input x you get output y.
RF is best suited in Acceptance Testing and Integration Testing, the top-grained verification of your system.
Nevertheless you can integrate RF and xunit in your QA system together. And merge reports from RF and unit-test. | 0 | 7,792 | false | 1 | 1 | Running unittest Test Cases and Robot Framework Test Cases Together | 21,565,221 |
1 | 1 | 0 | 6 | 5 | 0 | 1.2 | 0 | I have a python script that sends out an email using win32com and Outlook. The script runs without a hitch when I run it through an interpreter or double-click on the script. However, when I run it through the Task Scheduler, I get the following message:
"Something went wrong. We couldn't start your program. Please try starting it again. If it won't start, try repairing Office from 'Programs and Features' in the Control Panel."
I'm using Office 365, and Python 2.6. I've tried running the script through the scheduler after killing the Outlook process, but I ran into the same issue. | 0 | python,scheduler,win32com | 2014-02-03T18:54:00.000 | 0 | 21,535,376 | Office isn't designed to run as a service, and needs to be run interactively. You'll need to change your task configuration in Task Scheduler to run the task as the currently logged-in user, on the current user's desktop, with the current user's privileges. | 0 | 2,057 | true | 0 | 1 | Python script involving Outlook through win32com runs when double-clicking, but not through task scheduler | 21,536,420 |
1 | 1 | 1 | 2 | 0 | 1 | 1.2 | 0 | I'm interested in creating my own programming language and I would like to use python. My question is, would a language written in Python using the PLY library be considerably slower than CPython or would they be about the same in terms of program execution speed?
Also in terms of performance how much better would it be if I implemented it in C?
Thanks,
Francis | 0 | python,c,programming-languages,yacc,ply | 2014-02-03T23:39:00.000 | 0 | 21,540,126 | If you are implementing a compiler in PLY, the compilation may take longer - but that's irrelevant the execution speed of your program.
For example, you could use PLY to write a C compiler. The compiler may or may not be faster than your other C compiler, but the resulting executable should run at a similar speed (unless you miss a lot of optimisations etc.) | 0 | 243 | true | 0 | 1 | Would a language written in Python using PLY be slow? | 21,540,275 |
1 | 3 | 0 | 0 | 1 | 0 | 0 | 0 | I have a executable file working in Ubuntu that runs a script in Python and works fine. I have also a shared directory with Samba server. The idea is that everyone (even Windows users) can execute this executable file located in this shared folder to run the script located in my computer.
But, how can I make an executable file that runs the python script of MY computer from both Linux and Windows remote users? | 0 | python,bash,exe,samba | 2014-02-04T16:32:00.000 | 1 | 21,558,022 | As you've said, this executable file would need to be something that runs on both Linux and Windows. That will exclude binary files, such as compiled C files.
What you are left with would be an executable script, which could be
Bash
Ruby
Python
PHP
Perl
If need be the script could simply be a bootstrapper that loads the appropriate binary executable depending on the operating system. | 0 | 262 | false | 0 | 1 | Executable shell file in Windows | 21,559,703 |
1 | 1 | 0 | 2 | 2 | 1 | 1.2 | 0 | I am working on a module (mypackage) using Eclipse/PyDev and Python 2.7. I have other packages and modules that need to use it. In order to make sure the other packages and modules are always using a working version of mypackage, I decided to deploy mypackage to site-packages using distutils (same computer), which I will only update if the development version of mypackage in PyDev has been debugged after making changes.
In order to get mypackage to work when deployed to site-packages, I had to write it using absolute imports. The problem with that is that now when I try to run the modules within the develoment version of mypackage from Eclipse for debugging, it is importing other modules in mypackage from site-packages rather than from the development version in Eclipse.
Is there a way to get around this? I would hate to have to rewrite my code with absolute-imports every time I want to update mypackage in site-packages, and then change it back if I want to make changes and debug my code in Eclipse. | 0 | python,eclipse,pydev | 2014-02-05T01:58:00.000 | 0 | 21,567,271 | Adding the project directory /${PROJECT_DIR_NAME} to the project's PYTHONPATH seems to have done the trick.
Before, I only had /${PROJECT_DIR_NAME}/mypackage in the project's PYTHONPATH. So I suspect that, when using absolute imports, Eclipse was unable to find /${PROJECT_DIR_NAME}/mypackage/mypackage/mymodule and would then proceed to search in site-packages. | 0 | 638 | true | 0 | 1 | PyDev imports from package in site-packages rather than package in development (absolute-imports) | 21,666,549 |
1 | 1 | 0 | 9 | 3 | 1 | 1.2 | 0 | Is there a way to backup Python modules? I installed lots of modules. If my system does not work properly, I will lose them all. Is there a way to do this? | 0 | python,linux,module,backup,linux-mint | 2014-02-05T14:37:00.000 | 0 | 21,580,200 | If you installed them with pip, you can use pip freeze to list the currently installed modules. Save this to a file and use pip install -r file on a new system to install the modules from the file. | 0 | 3,387 | true | 0 | 1 | Is there a way to backup Python modules? | 21,580,285 |
1 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | I'm new to Emacs and I'm trying to set up my python environment. So far I've learned that using "python-mode.el" in a python buffer C-c C-c loads the contents of the current buffer into an interactive python shell, apparently using what which python yields. In my case that is python 3.3.3. But since I need to get a python 2.7 shell, I'm trying to get Emacs to spawn such a shell on C-c C-c. Unfortunatly I can't figure out, how to do this. Setting py-shell-name to what which python2.7 yields (i.e. /usr/bin/python2.7) does not work. How can get Emacs to do this, or how can I trace back what Emacs executes when I hit C-c C-c? | 0 | python,python-2.7,emacs,emacs24 | 2014-02-05T21:03:00.000 | 1 | 21,588,464 | I don't use python, but from the source to python-mode, I think you should look into customizing the variable python-python-command - It seems to default to the first path command matching "python"; perhaps you can supply it with a custom path? | 0 | 1,019 | false | 0 | 1 | Using python2.7 with Emacs 24.3 and python-mode.el | 21,590,370 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I have developed couple of extensions and never had any problem in deploying to the production server. I did try to installation a new extension today on my production server that works on my dev machine but doesn't work on the production server. I am suppose to see a new menu option as part on this new extension and I don't see that. To test I changed the extension name in the production.ini and I got an expected error (PlugInNotFoundError). I have restarted the apache and nginx. I am running CKAN 2.1.
I did ran the following command on the production server:
python setup.py develop
I got the message that the plugin was successfully installed.
I also included this new plugin in the production.ini file settings.
Restarted both the apache2 and nginx servers.
Still not seeing a new menu option to access the functionality provided by this newly installed extension.
Any help to sort this out be appreciated.
Thanks,
PK | 0 | python,ckan | 2014-02-07T00:15:00.000 | 0 | 21,616,883 | Do you need to clear your browser's cache? Are there any other settings (e.g. extra_public_paths) that are different between your dev and production machines? | 0 | 217 | false | 1 | 1 | CKAN extension deployment not working | 21,623,741 |
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 0 | Is there any difference in performance between using the python API of Z3 instead of directly interacting with the C implementation through SMT-Lib files for instance?
Thanks! | 0 | python,performance,z3 | 2014-02-07T13:22:00.000 | 0 | 21,628,893 | Yes, there is measurable overhead of using the python API to build and traverse terms compared to the C/C++ APIs. | 0 | 237 | true | 0 | 1 | Performance of the python Z3 API | 21,629,657 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | Im working on a little project that running rabbitmq with python, I need a way to access the management api and pull stats, jobs, etc. I have tried using pyRabbit, but doen't appear to be working unsure why, hoping better programmers might know? Below I was just following the basic tutorial and readme to perform the very basic task. My server is up, I'm able to connect outside of python and pyrabbit fine. I have installed off the dependencies with no luck, at least I think. Also open to other suggestions for just getting queue size, queues, active clients etc outside of pyRabbit.
'Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
C:\Users\user>python
Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import nose
>>> import httplib2
>>> import mock
>>> from pyrabbit.api import Client
>>> import pyrabbit
>>> cl = Client('my.ip.com:15672', 'guest', 'guest')
>>> cl.is_alive()
No JSON object could be decoded - (Not found.) ()
Traceback (most recent call last):
File "", line 1, in
File "C:\Python27\lib\site-packages\pyrabbit\api.py", line 48, in wrapper if self.has_admin_rights:
File "C:\Python27\lib\site-packages\pyrabbit\api.py", line 175, in has_admin_right whoami = self.get_whoami()
File "C:\Python27\lib\site-packages\pyrabbit\api.py", line 161, in get_whoami whoami = self.http.do_call(path, 'GET')
File "C:\Python27\lib\site-packages\pyrabbit\http.py", line 112, in do_call raise HTTPError(content, resp.status, resp.reason, path, body)
pyrabbit.http.HTTPError: 404 - Object Not Found (None) (whoami) (None) | 0 | python,django,rabbitmq | 2014-02-07T23:38:00.000 | 0 | 21,639,733 | I was never able to solve this, but it forced me to learn what JSON is; I used simplejson along with httplib2 and it worked like a charm... | 0 | 781 | false | 0 | 1 | Unable to get pyrabbit to run | 21,939,627
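For reference, the workaround described in that answer (querying the RabbitMQ management HTTP API directly and parsing the JSON) might look roughly like this sketch. The host and credentials are placeholders taken from the question, and the stdlib json module stands in for simplejson.

import json  # stands in for simplejson; same interface for this use
import httplib2

# Placeholder broker details from the question.
base = 'http://my.ip.com:15672/api'

h = httplib2.Http()
h.add_credentials('guest', 'guest')

# GET /api/queues returns a JSON list of queue objects.
resp, content = h.request(base + '/queues', 'GET')
if resp.status == 200:
    for queue in json.loads(content):
        print(queue['name'], queue.get('messages', 0))
else:
    print('request failed with status %s' % resp.status)

The same pattern works for the other management endpoints (connections, nodes, and so on).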
10 | 15 | 0 | 7 | 32 | 0 | 1 | 0 | I have been using the Python dns module. I was trying to use it on a new Linux installation, but the module is not getting loaded.
I have tried cleaning up and reinstalling, but the installation does not seem to be working.
$ python --version
Python 2.7.3
$ sudo pip install dnspython
Downloading/unpacking dnspython
Downloading dnspython-1.11.1.zip (220Kb): 220Kb downloaded
Running setup.py egg_info for package dnspython
Installing collected packages: dnspython
Running setup.py install for dnspython
Successfully installed dnspython
Cleaning up...
$ python
Python 2.7.3 (default, Sep 26 2013, 20:03:06)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import dns
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named dns
Updated: output of the python and pip version commands
$ which python
/usr/bin/python
$ python --version
Python 2.7.3
$ pip --version
pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7)
Thanks a lot for your help.
Note: I have a firewall installed on the new machine. I am not sure if it should affect the import, but I have tried disabling it and it still does not seem to work. | 0 | python,python-2.7,module,resolver | 2014-02-08T03:57:00.000 | 1 | 21,641,696 | You could also install the package with pip by using this command:
pip install git+https://github.com/rthalley/dnspython | 0 | 147,472 | false | 0 | 1 | Python DNS module import error | 36,287,320 |
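Once the import succeeds, a quick lookup confirms the module actually resolves names. This sketch uses the query API of the dnspython 1.x line current at the time; the domain is just an example.

import dns.resolver

# Look up the A records for an example domain (dnspython 1.x API).
for rdata in dns.resolver.query('example.com', 'A'):
    print(rdata.address)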
10 | 15 | 0 | 0 | 32 | 0 | 0 | 0 | I have been using the Python dns module. I was trying to use it on a new Linux installation, but the module is not getting loaded.
I have tried cleaning up and reinstalling, but the installation does not seem to be working.
$ python --version
Python 2.7.3
$ sudo pip install dnspython
Downloading/unpacking dnspython
Downloading dnspython-1.11.1.zip (220Kb): 220Kb downloaded
Running setup.py egg_info for package dnspython
Installing collected packages: dnspython
Running setup.py install for dnspython
Successfully installed dnspython
Cleaning up...
$ python
Python 2.7.3 (default, Sep 26 2013, 20:03:06)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import dns
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named dns
Updated: output of the python and pip version commands
$ which python
/usr/bin/python
$ python --version
Python 2.7.3
$ pip --version
pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7)
Thanks a lot for your help.
Note: I have a firewall installed on the new machine. I am not sure if it should affect the import, but I have tried disabling it and it still does not seem to work. | 0 | python,python-2.7,module,resolver | 2014-02-08T03:57:00.000 | 1 | 21,641,696 | If you don't have (or don't want) pip installed, there is another way: install the package with the native OS package manager.
For example, on Debian-based systems the command would be:
apt install python3-dnspython | 0 | 147,472 | false | 0 | 1 | Python DNS module import error | 67,931,629 |
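One hedge worth adding: python3-dnspython on current Debian-based releases ships the dnspython 2.x line, where query() was renamed resolve(). A minimal check under that assumption:

import dns.resolver

# dnspython 2.x API; MX answers expose a preference and an exchange host.
for rdata in dns.resolver.resolve('example.com', 'MX'):
    print(rdata.preference, rdata.exchange)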